amd-gfx.lists.freedesktop.org archive mirror
* slow rx 5600 xt fps
@ 2020-05-19 18:59 Javad Karabi
  2020-05-19 19:13 ` Alex Deucher
  0 siblings, 1 reply; 24+ messages in thread
From: Javad Karabi @ 2020-05-19 18:59 UTC (permalink / raw)
  To: amd-gfx list

Given this setup:
laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
DRI_PRIME=1 glxgears gives me ~300 fps

Given this setup:
laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
laptop -hdmi-> monitor

glxgears gives me ~1800 fps

This doesn't make sense to me, because I thought that having the monitor
plugged directly into the card would give the best performance.

There's another really weird issue...

Given setup 1, where the monitor is plugged into the card: when I
close the laptop lid, the monitor stays "active" and I can still "use
it", in a sense.

Here's the weirdness: the mouse cursor moves across the monitor
perfectly smoothly, but every other update to the screen is delayed by
about 2 or 3 seconds. That is to say, it's as if the laptop is doing
everything (e.g. if I open a terminal, the terminal opens, but it
takes 2 seconds for me to see it).

It's almost as if all the frames are being drawn and the laptop is
running fine, but I simply don't get to see it on the monitor except
once every 2 seconds.

It's hard to articulate because it's so bizarre. It's not a "low fps"
per se, because the cursor is totally smooth; it's that _everything
else_ only updates once every couple of seconds.
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: slow rx 5600 xt fps
  2020-05-19 18:59 slow rx 5600 xt fps Javad Karabi
@ 2020-05-19 19:13 ` Alex Deucher
  2020-05-19 19:20   ` Javad Karabi
  0 siblings, 1 reply; 24+ messages in thread
From: Alex Deucher @ 2020-05-19 19:13 UTC (permalink / raw)
  To: Javad Karabi; +Cc: amd-gfx list

On Tue, May 19, 2020 at 2:59 PM Javad Karabi <karabijavad@gmail.com> wrote:
>
> Given this setup:
> laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
> DRI_PRIME=1 glxgears gives me ~300 fps
>
> Given this setup:
> laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> laptop -hdmi-> monitor
>
> glxgears gives me ~1800 fps
>
> This doesn't make sense to me, because I thought that having the monitor
> plugged directly into the card would give the best performance.
>

Do you have displays connected to both GPUs?  If you are using X, which
ddx are you using: xf86-video-modesetting or xf86-video-amdgpu?
IIRC, xf86-video-amdgpu has some optimizations for prime which are not
yet in xf86-video-modesetting.  Which GPU is set up as the primary?
Note that the GPU which does the rendering is not necessarily the one
that the displays are attached to.  The render GPU renders to its
render buffer, and that data may then be copied to other GPUs for
display.  Also, at this point, all shared buffers have to go through
system memory (this will be changing eventually now that we support
device memory via dma-buf), so there is often an extra copy involved.
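To put rough numbers on that extra copy, a back-of-envelope sketch (the
resolution, pixel depth, and refresh rate below are assumptions, not
measurements from this setup):

```python
# Rough cost of copying each rendered frame between GPUs through
# system memory.  Assumed numbers: 1920x1080, 32-bit pixels, 60 Hz.
width, height, bytes_per_pixel, refresh_hz = 1920, 1080, 4, 60

frame_bytes = width * height * bytes_per_pixel   # one frame
copy_rate = frame_bytes * refresh_hz             # bytes per second

print(f"{frame_bytes / 2**20:.1f} MiB per frame")    # -> 7.9 MiB per frame
print(f"{copy_rate / 2**30:.2f} GiB/s sustained")    # -> 0.46 GiB/s sustained
```

Even ~0.5 GiB/s is well within Thunderbolt 3's usable PCIe bandwidth
(roughly 22 Gbps), so the copy's raw bandwidth alone would not explain a
6x fps drop; the synchronization and latency around the copy are the
more likely cost.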

> There's another really weird issue...
>
> Given setup 1, where the monitor is plugged into the card: when I
> close the laptop lid, the monitor stays "active" and I can still "use
> it", in a sense.
>
> Here's the weirdness: the mouse cursor moves across the monitor
> perfectly smoothly, but every other update to the screen is delayed by
> about 2 or 3 seconds. That is to say, it's as if the laptop is doing
> everything (e.g. if I open a terminal, the terminal opens, but it
> takes 2 seconds for me to see it).
>
> It's almost as if all the frames are being drawn and the laptop is
> running fine, but I simply don't get to see it on the monitor except
> once every 2 seconds.
>
> It's hard to articulate because it's so bizarre. It's not a "low fps"
> per se, because the cursor is totally smooth; it's that _everything
> else_ only updates once every couple of seconds.

This might also be related to which GPU is the primary.  It may still
be the integrated GPU, since that is what is attached to the laptop
panel.  Also, the platform and some drivers may do certain things when
the lid is closed.  E.g., for thermal reasons, the integrated GPU or
CPU may have a more limited TDP because the laptop cannot cool as
efficiently.

Alex

* Re: slow rx 5600 xt fps
  2020-05-19 19:13 ` Alex Deucher
@ 2020-05-19 19:20   ` Javad Karabi
  2020-05-19 19:44     ` Javad Karabi
  2020-05-19 21:32     ` Alex Deucher
  0 siblings, 2 replies; 24+ messages in thread
From: Javad Karabi @ 2020-05-19 19:20 UTC (permalink / raw)
  To: Alex Deucher; +Cc: amd-gfx list

I'm using Driver "amdgpu" in my xorg.conf.

How does one verify which GPU is the primary? I'm assuming my Intel
card is the primary, since I haven't done anything to change that.
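One way to check from sysfs (a sketch: `boot_vga` is a standard PCI
attribute that reads 1 for the GPU the firmware booted on, which on a
laptop is normally the integrated one; under X, `xrandr --listproviders`
gives a similar view):

```python
from pathlib import Path

def gpu_roles(drm_root="/sys/class/drm"):
    """Return {card name: True if it is the boot (primary) VGA device}."""
    roles = {}
    for attr in Path(drm_root).glob("card[0-9]/device/boot_vga"):
        card = attr.parts[-3]  # e.g. "card0"
        roles[card] = attr.read_text().strip() == "1"
    return roles

# Prints e.g. {'card0': True, 'card1': False}; {} if no DRM devices exist.
print(gpu_roles())
```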

Also, if all shared buffers have to go through system memory, does that
mean an amdgpu eGPU won't work very well in general? Going through
system memory for the eGPU means going over the Thunderbolt connection.

And what are the shared buffers you're referring to? For example, if an
application is drawing to a buffer, is that an example of a shared
buffer that has to go through system memory? If so, that's fine, right?
The application's memory is in system memory anyway, so that copy
wouldn't be an issue.

In general, do you think the "copy buffers across system memory" step
might be a hindrance over Thunderbolt? I'm trying to figure out which
direction to take the debugging, and I'm totally lost, so maybe I can
do some testing in that direction?

And for what it's worth, when I turn the display "off" via the GNOME
display settings, I see the same issue as when the laptop lid is
closed. So unless the motherboard treats a closed lid the same as
"display off", I'm not sure it's a thermal issue.


* Re: slow rx 5600 xt fps
  2020-05-19 19:20   ` Javad Karabi
@ 2020-05-19 19:44     ` Javad Karabi
  2020-05-19 20:01       ` Javad Karabi
  2020-05-19 21:13       ` Alex Deucher
  2020-05-19 21:32     ` Alex Deucher
  1 sibling, 2 replies; 24+ messages in thread
From: Javad Karabi @ 2020-05-19 19:44 UTC (permalink / raw)
  To: Alex Deucher; +Cc: amd-gfx list

Just a couple more questions:

- Based on what you're aware of (technical details like "shared buffers
go through system memory", and so on), do you see any issues with my
setup that I might be missing? I can't imagine that's the case, because
the card works great in Windows, unless the Windows driver does
something different?

- As far as kernel config goes, is there anything in particular that
_should_ or _should not_ be enabled/disabled?

- Does the vendor matter? For instance, this is an XFX card. Between
vendors, are there interface changes that might make one work better on
Linux than another? I don't really understand the differences between
vendors, but I imagine the vbios differs, and the Linux compatibility
might change with it?

- Is PCIe bandwidth possibly an issue? The pcie_bw file alternates
between values like this:
18446683600662707640 18446744071581623085 128
and sometimes this:
4096 0 128
As you can see, the second reading seems significantly lower. Is that
possibly an issue? Possibly due to ASPM?


* Re: slow rx 5600 xt fps
  2020-05-19 19:44     ` Javad Karabi
@ 2020-05-19 20:01       ` Javad Karabi
  2020-05-19 21:34         ` Alex Deucher
  2020-05-19 21:13       ` Alex Deucher
  1 sibling, 1 reply; 24+ messages in thread
From: Javad Karabi @ 2020-05-19 20:01 UTC (permalink / raw)
  To: Alex Deucher; +Cc: amd-gfx list

Another tidbit: when in Linux, the GPU's fans _never_ come on.

Even when I run 4 instances of glmark2, the fans do not come on :/
I see the temperature hitting just below 50 deg C, and I saw some value
saying that 50 C was the max?
Isn't 50 C low for a max GPU temperature?
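Those readings come from the hwmon interface; a sketch for pulling them
directly (the card index and attribute set here are assumptions and
vary per system; hwmon reports temperatures in millidegrees Celsius, so
just below 50 C reads as roughly 49900):

```python
from pathlib import Path

def gpu_thermals(device="/sys/class/drm/card1/device"):
    """Collect amdgpu hwmon readings; returns {} if nothing is present."""
    readings = {}
    for hwmon in Path(device, "hwmon").glob("hwmon*"):
        for name in ("temp1_input", "fan1_input", "pwm1"):
            attr = hwmon / name
            if attr.exists():
                readings[name] = int(attr.read_text())
    if "temp1_input" in readings:
        # hwmon uses millidegrees Celsius
        readings["temp_c"] = readings["temp1_input"] / 1000
    return readings

print(gpu_thermals())
```

With zero-RPM fan modes common on this class of card, fan1_input
staying at 0 below a threshold temperature is not by itself alarming.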



* Re: slow rx 5600 xt fps
  2020-05-19 19:44     ` Javad Karabi
  2020-05-19 20:01       ` Javad Karabi
@ 2020-05-19 21:13       ` Alex Deucher
  2020-05-19 21:22         ` Javad Karabi
  1 sibling, 1 reply; 24+ messages in thread
From: Alex Deucher @ 2020-05-19 21:13 UTC (permalink / raw)
  To: Javad Karabi; +Cc: amd-gfx list

On Tue, May 19, 2020 at 3:44 PM Javad Karabi <karabijavad@gmail.com> wrote:
>
> Just a couple more questions:
>
> - Based on what you're aware of (technical details like "shared buffers
> go through system memory", and so on), do you see any issues with my
> setup that I might be missing? I can't imagine that's the case, because
> the card works great in Windows, unless the Windows driver does
> something different?
>

Windows has supported peer-to-peer DMA for years, so it already has a
number of optimizations that are only now becoming possible on Linux.

> - As far as kernel config goes, is there anything in particular that
> _should_ or _should not_ be enabled/disabled?

You'll need the GPU drivers for your devices and dma-buf support.
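For an Intel laptop plus an amdgpu eGPU over Thunderbolt, that
translates into roughly this set of options (a sketch; exact symbol
names depend on the kernel version):

```
CONFIG_DRM_AMDGPU=m         # the discrete GPU
CONFIG_DRM_AMD_DC=y         # display core, needed for Navi display support
CONFIG_DRM_I915=m           # the integrated GPU
CONFIG_DMA_SHARED_BUFFER=y  # dma-buf sharing between the two
CONFIG_USB4=m               # Thunderbolt driver (CONFIG_THUNDERBOLT before 5.6)
CONFIG_HOTPLUG_PCI=y        # hotplug of the tunneled PCIe devices
```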

>
> - Does the vendor matter? For instance, this is an XFX card. Between
> vendors, are there interface changes that might make one work better on
> Linux than another? I don't really understand the differences between
> vendors, but I imagine the vbios differs, and the Linux compatibility
> might change with it?

Board vendor shouldn't matter.

>
> - Is PCIe bandwidth possibly an issue? The pcie_bw file alternates
> between values like this:
> 18446683600662707640 18446744071581623085 128
> and sometimes this:
> 4096 0 128
> As you can see, the second reading seems significantly lower. Is that
> possibly an issue? Possibly due to ASPM?

pcie_bw is not implemented for Navi yet, so you are just seeing
uninitialized data.  This patch set should clear that up:
https://patchwork.freedesktop.org/patch/366262/
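To illustrate, the "huge" counts are what small negative 64-bit values
look like when printed as unsigned; a quick sketch:

```python
def as_signed64(u):
    """Reinterpret an unsigned 64-bit value as signed two's complement."""
    return u - 2**64 if u >= 2**63 else u

# The 'impossible' counts from the pcie_bw sample above:
print(as_signed64(18446744071581623085))  # -> -2127928531
print(as_signed64(18446683600662707640))  # also negative: garbage, not bytes
```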

Alex


* Re: slow rx 5600 xt fps
  2020-05-19 21:13       ` Alex Deucher
@ 2020-05-19 21:22         ` Javad Karabi
  2020-05-19 21:42           ` Alex Deucher
  0 siblings, 1 reply; 24+ messages in thread
From: Javad Karabi @ 2020-05-19 21:22 UTC (permalink / raw)
  To: Alex Deucher; +Cc: amd-gfx list

lol, you're quick!

"Windows has supported peer-to-peer DMA for years, so it already has a
number of optimizations that are only now becoming possible on Linux"

Whoa, I figured Linux would be ahead of Windows when it comes to things
like that. So peer-to-peer DMA has long been possible on Windows but
only recently became possible on Linux? What changed recently that
allows peer-to-peer DMA on Linux?

Also, in the context of a game running OpenGL on some GPU, is the
"peer-to-peer" DMA transfer something like: the game draws to some
memory it has allocated, then a DMA transfer grabs that and moves it to
the graphics card output?

Also, I know it can be super annoying trying to debug an issue like
this with someone like me, who has all kinds of differences from a
normal setup (e.g. using the card via eGPU, running a kernel with
custom configs, and so on), so as a token of my appreciation I donated
$50 to the Red Cross coronavirus outbreak charity on behalf of
amd-gfx.

> > > > >
> > > > > given setup 1, where the monitor is plugged in to the card:
> > > > > when i close the laptop lid, my monitor is "active" and whatnot, and i
> > > > > can "use it" in a sense
> > > > >
> > > > > however, heres the weirdness:
> > > > > the mouse cursor will move along the monitor perfectly smooth and
> > > > > fine, but all the other updates to the screen are delayed by about 2
> > > > > or 3 seconds.
> > > > > that is to say, its as if the laptop is doing everything (e.g. if i
> > > > > open a terminal, the terminal will open, but it will take 2 seconds
> > > > > for me to see it)
> > > > >
> > > > > its almost as if all the frames and everything are being drawn, and
> > > > > the laptop is running fine and everything, but i simply just dont get
> > > > > to see it on the monitor, except for one time every 2 seconds.
> > > > >
> > > > > its hard to articulate, because its so bizarre. its not like, a "low
> > > > > fps" per se, because the cursor is totally smooth. but its that
> > > > > _everything else_ is only updated once every couple seconds.
> > > >
> > > > This might also be related to which GPU is the primary.  It still may
> > > > be the integrated GPU since that is what is attached to the laptop
> > > > panel.  Also the platform and some drivers may do certain things when
> > > > the lid is closed.  E.g., for thermal reasons, the integrated GPU or
> > > > CPU may have a more limited TDP because the laptop cannot cool as
> > > > efficiently.
> > > >
> > > > Alex

* Re: slow rx 5600 xt fps
  2020-05-19 19:20   ` Javad Karabi
  2020-05-19 19:44     ` Javad Karabi
@ 2020-05-19 21:32     ` Alex Deucher
  1 sibling, 0 replies; 24+ messages in thread
From: Alex Deucher @ 2020-05-19 21:32 UTC (permalink / raw)
  To: Javad Karabi; +Cc: amd-gfx list

On Tue, May 19, 2020 at 3:20 PM Javad Karabi <karabijavad@gmail.com> wrote:
>
> im using Driver "amdgpu" in my xorg conf
>
> how does one verify which gpu is the primary? im assuming my intel
> card is the primary, since i have not done anything to change that.
>

Check your xorg log.

> also, if all shared buffers have to go through system memory, then
> that means an eGPU amdgpu wont work very well in general right?
> because going through system memory for the egpu means going over the
> thunderbolt connection

If you want to render on the dGPU and display on the integrated GPU,
then the content will have to traverse the bus.

>
> and what are the shared buffers youre referring to? for example, if an
> application is drawing to a buffer, is that an example of a shared
> buffer that has to go through system memory? if so, thats fine, right?
> because the application's memory is in system memory, so that copy
> wouldnt be an issue.

For optimal performance, dGPUs will want to render to local vram.  So
when a dGPU is rendering it will render to a buffer in vram.  However,
if the display is connected to the integrated GPU, it can't directly
access the buffer in the dGPU's vram.  In order to transfer the buffer
from the dGPU to the integrated GPU for display, it has to be copied
from vram to a buffer in system memory.  This buffer is then shared with
the integrated GPU.  Depending on the integrated GPU's capabilities,
it may be able to use the buffer as is, or it may have to copy the
buffer again to a buffer that it can display from.
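
To get a feel for what those extra copies cost over a Thunderbolt 3 link,
here is a rough back-of-envelope sketch (assumptions: TB3 carries roughly
a PCIe 3.0 x4 worth of payload, call it ~3 GB/s usable, and frames are
4 bytes per pixel; the exact numbers will vary by setup):

```python
# Back-of-envelope: bandwidth consumed by copying one full frame per
# refresh across the Thunderbolt link.  The ~3 GB/s usable link rate is
# an assumption, not a measured figure for any particular enclosure.

def frame_copy_bandwidth(width, height, fps, bytes_per_pixel=4):
    """Bytes per second needed to move every frame once across the link."""
    return width * height * bytes_per_pixel * fps

USABLE_LINK_BYTES_PER_SEC = 3.0e9  # assumed usable TB3 payload rate

for w, h, hz in [(1920, 1080, 60), (2560, 1440, 60), (3840, 2160, 60)]:
    bw = frame_copy_bandwidth(w, h, hz)
    share = 100 * bw / USABLE_LINK_BYTES_PER_SEC
    print(f"{w}x{h}@{hz}Hz: {bw / 1e9:.2f} GB/s per copy ({share:.0f}% of link)")
```

Each additional copy of the frame that has to cross the cable adds
another multiple of that figure, which is one reason an extra hop hurts
far more over an eGPU link than between an iGPU and system memory.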

>
> in general, do you think the "copy buffer across system memory might
> be a hindrance for thunderbolt? im trying to figure out which
> directions to go to debug and im totally lost, so maybe i can do some
> testing that direction?

If you are mainly concerned with checking the performance of the
dGPU itself (where the dGPU handles display and rendering), I would
make sure your desktop environment is configured to be running on the
dGPU only.  Take the integrated GPU out of the picture.
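
One way to do that with X is to point the server at the discrete card
explicitly in xorg.conf.  A minimal sketch (the BusID below is
hypothetical; find the real one for the eGPU with `lspci | grep -i vga`):

```
Section "Device"
    Identifier "amdgpu-egpu"
    Driver     "amdgpu"
    # Hypothetical address -- substitute the PCI bus/device/function
    # reported by lspci for the RX 5600 XT, converted to decimal.
    BusID      "PCI:64:0:0"
EndSection
```

With only that Device section present, X should start on the dGPU and
the integrated GPU stays out of the rendering and display path.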

>
> and for what its worth, when i turn the display "off" via the gnome
> display settings, its the same issue as when the laptop lid is closed,
> so unless the motherboard reads the "closed lid" the same as "display
> off", then im not sure if its thermal issues.

If the integrated GPU is the primary display, turning the displays off
or closing the lid may signal to the integrated GPU driver that it's
not in use so it can power down.  It may then go to a lower power state
which has a relatively high exit latency.  Every time a copy is
required the integrated GPU has to wake up and do the copy.  The copy
is probably not necessary, but I'm not sure how well optimized most
display servers are in this regard.  Really, if all the displays on one
GPU are off, the display server should fall back to rendering and
displaying on the same GPU, but I'm not sure how well this is handled.
The current multi-GPU support in X is mostly focused on the following
two use cases:
1. Hybrid graphics, where you have an integrated GPU which handles
displays and a dGPU which is mainly for render offload.  The render
GPU renders content and it is shared with the display GPU.
2. Multi-GPU displays, where you have a large desktop spread across
multiple GPUs.  The render GPU renders content and it is shared with
the display GPUs.

Alex


>
> On Tue, May 19, 2020 at 2:14 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> >
> > On Tue, May 19, 2020 at 2:59 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > >
> > > given this setup:
> > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
> > > DRI_PRIME=1 glxgears gears gives me ~300fps
> > >
> > > given this setup:
> > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> > > laptop -hdmi-> monitor
> > >
> > > glx gears gives me ~1800fps
> > >
> > > this doesnt make sense to me because i thought that having the monitor
> > > plugged directly into the card should give best performance.
> > >
> >
> > Do you have displays connected to both GPUs?  If you are using X which
> > ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
> > IIRC, xf86-video-amdgpu has some optimizations for prime which are not
> > yet in xf86-video-modesetting.  Which GPU is set up as the primary?
> > Note that the GPU which does the rendering is not necessarily the one
> > that the displays are attached to.  The render GPU renders to it's
> > render buffer and then that data may end up being copied other GPUs
> > for display.  Also, at this point, all shared buffers have to go
> > through system memory (this will be changing eventually now that we
> > support device memory via dma-buf), so there is often an extra copy
> > involved.
> >
> > > theres another really weird issue...
> > >
> > > given setup 1, where the monitor is plugged in to the card:
> > > when i close the laptop lid, my monitor is "active" and whatnot, and i
> > > can "use it" in a sense
> > >
> > > however, heres the weirdness:
> > > the mouse cursor will move along the monitor perfectly smooth and
> > > fine, but all the other updates to the screen are delayed by about 2
> > > or 3 seconds.
> > > that is to say, its as if the laptop is doing everything (e.g. if i
> > > open a terminal, the terminal will open, but it will take 2 seconds
> > > for me to see it)
> > >
> > > its almost as if all the frames and everything are being drawn, and
> > > the laptop is running fine and everything, but i simply just dont get
> > > to see it on the monitor, except for one time every 2 seconds.
> > >
> > > its hard to articulate, because its so bizarre. its not like, a "low
> > > fps" per se, because the cursor is totally smooth. but its that
> > > _everything else_ is only updated once every couple seconds.
> >
> > This might also be related to which GPU is the primary.  It still may
> > be the integrated GPU since that is what is attached to the laptop
> > panel.  Also the platform and some drivers may do certain things when
> > the lid is closed.  E.g., for thermal reasons, the integrated GPU or
> > CPU may have a more limited TDP because the laptop cannot cool as
> > efficiently.
> >
> > Alex

* Re: slow rx 5600 xt fps
  2020-05-19 20:01       ` Javad Karabi
@ 2020-05-19 21:34         ` Alex Deucher
  0 siblings, 0 replies; 24+ messages in thread
From: Alex Deucher @ 2020-05-19 21:34 UTC (permalink / raw)
  To: Javad Karabi; +Cc: amd-gfx list

On Tue, May 19, 2020 at 4:01 PM Javad Karabi <karabijavad@gmail.com> wrote:
>
> another tidbit:
> when in linux, the gpu's fans _never_ come on.
>
> even when i run 4 instances of glmark2, the fans do not come on :/
> i see the temp hitting just below 50 deg c, and i saw some value that
> says that 50c was the max?
> isnt 50c low for a max gpu temp?
>

Maybe you are not using the dGPU for most things.  Use something like
glxinfo to figure out which GPU you are using for different
situations.  E.g.,
glxinfo | grep renderer
vs
DRI_PRIME=1 glxinfo | grep renderer

Alex

>
> On Tue, May 19, 2020 at 2:44 PM Javad Karabi <karabijavad@gmail.com> wrote:
> >
> > just a couple more questions:
> >
> > - based on what you are aware of, the technical details such as
> > "shared buffers go through system memory", and all that, do you see
> > any issues that might exist that i might be missing in my setup? i
> > cant imagine this being the case because the card works great in
> > windows, unless the windows driver does something different?
> >
> > - as far as kernel config, is there anything in particular which
> > _should_ or _should not_ be enabled/disabled?
> >
> > - does the vendor matter? for instance, this is an xfx card. when it
> > comes to different vendors, are there interface changes that might
> > make one vendor work better for linux than another? i dont really
> > understand the differences in vendors, but i imagine that the vbios
> > differs between vendors, and as such, the linux compatibility would
> > maybe change?
> >
> > - is the pcie bandwidth possible an issue? the pcie_bw file changes
> > between values like this:
> > 18446683600662707640 18446744071581623085 128
> > and sometimes i see this:
> > 4096 0 128
> > as you can see, the second value seems significantly lower. is that
> > possibly an issue? possibly due to aspm?
> >
> > On Tue, May 19, 2020 at 2:20 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > >
> > > im using Driver "amdgpu" in my xorg conf
> > >
> > > how does one verify which gpu is the primary? im assuming my intel
> > > card is the primary, since i have not done anything to change that.
> > >
> > > also, if all shared buffers have to go through system memory, then
> > > that means an eGPU amdgpu wont work very well in general right?
> > > because going through system memory for the egpu means going over the
> > > thunderbolt connection
> > >
> > > and what are the shared buffers youre referring to? for example, if an
> > > application is drawing to a buffer, is that an example of a shared
> > > buffer that has to go through system memory? if so, thats fine, right?
> > > because the application's memory is in system memory, so that copy
> > > wouldnt be an issue.
> > >
> > > in general, do you think the "copy buffer across system memory might
> > > be a hindrance for thunderbolt? im trying to figure out which
> > > directions to go to debug and im totally lost, so maybe i can do some
> > > testing that direction?
> > >
> > > and for what its worth, when i turn the display "off" via the gnome
> > > display settings, its the same issue as when the laptop lid is closed,
> > > so unless the motherboard reads the "closed lid" the same as "display
> > > off", then im not sure if its thermal issues.
> > >
> > > On Tue, May 19, 2020 at 2:14 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > >
> > > > On Tue, May 19, 2020 at 2:59 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > >
> > > > > given this setup:
> > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
> > > > > DRI_PRIME=1 glxgears gears gives me ~300fps
> > > > >
> > > > > given this setup:
> > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> > > > > laptop -hdmi-> monitor
> > > > >
> > > > > glx gears gives me ~1800fps
> > > > >
> > > > > this doesnt make sense to me because i thought that having the monitor
> > > > > plugged directly into the card should give best performance.
> > > > >
> > > >
> > > > Do you have displays connected to both GPUs?  If you are using X which
> > > > ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
> > > > IIRC, xf86-video-amdgpu has some optimizations for prime which are not
> > > > yet in xf86-video-modesetting.  Which GPU is set up as the primary?
> > > > Note that the GPU which does the rendering is not necessarily the one
> > > > that the displays are attached to.  The render GPU renders to it's
> > > > render buffer and then that data may end up being copied other GPUs
> > > > for display.  Also, at this point, all shared buffers have to go
> > > > through system memory (this will be changing eventually now that we
> > > > support device memory via dma-buf), so there is often an extra copy
> > > > involved.
> > > >
> > > > > theres another really weird issue...
> > > > >
> > > > > given setup 1, where the monitor is plugged in to the card:
> > > > > when i close the laptop lid, my monitor is "active" and whatnot, and i
> > > > > can "use it" in a sense
> > > > >
> > > > > however, heres the weirdness:
> > > > > the mouse cursor will move along the monitor perfectly smooth and
> > > > > fine, but all the other updates to the screen are delayed by about 2
> > > > > or 3 seconds.
> > > > > that is to say, its as if the laptop is doing everything (e.g. if i
> > > > > open a terminal, the terminal will open, but it will take 2 seconds
> > > > > for me to see it)
> > > > >
> > > > > its almost as if all the frames and everything are being drawn, and
> > > > > the laptop is running fine and everything, but i simply just dont get
> > > > > to see it on the monitor, except for one time every 2 seconds.
> > > > >
> > > > > its hard to articulate, because its so bizarre. its not like, a "low
> > > > > fps" per se, because the cursor is totally smooth. but its that
> > > > > _everything else_ is only updated once every couple seconds.
> > > >
> > > > This might also be related to which GPU is the primary.  It still may
> > > > be the integrated GPU since that is what is attached to the laptop
> > > > panel.  Also the platform and some drivers may do certain things when
> > > > the lid is closed.  E.g., for thermal reasons, the integrated GPU or
> > > > CPU may have a more limited TDP because the laptop cannot cool as
> > > > efficiently.
> > > >
> > > > Alex

* Re: slow rx 5600 xt fps
  2020-05-19 21:22         ` Javad Karabi
@ 2020-05-19 21:42           ` Alex Deucher
  2020-05-20  1:16             ` Javad Karabi
  0 siblings, 1 reply; 24+ messages in thread
From: Alex Deucher @ 2020-05-19 21:42 UTC (permalink / raw)
  To: Javad Karabi; +Cc: amd-gfx list

On Tue, May 19, 2020 at 5:22 PM Javad Karabi <karabijavad@gmail.com> wrote:
>
> lol youre quick!
>
> "Windows has supported peer to peer DMA for years so it already has a
> numbers of optimizations that are only now becoming possible on Linux"
>
> whoa, i figured linux would be ahead of windows when it comes to
> things like that. but peer-to-peer dma is something that is only
> recently possible on linux, but has been possible on windows? what
> changed recently that allows for peer to peer dma in linux?
>

A few things that made this more complicated on Linux:
1. Linux uses IOMMUs more extensively than Windows, so you can't just
pass around physical bus addresses.
2. Linux supports lots of strange architectures that have a lot of
limitations with respect to peer-to-peer transactions.

It just took years to get all the necessary bits in place in Linux and
make everyone happy.

> also, in the context of a game running opengl on some gpu, is the
> "peer-to-peer" dma transfer something like: the game draw's to some
> memory it has allocated, then a DMA transfer gets that and moves it
> into the graphics card output?

Peer-to-peer DMA just lets a device access another device's local memory
directly.  So if you have a buffer in vram on one device, you can
share that directly with another device rather than having to copy it
to system memory first.  For example, if you have two GPUs, one of
them can copy its contents directly to a buffer in the other
GPU's vram rather than having to go through system memory first.
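
As a toy model (not driver code), this just counts the transfers in the
two copy paths described above, for one hypothetical 1440p frame:

```python
# Count how many times a shared frame's bytes move between devices,
# per the copy paths described above: staged through system memory
# versus a direct peer-to-peer DMA between the two GPUs' vram.

def transfers(frame_bytes, peer_to_peer):
    if peer_to_peer:
        # dGPU vram -> other GPU's vram, one direct DMA
        hops = ["dGPU vram -> iGPU vram"]
    else:
        # staged through a system-memory buffer
        hops = ["dGPU vram -> system memory", "system memory -> iGPU"]
    return hops, frame_bytes * len(hops)

frame = 2560 * 1440 * 4  # one 1440p frame at 4 bytes/pixel
for p2p in (False, True):
    hops, total = transfers(frame, p2p)
    print(f"p2p={p2p}: {len(hops)} transfer(s), {total} bytes moved")
```

Halving the bytes in flight (and skipping the intermediate buffer) is
where the win comes from.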

>
> also, i know it can be super annoying trying to debug an issue like
> this, with someone like me who has all types of differences from a
> normal setup (e.g. using it via egpu, using a kernel with custom
> configs and stuff) so as a token of my appreciation i donated 50$ to
> the red cross' corona virus outbreak charity thing, on behalf of
> amd-gfx.

Thanks,

Alex

>
> On Tue, May 19, 2020 at 4:13 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> >
> > On Tue, May 19, 2020 at 3:44 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > >
> > > just a couple more questions:
> > >
> > > - based on what you are aware of, the technical details such as
> > > "shared buffers go through system memory", and all that, do you see
> > > any issues that might exist that i might be missing in my setup? i
> > > cant imagine this being the case because the card works great in
> > > windows, unless the windows driver does something different?
> > >
> >
> > Windows has supported peer to peer DMA for years so it already has a
> > numbers of optimizations that are only now becoming possible on Linux.
> >
> > > - as far as kernel config, is there anything in particular which
> > > _should_ or _should not_ be enabled/disabled?
> >
> > You'll need the GPU drivers for your devices and dma-buf support.
> >
> > >
> > > - does the vendor matter? for instance, this is an xfx card. when it
> > > comes to different vendors, are there interface changes that might
> > > make one vendor work better for linux than another? i dont really
> > > understand the differences in vendors, but i imagine that the vbios
> > > differs between vendors, and as such, the linux compatibility would
> > > maybe change?
> >
> > board vendor shouldn't matter.
> >
> > >
> > > - is the pcie bandwidth possible an issue? the pcie_bw file changes
> > > between values like this:
> > > 18446683600662707640 18446744071581623085 128
> > > and sometimes i see this:
> > > 4096 0 128
> > > as you can see, the second value seems significantly lower. is that
> > > possibly an issue? possibly due to aspm?
> >
> > pcie_bw is not implemented for navi yet so you are just seeing
> > uninitialized data.  This patch set should clear that up.
> > https://patchwork.freedesktop.org/patch/366262/
> >
> > Alex
> >
> > >
> > > On Tue, May 19, 2020 at 2:20 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > >
> > > > im using Driver "amdgpu" in my xorg conf
> > > >
> > > > how does one verify which gpu is the primary? im assuming my intel
> > > > card is the primary, since i have not done anything to change that.
> > > >
> > > > also, if all shared buffers have to go through system memory, then
> > > > that means an eGPU amdgpu wont work very well in general right?
> > > > because going through system memory for the egpu means going over the
> > > > thunderbolt connection
> > > >
> > > > and what are the shared buffers youre referring to? for example, if an
> > > > application is drawing to a buffer, is that an example of a shared
> > > > buffer that has to go through system memory? if so, thats fine, right?
> > > > because the application's memory is in system memory, so that copy
> > > > wouldnt be an issue.
> > > >
> > > > in general, do you think the "copy buffer across system memory might
> > > > be a hindrance for thunderbolt? im trying to figure out which
> > > > directions to go to debug and im totally lost, so maybe i can do some
> > > > testing that direction?
> > > >
> > > > and for what its worth, when i turn the display "off" via the gnome
> > > > display settings, its the same issue as when the laptop lid is closed,
> > > > so unless the motherboard reads the "closed lid" the same as "display
> > > > off", then im not sure if its thermal issues.
> > > >
> > > > On Tue, May 19, 2020 at 2:14 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > > >
> > > > > On Tue, May 19, 2020 at 2:59 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > > >
> > > > > > given this setup:
> > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
> > > > > > DRI_PRIME=1 glxgears gears gives me ~300fps
> > > > > >
> > > > > > given this setup:
> > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> > > > > > laptop -hdmi-> monitor
> > > > > >
> > > > > > glx gears gives me ~1800fps
> > > > > >
> > > > > > this doesnt make sense to me because i thought that having the monitor
> > > > > > plugged directly into the card should give best performance.
> > > > > >
> > > > >
> > > > > Do you have displays connected to both GPUs?  If you are using X which
> > > > > ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
> > > > > IIRC, xf86-video-amdgpu has some optimizations for prime which are not
> > > > > yet in xf86-video-modesetting.  Which GPU is set up as the primary?
> > > > > Note that the GPU which does the rendering is not necessarily the one
> > > > > that the displays are attached to.  The render GPU renders to it's
> > > > > render buffer and then that data may end up being copied other GPUs
> > > > > for display.  Also, at this point, all shared buffers have to go
> > > > > through system memory (this will be changing eventually now that we
> > > > > support device memory via dma-buf), so there is often an extra copy
> > > > > involved.
> > > > >
> > > > > > theres another really weird issue...
> > > > > >
> > > > > > given setup 1, where the monitor is plugged in to the card:
> > > > > > when i close the laptop lid, my monitor is "active" and whatnot, and i
> > > > > > can "use it" in a sense
> > > > > >
> > > > > > however, heres the weirdness:
> > > > > > the mouse cursor will move along the monitor perfectly smooth and
> > > > > > fine, but all the other updates to the screen are delayed by about 2
> > > > > > or 3 seconds.
> > > > > > that is to say, its as if the laptop is doing everything (e.g. if i
> > > > > > open a terminal, the terminal will open, but it will take 2 seconds
> > > > > > for me to see it)
> > > > > >
> > > > > > its almost as if all the frames and everything are being drawn, and
> > > > > > the laptop is running fine and everything, but i simply just dont get
> > > > > > to see it on the monitor, except for one time every 2 seconds.
> > > > > >
> > > > > > its hard to articulate, because its so bizarre. its not like, a "low
> > > > > > fps" per se, because the cursor is totally smooth. but its that
> > > > > > _everything else_ is only updated once every couple seconds.
> > > > >
> > > > > This might also be related to which GPU is the primary.  It still may
> > > > > be the integrated GPU since that is what is attached to the laptop
> > > > > panel.  Also the platform and some drivers may do certain things when
> > > > > the lid is closed.  E.g., for thermal reasons, the integrated GPU or
> > > > > CPU may have a more limited TDP because the laptop cannot cool as
> > > > > efficiently.
> > > > >
> > > > > Alex

* Re: slow rx 5600 xt fps
  2020-05-19 21:42           ` Alex Deucher
@ 2020-05-20  1:16             ` Javad Karabi
  2020-05-20  1:19               ` Javad Karabi
                                 ` (2 more replies)
  0 siblings, 3 replies; 24+ messages in thread
From: Javad Karabi @ 2020-05-20  1:16 UTC (permalink / raw)
  To: Alex Deucher; +Cc: amd-gfx list

thanks for the answers alex.

so, i went ahead and got a displayport cable to see if that changes
anything. and now, when i run monitor only, and the monitor connected
to the card, it has no issues like before! so i am thinking that
somethings up with either the hdmi cable, or some hdmi related setting
in my system? who knows, but im just gonna roll with only using
displayport cables now.
the previous hdmi cable was actually pretty long, because i was
extending it with an hdmi extension cable, so maybe the signal was
really bad or something :/

but yea, i guess the only real issue now is maybe something simple
related to some sysfs entry about enabling some powermode, voltage,
clock frequency, or something, so that glxgears will give me more than
300 fps. but atleast now i can use a single monitor configuration with
the monitor displayported up to the card.

also, one other thing i think you might be interested in, that was
happening before.

so, previously, with laptop -tb3-> egpu -hdmi-> monitor, there was a
funny thing happening which i never could figure out.
when i would look at the X logs, i would see that "modesetting" (for
the intel integrated graphics) was reporting that MonitorA was used
with "eDP-1",  which is correct and what i expected.
when i scrolled further down, i then saw that "HDMI-A-1-2" was being
used for another MonitorB, which also is what i expected (albeit i
have no idea why its saying A-1-2)
but amdgpu was _also_ saying that DisplayPort-1-2 (a port on the
radeon card) was being used for MonitorA, which is the same Monitor
that the modesetting driver had claimed to be using with eDP-1!

so the point is that amdgpu was "using" MonitorA with DisplayPort-1-2,
although that is what modesetting was using for eDP-1.

anyway, thats a little aside, i doubt it was related to the terrible
hdmi experience i was getting, since its about display port and stuff,
but i thought id let you know about that.

if you think that is a possible issue, im more than happy to plug the
hdmi setup back in and create an issue on gitlab with the logs and
everything

On Tue, May 19, 2020 at 4:42 PM Alex Deucher <alexdeucher@gmail.com> wrote:
>
> On Tue, May 19, 2020 at 5:22 PM Javad Karabi <karabijavad@gmail.com> wrote:
> >
> > lol youre quick!
> >
> > "Windows has supported peer to peer DMA for years so it already has a
> > numbers of optimizations that are only now becoming possible on Linux"
> >
> > whoa, i figured linux would be ahead of windows when it comes to
> > things like that. but peer-to-peer dma is something that is only
> > recently possible on linux, but has been possible on windows? what
> > changed recently that allows for peer to peer dma in linux?
> >
>
> A few things that made this more complicated on Linux:
> 1. Linux uses IOMMUs more extensively than windows so you can't just
> pass around physical bus addresses.
> 2. Linux supports lots of strange architectures that have a lot of
> limitations with respect to peer to peer transactions
>
> It just took years to get all the necessary bits in place in Linux and
> make everyone happy.
>
> > also, in the context of a game running opengl on some gpu, is the
> > "peer-to-peer" dma transfer something like: the game draw's to some
> > memory it has allocated, then a DMA transfer gets that and moves it
> > into the graphics card output?
>
> Peer to peer DMA just lets devices access another devices local memory
> directly.  So if you have a buffer in vram on one device, you can
> share that directly with another device rather than having to copy it
> to system memory first.  For example, if you have two GPUs, you can
> have one of them copy it's content directly to a buffer in the other
> GPU's vram rather than having to go through system memory first.
>
> >
> > also, i know it can be super annoying trying to debug an issue like
> > this, with someone like me who has all types of differences from a
> > normal setup (e.g. using it via egpu, using a kernel with custom
> > configs and stuff) so as a token of my appreciation i donated 50$ to
> > the red cross' corona virus outbreak charity thing, on behalf of
> > amd-gfx.
>
> Thanks,
>
> Alex
>
> >
> > On Tue, May 19, 2020 at 4:13 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > >
> > > On Tue, May 19, 2020 at 3:44 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > >
> > > > just a couple more questions:
> > > >
> > > > - based on what you are aware of, the technical details such as
> > > > "shared buffers go through system memory", and all that, do you see
> > > > any issues that might exist that i might be missing in my setup? i
> > > > cant imagine this being the case because the card works great in
> > > > windows, unless the windows driver does something different?
> > > >
> > >
> > > Windows has supported peer to peer DMA for years so it already has a
> > > numbers of optimizations that are only now becoming possible on Linux.
> > >
> > > > - as far as kernel config, is there anything in particular which
> > > > _should_ or _should not_ be enabled/disabled?
> > >
> > > You'll need the GPU drivers for your devices and dma-buf support.
> > >
> > > >
> > > > - does the vendor matter? for instance, this is an xfx card. when it
> > > > comes to different vendors, are there interface changes that might
> > > > make one vendor work better for linux than another? i dont really
> > > > understand the differences in vendors, but i imagine that the vbios
> > > > differs between vendors, and as such, the linux compatibility would
> > > > maybe change?
> > >
> > > board vendor shouldn't matter.
> > >
> > > >
> > > > - is the pcie bandwidth possibly an issue? the pcie_bw file changes
> > > > between values like this:
> > > > 18446683600662707640 18446744071581623085 128
> > > > and sometimes i see this:
> > > > 4096 0 128
> > > > as you can see, the second value seems significantly lower. is that
> > > > possibly an issue? possibly due to aspm?
> > >
> > > pcie_bw is not implemented for navi yet so you are just seeing
> > > uninitialized data.  This patch set should clear that up.
> > > https://patchwork.freedesktop.org/patch/366262/
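Once pcie_bw reports real data, its three fields can be sanity-checked with a small script. Per the amdgpu sysfs docs the line is `<packets sent> <packets received> <max payload size in bytes>` over roughly a one-second interval; the device path below is an assumption for a single-GPU setup:

```python
def parse_pcie_bw(text):
    """Parse an amdgpu pcie_bw line: '<sent> <received> <max_payload_bytes>'."""
    sent, received, mps = (int(field) for field in text.split())
    return sent, received, mps

def estimated_bytes_per_interval(text):
    """Rough upper bound assuming every counted packet carries a full payload."""
    sent, received, mps = parse_pcie_bw(text)
    return (sent + received) * mps

# Live counters need an amdgpu device (card index is an assumption):
# with open("/sys/class/drm/card0/device/pcie_bw") as f:
#     print(estimated_bytes_per_interval(f.read()))

print(parse_pcie_bw("4096 0 128"))                 # (4096, 0, 128)
print(estimated_bytes_per_interval("4096 0 128"))  # 524288 bytes, ~0.5 MiB
```

On a pre-patch navi kernel the huge first two values above would just be the uninitialized garbage Alex mentions.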
> > >
> > > Alex
> > >
> > > >
> > > > On Tue, May 19, 2020 at 2:20 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > >
> > > > > im using Driver "amdgpu" in my xorg conf
> > > > >
> > > > > how does one verify which gpu is the primary? im assuming my intel
> > > > > card is the primary, since i have not done anything to change that.
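One way to answer this under X is `xrandr --listproviders`: provider 0 is the primary, and with PRIME offload the render GPU advertises `Source Offload` while the display GPU advertises `Sink Output`. A small parser over illustrative output (the ids and names below are made up for this laptop/eGPU combination, not taken from the poster's logs):

```python
import re

def providers(listproviders_output):
    """Return (index, name) pairs from `xrandr --listproviders` output."""
    pat = re.compile(r"Provider (\d+):.*name:\s*(.+)")
    return [(int(m.group(1)), m.group(2).strip())
            for m in map(pat.search, listproviders_output.splitlines()) if m]

# Illustrative output; real ids, caps, and names will differ per machine.
sample = """Providers: number : 2
Provider 0: id: 0x43 cap: 0x9, Source Output, Sink Offload crtcs: 3 outputs: 2 associated providers: 1 name:modesetting
Provider 1: id: 0x1b8 cap: 0x6, Sink Output, Source Offload crtcs: 6 outputs: 5 associated providers: 1 name:AMD Radeon RX 5600 XT @ pci:0000:0b:00.0
"""

# Provider 0 is the primary -- here the Intel iGPU via modesetting.
print(providers(sample)[0])
```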
> > > > >
> > > > > also, if all shared buffers have to go through system memory, then
> > > > > that means an eGPU amdgpu wont work very well in general right?
> > > > > because going through system memory for the egpu means going over the
> > > > > thunderbolt connection
> > > > >
> > > > > and what are the shared buffers youre referring to? for example, if an
> > > > > application is drawing to a buffer, is that an example of a shared
> > > > > buffer that has to go through system memory? if so, thats fine, right?
> > > > > because the application's memory is in system memory, so that copy
> > > > > wouldnt be an issue.
> > > > >
> > > > > in general, do you think the "copy buffer across system memory might
> > > > > be a hindrance for thunderbolt? im trying to figure out which
> > > > > directions to go to debug and im totally lost, so maybe i can do some
> > > > > testing that direction?
> > > > >
> > > > > and for what its worth, when i turn the display "off" via the gnome
> > > > > display settings, its the same issue as when the laptop lid is closed,
> > > > > so unless the motherboard reads the "closed lid" the same as "display
> > > > > off", then im not sure if its thermal issues.
> > > > >
> > > > > On Tue, May 19, 2020 at 2:14 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > > > >
> > > > > > On Tue, May 19, 2020 at 2:59 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > > > >
> > > > > > > given this setup:
> > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
> > > > > > > DRI_PRIME=1 glxgears gears gives me ~300fps
> > > > > > >
> > > > > > > given this setup:
> > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> > > > > > > laptop -hdmi-> monitor
> > > > > > >
> > > > > > > glx gears gives me ~1800fps
> > > > > > >
> > > > > > > this doesnt make sense to me because i thought that having the monitor
> > > > > > > plugged directly into the card should give best performance.
> > > > > > >
> > > > > >
> > > > > > Do you have displays connected to both GPUs?  If you are using X which
> > > > > > ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
> > > > > > IIRC, xf86-video-amdgpu has some optimizations for prime which are not
> > > > > > yet in xf86-video-modesetting.  Which GPU is set up as the primary?
> > > > > > Note that the GPU which does the rendering is not necessarily the one
> > > > > > that the displays are attached to.  The render GPU renders to its
> > > > > > render buffer and then that data may end up being copied to other GPUs
> > > > > > for display.  Also, at this point, all shared buffers have to go
> > > > > > through system memory (this will be changing eventually now that we
> > > > > > support device memory via dma-buf), so there is often an extra copy
> > > > > > involved.
> > > > > >
> > > > > > > theres another really weird issue...
> > > > > > >
> > > > > > > given setup 1, where the monitor is plugged in to the card:
> > > > > > > when i close the laptop lid, my monitor is "active" and whatnot, and i
> > > > > > > can "use it" in a sense
> > > > > > >
> > > > > > > however, heres the weirdness:
> > > > > > > the mouse cursor will move along the monitor perfectly smooth and
> > > > > > > fine, but all the other updates to the screen are delayed by about 2
> > > > > > > or 3 seconds.
> > > > > > > that is to say, its as if the laptop is doing everything (e.g. if i
> > > > > > > open a terminal, the terminal will open, but it will take 2 seconds
> > > > > > > for me to see it)
> > > > > > >
> > > > > > > its almost as if all the frames and everything are being drawn, and
> > > > > > > the laptop is running fine and everything, but i simply just dont get
> > > > > > > to see it on the monitor, except for one time every 2 seconds.
> > > > > > >
> > > > > > > its hard to articulate, because its so bizarre. its not like, a "low
> > > > > > > fps" per se, because the cursor is totally smooth. but its that
> > > > > > > _everything else_ is only updated once every couple seconds.
> > > > > >
> > > > > > This might also be related to which GPU is the primary.  It still may
> > > > > > be the integrated GPU since that is what is attached to the laptop
> > > > > > panel.  Also the platform and some drivers may do certain things when
> > > > > > the lid is closed.  E.g., for thermal reasons, the integrated GPU or
> > > > > > CPU may have a more limited TDP because the laptop cannot cool as
> > > > > > efficiently.
> > > > > >
> > > > > > Alex
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: slow rx 5600 xt fps
  2020-05-20  1:16             ` Javad Karabi
@ 2020-05-20  1:19               ` Javad Karabi
  2020-05-20  1:20               ` Bridgman, John
  2020-05-20  2:29               ` Alex Deucher
  2 siblings, 0 replies; 24+ messages in thread
From: Javad Karabi @ 2020-05-20  1:19 UTC (permalink / raw)
  To: Alex Deucher; +Cc: amd-gfx list

s/Monitor0/MonitorA

(the Monitor0 and Monitor1 are actually Monitor4 (for the laptop) and
Monitor0 (for the hdmi output), at least i think those were the numbers.)
they were autogenerated Monitor identifiers by xorg, so i dont
remember the exact numbers, but either way, for some reason the
radeon's DisplayPort-1-2 was "using" the same monitor as modesetting's
eDP1

On Tue, May 19, 2020 at 8:16 PM Javad Karabi <karabijavad@gmail.com> wrote:
>
> thanks for the answers alex.
>
> so, i went ahead and got a displayport cable to see if that changes
> anything. and now, when i run monitor only, and the monitor connected
> to the card, it has no issues like before! so i am thinking that
> somethings up with either the hdmi cable, or some hdmi related setting
> in my system? who knows, but im just gonna roll with only using
> displayport cables now.
> the previous hdmi cable was actually pretty long, because i was
> extending it with an hdmi extension cable, so maybe the signal was
> really bad or something :/
>
> but yea, i guess the only real issue now is maybe something simple
> related to some sysfs entry about enabling some powermode, voltage,
> clock frequency, or something, so that glxgears will give me more than
> 300 fps. but at least now i can use a single monitor configuration with
> the monitor displayported up to the card.
>
> also, one other thing i think you might be interested in, that was
> happening before.
>
> so, previously, with laptop -tb3-> egpu-hdmi> monitor, there was a
> funny thing happening which i never could figure out.
> when i would look at the X logs, i would see that "modesetting" (for
> the intel integrated graphics) was reporting that MonitorA was used
> with "eDP-1",  which is correct and what i expected.
> when i scrolled further down, i then saw that "HDMI-A-1-2" was being
> used for another MonitorB, which also is what i expected (albeit i
> have no idea why its saying A-1-2)
> but amdgpu was _also_ saying that DisplayPort-1-2 (a port on the
> radeon card) was being used for MonitorA, which is the same Monitor
> that the modesetting driver had claimed to be using with eDP-1!
>
> so the point is that amdgpu was "using" Monitor0 with DisplayPort-1-2,
> although that is what modesetting was using for eDP-1.
>
> anyway, thats a little aside, i doubt it was related to the terrible
> hdmi experience i was getting, since its about display port and stuff,
> but i thought id let you know about that.
>
> if you think that is a possible issue, im more than happy to plug the
> hdmi setup back in and create an issue on gitlab with the logs and
> everything
>
> On Tue, May 19, 2020 at 4:42 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> >
> > On Tue, May 19, 2020 at 5:22 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > >
> > > lol youre quick!
> > >
> > > "Windows has supported peer to peer DMA for years so it already has a
> > > numbers of optimizations that are only now becoming possible on Linux"
> > >
> > > whoa, i figured linux would be ahead of windows when it comes to
> > > things like that. but peer-to-peer dma is something that is only
> > > recently possible on linux, but has been possible on windows? what
> > > changed recently that allows for peer to peer dma in linux?
> > >
> >
> > A few things that made this more complicated on Linux:
> > 1. Linux uses IOMMUs more extensively than windows so you can't just
> > pass around physical bus addresses.
> > 2. Linux supports lots of strange architectures that have a lot of
> > limitations with respect to peer to peer transactions
> >
> > It just took years to get all the necessary bits in place in Linux and
> > make everyone happy.
> >
> > > also, in the context of a game running opengl on some gpu, is the
> > > "peer-to-peer" dma transfer something like: the game draw's to some
> > > memory it has allocated, then a DMA transfer gets that and moves it
> > > into the graphics card output?
> >
> > Peer to peer DMA just lets devices access another device's local memory
> > directly.  So if you have a buffer in vram on one device, you can
> > share that directly with another device rather than having to copy it
> > to system memory first.  For example, if you have two GPUs, you can
> > have one of them copy its contents directly to a buffer in the other
> > GPU's vram rather than having to go through system memory first.
> >
> > >
> > > also, i know it can be super annoying trying to debug an issue like
> > > this, with someone like me who has all types of differences from a
> > > normal setup (e.g. using it via egpu, using a kernel with custom
> > > configs and stuff) so as a token of my appreciation i donated 50$ to
> > > the red cross' corona virus outbreak charity thing, on behalf of
> > > amd-gfx.
> >
> > Thanks,
> >
> > Alex
> >
> > >
> > > On Tue, May 19, 2020 at 4:13 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > >
> > > > On Tue, May 19, 2020 at 3:44 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > >
> > > > > just a couple more questions:
> > > > >
> > > > > - based on what you are aware of, the technical details such as
> > > > > "shared buffers go through system memory", and all that, do you see
> > > > > any issues that might exist that i might be missing in my setup? i
> > > > > cant imagine this being the case because the card works great in
> > > > > windows, unless the windows driver does something different?
> > > > >
> > > >
> > > > Windows has supported peer to peer DMA for years so it already has a
> > > > number of optimizations that are only now becoming possible on Linux.
> > > >
> > > > > - as far as kernel config, is there anything in particular which
> > > > > _should_ or _should not_ be enabled/disabled?
> > > >
> > > > You'll need the GPU drivers for your devices and dma-buf support.
> > > >
> > > > >
> > > > > - does the vendor matter? for instance, this is an xfx card. when it
> > > > > comes to different vendors, are there interface changes that might
> > > > > make one vendor work better for linux than another? i dont really
> > > > > understand the differences in vendors, but i imagine that the vbios
> > > > > differs between vendors, and as such, the linux compatibility would
> > > > > maybe change?
> > > >
> > > > board vendor shouldn't matter.
> > > >
> > > > >
> > > > > - is the pcie bandwidth possibly an issue? the pcie_bw file changes
> > > > > between values like this:
> > > > > 18446683600662707640 18446744071581623085 128
> > > > > and sometimes i see this:
> > > > > 4096 0 128
> > > > > as you can see, the second value seems significantly lower. is that
> > > > > possibly an issue? possibly due to aspm?
> > > >
> > > > pcie_bw is not implemented for navi yet so you are just seeing
> > > > uninitialized data.  This patch set should clear that up.
> > > > https://patchwork.freedesktop.org/patch/366262/
> > > >
> > > > Alex
> > > >
> > > > >
> > > > > On Tue, May 19, 2020 at 2:20 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > > >
> > > > > > im using Driver "amdgpu" in my xorg conf
> > > > > >
> > > > > > how does one verify which gpu is the primary? im assuming my intel
> > > > > > card is the primary, since i have not done anything to change that.
> > > > > >
> > > > > > also, if all shared buffers have to go through system memory, then
> > > > > > that means an eGPU amdgpu wont work very well in general right?
> > > > > > because going through system memory for the egpu means going over the
> > > > > > thunderbolt connection
> > > > > >
> > > > > > and what are the shared buffers youre referring to? for example, if an
> > > > > > application is drawing to a buffer, is that an example of a shared
> > > > > > buffer that has to go through system memory? if so, thats fine, right?
> > > > > > because the application's memory is in system memory, so that copy
> > > > > > wouldnt be an issue.
> > > > > >
> > > > > > in general, do you think the "copy buffer across system memory might
> > > > > > be a hindrance for thunderbolt? im trying to figure out which
> > > > > > directions to go to debug and im totally lost, so maybe i can do some
> > > > > > testing that direction?
> > > > > >
> > > > > > and for what its worth, when i turn the display "off" via the gnome
> > > > > > display settings, its the same issue as when the laptop lid is closed,
> > > > > > so unless the motherboard reads the "closed lid" the same as "display
> > > > > > off", then im not sure if its thermal issues.
> > > > > >
> > > > > > On Tue, May 19, 2020 at 2:14 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > > > > >
> > > > > > > On Tue, May 19, 2020 at 2:59 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > > > > >
> > > > > > > > given this setup:
> > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
> > > > > > > > DRI_PRIME=1 glxgears gears gives me ~300fps
> > > > > > > >
> > > > > > > > given this setup:
> > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> > > > > > > > laptop -hdmi-> monitor
> > > > > > > >
> > > > > > > > glx gears gives me ~1800fps
> > > > > > > >
> > > > > > > > this doesnt make sense to me because i thought that having the monitor
> > > > > > > > plugged directly into the card should give best performance.
> > > > > > > >
> > > > > > >
> > > > > > > Do you have displays connected to both GPUs?  If you are using X which
> > > > > > > ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
> > > > > > > IIRC, xf86-video-amdgpu has some optimizations for prime which are not
> > > > > > > yet in xf86-video-modesetting.  Which GPU is set up as the primary?
> > > > > > > Note that the GPU which does the rendering is not necessarily the one
> > > > > > > that the displays are attached to.  The render GPU renders to its
> > > > > > > render buffer and then that data may end up being copied to other GPUs
> > > > > > > for display.  Also, at this point, all shared buffers have to go
> > > > > > > through system memory (this will be changing eventually now that we
> > > > > > > support device memory via dma-buf), so there is often an extra copy
> > > > > > > involved.
> > > > > > >
> > > > > > > > theres another really weird issue...
> > > > > > > >
> > > > > > > > given setup 1, where the monitor is plugged in to the card:
> > > > > > > > when i close the laptop lid, my monitor is "active" and whatnot, and i
> > > > > > > > can "use it" in a sense
> > > > > > > >
> > > > > > > > however, heres the weirdness:
> > > > > > > > the mouse cursor will move along the monitor perfectly smooth and
> > > > > > > > fine, but all the other updates to the screen are delayed by about 2
> > > > > > > > or 3 seconds.
> > > > > > > > that is to say, its as if the laptop is doing everything (e.g. if i
> > > > > > > > open a terminal, the terminal will open, but it will take 2 seconds
> > > > > > > > for me to see it)
> > > > > > > >
> > > > > > > > its almost as if all the frames and everything are being drawn, and
> > > > > > > > the laptop is running fine and everything, but i simply just dont get
> > > > > > > > to see it on the monitor, except for one time every 2 seconds.
> > > > > > > >
> > > > > > > > its hard to articulate, because its so bizarre. its not like, a "low
> > > > > > > > fps" per se, because the cursor is totally smooth. but its that
> > > > > > > > _everything else_ is only updated once every couple seconds.
> > > > > > >
> > > > > > > This might also be related to which GPU is the primary.  It still may
> > > > > > > be the integrated GPU since that is what is attached to the laptop
> > > > > > > panel.  Also the platform and some drivers may do certain things when
> > > > > > > the lid is closed.  E.g., for thermal reasons, the integrated GPU or
> > > > > > > CPU may have a more limited TDP because the laptop cannot cool as
> > > > > > > efficiently.
> > > > > > >
> > > > > > > Alex

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: slow rx 5600 xt fps
  2020-05-20  1:16             ` Javad Karabi
  2020-05-20  1:19               ` Javad Karabi
@ 2020-05-20  1:20               ` Bridgman, John
  2020-05-20  1:35                 ` Javad Karabi
  2020-05-20  2:29               ` Alex Deucher
  2 siblings, 1 reply; 24+ messages in thread
From: Bridgman, John @ 2020-05-20  1:20 UTC (permalink / raw)
  To: Javad Karabi, Alex Deucher; +Cc: amd-gfx list



Suggest you use something more demanding than glxgears as a test - part of the problem is that glxgears runs so fast normally (30x faster than your display) that even a small amount of overhead copying a frame from one place to another makes a huge difference in FPS.

If you use a test program that normally runs at 90 FPS you'll probably find that the "slow" speed is something like 85 FPS, rather than the 6:1 difference you see with glxgears.
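The arithmetic behind this point is worth spelling out: going from ~1800 to ~300 fps implies only ~2.8 ms of extra per-frame copy time, and adding that same 2.8 ms to a 90 fps workload drops it to just ~72 fps. A quick sketch of the frame-time math:

```python
def per_frame_overhead(fps_direct, fps_via_copy):
    """Extra per-frame time (seconds) implied by an fps drop."""
    return 1.0 / fps_via_copy - 1.0 / fps_direct

def fps_with_overhead(fps_direct, overhead_s):
    """FPS after adding a fixed per-frame cost."""
    return 1.0 / (1.0 / fps_direct + overhead_s)

ov = per_frame_overhead(1800, 300)
print(round(ov * 1000, 2))               # ~2.78 ms of copy overhead per frame
print(round(fps_with_overhead(90, ov)))  # ~72 fps: a far smaller relative hit
```

So a 6:1 drop in glxgears can correspond to a barely noticeable drop in a real workload.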

________________________________
From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> on behalf of Javad Karabi <karabijavad@gmail.com>
Sent: May 19, 2020 9:16 PM
To: Alex Deucher <alexdeucher@gmail.com>
Cc: amd-gfx list <amd-gfx@lists.freedesktop.org>
Subject: Re: slow rx 5600 xt fps

thanks for the answers alex.

so, i went ahead and got a displayport cable to see if that changes
anything. and now, when i run monitor only, and the monitor connected
to the card, it has no issues like before! so i am thinking that
somethings up with either the hdmi cable, or some hdmi related setting
in my system? who knows, but im just gonna roll with only using
displayport cables now.
the previous hdmi cable was actually pretty long, because i was
extending it with an hdmi extension cable, so maybe the signal was
really bad or something :/

but yea, i guess the only real issue now is maybe something simple
related to some sysfs entry about enabling some powermode, voltage,
clock frequency, or something, so that glxgears will give me more than
300 fps. but at least now i can use a single monitor configuration with
the monitor displayported up to the card.
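On the sysfs angle: amdgpu exposes `power_dpm_force_performance_level` (write `high` or `auto` as root) and `pp_dpm_sclk`, where a trailing `*` marks the active clock level. A sketch that parses the latter (the card index and the sample clock table are assumptions, not the poster's actual values):

```python
def active_sclk_level(pp_dpm_sclk_text):
    """Return (level, clock string) for the line marked active with '*'."""
    for line in pp_dpm_sclk_text.splitlines():
        if line.rstrip().endswith("*"):
            level, clock = line.rstrip(" *").split(":")
            return int(level), clock.strip()
    return None

# To force clocks up (needs root; card index is an assumption):
#   echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level
# then inspect /sys/class/drm/card0/device/pp_dpm_sclk:

sample = "0: 300Mhz\n1: 700Mhz *\n2: 1750Mhz\n"
print(active_sclk_level(sample))  # (1, '700Mhz')
```

If the card sits at a low level even under load, that could explain a soft fps cap like the 300 fps seen here.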

also, one other thing i think you might be interested in, that was
happening before.

so, previously, with laptop -tb3-> egpu-hdmi> monitor, there was a
funny thing happening which i never could figure out.
when i would look at the X logs, i would see that "modesetting" (for
the intel integrated graphics) was reporting that MonitorA was used
with "eDP-1",  which is correct and what i expected.
when i scrolled further down, i then saw that "HDMI-A-1-2" was being
used for another MonitorB, which also is what i expected (albeit i
have no idea why its saying A-1-2)
but amdgpu was _also_ saying that DisplayPort-1-2 (a port on the
radeon card) was being used for MonitorA, which is the same Monitor
that the modesetting driver had claimed to be using with eDP-1!

so the point is that amdgpu was "using" Monitor0 with DisplayPort-1-2,
although that is what modesetting was using for eDP-1.

anyway, thats a little aside, i doubt it was related to the terrible
hdmi experience i was getting, since its about display port and stuff,
but i thought id let you know about that.

if you think that is a possible issue, im more than happy to plug the
hdmi setup back in and create an issue on gitlab with the logs and
everything

On Tue, May 19, 2020 at 4:42 PM Alex Deucher <alexdeucher@gmail.com> wrote:
>
> On Tue, May 19, 2020 at 5:22 PM Javad Karabi <karabijavad@gmail.com> wrote:
> >
> > lol youre quick!
> >
> > "Windows has supported peer to peer DMA for years so it already has a
> > numbers of optimizations that are only now becoming possible on Linux"
> >
> > whoa, i figured linux would be ahead of windows when it comes to
> > things like that. but peer-to-peer dma is something that is only
> > recently possible on linux, but has been possible on windows? what
> > changed recently that allows for peer to peer dma in linux?
> >
>
> A few things that made this more complicated on Linux:
> 1. Linux uses IOMMUs more extensively than windows so you can't just
> pass around physical bus addresses.
> 2. Linux supports lots of strange architectures that have a lot of
> limitations with respect to peer to peer transactions
>
> It just took years to get all the necessary bits in place in Linux and
> make everyone happy.
>
> > also, in the context of a game running opengl on some gpu, is the
> > "peer-to-peer" dma transfer something like: the game draw's to some
> > memory it has allocated, then a DMA transfer gets that and moves it
> > into the graphics card output?
>
> Peer to peer DMA just lets devices access another device's local memory
> directly.  So if you have a buffer in vram on one device, you can
> share that directly with another device rather than having to copy it
> to system memory first.  For example, if you have two GPUs, you can
> have one of them copy its contents directly to a buffer in the other
> GPU's vram rather than having to go through system memory first.
>
> >
> > also, i know it can be super annoying trying to debug an issue like
> > this, with someone like me who has all types of differences from a
> > normal setup (e.g. using it via egpu, using a kernel with custom
> > configs and stuff) so as a token of my appreciation i donated 50$ to
> > the red cross' corona virus outbreak charity thing, on behalf of
> > amd-gfx.
>
> Thanks,
>
> Alex
>
> >
> > On Tue, May 19, 2020 at 4:13 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > >
> > > On Tue, May 19, 2020 at 3:44 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > >
> > > > just a couple more questions:
> > > >
> > > > - based on what you are aware of, the technical details such as
> > > > "shared buffers go through system memory", and all that, do you see
> > > > any issues that might exist that i might be missing in my setup? i
> > > > cant imagine this being the case because the card works great in
> > > > windows, unless the windows driver does something different?
> > > >
> > >
> > > Windows has supported peer to peer DMA for years so it already has a
> > > number of optimizations that are only now becoming possible on Linux.
> > >
> > > > - as far as kernel config, is there anything in particular which
> > > > _should_ or _should not_ be enabled/disabled?
> > >
> > > You'll need the GPU drivers for your devices and dma-buf support.
> > >
> > > >
> > > > - does the vendor matter? for instance, this is an xfx card. when it
> > > > comes to different vendors, are there interface changes that might
> > > > make one vendor work better for linux than another? i dont really
> > > > understand the differences in vendors, but i imagine that the vbios
> > > > differs between vendors, and as such, the linux compatibility would
> > > > maybe change?
> > >
> > > board vendor shouldn't matter.
> > >
> > > >
> > > > - is the pcie bandwidth possibly an issue? the pcie_bw file changes
> > > > between values like this:
> > > > 18446683600662707640 18446744071581623085 128
> > > > and sometimes i see this:
> > > > 4096 0 128
> > > > as you can see, the second value seems significantly lower. is that
> > > > possibly an issue? possibly due to aspm?
> > >
> > > pcie_bw is not implemented for navi yet so you are just seeing
> > > uninitialized data.  This patch set should clear that up.
> > > https://patchwork.freedesktop.org/patch/366262/
> > >
> > > Alex
> > >
> > > >
> > > > On Tue, May 19, 2020 at 2:20 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > >
> > > > > im using Driver "amdgpu" in my xorg conf
> > > > >
> > > > > how does one verify which gpu is the primary? im assuming my intel
> > > > > card is the primary, since i have not done anything to change that.
> > > > >
> > > > > also, if all shared buffers have to go through system memory, then
> > > > > that means an eGPU amdgpu wont work very well in general right?
> > > > > because going through system memory for the egpu means going over the
> > > > > thunderbolt connection
> > > > >
> > > > > and what are the shared buffers youre referring to? for example, if an
> > > > > application is drawing to a buffer, is that an example of a shared
> > > > > buffer that has to go through system memory? if so, thats fine, right?
> > > > > because the application's memory is in system memory, so that copy
> > > > > wouldnt be an issue.
> > > > >
> > > > > in general, do you think the "copy buffer across system memory might
> > > > > be a hindrance for thunderbolt? im trying to figure out which
> > > > > directions to go to debug and im totally lost, so maybe i can do some
> > > > > testing that direction?
> > > > >
> > > > > and for what its worth, when i turn the display "off" via the gnome
> > > > > display settings, its the same issue as when the laptop lid is closed,
> > > > > so unless the motherboard reads the "closed lid" the same as "display
> > > > > off", then im not sure if its thermal issues.
> > > > >
> > > > > On Tue, May 19, 2020 at 2:14 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > > > >
> > > > > > On Tue, May 19, 2020 at 2:59 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > > > >
> > > > > > > given this setup:
> > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
> > > > > > > DRI_PRIME=1 glxgears gears gives me ~300fps
> > > > > > >
> > > > > > > given this setup:
> > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> > > > > > > laptop -hdmi-> monitor
> > > > > > >
> > > > > > > glx gears gives me ~1800fps
> > > > > > >
> > > > > > > this doesnt make sense to me because i thought that having the monitor
> > > > > > > plugged directly into the card should give best performance.
> > > > > > >
> > > > > >
> > > > > > Do you have displays connected to both GPUs?  If you are using X which
> > > > > > ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
> > > > > > IIRC, xf86-video-amdgpu has some optimizations for prime which are not
> > > > > > yet in xf86-video-modesetting.  Which GPU is set up as the primary?
> > > > > > Note that the GPU which does the rendering is not necessarily the one
> > > > > > that the displays are attached to.  The render GPU renders to its
> > > > > > render buffer and then that data may end up being copied to other GPUs
> > > > > > for display.  Also, at this point, all shared buffers have to go
> > > > > > through system memory (this will be changing eventually now that we
> > > > > > support device memory via dma-buf), so there is often an extra copy
> > > > > > involved.
> > > > > >
> > > > > > > theres another really weird issue...
> > > > > > >
> > > > > > > given setup 1, where the monitor is plugged in to the card:
> > > > > > > when i close the laptop lid, my monitor is "active" and whatnot, and i
> > > > > > > can "use it" in a sense
> > > > > > >
> > > > > > > however, heres the weirdness:
> > > > > > > the mouse cursor will move along the monitor perfectly smooth and
> > > > > > > fine, but all the other updates to the screen are delayed by about 2
> > > > > > > or 3 seconds.
> > > > > > > that is to say, its as if the laptop is doing everything (e.g. if i
> > > > > > > open a terminal, the terminal will open, but it will take 2 seconds
> > > > > > > for me to see it)
> > > > > > >
> > > > > > > its almost as if all the frames and everything are being drawn, and
> > > > > > > the laptop is running fine and everything, but i simply just dont get
> > > > > > > to see it on the monitor, except for one time every 2 seconds.
> > > > > > >
> > > > > > > its hard to articulate, because its so bizarre. its not like, a "low
> > > > > > > fps" per se, because the cursor is totally smooth. but its that
> > > > > > > _everything else_ is only updated once every couple seconds.
> > > > > >
> > > > > > This might also be related to which GPU is the primary.  It still may
> > > > > > be the integrated GPU since that is what is attached to the laptop
> > > > > > panel.  Also the platform and some drivers may do certain things when
> > > > > > the lid is closed.  E.g., for thermal reasons, the integrated GPU or
> > > > > > CPU may have a more limited TDP because the laptop cannot cool as
> > > > > > efficiently.
> > > > > >
> > > > > > Alex


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: slow rx 5600 xt fps
  2020-05-20  1:20               ` Bridgman, John
@ 2020-05-20  1:35                 ` Javad Karabi
  0 siblings, 0 replies; 24+ messages in thread
From: Javad Karabi @ 2020-05-20  1:35 UTC (permalink / raw)
  To: Bridgman, John; +Cc: Alex Deucher, amd-gfx list

John,

yea, totally agree with you.
one other thing i havent mentioned is that, each time, ive also been
testing everything by running dota 2 with graphics settings all the
way up. and the behavior in dota2 has been consistent

its funny: when i run dota 2, it consistently hovers at 40fps, but the
weird thing is that with graphics settings all the way low, or
graphics settings all the way up, it sticks to 40fps. regardless of
vsync on / off.
i didnt mention my testing of dota 2 because i figured that glxgears
would summarize the issue best, but i do understand what you mean by
trying a more demanding test.
ive also been testing with glmark2, and it would only give 300-400fps too

heres an example:

$ vblank_mode=0 DRI_PRIME=1 glmark2
ATTENTION: default value of option vblank_mode overridden by environment.
ATTENTION: option value of option vblank_mode ignored.
=======================================================
    glmark2 2014.03+git20150611.fa71af2d
=======================================================
    OpenGL Information
    GL_VENDOR:     X.Org
    GL_RENDERER:   AMD Radeon RX 5600 XT (NAVI10, DRM 3.36.0,
5.6.13-karabijavad, LLVM 9.0.1)
    GL_VERSION:    4.6 (Compatibility Profile) Mesa 20.0.4
=======================================================
[build] use-vbo=false: FPS: 128 FrameTime: 7.812 ms
[build] use-vbo=true: FPS: 129 FrameTime: 7.752 ms
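as a sanity check on output like this: glmark2's FPS and FrameTime columns are just reciprocals of each other, so the report above is internally consistent (a quick sketch in python, numbers taken from the run above):

```python
# glmark2 reports both FPS and the mean frame time; the two should agree:
# frame_time_ms = 1000 / fps.
def frame_time_ms(fps):
    """Mean time per frame in milliseconds for a given FPS."""
    return 1000.0 / fps

# values from the glmark2 run above
for fps, reported_ms in [(128, 7.812), (129, 7.752)]:
    computed = frame_time_ms(fps)
    # the computed and reported columns line up to within rounding
    print(f"{fps} fps -> {computed:.3f} ms (glmark2 reported {reported_ms} ms)")
```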



On Tue, May 19, 2020 at 8:20 PM Bridgman, John <John.Bridgman@amd.com> wrote:
>
> [AMD Official Use Only - Internal Distribution Only]
>
>
> Suggest you use something more demanding than glxgears as a test - part of the problem is that glxgears runs so fast normally (30x faster than your display) that even a small amount of overhead copying a frame from one place to another makes a huge difference in FPS.
>
> If you use a test program that normally runs at 90 FPS you'll probably find that the "slow" speed is something like 85 FPS, rather than the 6:1 difference you see with glxgears.
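putting rough numbers on that frame-copy argument (this models the copy as a fixed per-frame cost, which is a simplification - in practice the copy can overlap rendering, so the real hit is usually smaller):

```python
# model: displayed frame time = render time + a fixed per-frame copy cost.
# the ~1800 fps vs ~300 fps figures are the glxgears numbers from this thread.
fast_fps, slow_fps = 1800.0, 300.0

# implied per-frame copy overhead, in milliseconds
copy_ms = 1000.0 / slow_fps - 1000.0 / fast_fps
print(f"implied copy overhead: {copy_ms:.2f} ms/frame")  # ~2.78 ms

# the same fixed overhead barely dents a workload that renders at 90 fps
base_ms = 1000.0 / 90.0
degraded_fps = 1000.0 / (base_ms + copy_ms)
print(f"90 fps workload -> {degraded_fps:.0f} fps with the copy")
```

so a 6:1 drop in glxgears and a small single-digit drop in a real game are consistent with the exact same per-frame overhead.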
>
> ________________________________
> From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> on behalf of Javad Karabi <karabijavad@gmail.com>
> Sent: May 19, 2020 9:16 PM
> To: Alex Deucher <alexdeucher@gmail.com>
> Cc: amd-gfx list <amd-gfx@lists.freedesktop.org>
> Subject: Re: slow rx 5600 xt fps
>
> thanks for the answers alex.
>
> so, i went ahead and got a displayport cable to see if that changes
> anything. and now, when i run monitor only, and the monitor connected
> to the card, it has no issues like before! so i am thinking that
> somethings up with either the hdmi cable, or some hdmi related setting
> in my system? who knows, but im just gonna roll with only using
> displayport cables now.
> the previous hdmi cable was actually pretty long, because i was
> extending it with an hdmi extension cable, so maybe the signal was
> really bad or something :/
>
> but yea, i guess the only real issue now is maybe something simple
> related to some sysfs entry about enabling some powermode, voltage,
> clock frequency, or something, so that glxgears will give me more than
> 300 fps. but at least now i can use a single monitor configuration with
> the monitor displayported up to the card.
>
> also, one other thing i think you might be interested in, that was
> happening before.
>
> so, previously, with laptop -tb3-> egpu -hdmi-> monitor, there was a
> funny thing happening which i never could figure out.
> when i would look at the X logs, i would see that "modesetting" (for
> the intel integrated graphics) was reporting that MonitorA was used
> with "eDP-1",  which is correct and what i expected.
> when i scrolled further down, i then saw that "HDMI-A-1-2" was being
> used for another MonitorB, which also is what i expected (albeit i
> have no idea why its saying A-1-2)
> but amdgpu was _also_ saying that DisplayPort-1-2 (a port on the
> radeon card) was being used for MonitorA, which is the same Monitor
> that the modesetting driver had claimed to be using with eDP-1!
>
> so the point is that amdgpu was "using" Monitor0 with DisplayPort-1-2,
> although that is what modesetting was using for eDP-1.
>
> anyway, thats a little aside, i doubt it was related to the terrible
> hdmi experience i was getting, since its about display port and stuff,
> but i thought id let you know about that.
>
> if you think that is a possible issue, im more than happy to plug the
> hdmi setup back in and create an issue on gitlab with the logs and
> everything
>
> On Tue, May 19, 2020 at 4:42 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> >
> > On Tue, May 19, 2020 at 5:22 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > >
> > > lol youre quick!
> > >
> > > "Windows has supported peer to peer DMA for years so it already has a
> > > number of optimizations that are only now becoming possible on Linux"
> > >
> > > whoa, i figured linux would be ahead of windows when it comes to
> > > things like that. but peer-to-peer dma is something that is only
> > > recently possible on linux, but has been possible on windows? what
> > > changed recently that allows for peer to peer dma in linux?
> > >
> >
> > A few things that made this more complicated on Linux:
> > 1. Linux uses IOMMUs more extensively than windows so you can't just
> > pass around physical bus addresses.
> > 2. Linux supports lots of strange architectures that have a lot of
> > limitations with respect to peer to peer transactions
> >
> > It just took years to get all the necessary bits in place in Linux and
> > make everyone happy.
> >
> > > also, in the context of a game running opengl on some gpu, is the
> > > "peer-to-peer" dma transfer something like: the game draw's to some
> > > memory it has allocated, then a DMA transfer gets that and moves it
> > > into the graphics card output?
> >
> > Peer to peer DMA just lets devices access another device's local memory
> > directly.  So if you have a buffer in vram on one device, you can
> > share that directly with another device rather than having to copy it
> > to system memory first.  For example, if you have two GPUs, you can
> > have one of them copy its content directly to a buffer in the other
> > GPU's vram rather than having to go through system memory first.
> >
> > >
> > > also, i know it can be super annoying trying to debug an issue like
> > > this, with someone like me who has all types of differences from a
> > > normal setup (e.g. using it via egpu, using a kernel with custom
> > > configs and stuff) so as a token of my appreciation i donated 50$ to
> > > the red cross' corona virus outbreak charity thing, on behalf of
> > > amd-gfx.
> >
> > Thanks,
> >
> > Alex
> >
> > >
> > > On Tue, May 19, 2020 at 4:13 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > >
> > > > On Tue, May 19, 2020 at 3:44 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > >
> > > > > just a couple more questions:
> > > > >
> > > > > - based on what you are aware of, the technical details such as
> > > > > "shared buffers go through system memory", and all that, do you see
> > > > > any issues that might exist that i might be missing in my setup? i
> > > > > cant imagine this being the case because the card works great in
> > > > > windows, unless the windows driver does something different?
> > > > >
> > > >
> > > > Windows has supported peer to peer DMA for years so it already has a
> > > > number of optimizations that are only now becoming possible on Linux.
> > > >
> > > > > - as far as kernel config, is there anything in particular which
> > > > > _should_ or _should not_ be enabled/disabled?
> > > >
> > > > You'll need the GPU drivers for your devices and dma-buf support.
> > > >
> > > > >
> > > > > - does the vendor matter? for instance, this is an xfx card. when it
> > > > > comes to different vendors, are there interface changes that might
> > > > > make one vendor work better for linux than another? i dont really
> > > > > understand the differences in vendors, but i imagine that the vbios
> > > > > differs between vendors, and as such, the linux compatibility would
> > > > > maybe change?
> > > >
> > > > board vendor shouldn't matter.
> > > >
> > > > >
> > > > > - is the pcie bandwidth possibly an issue? the pcie_bw file changes
> > > > > between values like this:
> > > > > 18446683600662707640 18446744071581623085 128
> > > > > and sometimes i see this:
> > > > > 4096 0 128
> > > > > as you can see, the second value seems significantly lower. is that
> > > > > possibly an issue? possibly due to aspm?
> > > >
> > > > pcie_bw is not implemented for navi yet so you are just seeing
> > > > uninitialized data.  This patch set should clear that up.
> > > > https://patchwork.freedesktop.org/patch/366262/
> > > >
> > > > Alex
> > > >
> > > > >
> > > > > On Tue, May 19, 2020 at 2:20 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > > >
> > > > > > im using Driver "amdgpu" in my xorg conf
> > > > > >
> > > > > > how does one verify which gpu is the primary? im assuming my intel
> > > > > > card is the primary, since i have not done anything to change that.
> > > > > >
> > > > > > also, if all shared buffers have to go through system memory, then
> > > > > > that means an eGPU amdgpu wont work very well in general right?
> > > > > > because going through system memory for the egpu means going over the
> > > > > > thunderbolt connection
> > > > > >
> > > > > > and what are the shared buffers youre referring to? for example, if an
> > > > > > application is drawing to a buffer, is that an example of a shared
> > > > > > buffer that has to go through system memory? if so, thats fine, right?
> > > > > > because the application's memory is in system memory, so that copy
> > > > > > wouldnt be an issue.
> > > > > >
> > > > > > in general, do you think the "copy buffer across system memory" might
> > > > > > be a hindrance for thunderbolt? im trying to figure out which
> > > > > > directions to go to debug and im totally lost, so maybe i can do some
> > > > > > testing that direction?
> > > > > >
> > > > > > and for what its worth, when i turn the display "off" via the gnome
> > > > > > display settings, its the same issue as when the laptop lid is closed,
> > > > > > so unless the motherboard reads the "closed lid" the same as "display
> > > > > > off", then im not sure if its thermal issues.
> > > > > >
> > > > > > On Tue, May 19, 2020 at 2:14 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > > > > >
> > > > > > > On Tue, May 19, 2020 at 2:59 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > > > > >
> > > > > > > > given this setup:
> > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
> > > > > > > > DRI_PRIME=1 glxgears gives me ~300fps
> > > > > > > >
> > > > > > > > given this setup:
> > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> > > > > > > > laptop -hdmi-> monitor
> > > > > > > >
> > > > > > > > glxgears gives me ~1800fps
> > > > > > > >
> > > > > > > > this doesnt make sense to me because i thought that having the monitor
> > > > > > > > plugged directly into the card should give best performance.
> > > > > > > >
> > > > > > >
> > > > > > > Do you have displays connected to both GPUs?  If you are using X which
> > > > > > > ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
> > > > > > > IIRC, xf86-video-amdgpu has some optimizations for prime which are not
> > > > > > > yet in xf86-video-modesetting.  Which GPU is set up as the primary?
> > > > > > > Note that the GPU which does the rendering is not necessarily the one
> > > > > > > that the displays are attached to.  The render GPU renders to its
> > > > > > > render buffer and then that data may end up being copied to other GPUs
> > > > > > > for display.  Also, at this point, all shared buffers have to go
> > > > > > > through system memory (this will be changing eventually now that we
> > > > > > > support device memory via dma-buf), so there is often an extra copy
> > > > > > > involved.
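a back-of-envelope estimate of what that extra trip through system memory costs over the Thunderbolt link (both the ~2.5 GB/s usable-bandwidth figure and the uncompressed 2560x1440 32-bit framebuffer are assumptions, not measurements from this setup):

```python
# rough cost of shuttling one uncompressed frame through system memory
# and back over the TB3-tunneled PCIe link.
width, height, bytes_per_pixel = 2560, 1440, 4   # assumed monitor mode
frame_bytes = width * height * bytes_per_pixel   # ~14.7 MB per frame

# assumed usable payload bandwidth through the TB3 PCIe tunnel
tb3_usable_bytes_per_s = 2.5e9

copy_ms = frame_bytes / tb3_usable_bytes_per_s * 1000
print(f"frame: {frame_bytes / 1e6:.1f} MB, one copy over the link: {copy_ms:.2f} ms")

# at 60 Hz the whole frame budget is ~16.7 ms, so one extra copy per
# frame is a noticeable but not crippling chunk of it.
```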
> > > > > > >
> > > > > > > > theres another really weird issue...
> > > > > > > >
> > > > > > > > given setup 1, where the monitor is plugged in to the card:
> > > > > > > > when i close the laptop lid, my monitor is "active" and whatnot, and i
> > > > > > > > can "use it" in a sense
> > > > > > > >
> > > > > > > > however, heres the weirdness:
> > > > > > > > the mouse cursor will move along the monitor perfectly smooth and
> > > > > > > > fine, but all the other updates to the screen are delayed by about 2
> > > > > > > > or 3 seconds.
> > > > > > > > that is to say, its as if the laptop is doing everything (e.g. if i
> > > > > > > > open a terminal, the terminal will open, but it will take 2 seconds
> > > > > > > > for me to see it)
> > > > > > > >
> > > > > > > > its almost as if all the frames and everything are being drawn, and
> > > > > > > > the laptop is running fine and everything, but i simply just dont get
> > > > > > > > to see it on the monitor, except for one time every 2 seconds.
> > > > > > > >
> > > > > > > > its hard to articulate, because its so bizarre. its not like, a "low
> > > > > > > > fps" per se, because the cursor is totally smooth. but its that
> > > > > > > > _everything else_ is only updated once every couple seconds.
> > > > > > >
> > > > > > > This might also be related to which GPU is the primary.  It still may
> > > > > > > be the integrated GPU since that is what is attached to the laptop
> > > > > > > panel.  Also the platform and some drivers may do certain things when
> > > > > > > the lid is closed.  E.g., for thermal reasons, the integrated GPU or
> > > > > > > CPU may have a more limited TDP because the laptop cannot cool as
> > > > > > > efficiently.
> > > > > > >
> > > > > > > Alex
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: slow rx 5600 xt fps
  2020-05-20  1:16             ` Javad Karabi
  2020-05-20  1:19               ` Javad Karabi
  2020-05-20  1:20               ` Bridgman, John
@ 2020-05-20  2:29               ` Alex Deucher
  2020-05-20 22:04                 ` Javad Karabi
  2 siblings, 1 reply; 24+ messages in thread
From: Alex Deucher @ 2020-05-20  2:29 UTC (permalink / raw)
  To: Javad Karabi; +Cc: amd-gfx list

On Tue, May 19, 2020 at 9:16 PM Javad Karabi <karabijavad@gmail.com> wrote:
>
> thanks for the answers alex.
>
> so, i went ahead and got a displayport cable to see if that changes
> anything. and now, when i run monitor only, and the monitor connected
> to the card, it has no issues like before! so i am thinking that
> somethings up with either the hdmi cable, or some hdmi related setting
> in my system? who knows, but im just gonna roll with only using
> displayport cables now.
> the previous hdmi cable was actually pretty long, because i was
> extending it with an hdmi extension cable, so maybe the signal was
> really bad or something :/
>
> but yea, i guess the only real issue now is maybe something simple
> related to some sysfs entry about enabling some powermode, voltage,
> clock frequency, or something, so that glxgears will give me more than
> 300 fps. but at least now i can use a single monitor configuration with
> the monitor displayported up to the card.
>

The GPU dynamically adjusts the clocks and voltages based on load.  No
manual configuration is required.

At this point, we probably need to see your xorg log and dmesg output
to try and figure out exactly what is going on.  I still suspect there
is some interaction going on with both GPUs and the integrated GPU
being the primary, so as I mentioned before, you should try and run X
on just the amdgpu rather than trying to use both of them.

Alex


> also, one other thing i think you might be interested in, that was
> happening before.
>
> so, previously, with laptop -tb3-> egpu -hdmi-> monitor, there was a
> funny thing happening which i never could figure out.
> when i would look at the X logs, i would see that "modesetting" (for
> the intel integrated graphics) was reporting that MonitorA was used
> with "eDP-1",  which is correct and what i expected.
> when i scrolled further down, i then saw that "HDMI-A-1-2" was being
> used for another MonitorB, which also is what i expected (albeit i
> have no idea why its saying A-1-2)
> but amdgpu was _also_ saying that DisplayPort-1-2 (a port on the
> radeon card) was being used for MonitorA, which is the same Monitor
> that the modesetting driver had claimed to be using with eDP-1!
>
> so the point is that amdgpu was "using" Monitor0 with DisplayPort-1-2,
> although that is what modesetting was using for eDP-1.
>
> anyway, thats a little aside, i doubt it was related to the terrible
> hdmi experience i was getting, since its about display port and stuff,
> but i thought id let you know about that.
>
> if you think that is a possible issue, im more than happy to plug the
> hdmi setup back in and create an issue on gitlab with the logs and
> everything
>
> On Tue, May 19, 2020 at 4:42 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> >
> > On Tue, May 19, 2020 at 5:22 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > >
> > > lol youre quick!
> > >
> > > "Windows has supported peer to peer DMA for years so it already has a
> > > > number of optimizations that are only now becoming possible on Linux"
> > >
> > > whoa, i figured linux would be ahead of windows when it comes to
> > > things like that. but peer-to-peer dma is something that is only
> > > recently possible on linux, but has been possible on windows? what
> > > changed recently that allows for peer to peer dma in linux?
> > >
> >
> > A few things that made this more complicated on Linux:
> > 1. Linux uses IOMMUs more extensively than windows so you can't just
> > pass around physical bus addresses.
> > 2. Linux supports lots of strange architectures that have a lot of
> > limitations with respect to peer to peer transactions
> >
> > It just took years to get all the necessary bits in place in Linux and
> > make everyone happy.
> >
> > > also, in the context of a game running opengl on some gpu, is the
> > > "peer-to-peer" dma transfer something like: the game draw's to some
> > > memory it has allocated, then a DMA transfer gets that and moves it
> > > into the graphics card output?
> >
> > > Peer to peer DMA just lets devices access another device's local memory
> > directly.  So if you have a buffer in vram on one device, you can
> > share that directly with another device rather than having to copy it
> > to system memory first.  For example, if you have two GPUs, you can
> > > have one of them copy its content directly to a buffer in the other
> > GPU's vram rather than having to go through system memory first.
> >
> > >
> > > also, i know it can be super annoying trying to debug an issue like
> > > this, with someone like me who has all types of differences from a
> > > normal setup (e.g. using it via egpu, using a kernel with custom
> > > configs and stuff) so as a token of my appreciation i donated 50$ to
> > > the red cross' corona virus outbreak charity thing, on behalf of
> > > amd-gfx.
> >
> > Thanks,
> >
> > Alex
> >
> > >
> > > On Tue, May 19, 2020 at 4:13 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > >
> > > > On Tue, May 19, 2020 at 3:44 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > >
> > > > > just a couple more questions:
> > > > >
> > > > > - based on what you are aware of, the technical details such as
> > > > > "shared buffers go through system memory", and all that, do you see
> > > > > any issues that might exist that i might be missing in my setup? i
> > > > > cant imagine this being the case because the card works great in
> > > > > windows, unless the windows driver does something different?
> > > > >
> > > >
> > > > Windows has supported peer to peer DMA for years so it already has a
> > > > number of optimizations that are only now becoming possible on Linux.
> > > >
> > > > > - as far as kernel config, is there anything in particular which
> > > > > _should_ or _should not_ be enabled/disabled?
> > > >
> > > > You'll need the GPU drivers for your devices and dma-buf support.
> > > >
> > > > >
> > > > > - does the vendor matter? for instance, this is an xfx card. when it
> > > > > comes to different vendors, are there interface changes that might
> > > > > make one vendor work better for linux than another? i dont really
> > > > > understand the differences in vendors, but i imagine that the vbios
> > > > > differs between vendors, and as such, the linux compatibility would
> > > > > maybe change?
> > > >
> > > > board vendor shouldn't matter.
> > > >
> > > > >
> > > > > - is the pcie bandwidth possibly an issue? the pcie_bw file changes
> > > > > between values like this:
> > > > > 18446683600662707640 18446744071581623085 128
> > > > > and sometimes i see this:
> > > > > 4096 0 128
> > > > > as you can see, the second value seems significantly lower. is that
> > > > > possibly an issue? possibly due to aspm?
> > > >
> > > > pcie_bw is not implemented for navi yet so you are just seeing
> > > > uninitialized data.  This patch set should clear that up.
> > > > https://patchwork.freedesktop.org/patch/366262/
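for whenever the counters do get wired up, a sketch of reading that file - i'm assuming the three fields are two message counters (sent and received) plus the max payload size in bytes, and the garbage check below is my own heuristic, not anything the driver does:

```python
def parse_pcie_bw(text):
    """Parse an amdgpu pcie_bw line: '<sent> <received> <max_payload_bytes>'.

    Field order is an assumption; check the driver source before relying
    on it.  Returns a tuple of ints, or None when the counters look like
    uninitialized garbage (values near 2**64, as seen on navi before the
    counters were implemented).
    """
    sent, received, max_payload = (int(x) for x in text.split())
    if sent >= 2**63 or received >= 2**63:  # heuristic: junk counters
        return None
    return sent, received, max_payload

# the two samples quoted above:
print(parse_pcie_bw("18446683600662707640 18446744071581623085 128"))  # None
print(parse_pcie_bw("4096 0 128"))  # (4096, 0, 128)
```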
> > > >
> > > > Alex
> > > >
> > > > >
> > > > > On Tue, May 19, 2020 at 2:20 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > > >
> > > > > > im using Driver "amdgpu" in my xorg conf
> > > > > >
> > > > > > how does one verify which gpu is the primary? im assuming my intel
> > > > > > card is the primary, since i have not done anything to change that.
> > > > > >
> > > > > > also, if all shared buffers have to go through system memory, then
> > > > > > that means an eGPU amdgpu wont work very well in general right?
> > > > > > because going through system memory for the egpu means going over the
> > > > > > thunderbolt connection
> > > > > >
> > > > > > and what are the shared buffers youre referring to? for example, if an
> > > > > > application is drawing to a buffer, is that an example of a shared
> > > > > > buffer that has to go through system memory? if so, thats fine, right?
> > > > > > because the application's memory is in system memory, so that copy
> > > > > > wouldnt be an issue.
> > > > > >
> > > > > > in general, do you think the "copy buffer across system memory" might
> > > > > > be a hindrance for thunderbolt? im trying to figure out which
> > > > > > directions to go to debug and im totally lost, so maybe i can do some
> > > > > > testing that direction?
> > > > > >
> > > > > > and for what its worth, when i turn the display "off" via the gnome
> > > > > > display settings, its the same issue as when the laptop lid is closed,
> > > > > > so unless the motherboard reads the "closed lid" the same as "display
> > > > > > off", then im not sure if its thermal issues.
> > > > > >
> > > > > > On Tue, May 19, 2020 at 2:14 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > > > > >
> > > > > > > On Tue, May 19, 2020 at 2:59 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > > > > >
> > > > > > > > given this setup:
> > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
> > > > > > > > DRI_PRIME=1 glxgears gives me ~300fps
> > > > > > > >
> > > > > > > > given this setup:
> > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> > > > > > > > laptop -hdmi-> monitor
> > > > > > > >
> > > > > > > > glxgears gives me ~1800fps
> > > > > > > >
> > > > > > > > this doesnt make sense to me because i thought that having the monitor
> > > > > > > > plugged directly into the card should give best performance.
> > > > > > > >
> > > > > > >
> > > > > > > Do you have displays connected to both GPUs?  If you are using X which
> > > > > > > ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
> > > > > > > IIRC, xf86-video-amdgpu has some optimizations for prime which are not
> > > > > > > yet in xf86-video-modesetting.  Which GPU is set up as the primary?
> > > > > > > Note that the GPU which does the rendering is not necessarily the one
> > > > > > > that the displays are attached to.  The render GPU renders to its
> > > > > > > render buffer and then that data may end up being copied to other GPUs
> > > > > > > for display.  Also, at this point, all shared buffers have to go
> > > > > > > through system memory (this will be changing eventually now that we
> > > > > > > support device memory via dma-buf), so there is often an extra copy
> > > > > > > involved.
> > > > > > >
> > > > > > > > theres another really weird issue...
> > > > > > > >
> > > > > > > > given setup 1, where the monitor is plugged in to the card:
> > > > > > > > when i close the laptop lid, my monitor is "active" and whatnot, and i
> > > > > > > > can "use it" in a sense
> > > > > > > >
> > > > > > > > however, heres the weirdness:
> > > > > > > > the mouse cursor will move along the monitor perfectly smooth and
> > > > > > > > fine, but all the other updates to the screen are delayed by about 2
> > > > > > > > or 3 seconds.
> > > > > > > > that is to say, its as if the laptop is doing everything (e.g. if i
> > > > > > > > open a terminal, the terminal will open, but it will take 2 seconds
> > > > > > > > for me to see it)
> > > > > > > >
> > > > > > > > its almost as if all the frames and everything are being drawn, and
> > > > > > > > the laptop is running fine and everything, but i simply just dont get
> > > > > > > > to see it on the monitor, except for one time every 2 seconds.
> > > > > > > >
> > > > > > > > its hard to articulate, because its so bizarre. its not like, a "low
> > > > > > > > fps" per se, because the cursor is totally smooth. but its that
> > > > > > > > _everything else_ is only updated once every couple seconds.
> > > > > > >
> > > > > > > This might also be related to which GPU is the primary.  It still may
> > > > > > > be the integrated GPU since that is what is attached to the laptop
> > > > > > > panel.  Also the platform and some drivers may do certain things when
> > > > > > > the lid is closed.  E.g., for thermal reasons, the integrated GPU or
> > > > > > > CPU may have a more limited TDP because the laptop cannot cool as
> > > > > > > efficiently.
> > > > > > >
> > > > > > > Alex
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: slow rx 5600 xt fps
  2020-05-20  2:29               ` Alex Deucher
@ 2020-05-20 22:04                 ` Javad Karabi
  2020-05-21  3:11                   ` Alex Deucher
  0 siblings, 1 reply; 24+ messages in thread
From: Javad Karabi @ 2020-05-20 22:04 UTC (permalink / raw)
  To: Alex Deucher; +Cc: amd-gfx list



Thanks Alex,
Here's my plan:

since my laptop's os is pretty customized, e.g. compiling my own kernel,
building latest xorg, latest xorg-driver-amdgpu, etc etc,
im going to use the intel iommu and pass through my rx 5600 into a virtual
machine, which will be a 100% stock ubuntu installation.
then, inside that vm, i will continue to debug

does that sound like it would make sense for testing? for example, with
that scenario, it adds the iommu into the mix, so who knows if that causes
performance issues. but i think its worth a shot, to see if a stock kernel
will handle it better

also, quick question:
from what i understand, a thunderbolt 3 pci express connection should
handle 8 GT/s x4, however, along the chain of bridges to my device, i
notice that the bridge closest to the graphics card is at 2.5 GT/s x4, and
it also says "downgraded" (this is via the lspci output)

now, when i boot into windows, it _also_ says 2.5 GT/s x4, and it runs
extremely well. no issues at all.

so my question is: the fact that the bridge is at 2.5 GT/s x4, and not at
its theoretical "full link speed" of 8 GT/s x4, do you suppose that _could_
be an issue?
i do not think so, because, like i said, in windows it also reports that
link speed.
i would assume that you would want the fastest link speed possible, because
i would assume that of _all_ tb3 pci express devices, a GPU would be the #1
most demanding on the link

just curious if you think 2.5 GT/s could be the bottleneck
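for reference, the theoretical payload bandwidth at each link speed works out like this (standard PCIe encoding overhead only; packet/protocol overhead on top of that is ignored):

```python
# theoretical payload bandwidth of a PCIe link.
# gen1/gen2 (2.5 and 5 GT/s) use 8b/10b encoding (80% efficient);
# gen3 and up (8 GT/s+) use 128b/130b encoding (~98.5% efficient).
def pcie_gbytes_per_s(gt_per_s, lanes):
    efficiency = 0.8 if gt_per_s <= 5.0 else 128.0 / 130.0
    return gt_per_s * efficiency * lanes / 8.0  # GT/s per lane -> GB/s total

print(f"2.5 GT/s x4: {pcie_gbytes_per_s(2.5, 4):.2f} GB/s")  # 1.00 GB/s
print(f"8.0 GT/s x4: {pcie_gbytes_per_s(8.0, 4):.2f} GB/s")  # 3.94 GB/s
```

one caveat: many platforms train the link down to 2.5 GT/s when idle and bring it back up under load, so an lspci reading taken at the desktop is not necessarily what the GPU gets mid-game - worth re-checking lspci while a benchmark is running.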

i will pass through the device into a ubuntu vm and let you know how it
goes. thanks



On Tue, May 19, 2020 at 9:29 PM Alex Deucher <alexdeucher@gmail.com> wrote:

> On Tue, May 19, 2020 at 9:16 PM Javad Karabi <karabijavad@gmail.com>
> wrote:
> >
> > thanks for the answers alex.
> >
> > so, i went ahead and got a displayport cable to see if that changes
> > anything. and now, when i run monitor only, and the monitor connected
> > to the card, it has no issues like before! so i am thinking that
> > somethings up with either the hdmi cable, or some hdmi related setting
> > in my system? who knows, but im just gonna roll with only using
> > displayport cables now.
> > the previous hdmi cable was actually pretty long, because i was
> > extending it with an hdmi extension cable, so maybe the signal was
> > really bad or something :/
> >
> > but yea, i guess the only real issue now is maybe something simple
> > related to some sysfs entry about enabling some powermode, voltage,
> > clock frequency, or something, so that glxgears will give me more than
> > 300 fps. but at least now i can use a single monitor configuration with
> > the monitor displayported up to the card.
> >
>
> The GPU dynamically adjusts the clocks and voltages based on load.  No
> manual configuration is required.
>
> At this point, we probably need to see your xorg log and dmesg output
> to try and figure out exactly what is going on.  I still suspect there
> is some interaction going on with both GPUs and the integrated GPU
> being the primary, so as I mentioned before, you should try and run X
> on just the amdgpu rather than trying to use both of them.
>
> Alex
>
>
> > also, one other thing i think you might be interested in, that was
> > happening before.
> >
> > so, previously, with laptop -tb3-> egpu -hdmi-> monitor, there was a
> > funny thing happening which i never could figure out.
> > when i would look at the X logs, i would see that "modesetting" (for
> > the intel integrated graphics) was reporting that MonitorA was used
> > with "eDP-1",  which is correct and what i expected.
> > when i scrolled further down, i then saw that "HDMI-A-1-2" was being
> > used for another MonitorB, which also is what i expected (albeit i
> > have no idea why its saying A-1-2)
> > but amdgpu was _also_ saying that DisplayPort-1-2 (a port on the
> > radeon card) was being used for MonitorA, which is the same Monitor
> > that the modesetting driver had claimed to be using with eDP-1!
> >
> > so the point is that amdgpu was "using" Monitor0 with DisplayPort-1-2,
> > although that is what modesetting was using for eDP-1.
> >
> > anyway, thats a little aside, i doubt it was related to the terrible
> > hdmi experience i was getting, since its about display port and stuff,
> > but i thought id let you know about that.
> >
> > if you think that is a possible issue, im more than happy to plug the
> > hdmi setup back in and create an issue on gitlab with the logs and
> > everything
> >
> > On Tue, May 19, 2020 at 4:42 PM Alex Deucher <alexdeucher@gmail.com>
> wrote:
> > >
> > > On Tue, May 19, 2020 at 5:22 PM Javad Karabi <karabijavad@gmail.com>
> wrote:
> > > >
> > > > lol youre quick!
> > > >
> > > > "Windows has supported peer to peer DMA for years so it already has a
> > > > numbers of optimizations that are only now becoming possible on
> Linux"
> > > >
> > > > whoa, i figured linux would be ahead of windows when it comes to
> > > > things like that. but peer-to-peer dma is something that is only
> > > > recently possible on linux, but has been possible on windows? what
> > > > changed recently that allows for peer to peer dma in linux?
> > > >
> > >
> > > A few things that made this more complicated on Linux:
> > > 1. Linux uses IOMMUs more extensively than windows so you can't just
> > > pass around physical bus addresses.
> > > 2. Linux supports lots of strange architectures that have a lot of
> > > limitations with respect to peer to peer transactions
> > >
> > > It just took years to get all the necessary bits in place in Linux and
> > > make everyone happy.
> > >
> > > > also, in the context of a game running opengl on some gpu, is the
> > > > "peer-to-peer" dma transfer something like: the game draw's to some
> > > > memory it has allocated, then a DMA transfer gets that and moves it
> > > > into the graphics card output?
> > >
> > > Peer to peer DMA just lets devices access another devices local memory
> > > directly.  So if you have a buffer in vram on one device, you can
> > > share that directly with another device rather than having to copy it
> > > to system memory first.  For example, if you have two GPUs, you can
> > > have one of them copy it's content directly to a buffer in the other
> > > GPU's vram rather than having to go through system memory first.
> > >
> > > >
> > > > also, i know it can be super annoying trying to debug an issue like
> > > > this, with someone like me who has all types of differences from a
> > > > normal setup (e.g. using it via egpu, using a kernel with custom
> > > > configs and stuff) so as a token of my appreciation i donated 50$ to
> > > > the red cross' corona virus outbreak charity thing, on behalf of
> > > > amd-gfx.
> > >
> > > Thanks,
> > >
> > > Alex
> > >
> > > >
> > > > On Tue, May 19, 2020 at 4:13 PM Alex Deucher <alexdeucher@gmail.com>
> wrote:
> > > > >
> > > > > On Tue, May 19, 2020 at 3:44 PM Javad Karabi <
> karabijavad@gmail.com> wrote:
> > > > > >
> > > > > > just a couple more questions:
> > > > > >
> > > > > > - based on what you are aware of, the technical details such as
> > > > > > "shared buffers go through system memory", and all that, do you
> see
> > > > > > any issues that might exist that i might be missing in my setup?
> i
> > > > > > cant imagine this being the case because the card works great in
> > > > > > windows, unless the windows driver does something different?
> > > > > >
> > > > >
> > > > > Windows has supported peer to peer DMA for years so it already has
> a
> > > > > numbers of optimizations that are only now becoming possible on
> Linux.
> > > > >
> > > > > > - as far as kernel config, is there anything in particular which
> > > > > > _should_ or _should not_ be enabled/disabled?
> > > > >
> > > > > You'll need the GPU drivers for your devices and dma-buf support.
> > > > >
> > > > > >
> > > > > > - does the vendor matter? for instance, this is an xfx card.
> when it
> > > > > > comes to different vendors, are there interface changes that
> might
> > > > > > make one vendor work better for linux than another? i dont really
> > > > > > understand the differences in vendors, but i imagine that the
> vbios
> > > > > > differs between vendors, and as such, the linux compatibility
> would
> > > > > > maybe change?
> > > > >
> > > > > board vendor shouldn't matter.
> > > > >
> > > > > >
> > > > > > - is the pcie bandwidth possible an issue? the pcie_bw file
> changes
> > > > > > between values like this:
> > > > > > 18446683600662707640 18446744071581623085 128
> > > > > > and sometimes i see this:
> > > > > > 4096 0 128
> > > > > > as you can see, the second value seems significantly lower. is
> that
> > > > > > possibly an issue? possibly due to aspm?
> > > > >
> > > > > pcie_bw is not implemented for navi yet so you are just seeing
> > > > > uninitialized data.  This patch set should clear that up.
> > > > > https://patchwork.freedesktop.org/patch/366262/
> > > > >
> > > > > Alex
> > > > >
> > > > > >
> > > > > > On Tue, May 19, 2020 at 2:20 PM Javad Karabi <
> karabijavad@gmail.com> wrote:
> > > > > > >
> > > > > > > im using Driver "amdgpu" in my xorg conf
> > > > > > >
> > > > > > > how does one verify which gpu is the primary? im assuming my
> intel
> > > > > > > card is the primary, since i have not done anything to change
> that.
> > > > > > >
> > > > > > > also, if all shared buffers have to go through system memory,
> then
> > > > > > > that means an eGPU amdgpu wont work very well in general right?
> > > > > > > because going through system memory for the egpu means going
> over the
> > > > > > > thunderbolt connection
> > > > > > >
> > > > > > > and what are the shared buffers youre referring to? for
> example, if an
> > > > > > > application is drawing to a buffer, is that an example of a
> shared
> > > > > > > buffer that has to go through system memory? if so, thats
> fine, right?
> > > > > > > because the application's memory is in system memory, so that
> copy
> > > > > > > wouldnt be an issue.
> > > > > > >
> > > > > > > in general, do you think the "copy buffer across system memory
> might
> > > > > > > be a hindrance for thunderbolt? im trying to figure out which
> > > > > > > directions to go to debug and im totally lost, so maybe i can
> do some
> > > > > > > testing that direction?
> > > > > > >
> > > > > > > and for what its worth, when i turn the display "off" via the
> gnome
> > > > > > > display settings, its the same issue as when the laptop lid is
> closed,
> > > > > > > so unless the motherboard reads the "closed lid" the same as
> "display
> > > > > > > off", then im not sure if its thermal issues.
> > > > > > >
> > > > > > > On Tue, May 19, 2020 at 2:14 PM Alex Deucher <
> alexdeucher@gmail.com> wrote:
> > > > > > > >
> > > > > > > > On Tue, May 19, 2020 at 2:59 PM Javad Karabi <
> karabijavad@gmail.com> wrote:
> > > > > > > > >
> > > > > > > > > given this setup:
> > > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> -hdmi-> monitor
> > > > > > > > > DRI_PRIME=1 glxgears gears gives me ~300fps
> > > > > > > > >
> > > > > > > > > given this setup:
> > > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> > > > > > > > > laptop -hdmi-> monitor
> > > > > > > > >
> > > > > > > > > glx gears gives me ~1800fps
> > > > > > > > >
> > > > > > > > > this doesnt make sense to me because i thought that having
> the monitor
> > > > > > > > > plugged directly into the card should give best
> performance.
> > > > > > > > >
> > > > > > > >
> > > > > > > > Do you have displays connected to both GPUs?  If you are
> using X which
> > > > > > > > ddx are you using?  xf86-video-modesetting or
> xf86-video-amdgpu?
> > > > > > > > IIRC, xf86-video-amdgpu has some optimizations for prime
> which are not
> > > > > > > > yet in xf86-video-modesetting.  Which GPU is set up as the
> primary?
> > > > > > > > Note that the GPU which does the rendering is not
> necessarily the one
> > > > > > > > that the displays are attached to.  The render GPU renders
> to it's
> > > > > > > > render buffer and then that data may end up being copied
> other GPUs
> > > > > > > > for display.  Also, at this point, all shared buffers have
> to go
> > > > > > > > through system memory (this will be changing eventually now
> that we
> > > > > > > > support device memory via dma-buf), so there is often an
> extra copy
> > > > > > > > involved.
> > > > > > > >
> > > > > > > > > theres another really weird issue...
> > > > > > > > >
> > > > > > > > > given setup 1, where the monitor is plugged in to the card:
> > > > > > > > > when i close the laptop lid, my monitor is "active" and
> whatnot, and i
> > > > > > > > > can "use it" in a sense
> > > > > > > > >
> > > > > > > > > however, heres the weirdness:
> > > > > > > > > the mouse cursor will move along the monitor perfectly
> smooth and
> > > > > > > > > fine, but all the other updates to the screen are delayed
> by about 2
> > > > > > > > > or 3 seconds.
> > > > > > > > > that is to say, its as if the laptop is doing everything
> (e.g. if i
> > > > > > > > > open a terminal, the terminal will open, but it will take
> 2 seconds
> > > > > > > > > for me to see it)
> > > > > > > > >
> > > > > > > > > its almost as if all the frames and everything are being
> drawn, and
> > > > > > > > > the laptop is running fine and everything, but i simply
> just dont get
> > > > > > > > > to see it on the monitor, except for one time every 2
> seconds.
> > > > > > > > >
> > > > > > > > > its hard to articulate, because its so bizarre. its not
> like, a "low
> > > > > > > > > fps" per se, because the cursor is totally smooth. but its
> that
> > > > > > > > > _everything else_ is only updated once every couple
> seconds.
> > > > > > > >
> > > > > > > > This might also be related to which GPU is the primary.  It
> still may
> > > > > > > > be the integrated GPU since that is what is attached to the
> laptop
> > > > > > > > panel.  Also the platform and some drivers may do certain
> things when
> > > > > > > > the lid is closed.  E.g., for thermal reasons, the
> integrated GPU or
> > > > > > > > CPU may have a more limited TDP because the laptop cannot
> cool as
> > > > > > > > efficiently.
> > > > > > > >
> > > > > > > > Alex
>


* Re: slow rx 5600 xt fps
  2020-05-20 22:04                 ` Javad Karabi
@ 2020-05-21  3:11                   ` Alex Deucher
  2020-05-21 19:03                     ` Javad Karabi
  0 siblings, 1 reply; 24+ messages in thread
From: Alex Deucher @ 2020-05-21  3:11 UTC (permalink / raw)
  To: Javad Karabi; +Cc: amd-gfx list

I think you are overcomplicating things.  Just try and get X running
on just the AMD GPU on bare metal.  Introducing virtualization is just
adding more uncertainty.  If you can't configure X to not use the
integrated GPU, just blacklist the i915 driver (append
modprobe.blacklist=i915 to the kernel command line in grub) and X
should come up on the dGPU.
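
For reference, on a GRUB-based distro that edit looks something like this (a sketch; the sed command is shown against a sample GRUB_CMDLINE_LINUX_DEFAULT line so the effect is visible, and the paths in the comments assume a typical Ubuntu layout):

```shell
# demonstrate the edit on a sample line from /etc/default/grub
line='GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"'
echo "$line" | sed 's/"$/ modprobe.blacklist=i915"/'
# prints: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash modprobe.blacklist=i915"

# on the real system, make the same edit in /etc/default/grub, then:
#   sudo update-grub      # or: grub2-mkconfig -o /boot/grub2/grub.cfg
#   sudo reboot
# and verify after boot that the option took and i915 stayed out:
#   grep -o 'modprobe.blacklist=i915' /proc/cmdline
#   lsmod | grep i915 || echo "i915 not loaded"
```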

Alex

On Wed, May 20, 2020 at 6:05 PM Javad Karabi <karabijavad@gmail.com> wrote:
>
> Thanks Alex,
> Here's my plan:
>
> since my laptop's os is pretty customized, e.g. compiling my own kernel, building latest xorg, latest xorg-driver-amdgpu, etc etc,
> im going to use the intel iommu and pass through my rx 5600 into a virtual machine, which will be a 100% stock ubuntu installation.
> then, inside that vm, i will continue to debug
>
> does that sound like it would make sense for testing? for example, with that scenario, it adds the iommu into the mix, so who knows if that causes performance issues. but i think its worth a shot, to see if a stock kernel will handle it better
>
> also, quick question:
> from what i understand, a thunderbolt 3 pci express connection should handle 8 GT/s x4, however, along the chain of bridges to my device, i notice that the bridge closest to the graphics card is at 2.5 GT/s x4, and it also says "downgraded" (this is via the lspci output)
>
> now, when i boot into windows, it _also_ says 2.5 GT/s x4, and it runs extremely well. no issues at all.
>
> so my question is: the fact that the bridge is at 2.5 GT/s x4, and not at its theoretical "full link speed" of 8 GT/s x4, do you suppose that _could_ be an issue?
> i do not think so, because, like i said, in windows it also reports that link speed.
> i would assume that you would want the fastest link speed possible, because i would assume that of _all_ tb3 pci express devices, a GPU would be the #1 most demanding on the link
>
> just curious if you think 2.5 GT/s could be the bottleneck
>
> i will pass through the device into a ubuntu vm and let you know how it goes. thanks
>
>
>
> On Tue, May 19, 2020 at 9:29 PM Alex Deucher <alexdeucher@gmail.com> wrote:
>>
>> On Tue, May 19, 2020 at 9:16 PM Javad Karabi <karabijavad@gmail.com> wrote:
>> >
>> > thanks for the answers alex.
>> >
>> > so, i went ahead and got a displayport cable to see if that changes
>> > anything. and now, when i run monitor only, and the monitor connected
>> > to the card, it has no issues like before! so i am thinking that
>> > somethings up with either the hdmi cable, or some hdmi related setting
>> > in my system? who knows, but im just gonna roll with only using
>> > displayport cables now.
>> > the previous hdmi cable was actually pretty long, because i was
>> > extending it with an hdmi extension cable, so maybe the signal was
>> > really bad or something :/
>> >
>> > but yea, i guess the only real issue now is maybe something simple
>> > related to some sysfs entry about enabling some powermode, voltage,
>> > clock frequency, or something, so that glxgears will give me more than
>> > 300 fps. but atleast now i can use a single monitor configuration with
>> > the monitor displayported up to the card.
>> >
>>
>> The GPU dynamically adjusts the clocks and voltages based on load.  No
>> manual configuration is required.
>>
>> At this point, we probably need to see you xorg log and dmesg output
>> to try and figure out exactly what is going on.  I still suspect there
>> is some interaction going on with both GPUs and the integrated GPU
>> being the primary, so as I mentioned before, you should try and run X
>> on just the amdgpu rather than trying to use both of them.
>>
>> Alex
>>
>>
>> > also, one other thing i think you might be interested in, that was
>> > happening before.
>> >
>> > so, previously, with laptop -tb3-> egpu-hdmi> monitor, there was a
>> > funny thing happening which i never could figure out.
>> > when i would look at the X logs, i would see that "modesetting" (for
>> > the intel integrated graphics) was reporting that MonitorA was used
>> > with "eDP-1",  which is correct and what i expected.
>> > when i scrolled further down, i then saw that "HDMI-A-1-2" was being
>> > used for another MonitorB, which also is what i expected (albeit i
>> > have no idea why its saying A-1-2)
>> > but amdgpu was _also_ saying that DisplayPort-1-2 (a port on the
>> > radeon card) was being used for MonitorA, which is the same Monitor
>> > that the modesetting driver had claimed to be using with eDP-1!
>> >
>> > so the point is that amdgpu was "using" Monitor0 with DisplayPort-1-2,
>> > although that is what modesetting was using for eDP-1.
>> >
>> > anyway, thats a little aside, i doubt it was related to the terrible
>> > hdmi experience i was getting, since its about display port and stuff,
>> > but i thought id let you know about that.
>> >
>> > if you think that is a possible issue, im more than happy to plug the
>> > hdmi setup back in and create an issue on gitlab with the logs and
>> > everything
>> >
>> > On Tue, May 19, 2020 at 4:42 PM Alex Deucher <alexdeucher@gmail.com> wrote:
>> > >
>> > > On Tue, May 19, 2020 at 5:22 PM Javad Karabi <karabijavad@gmail.com> wrote:
>> > > >
>> > > > lol youre quick!
>> > > >
>> > > > "Windows has supported peer to peer DMA for years so it already has a
>> > > > numbers of optimizations that are only now becoming possible on Linux"
>> > > >
>> > > > whoa, i figured linux would be ahead of windows when it comes to
>> > > > things like that. but peer-to-peer dma is something that is only
>> > > > recently possible on linux, but has been possible on windows? what
>> > > > changed recently that allows for peer to peer dma in linux?
>> > > >
>> > >
>> > > A few things that made this more complicated on Linux:
>> > > 1. Linux uses IOMMUs more extensively than windows so you can't just
>> > > pass around physical bus addresses.
>> > > 2. Linux supports lots of strange architectures that have a lot of
>> > > limitations with respect to peer to peer transactions
>> > >
>> > > It just took years to get all the necessary bits in place in Linux and
>> > > make everyone happy.
>> > >
>> > > > also, in the context of a game running opengl on some gpu, is the
>> > > > "peer-to-peer" dma transfer something like: the game draw's to some
>> > > > memory it has allocated, then a DMA transfer gets that and moves it
>> > > > into the graphics card output?
>> > >
>> > > Peer to peer DMA just lets devices access another devices local memory
>> > > directly.  So if you have a buffer in vram on one device, you can
>> > > share that directly with another device rather than having to copy it
>> > > to system memory first.  For example, if you have two GPUs, you can
>> > > have one of them copy it's content directly to a buffer in the other
>> > > GPU's vram rather than having to go through system memory first.
>> > >
>> > > >
>> > > > also, i know it can be super annoying trying to debug an issue like
>> > > > this, with someone like me who has all types of differences from a
>> > > > normal setup (e.g. using it via egpu, using a kernel with custom
>> > > > configs and stuff) so as a token of my appreciation i donated 50$ to
>> > > > the red cross' corona virus outbreak charity thing, on behalf of
>> > > > amd-gfx.
>> > >
>> > > Thanks,
>> > >
>> > > Alex
>> > >
>> > > >
>> > > > On Tue, May 19, 2020 at 4:13 PM Alex Deucher <alexdeucher@gmail.com> wrote:
>> > > > >
>> > > > > On Tue, May 19, 2020 at 3:44 PM Javad Karabi <karabijavad@gmail.com> wrote:
>> > > > > >
>> > > > > > just a couple more questions:
>> > > > > >
>> > > > > > - based on what you are aware of, the technical details such as
>> > > > > > "shared buffers go through system memory", and all that, do you see
>> > > > > > any issues that might exist that i might be missing in my setup? i
>> > > > > > cant imagine this being the case because the card works great in
>> > > > > > windows, unless the windows driver does something different?
>> > > > > >
>> > > > >
>> > > > > Windows has supported peer to peer DMA for years so it already has a
>> > > > > numbers of optimizations that are only now becoming possible on Linux.
>> > > > >
>> > > > > > - as far as kernel config, is there anything in particular which
>> > > > > > _should_ or _should not_ be enabled/disabled?
>> > > > >
>> > > > > You'll need the GPU drivers for your devices and dma-buf support.
>> > > > >
>> > > > > >
>> > > > > > - does the vendor matter? for instance, this is an xfx card. when it
>> > > > > > comes to different vendors, are there interface changes that might
>> > > > > > make one vendor work better for linux than another? i dont really
>> > > > > > understand the differences in vendors, but i imagine that the vbios
>> > > > > > differs between vendors, and as such, the linux compatibility would
>> > > > > > maybe change?
>> > > > >
>> > > > > board vendor shouldn't matter.
>> > > > >
>> > > > > >
>> > > > > > - is the pcie bandwidth possible an issue? the pcie_bw file changes
>> > > > > > between values like this:
>> > > > > > 18446683600662707640 18446744071581623085 128
>> > > > > > and sometimes i see this:
>> > > > > > 4096 0 128
>> > > > > > as you can see, the second value seems significantly lower. is that
>> > > > > > possibly an issue? possibly due to aspm?
>> > > > >
>> > > > > pcie_bw is not implemented for navi yet so you are just seeing
>> > > > > uninitialized data.  This patch set should clear that up.
>> > > > > https://patchwork.freedesktop.org/patch/366262/
>> > > > >
>> > > > > Alex
>> > > > >
>> > > > > >
>> > > > > > On Tue, May 19, 2020 at 2:20 PM Javad Karabi <karabijavad@gmail.com> wrote:
>> > > > > > >
>> > > > > > > im using Driver "amdgpu" in my xorg conf
>> > > > > > >
>> > > > > > > how does one verify which gpu is the primary? im assuming my intel
>> > > > > > > card is the primary, since i have not done anything to change that.
>> > > > > > >
>> > > > > > > also, if all shared buffers have to go through system memory, then
>> > > > > > > that means an eGPU amdgpu wont work very well in general right?
>> > > > > > > because going through system memory for the egpu means going over the
>> > > > > > > thunderbolt connection
>> > > > > > >
>> > > > > > > and what are the shared buffers youre referring to? for example, if an
>> > > > > > > application is drawing to a buffer, is that an example of a shared
>> > > > > > > buffer that has to go through system memory? if so, thats fine, right?
>> > > > > > > because the application's memory is in system memory, so that copy
>> > > > > > > wouldnt be an issue.
>> > > > > > >
>> > > > > > > in general, do you think the "copy buffer across system memory might
>> > > > > > > be a hindrance for thunderbolt? im trying to figure out which
>> > > > > > > directions to go to debug and im totally lost, so maybe i can do some
>> > > > > > > testing that direction?
>> > > > > > >
>> > > > > > > and for what its worth, when i turn the display "off" via the gnome
>> > > > > > > display settings, its the same issue as when the laptop lid is closed,
>> > > > > > > so unless the motherboard reads the "closed lid" the same as "display
>> > > > > > > off", then im not sure if its thermal issues.
>> > > > > > >
>> > > > > > > On Tue, May 19, 2020 at 2:14 PM Alex Deucher <alexdeucher@gmail.com> wrote:
>> > > > > > > >
>> > > > > > > > On Tue, May 19, 2020 at 2:59 PM Javad Karabi <karabijavad@gmail.com> wrote:
>> > > > > > > > >
>> > > > > > > > > given this setup:
>> > > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
>> > > > > > > > > DRI_PRIME=1 glxgears gears gives me ~300fps
>> > > > > > > > >
>> > > > > > > > > given this setup:
>> > > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
>> > > > > > > > > laptop -hdmi-> monitor
>> > > > > > > > >
>> > > > > > > > > glx gears gives me ~1800fps
>> > > > > > > > >
>> > > > > > > > > this doesnt make sense to me because i thought that having the monitor
>> > > > > > > > > plugged directly into the card should give best performance.
>> > > > > > > > >
>> > > > > > > >
>> > > > > > > > Do you have displays connected to both GPUs?  If you are using X which
>> > > > > > > > ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
>> > > > > > > > IIRC, xf86-video-amdgpu has some optimizations for prime which are not
>> > > > > > > > yet in xf86-video-modesetting.  Which GPU is set up as the primary?
>> > > > > > > > Note that the GPU which does the rendering is not necessarily the one
>> > > > > > > > that the displays are attached to.  The render GPU renders to it's
>> > > > > > > > render buffer and then that data may end up being copied other GPUs
>> > > > > > > > for display.  Also, at this point, all shared buffers have to go
>> > > > > > > > through system memory (this will be changing eventually now that we
>> > > > > > > > support device memory via dma-buf), so there is often an extra copy
>> > > > > > > > involved.
>> > > > > > > >
>> > > > > > > > > theres another really weird issue...
>> > > > > > > > >
>> > > > > > > > > given setup 1, where the monitor is plugged in to the card:
>> > > > > > > > > when i close the laptop lid, my monitor is "active" and whatnot, and i
>> > > > > > > > > can "use it" in a sense
>> > > > > > > > >
>> > > > > > > > > however, heres the weirdness:
>> > > > > > > > > the mouse cursor will move along the monitor perfectly smooth and
>> > > > > > > > > fine, but all the other updates to the screen are delayed by about 2
>> > > > > > > > > or 3 seconds.
>> > > > > > > > > that is to say, its as if the laptop is doing everything (e.g. if i
>> > > > > > > > > open a terminal, the terminal will open, but it will take 2 seconds
>> > > > > > > > > for me to see it)
>> > > > > > > > >
>> > > > > > > > > its almost as if all the frames and everything are being drawn, and
>> > > > > > > > > the laptop is running fine and everything, but i simply just dont get
>> > > > > > > > > to see it on the monitor, except for one time every 2 seconds.
>> > > > > > > > >
>> > > > > > > > > its hard to articulate, because its so bizarre. its not like, a "low
>> > > > > > > > > fps" per se, because the cursor is totally smooth. but its that
>> > > > > > > > > _everything else_ is only updated once every couple seconds.
>> > > > > > > >
>> > > > > > > > This might also be related to which GPU is the primary.  It still may
>> > > > > > > > be the integrated GPU since that is what is attached to the laptop
>> > > > > > > > panel.  Also the platform and some drivers may do certain things when
>> > > > > > > > the lid is closed.  E.g., for thermal reasons, the integrated GPU or
>> > > > > > > > CPU may have a more limited TDP because the laptop cannot cool as
>> > > > > > > > efficiently.
>> > > > > > > >
>> > > > > > > > Alex

* Re: slow rx 5600 xt fps
  2020-05-21  3:11                   ` Alex Deucher
@ 2020-05-21 19:03                     ` Javad Karabi
  2020-05-21 19:15                       ` Alex Deucher
  0 siblings, 1 reply; 24+ messages in thread
From: Javad Karabi @ 2020-05-21 19:03 UTC (permalink / raw)
  To: Alex Deucher; +Cc: amd-gfx list

Alex,
yea, you're totally right, i was overcomplicating it lol
so i was able to get the radeon to run super fast, by doing as you
suggested and blacklisting i915.
(had to use module_blacklist= though, because modprobe.blacklist still
allows i915 to load if a dependency wants it)
but with one caveat:
using the amdgpu driver, there was some error telling me that i need to
add a BusID to my device section or something.
maybe amdgpu wasn't able to find the card, i don't remember. so i used
modesetting instead and it seemed to work.
i will try going back to amdgpu and see what that error message was.
i recall you saying that modesetting doesn't have some features that
amdgpu provides.
what are some examples of that?
is the direction that graphics drivers are going to be simply used as
"modesetting" via xorg?
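
For what it's worth, that BusID complaint usually means the amdgpu ddx could not pick the card on its own, and an explicit Device section in xorg.conf fixes it. A minimal sketch (the bus address below is made up; take yours from `lspci | grep -i vga`, and note that xorg wants the numbers in decimal, so lspci's hex `0a:00.0` becomes `PCI:10:0:0`):

```
Section "Device"
    # hypothetical bus id -- derive the real one from your lspci output
    Identifier "amdgpu0"
    Driver     "amdgpu"
    BusID      "PCI:10:0:0"
EndSection
```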

On Wed, May 20, 2020 at 10:12 PM Alex Deucher <alexdeucher@gmail.com> wrote:
>
> I think you are overcomplicating things.  Just try and get X running
> on just the AMD GPU on bare metal.  Introducing virtualization is just
> adding more uncertainty.  If you can't configure X to not use the
> integrated GPU, just blacklist the i915 driver (append
> modprobe.blacklist=i915 to the kernel command line in grub) and X
> should come up on the dGPU.
>
> Alex
>
> On Wed, May 20, 2020 at 6:05 PM Javad Karabi <karabijavad@gmail.com> wrote:
> >
> > Thanks Alex,
> > Here's my plan:
> >
> > since my laptop's os is pretty customized, e.g. compiling my own kernel, building latest xorg, latest xorg-driver-amdgpu, etc etc,
> > im going to use the intel iommu and pass through my rx 5600 into a virtual machine, which will be a 100% stock ubuntu installation.
> > then, inside that vm, i will continue to debug
> >
> > does that sound like it would make sense for testing? for example, with that scenario, it adds the iommu into the mix, so who knows if that causes performance issues. but i think its worth a shot, to see if a stock kernel will handle it better
> >
> > also, quick question:
> > from what i understand, a thunderbolt 3 pci express connection should handle 8 GT/s x4, however, along the chain of bridges to my device, i notice that the bridge closest to the graphics card is at 2.5 GT/s x4, and it also says "downgraded" (this is via the lspci output)
> >
> > now, when i boot into windows, it _also_ says 2.5 GT/s x4, and it runs extremely well. no issues at all.
> >
> > so my question is: the fact that the bridge is at 2.5 GT/s x4, and not at its theoretical "full link speed" of 8 GT/s x4, do you suppose that _could_ be an issue?
> > i do not think so, because, like i said, in windows it also reports that link speed.
> > i would assume that you would want the fastest link speed possible, because i would assume that of _all_ tb3 pci express devices, a GPU would be the #1 most demanding on the link
> >
> > just curious if you think 2.5 GT/s could be the bottleneck
> >
> > i will pass through the device into a ubuntu vm and let you know how it goes. thanks
> >
> >
> >
> > On Tue, May 19, 2020 at 9:29 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> >>
> >> On Tue, May 19, 2020 at 9:16 PM Javad Karabi <karabijavad@gmail.com> wrote:
> >> >
> >> > thanks for the answers alex.
> >> >
> >> > so, i went ahead and got a displayport cable to see if that changes
> >> > anything. and now, when i run monitor only, and the monitor connected
> >> > to the card, it has no issues like before! so i am thinking that
> >> > somethings up with either the hdmi cable, or some hdmi related setting
> >> > in my system? who knows, but im just gonna roll with only using
> >> > displayport cables now.
> >> > the previous hdmi cable was actually pretty long, because i was
> >> > extending it with an hdmi extension cable, so maybe the signal was
> >> > really bad or something :/
> >> >
> >> > but yea, i guess the only real issue now is maybe something simple
> >> > related to some sysfs entry about enabling some powermode, voltage,
> >> > clock frequency, or something, so that glxgears will give me more than
> >> > 300 fps. but atleast now i can use a single monitor configuration with
> >> > the monitor displayported up to the card.
> >> >
> >>
> >> The GPU dynamically adjusts the clocks and voltages based on load.  No
> >> manual configuration is required.
> >>
> >> At this point, we probably need to see you xorg log and dmesg output
> >> to try and figure out exactly what is going on.  I still suspect there
> >> is some interaction going on with both GPUs and the integrated GPU
> >> being the primary, so as I mentioned before, you should try and run X
> >> on just the amdgpu rather than trying to use both of them.
> >>
> >> Alex
> >>
> >>
> >> > also, one other thing i think you might be interested in, that was
> >> > happening before.
> >> >
> >> > so, previously, with laptop -tb3-> egpu-hdmi> monitor, there was a
> >> > funny thing happening which i never could figure out.
> >> > when i would look at the X logs, i would see that "modesetting" (for
> >> > the intel integrated graphics) was reporting that MonitorA was used
> >> > with "eDP-1",  which is correct and what i expected.
> >> > when i scrolled further down, i then saw that "HDMI-A-1-2" was being
> >> > used for another MonitorB, which also is what i expected (albeit i
> >> > have no idea why its saying A-1-2)
> >> > but amdgpu was _also_ saying that DisplayPort-1-2 (a port on the
> >> > radeon card) was being used for MonitorA, which is the same Monitor
> >> > that the modesetting driver had claimed to be using with eDP-1!
> >> >
> >> > so the point is that amdgpu was "using" Monitor0 with DisplayPort-1-2,
> >> > although that is what modesetting was using for eDP-1.
> >> >
> >> > anyway, thats a little aside, i doubt it was related to the terrible
> >> > hdmi experience i was getting, since its about display port and stuff,
> >> > but i thought id let you know about that.
> >> >
> >> > if you think that is a possible issue, im more than happy to plug the
> >> > hdmi setup back in and create an issue on gitlab with the logs and
> >> > everything
> >> >
> >> > On Tue, May 19, 2020 at 4:42 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> >> > >
> >> > > On Tue, May 19, 2020 at 5:22 PM Javad Karabi <karabijavad@gmail.com> wrote:
> >> > > >
> >> > > > lol youre quick!
> >> > > >
> >> > > > "Windows has supported peer to peer DMA for years so it already has a
> >> > > > number of optimizations that are only now becoming possible on Linux"
> >> > > >
> >> > > > whoa, i figured linux would be ahead of windows when it comes to
> >> > > > things like that. but peer-to-peer dma is something that is only
> >> > > > recently possible on linux, but has been possible on windows? what
> >> > > > changed recently that allows for peer to peer dma in linux?
> >> > > >
> >> > >
> >> > > A few things that made this more complicated on Linux:
> >> > > 1. Linux uses IOMMUs more extensively than windows so you can't just
> >> > > pass around physical bus addresses.
> >> > > 2. Linux supports lots of strange architectures that have a lot of
> >> > > limitations with respect to peer to peer transactions
> >> > >
> >> > > It just took years to get all the necessary bits in place in Linux and
> >> > > make everyone happy.
> >> > >
> >> > > > also, in the context of a game running opengl on some gpu, is the
> >> > > > "peer-to-peer" dma transfer something like: the game draw's to some
> >> > > > memory it has allocated, then a DMA transfer gets that and moves it
> >> > > > into the graphics card output?
> >> > >
> >> > > Peer to peer DMA just lets devices access another device's local memory
> >> > > directly.  So if you have a buffer in vram on one device, you can
> >> > > share that directly with another device rather than having to copy it
> >> > > to system memory first.  For example, if you have two GPUs, you can
> >> > > have one of them copy its contents directly to a buffer in the other
> >> > > GPU's vram rather than having to go through system memory first.
> >> > >
> >> > > >
> >> > > > also, i know it can be super annoying trying to debug an issue like
> >> > > > this, with someone like me who has all types of differences from a
> >> > > > normal setup (e.g. using it via egpu, using a kernel with custom
> >> > > > configs and stuff) so as a token of my appreciation i donated 50$ to
> >> > > > the red cross' corona virus outbreak charity thing, on behalf of
> >> > > > amd-gfx.
> >> > >
> >> > > Thanks,
> >> > >
> >> > > Alex
> >> > >
> >> > > >
> >> > > > On Tue, May 19, 2020 at 4:13 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> >> > > > >
> >> > > > > On Tue, May 19, 2020 at 3:44 PM Javad Karabi <karabijavad@gmail.com> wrote:
> >> > > > > >
> >> > > > > > just a couple more questions:
> >> > > > > >
> >> > > > > > - based on what you are aware of, the technical details such as
> >> > > > > > "shared buffers go through system memory", and all that, do you see
> >> > > > > > any issues that might exist that i might be missing in my setup? i
> >> > > > > > cant imagine this being the case because the card works great in
> >> > > > > > windows, unless the windows driver does something different?
> >> > > > > >
> >> > > > >
> >> > > > > Windows has supported peer to peer DMA for years so it already has a
> >> > > > > number of optimizations that are only now becoming possible on Linux.
> >> > > > >
> >> > > > > > - as far as kernel config, is there anything in particular which
> >> > > > > > _should_ or _should not_ be enabled/disabled?
> >> > > > >
> >> > > > > You'll need the GPU drivers for your devices and dma-buf support.
> >> > > > >
> >> > > > > >
> >> > > > > > - does the vendor matter? for instance, this is an xfx card. when it
> >> > > > > > comes to different vendors, are there interface changes that might
> >> > > > > > make one vendor work better for linux than another? i dont really
> >> > > > > > understand the differences in vendors, but i imagine that the vbios
> >> > > > > > differs between vendors, and as such, the linux compatibility would
> >> > > > > > maybe change?
> >> > > > >
> >> > > > > board vendor shouldn't matter.
> >> > > > >
> >> > > > > >
> >> > > > > > - is the pcie bandwidth possibly an issue? the pcie_bw file changes
> >> > > > > > between values like this:
> >> > > > > > 18446683600662707640 18446744071581623085 128
> >> > > > > > and sometimes i see this:
> >> > > > > > 4096 0 128
> >> > > > > > as you can see, the second value seems significantly lower. is that
> >> > > > > > possibly an issue? possibly due to aspm?
> >> > > > >
> >> > > > > pcie_bw is not implemented for navi yet so you are just seeing
> >> > > > > uninitialized data.  This patch set should clear that up.
> >> > > > > https://patchwork.freedesktop.org/patch/366262/
> >> > > > >
> >> > > > > Alex
> >> > > > >
> >> > > > > >
> >> > > > > > On Tue, May 19, 2020 at 2:20 PM Javad Karabi <karabijavad@gmail.com> wrote:
> >> > > > > > >
> >> > > > > > > im using Driver "amdgpu" in my xorg conf
> >> > > > > > >
> >> > > > > > > how does one verify which gpu is the primary? im assuming my intel
> >> > > > > > > card is the primary, since i have not done anything to change that.
> >> > > > > > >
> >> > > > > > > also, if all shared buffers have to go through system memory, then
> >> > > > > > > that means an eGPU amdgpu wont work very well in general right?
> >> > > > > > > because going through system memory for the egpu means going over the
> >> > > > > > > thunderbolt connection
> >> > > > > > >
> >> > > > > > > and what are the shared buffers youre referring to? for example, if an
> >> > > > > > > application is drawing to a buffer, is that an example of a shared
> >> > > > > > > buffer that has to go through system memory? if so, thats fine, right?
> >> > > > > > > because the application's memory is in system memory, so that copy
> >> > > > > > > wouldnt be an issue.
> >> > > > > > >
> >> > > > > > > in general, do you think the "copy buffer across system memory might
> >> > > > > > > be a hindrance for thunderbolt? im trying to figure out which
> >> > > > > > > directions to go to debug and im totally lost, so maybe i can do some
> >> > > > > > > testing that direction?
> >> > > > > > >
> >> > > > > > > and for what its worth, when i turn the display "off" via the gnome
> >> > > > > > > display settings, its the same issue as when the laptop lid is closed,
> >> > > > > > > so unless the motherboard reads the "closed lid" the same as "display
> >> > > > > > > off", then im not sure if its thermal issues.
> >> > > > > > >
> >> > > > > > > On Tue, May 19, 2020 at 2:14 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> >> > > > > > > >
> >> > > > > > > > On Tue, May 19, 2020 at 2:59 PM Javad Karabi <karabijavad@gmail.com> wrote:
> >> > > > > > > > >
> >> > > > > > > > > given this setup:
> >> > > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
> >> > > > > > > > > DRI_PRIME=1 glxgears gives me ~300fps
> >> > > > > > > > >
> >> > > > > > > > > given this setup:
> >> > > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> >> > > > > > > > > laptop -hdmi-> monitor
> >> > > > > > > > >
> >> > > > > > > > > glxgears gives me ~1800fps
> >> > > > > > > > >
> >> > > > > > > > > this doesnt make sense to me because i thought that having the monitor
> >> > > > > > > > > plugged directly into the card should give best performance.
> >> > > > > > > > >
> >> > > > > > > >
> >> > > > > > > > Do you have displays connected to both GPUs?  If you are using X which
> >> > > > > > > > ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
> >> > > > > > > > IIRC, xf86-video-amdgpu has some optimizations for prime which are not
> >> > > > > > > > yet in xf86-video-modesetting.  Which GPU is set up as the primary?
> >> > > > > > > > Note that the GPU which does the rendering is not necessarily the one
> >> > > > > > > > that the displays are attached to.  The render GPU renders to its
> >> > > > > > > > render buffer and then that data may end up being copied to other GPUs
> >> > > > > > > > for display.  Also, at this point, all shared buffers have to go
> >> > > > > > > > through system memory (this will be changing eventually now that we
> >> > > > > > > > support device memory via dma-buf), so there is often an extra copy
> >> > > > > > > > involved.
> >> > > > > > > >
> >> > > > > > > > > theres another really weird issue...
> >> > > > > > > > >
> >> > > > > > > > > given setup 1, where the monitor is plugged in to the card:
> >> > > > > > > > > when i close the laptop lid, my monitor is "active" and whatnot, and i
> >> > > > > > > > > can "use it" in a sense
> >> > > > > > > > >
> >> > > > > > > > > however, heres the weirdness:
> >> > > > > > > > > the mouse cursor will move along the monitor perfectly smooth and
> >> > > > > > > > > fine, but all the other updates to the screen are delayed by about 2
> >> > > > > > > > > or 3 seconds.
> >> > > > > > > > > that is to say, its as if the laptop is doing everything (e.g. if i
> >> > > > > > > > > open a terminal, the terminal will open, but it will take 2 seconds
> >> > > > > > > > > for me to see it)
> >> > > > > > > > >
> >> > > > > > > > > its almost as if all the frames and everything are being drawn, and
> >> > > > > > > > > the laptop is running fine and everything, but i simply just dont get
> >> > > > > > > > > to see it on the monitor, except for one time every 2 seconds.
> >> > > > > > > > >
> >> > > > > > > > > its hard to articulate, because its so bizarre. its not like, a "low
> >> > > > > > > > > fps" per se, because the cursor is totally smooth. but its that
> >> > > > > > > > > _everything else_ is only updated once every couple seconds.
> >> > > > > > > >
> >> > > > > > > > This might also be related to which GPU is the primary.  It still may
> >> > > > > > > > be the integrated GPU since that is what is attached to the laptop
> >> > > > > > > > panel.  Also the platform and some drivers may do certain things when
> >> > > > > > > > the lid is closed.  E.g., for thermal reasons, the integrated GPU or
> >> > > > > > > > CPU may have a more limited TDP because the laptop cannot cool as
> >> > > > > > > > efficiently.
> >> > > > > > > >
> >> > > > > > > > Alex
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 24+ messages in thread
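[Editor's note: the pcie_bw values quoted in the thread above can be sanity-checked mechanically. The sketch below is an editorial illustration, not from the thread; it assumes the three-field layout the amdgpu sysfs interface documents (units received, units sent over the sampling window, max payload size in bytes). As Alex explains, on Navi without the linked patch set the first two fields are uninitialized, which is exactly what values near 2^64 look like.]

```python
def parse_pcie_bw(text):
    """Parse one line of amdgpu's pcie_bw sysfs file.

    Assumed layout (three whitespace-separated integers): units
    received, units sent over the sampling window, and the max
    payload size in bytes used to estimate bandwidth.
    """
    received, sent, mps = (int(field) for field in text.split())
    # On kernels without the Navi pcie_bw implementation the first two
    # fields are uninitialized; sign-extended garbage shows up as values
    # near 2**64, far beyond any plausible per-second count.
    plausible = received < 2**48 and sent < 2**48
    return {"received": received, "sent": sent, "mps": mps,
            "plausible": plausible}
```

With the two samples from the thread, `"4096 0 128"` parses as plausible while `"18446683600662707640 18446744071581623085 128"` does not.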

* Re: slow rx 5600 xt fps
  2020-05-21 19:03                     ` Javad Karabi
@ 2020-05-21 19:15                       ` Alex Deucher
  2020-05-21 21:21                         ` Javad Karabi
  0 siblings, 1 reply; 24+ messages in thread
From: Alex Deucher @ 2020-05-21 19:15 UTC (permalink / raw)
  To: Javad Karabi; +Cc: amd-gfx list

Please provide your dmesg output and xorg log.

Alex

On Thu, May 21, 2020 at 3:03 PM Javad Karabi <karabijavad@gmail.com> wrote:
>
> Alex,
> yea, youre totally right i was overcomplicating it lol
> so i was able to get the radeon to run super fast, by doing as you
> suggested and blacklisting i915.
> (had to use module_blacklist= though because modprobe.blacklist still
> allows i915, if a dependency wants to load it)
> but with one caveat:
> using the amdgpu driver, there was some error saying something about
> telling me that i need to add BusID to my device or something.
> maybe amdgpu wasnt able to find the card or something, i dont
> remember. so i used modesetting instead and it seemed to work.
> i will try going back to amdgpu and seeing what that error message was.
> i recall you saying that modesetting doesnt have some features that
> amdgpu provides.
> what are some examples of that?
> is the direction that graphics drivers are going, to be simply used as
> "modesetting" via xorg?
>
> On Wed, May 20, 2020 at 10:12 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> >
> > I think you are overcomplicating things.  Just try and get X running
> > on just the AMD GPU on bare metal.  Introducing virtualization is just
> > adding more uncertainty.  If you can't configure X to not use the
> > integrated GPU, just blacklist the i915 driver (append
> > modprobe.blacklist=i915 to the kernel command line in grub) and X
> > should come up on the dGPU.
> >
> > Alex
> >
> > On Wed, May 20, 2020 at 6:05 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > >
> > > Thanks Alex,
> > > Here's my plan:
> > >
> > > since my laptop's os is pretty customized, e.g. compiling my own kernel, building latest xorg, latest xorg-driver-amdgpu, etc etc,
> > > im going to use the intel iommu and pass through my rx 5600 into a virtual machine, which will be a 100% stock ubuntu installation.
> > > then, inside that vm, i will continue to debug
> > >
> > > does that sound like it would make sense for testing? for example, with that scenario, it adds the iommu into the mix, so who knows if that causes performance issues. but i think its worth a shot, to see if a stock kernel will handle it better
> > >
> > > also, quick question:
> > > from what i understand, a thunderbolt 3 pci express connection should handle 8 GT/s x4, however, along the chain of bridges to my device, i notice that the bridge closest to the graphics card is at 2.5 GT/s x4, and it also says "downgraded" (this is via the lspci output)
> > >
> > > now, when i boot into windows, it _also_ says 2.5 GT/s x4, and it runs extremely well. no issues at all.
> > >
> > > so my question is: the fact that the bridge is at 2.5 GT/s x4, and not at its theoretical "full link speed" of 8 GT/s x4, do you suppose that _could_ be an issue?
> > > i do not think so, because, like i said, in windows it also reports that link speed.
> > > i would assume that you would want the fastest link speed possible, because i would assume that of _all_ tb3 pci express devices, a GPU would be the #1 most demanding on the link
> > >
> > > just curious if you think 2.5 GT/s could be the bottleneck
> > >
> > > i will pass through the device into a ubuntu vm and let you know how it goes. thanks
> > >
> > >
> > >
> > > On Tue, May 19, 2020 at 9:29 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > >>
> > >> On Tue, May 19, 2020 at 9:16 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > >> >
> > >> > thanks for the answers alex.
> > >> >
> > >> > so, i went ahead and got a displayport cable to see if that changes
> > >> > anything. and now, when i run monitor only, and the monitor connected
> > >> > to the card, it has no issues like before! so i am thinking that
> > >> > somethings up with either the hdmi cable, or some hdmi related setting
> > >> > in my system? who knows, but im just gonna roll with only using
> > >> > displayport cables now.
> > >> > the previous hdmi cable was actually pretty long, because i was
> > >> > extending it with an hdmi extension cable, so maybe the signal was
> > >> > really bad or something :/
> > >> >
> > >> > but yea, i guess the only real issue now is maybe something simple
> > >> > related to some sysfs entry about enabling some powermode, voltage,
> > >> > clock frequency, or something, so that glxgears will give me more than
> > >> > 300 fps. but at least now i can use a single monitor configuration with
> > >> > the monitor displayported up to the card.
> > >> >
> > >>
> > >> The GPU dynamically adjusts the clocks and voltages based on load.  No
> > >> manual configuration is required.
> > >>
> > >> At this point, we probably need to see your xorg log and dmesg output
> > >> to try and figure out exactly what is going on.  I still suspect there
> > >> is some interaction going on with both GPUs and the integrated GPU
> > >> being the primary, so as I mentioned before, you should try and run X
> > >> on just the amdgpu rather than trying to use both of them.
> > >>
> > >> Alex
> > >>
> > >>
> > >> > also, one other thing i think you might be interested in, that was
> > >> > happening before.
> > >> >
> > >> > so, previously, with laptop -tb3-> egpu-hdmi> monitor, there was a
> > >> > funny thing happening which i never could figure out.
> > >> > when i would look at the X logs, i would see that "modesetting" (for
> > >> > the intel integrated graphics) was reporting that MonitorA was used
> > >> > with "eDP-1",  which is correct and what i expected.
> > >> > when i scrolled further down, i then saw that "HDMI-A-1-2" was being
> > >> > used for another MonitorB, which also is what i expected (albeit i
> > >> > have no idea why its saying A-1-2)
> > >> > but amdgpu was _also_ saying that DisplayPort-1-2 (a port on the
> > >> > radeon card) was being used for MonitorA, which is the same Monitor
> > >> > that the modesetting driver had claimed to be using with eDP-1!
> > >> >
> > >> > so the point is that amdgpu was "using" Monitor0 with DisplayPort-1-2,
> > >> > although that is what modesetting was using for eDP-1.
> > >> >
> > >> > anyway, thats a little aside, i doubt it was related to the terrible
> > >> > hdmi experience i was getting, since its about display port and stuff,
> > >> > but i thought id let you know about that.
> > >> >
> > >> > if you think that is a possible issue, im more than happy to plug the
> > >> > hdmi setup back in and create an issue on gitlab with the logs and
> > >> > everything
> > >> >
> > >> > On Tue, May 19, 2020 at 4:42 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > >> > >
> > >> > > On Tue, May 19, 2020 at 5:22 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > >> > > >
> > >> > > > lol youre quick!
> > >> > > >
> > >> > > > "Windows has supported peer to peer DMA for years so it already has a
> > >> > > > number of optimizations that are only now becoming possible on Linux"
> > >> > > >
> > >> > > > whoa, i figured linux would be ahead of windows when it comes to
> > >> > > > things like that. but peer-to-peer dma is something that is only
> > >> > > > recently possible on linux, but has been possible on windows? what
> > >> > > > changed recently that allows for peer to peer dma in linux?
> > >> > > >
> > >> > >
> > >> > > A few things that made this more complicated on Linux:
> > >> > > 1. Linux uses IOMMUs more extensively than windows so you can't just
> > >> > > pass around physical bus addresses.
> > >> > > 2. Linux supports lots of strange architectures that have a lot of
> > >> > > limitations with respect to peer to peer transactions
> > >> > >
> > >> > > It just took years to get all the necessary bits in place in Linux and
> > >> > > make everyone happy.
> > >> > >
> > >> > > > also, in the context of a game running opengl on some gpu, is the
> > >> > > > "peer-to-peer" dma transfer something like: the game draw's to some
> > >> > > > memory it has allocated, then a DMA transfer gets that and moves it
> > >> > > > into the graphics card output?
> > >> > >
> > >> > > Peer to peer DMA just lets devices access another device's local memory
> > >> > > directly.  So if you have a buffer in vram on one device, you can
> > >> > > share that directly with another device rather than having to copy it
> > >> > > to system memory first.  For example, if you have two GPUs, you can
> > >> > > have one of them copy its contents directly to a buffer in the other
> > >> > > GPU's vram rather than having to go through system memory first.
> > >> > >
> > >> > > >
> > >> > > > also, i know it can be super annoying trying to debug an issue like
> > >> > > > this, with someone like me who has all types of differences from a
> > >> > > > normal setup (e.g. using it via egpu, using a kernel with custom
> > >> > > > configs and stuff) so as a token of my appreciation i donated 50$ to
> > >> > > > the red cross' corona virus outbreak charity thing, on behalf of
> > >> > > > amd-gfx.
> > >> > >
> > >> > > Thanks,
> > >> > >
> > >> > > Alex
> > >> > >
> > >> > > >
> > >> > > > On Tue, May 19, 2020 at 4:13 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > >> > > > >
> > >> > > > > On Tue, May 19, 2020 at 3:44 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > >> > > > > >
> > >> > > > > > just a couple more questions:
> > >> > > > > >
> > >> > > > > > - based on what you are aware of, the technical details such as
> > >> > > > > > "shared buffers go through system memory", and all that, do you see
> > >> > > > > > any issues that might exist that i might be missing in my setup? i
> > >> > > > > > cant imagine this being the case because the card works great in
> > >> > > > > > windows, unless the windows driver does something different?
> > >> > > > > >
> > >> > > > >
> > >> > > > > Windows has supported peer to peer DMA for years so it already has a
> > >> > > > > number of optimizations that are only now becoming possible on Linux.
> > >> > > > >
> > >> > > > > > - as far as kernel config, is there anything in particular which
> > >> > > > > > _should_ or _should not_ be enabled/disabled?
> > >> > > > >
> > >> > > > > You'll need the GPU drivers for your devices and dma-buf support.
> > >> > > > >
> > >> > > > > >
> > >> > > > > > - does the vendor matter? for instance, this is an xfx card. when it
> > >> > > > > > comes to different vendors, are there interface changes that might
> > >> > > > > > make one vendor work better for linux than another? i dont really
> > >> > > > > > understand the differences in vendors, but i imagine that the vbios
> > >> > > > > > differs between vendors, and as such, the linux compatibility would
> > >> > > > > > maybe change?
> > >> > > > >
> > >> > > > > board vendor shouldn't matter.
> > >> > > > >
> > >> > > > > >
> > >> > > > > > - is the pcie bandwidth possibly an issue? the pcie_bw file changes
> > >> > > > > > between values like this:
> > >> > > > > > 18446683600662707640 18446744071581623085 128
> > >> > > > > > and sometimes i see this:
> > >> > > > > > 4096 0 128
> > >> > > > > > as you can see, the second value seems significantly lower. is that
> > >> > > > > > possibly an issue? possibly due to aspm?
> > >> > > > >
> > >> > > > > pcie_bw is not implemented for navi yet so you are just seeing
> > >> > > > > uninitialized data.  This patch set should clear that up.
> > >> > > > > https://patchwork.freedesktop.org/patch/366262/
> > >> > > > >
> > >> > > > > Alex
> > >> > > > >
> > >> > > > > >
> > >> > > > > > On Tue, May 19, 2020 at 2:20 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > >> > > > > > >
> > >> > > > > > > im using Driver "amdgpu" in my xorg conf
> > >> > > > > > >
> > >> > > > > > > how does one verify which gpu is the primary? im assuming my intel
> > >> > > > > > > card is the primary, since i have not done anything to change that.
> > >> > > > > > >
> > >> > > > > > > also, if all shared buffers have to go through system memory, then
> > >> > > > > > > that means an eGPU amdgpu wont work very well in general right?
> > >> > > > > > > because going through system memory for the egpu means going over the
> > >> > > > > > > thunderbolt connection
> > >> > > > > > >
> > >> > > > > > > and what are the shared buffers youre referring to? for example, if an
> > >> > > > > > > application is drawing to a buffer, is that an example of a shared
> > >> > > > > > > buffer that has to go through system memory? if so, thats fine, right?
> > >> > > > > > > because the application's memory is in system memory, so that copy
> > >> > > > > > > wouldnt be an issue.
> > >> > > > > > >
> > >> > > > > > > in general, do you think the "copy buffer across system memory might
> > >> > > > > > > be a hindrance for thunderbolt? im trying to figure out which
> > >> > > > > > > directions to go to debug and im totally lost, so maybe i can do some
> > >> > > > > > > testing that direction?
> > >> > > > > > >
> > >> > > > > > > and for what its worth, when i turn the display "off" via the gnome
> > >> > > > > > > display settings, its the same issue as when the laptop lid is closed,
> > >> > > > > > > so unless the motherboard reads the "closed lid" the same as "display
> > >> > > > > > > off", then im not sure if its thermal issues.
> > >> > > > > > >
> > >> > > > > > > On Tue, May 19, 2020 at 2:14 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > >> > > > > > > >
> > >> > > > > > > > On Tue, May 19, 2020 at 2:59 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > >> > > > > > > > >
> > >> > > > > > > > > given this setup:
> > >> > > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
> > >> > > > > > > > > DRI_PRIME=1 glxgears gives me ~300fps
> > >> > > > > > > > >
> > >> > > > > > > > > given this setup:
> > >> > > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> > >> > > > > > > > > laptop -hdmi-> monitor
> > >> > > > > > > > >
> > >> > > > > > > > > glxgears gives me ~1800fps
> > >> > > > > > > > >
> > >> > > > > > > > > this doesnt make sense to me because i thought that having the monitor
> > >> > > > > > > > > plugged directly into the card should give best performance.
> > >> > > > > > > > >
> > >> > > > > > > >
> > >> > > > > > > > Do you have displays connected to both GPUs?  If you are using X which
> > >> > > > > > > > ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
> > >> > > > > > > > IIRC, xf86-video-amdgpu has some optimizations for prime which are not
> > >> > > > > > > > yet in xf86-video-modesetting.  Which GPU is set up as the primary?
> > >> > > > > > > > Note that the GPU which does the rendering is not necessarily the one
> > >> > > > > > > > that the displays are attached to.  The render GPU renders to its
> > >> > > > > > > > render buffer and then that data may end up being copied to other GPUs
> > >> > > > > > > > for display.  Also, at this point, all shared buffers have to go
> > >> > > > > > > > through system memory (this will be changing eventually now that we
> > >> > > > > > > > support device memory via dma-buf), so there is often an extra copy
> > >> > > > > > > > involved.
> > >> > > > > > > >
> > >> > > > > > > > > theres another really weird issue...
> > >> > > > > > > > >
> > >> > > > > > > > > given setup 1, where the monitor is plugged in to the card:
> > >> > > > > > > > > when i close the laptop lid, my monitor is "active" and whatnot, and i
> > >> > > > > > > > > can "use it" in a sense
> > >> > > > > > > > >
> > >> > > > > > > > > however, heres the weirdness:
> > >> > > > > > > > > the mouse cursor will move along the monitor perfectly smooth and
> > >> > > > > > > > > fine, but all the other updates to the screen are delayed by about 2
> > >> > > > > > > > > or 3 seconds.
> > >> > > > > > > > > that is to say, its as if the laptop is doing everything (e.g. if i
> > >> > > > > > > > > open a terminal, the terminal will open, but it will take 2 seconds
> > >> > > > > > > > > for me to see it)
> > >> > > > > > > > >
> > >> > > > > > > > > its almost as if all the frames and everything are being drawn, and
> > >> > > > > > > > > the laptop is running fine and everything, but i simply just dont get
> > >> > > > > > > > > to see it on the monitor, except for one time every 2 seconds.
> > >> > > > > > > > >
> > >> > > > > > > > > its hard to articulate, because its so bizarre. its not like, a "low
> > >> > > > > > > > > fps" per se, because the cursor is totally smooth. but its that
> > >> > > > > > > > > _everything else_ is only updated once every couple seconds.
> > >> > > > > > > >
> > >> > > > > > > > This might also be related to which GPU is the primary.  It still may
> > >> > > > > > > > be the integrated GPU since that is what is attached to the laptop
> > >> > > > > > > > panel.  Also the platform and some drivers may do certain things when
> > >> > > > > > > > the lid is closed.  E.g., for thermal reasons, the integrated GPU or
> > >> > > > > > > > CPU may have a more limited TDP because the laptop cannot cool as
> > >> > > > > > > > efficiently.
> > >> > > > > > > >
> > >> > > > > > > > Alex

^ permalink raw reply	[flat|nested] 24+ messages in thread
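[Editor's note: regarding the "need to add BusID" error Javad mentions when pointing the amdgpu ddx at the card — with more than one GPU present, the ddx can be pinned to a specific device via a BusID in an xorg.conf Device section. This sketch is illustrative only: the bus address below is hypothetical and must be taken from your own `lspci` output, noting that `lspci` prints the bus number in hex while xorg.conf expects decimal (so `0a:00.0` becomes `PCI:10:0:0`).]

```
Section "Device"
    Identifier "amdgpu-egpu"
    Driver     "amdgpu"
    # Hypothetical address -- substitute the value from `lspci | grep VGA`,
    # converted from hex to decimal: 0a:00.0 -> PCI:10:0:0
    BusID      "PCI:10:0:0"
EndSection
```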

* Re: slow rx 5600 xt fps
  2020-05-21 19:15                       ` Alex Deucher
@ 2020-05-21 21:21                         ` Javad Karabi
  2020-05-22 22:48                           ` Javad Karabi
  0 siblings, 1 reply; 24+ messages in thread
From: Javad Karabi @ 2020-05-21 21:21 UTC (permalink / raw)
  To: Alex Deucher; +Cc: amd-gfx list

[-- Attachment #1: Type: text/plain, Size: 18335 bytes --]

the files i attached are using the amdgpu ddx

also, one thing to note: i just switched to modesetting but it seems
it has the same issue.
i got it working last night, forgot what i changed. but that was one
of the things i changed. but here are the files for when i use the amdgpu
ddx

On Thu, May 21, 2020 at 2:15 PM Alex Deucher <alexdeucher@gmail.com> wrote:
>
> Please provide your dmesg output and xorg log.
>
> Alex
>
> On Thu, May 21, 2020 at 3:03 PM Javad Karabi <karabijavad@gmail.com> wrote:
> >
> > Alex,
> > yea, youre totally right i was overcomplicating it lol
> > so i was able to get the radeon to run super fast, by doing as you
> > suggested and blacklisting i915.
> > (had to use module_blacklist= though because modprobe.blacklist still
> > allows i915, if a dependency wants to load it)
> > but with one caveat:
> > using the amdgpu driver, there was some error saying something about
> > telling me that i need to add BusID to my device or something.
> > maybe amdgpu wasnt able to find the card or something, i dont
> > remember. so i used modesetting instead and it seemed to work.
> > i will try going back to amdgpu and seeing what that error message was.
> > i recall you saying that modesetting doesnt have some features that
> > amdgpu provides.
> > what are some examples of that?
> > is the direction that graphics drivers are going, to be simply used as
> > "modesetting" via xorg?
> >
> > On Wed, May 20, 2020 at 10:12 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > >
> > > I think you are overcomplicating things.  Just try and get X running
> > > on just the AMD GPU on bare metal.  Introducing virtualization is just
> > > adding more uncertainty.  If you can't configure X to not use the
> > > integrated GPU, just blacklist the i915 driver (append
> > > modprobe.blacklist=i915 to the kernel command line in grub) and X
> > > should come up on the dGPU.
> > >
> > > Alex
> > >
> > > On Wed, May 20, 2020 at 6:05 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > >
> > > > Thanks Alex,
> > > > Here's my plan:
> > > >
> > > > since my laptop's os is pretty customized, e.g. compiling my own kernel, building latest xorg, latest xorg-driver-amdgpu, etc etc,
> > > > im going to use the intel iommu and pass through my rx 5600 into a virtual machine, which will be a 100% stock ubuntu installation.
> > > > then, inside that vm, i will continue to debug
> > > >
> > > > does that sound like it would make sense for testing? for example, with that scenario, it adds the iommu into the mix, so who knows if that causes performance issues. but i think its worth a shot, to see if a stock kernel will handle it better
> > > >
> > > > also, quick question:
> > > > from what i understand, a thunderbolt 3 pci express connection should handle 8 GT/s x4, however, along the chain of bridges to my device, i notice that the bridge closest to the graphics card is at 2.5 GT/s x4, and it also says "downgraded" (this is via the lspci output)
> > > >
> > > > now, when i boot into windows, it _also_ says 2.5 GT/s x4, and it runs extremely well. no issues at all.
> > > >
> > > > so my question is: the fact that the bridge is at 2.5 GT/s x4, and not at its theoretical "full link speed" of 8 GT/s x4, do you suppose that _could_ be an issue?
> > > > i do not think so, because, like i said, in windows it also reports that link speed.
> > > > i would assume that you would want the fastest link speed possible, because i would assume that of _all_ tb3 pci express devices, a GPU would be the #1 most demanding on the link
> > > >
> > > > just curious if you think 2.5 GT/s could be the bottleneck
> > > >
> > > > i will pass through the device into a ubuntu vm and let you know how it goes. thanks
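For anyone checking the same thing: the negotiated vs. maximum link speed is visible in lspci, and the raw arithmetic for a x4 link at each speed works out as below (the device address is a placeholder for your card's address; the encoding overheads are the standard PCIe figures, 8b/10b for 2.5 GT/s and 128b/130b for 8 GT/s):

```shell
# Show the card's advertised (LnkCap) vs. negotiated (LnkSta) link speed.
# Replace 0a:00.0 with your device's address from plain `lspci`.
sudo lspci -vv -s 0a:00.0 | grep -E 'LnkCap|LnkSta'

# Rough usable bandwidth of a x4 link at each generation's speed:
awk 'BEGIN {
  gen1 = 2.5 * (8/10)    * 4 / 8;   # GB/s at 2.5 GT/s x4 (8b/10b)
  gen3 = 8.0 * (128/130) * 4 / 8;   # GB/s at 8 GT/s x4 (128b/130b)
  printf "2.5 GT/s x4: %.2f GB/s\n8 GT/s x4: %.2f GB/s\n", gen1, gen3
}'
```

So a downgraded 2.5 GT/s x4 link is about 1 GB/s usable versus roughly 3.9 GB/s at full speed — a real difference, though the Windows result on the same 2.5 GT/s link suggests it isn't the whole story here.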
> > > >
> > > >
> > > >
> > > > On Tue, May 19, 2020 at 9:29 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > >>
> > > >> On Tue, May 19, 2020 at 9:16 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > >> >
> > > >> > thanks for the answers alex.
> > > >> >
> > > >> > so, i went ahead and got a displayport cable to see if that changes
> > > >> > anything. and now, when i run monitor only, and the monitor connected
> > > >> > to the card, it has no issues like before! so i am thinking that
> > > >> > somethings up with either the hdmi cable, or some hdmi related setting
> > > >> > in my system? who knows, but im just gonna roll with only using
> > > >> > displayport cables now.
> > > >> > the previous hdmi cable was actually pretty long, because i was
> > > >> > extending it with an hdmi extension cable, so maybe the signal was
> > > >> > really bad or something :/
> > > >> >
> > > >> > but yea, i guess the only real issue now is maybe something simple
> > > >> > related to some sysfs entry about enabling some powermode, voltage,
> > > >> > clock frequency, or something, so that glxgears will give me more than
> > > >> > 300 fps. but at least now i can use a single monitor configuration with
> > > >> > the monitor displayported up to the card.
> > > >> >
> > > >>
> > > >> The GPU dynamically adjusts the clocks and voltages based on load.  No
> > > >> manual configuration is required.
> > > >>
> > > >> At this point, we probably need to see your xorg log and dmesg output
> > > >> to try and figure out exactly what is going on.  I still suspect there
> > > >> is some interaction going on with both GPUs and the integrated GPU
> > > >> being the primary, so as I mentioned before, you should try and run X
> > > >> on just the amdgpu rather than trying to use both of them.
> > > >>
> > > >> Alex
> > > >>
> > > >>
> > > >> > also, one other thing i think you might be interested in, that was
> > > >> > happening before.
> > > >> >
> > > >> > so, previously, with laptop -tb3-> egpu-hdmi> monitor, there was a
> > > >> > funny thing happening which i never could figure out.
> > > >> > when i would look at the X logs, i would see that "modesetting" (for
> > > >> > the intel integrated graphics) was reporting that MonitorA was used
> > > >> > with "eDP-1",  which is correct and what i expected.
> > > >> > when i scrolled further down, i then saw that "HDMI-A-1-2" was being
> > > >> > used for another MonitorB, which also is what i expected (albeit i
> > > >> > have no idea why its saying A-1-2)
> > > >> > but amdgpu was _also_ saying that DisplayPort-1-2 (a port on the
> > > >> > radeon card) was being used for MonitorA, which is the same Monitor
> > > >> > that the modesetting driver had claimed to be using with eDP-1!
> > > >> >
> > > >> > so the point is that amdgpu was "using" MonitorA with DisplayPort-1-2,
> > > >> > although that is what modesetting was using for eDP-1.
> > > >> >
> > > >> > anyway, thats a little aside, i doubt it was related to the terrible
> > > >> > hdmi experience i was getting, since its about display port and stuff,
> > > >> > but i thought id let you know about that.
> > > >> >
> > > >> > if you think that is a possible issue, im more than happy to plug the
> > > >> > hdmi setup back in and create an issue on gitlab with the logs and
> > > >> > everything
> > > >> >
> > > >> > On Tue, May 19, 2020 at 4:42 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > >> > >
> > > >> > > On Tue, May 19, 2020 at 5:22 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > >> > > >
> > > >> > > > lol youre quick!
> > > >> > > >
> > > >> > > > "Windows has supported peer to peer DMA for years so it already has a
> > > >> > > > number of optimizations that are only now becoming possible on Linux"
> > > >> > > >
> > > >> > > > whoa, i figured linux would be ahead of windows when it comes to
> > > >> > > > things like that. but peer-to-peer dma is something that is only
> > > >> > > > recently possible on linux, but has been possible on windows? what
> > > >> > > > changed recently that allows for peer to peer dma in linux?
> > > >> > > >
> > > >> > >
> > > >> > > A few things that made this more complicated on Linux:
> > > >> > > 1. Linux uses IOMMUs more extensively than windows so you can't just
> > > >> > > pass around physical bus addresses.
> > > >> > > 2. Linux supports lots of strange architectures that have a lot of
> > > >> > > limitations with respect to peer to peer transactions
> > > >> > >
> > > >> > > It just took years to get all the necessary bits in place in Linux and
> > > >> > > make everyone happy.
> > > >> > >
> > > >> > > > also, in the context of a game running opengl on some gpu, is the
> > > >> > > > "peer-to-peer" dma transfer something like: the game draw's to some
> > > >> > > > memory it has allocated, then a DMA transfer gets that and moves it
> > > >> > > > into the graphics card output?
> > > >> > >
> > > >> > > Peer to peer DMA just lets devices access another devices local memory
> > > >> > > directly.  So if you have a buffer in vram on one device, you can
> > > >> > > share that directly with another device rather than having to copy it
> > > >> > > to system memory first.  For example, if you have two GPUs, you can
> > > >> > > have one of them copy its content directly to a buffer in the other
> > > >> > > GPU's vram rather than having to go through system memory first.
> > > >> > >
> > > >> > > >
> > > >> > > > also, i know it can be super annoying trying to debug an issue like
> > > >> > > > this, with someone like me who has all types of differences from a
> > > >> > > > normal setup (e.g. using it via egpu, using a kernel with custom
> > > >> > > > configs and stuff) so as a token of my appreciation i donated 50$ to
> > > >> > > > the red cross' corona virus outbreak charity thing, on behalf of
> > > >> > > > amd-gfx.
> > > >> > >
> > > >> > > Thanks,
> > > >> > >
> > > >> > > Alex
> > > >> > >
> > > >> > > >
> > > >> > > > On Tue, May 19, 2020 at 4:13 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > >> > > > >
> > > >> > > > > On Tue, May 19, 2020 at 3:44 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > >> > > > > >
> > > >> > > > > > just a couple more questions:
> > > >> > > > > >
> > > >> > > > > > - based on what you are aware of, the technical details such as
> > > >> > > > > > "shared buffers go through system memory", and all that, do you see
> > > >> > > > > > any issues that might exist that i might be missing in my setup? i
> > > >> > > > > > cant imagine this being the case because the card works great in
> > > >> > > > > > windows, unless the windows driver does something different?
> > > >> > > > > >
> > > >> > > > >
> > > >> > > > > Windows has supported peer to peer DMA for years so it already has a
> > > >> > > > > number of optimizations that are only now becoming possible on Linux.
> > > >> > > > >
> > > >> > > > > > - as far as kernel config, is there anything in particular which
> > > >> > > > > > _should_ or _should not_ be enabled/disabled?
> > > >> > > > >
> > > >> > > > > You'll need the GPU drivers for your devices and dma-buf support.
> > > >> > > > >
> > > >> > > > > >
> > > >> > > > > > - does the vendor matter? for instance, this is an xfx card. when it
> > > >> > > > > > comes to different vendors, are there interface changes that might
> > > >> > > > > > make one vendor work better for linux than another? i dont really
> > > >> > > > > > understand the differences in vendors, but i imagine that the vbios
> > > >> > > > > > differs between vendors, and as such, the linux compatibility would
> > > >> > > > > > maybe change?
> > > >> > > > >
> > > >> > > > > board vendor shouldn't matter.
> > > >> > > > >
> > > >> > > > > >
> > > >> > > > > > - is the pcie bandwidth possible an issue? the pcie_bw file changes
> > > >> > > > > > between values like this:
> > > >> > > > > > 18446683600662707640 18446744071581623085 128
> > > >> > > > > > and sometimes i see this:
> > > >> > > > > > 4096 0 128
> > > >> > > > > > as you can see, the second value seems significantly lower. is that
> > > >> > > > > > possibly an issue? possibly due to aspm?
> > > >> > > > >
> > > >> > > > > pcie_bw is not implemented for navi yet so you are just seeing
> > > >> > > > > uninitialized data.  This patch set should clear that up.
> > > >> > > > > https://patchwork.freedesktop.org/patch/366262/
> > > >> > > > >
> > > >> > > > > Alex
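For completeness, on ASICs where pcie_bw is actually wired up, the file exposes two message counters plus the maximum payload size in bytes (exact counter semantics vary by ASIC — treat the estimate below as a sketch, and the card index as an assumption):

```shell
# Read the raw counters (two message counts + max payload size in bytes).
# On navi this returned uninitialized data at the time of this thread.
cat /sys/class/drm/card0/device/pcie_bw

# Rough combined throughput estimate from a sample reading, assuming
# bytes ~= (count0 + count1) * max_payload:
echo "4096 0 128" | awk '{ printf "%.2f MB/s\n", ($1 + $2) * $3 / 1048576 }'
```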
> > > >> > > > >
> > > >> > > > > >
> > > >> > > > > > On Tue, May 19, 2020 at 2:20 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > >> > > > > > >
> > > >> > > > > > > im using Driver "amdgpu" in my xorg conf
> > > >> > > > > > >
> > > >> > > > > > > how does one verify which gpu is the primary? im assuming my intel
> > > >> > > > > > > card is the primary, since i have not done anything to change that.
> > > >> > > > > > >
> > > >> > > > > > > also, if all shared buffers have to go through system memory, then
> > > >> > > > > > > that means an eGPU amdgpu wont work very well in general right?
> > > >> > > > > > > because going through system memory for the egpu means going over the
> > > >> > > > > > > thunderbolt connection
> > > >> > > > > > >
> > > >> > > > > > > and what are the shared buffers youre referring to? for example, if an
> > > >> > > > > > > application is drawing to a buffer, is that an example of a shared
> > > >> > > > > > > buffer that has to go through system memory? if so, thats fine, right?
> > > >> > > > > > > because the application's memory is in system memory, so that copy
> > > >> > > > > > > wouldnt be an issue.
> > > >> > > > > > >
> > > >> > > > > > > in general, do you think the "copy buffer across system memory" step might
> > > >> > > > > > > be a hindrance for thunderbolt? im trying to figure out which
> > > >> > > > > > > directions to go to debug and im totally lost, so maybe i can do some
> > > >> > > > > > > testing that direction?
> > > >> > > > > > >
> > > >> > > > > > > and for what its worth, when i turn the display "off" via the gnome
> > > >> > > > > > > display settings, its the same issue as when the laptop lid is closed,
> > > >> > > > > > > so unless the motherboard reads the "closed lid" the same as "display
> > > >> > > > > > > off", then im not sure if its thermal issues.
> > > >> > > > > > >
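One quick way to answer the "which gpu is the primary" question under X (the first provider xrandr lists is normally the primary; glxinfo ships in mesa-utils on Ubuntu):

```shell
# List the GPUs the X server knows about; scanout/composition normally
# goes through the primary (first-listed) provider.
xrandr --listproviders

# Which GPU renders by default, and which one with PRIME offload:
glxinfo | grep "OpenGL renderer"
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
```

If the second command still names the Intel GPU with DRI_PRIME=1 set, the offload isn't taking effect at all.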
> > > >> > > > > > > On Tue, May 19, 2020 at 2:14 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > >> > > > > > > >
> > > >> > > > > > > > On Tue, May 19, 2020 at 2:59 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > >> > > > > > > > >
> > > >> > > > > > > > > given this setup:
> > > >> > > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
> > > >> > > > > > > > > DRI_PRIME=1 glxgears gives me ~300fps
> > > >> > > > > > > > >
> > > >> > > > > > > > > given this setup:
> > > >> > > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> > > >> > > > > > > > > laptop -hdmi-> monitor
> > > >> > > > > > > > >
> > > >> > > > > > > > > glxgears gives me ~1800fps
> > > >> > > > > > > > >
> > > >> > > > > > > > > this doesnt make sense to me because i thought that having the monitor
> > > >> > > > > > > > > plugged directly into the card should give best performance.
> > > >> > > > > > > > >
> > > >> > > > > > > >
> > > >> > > > > > > > Do you have displays connected to both GPUs?  If you are using X which
> > > >> > > > > > > > ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
> > > >> > > > > > > > IIRC, xf86-video-amdgpu has some optimizations for prime which are not
> > > >> > > > > > > > yet in xf86-video-modesetting.  Which GPU is set up as the primary?
> > > >> > > > > > > > Note that the GPU which does the rendering is not necessarily the one
> > > >> > > > > > > > that the displays are attached to.  The render GPU renders to its
> > > >> > > > > > > > render buffer and then that data may end up being copied to other GPUs
> > > >> > > > > > > > for display.  Also, at this point, all shared buffers have to go
> > > >> > > > > > > > through system memory (this will be changing eventually now that we
> > > >> > > > > > > > support device memory via dma-buf), so there is often an extra copy
> > > >> > > > > > > > involved.
> > > >> > > > > > > >
> > > >> > > > > > > > > theres another really weird issue...
> > > >> > > > > > > > >
> > > >> > > > > > > > > given setup 1, where the monitor is plugged in to the card:
> > > >> > > > > > > > > when i close the laptop lid, my monitor is "active" and whatnot, and i
> > > >> > > > > > > > > can "use it" in a sense
> > > >> > > > > > > > >
> > > >> > > > > > > > > however, heres the weirdness:
> > > >> > > > > > > > > the mouse cursor will move along the monitor perfectly smooth and
> > > >> > > > > > > > > fine, but all the other updates to the screen are delayed by about 2
> > > >> > > > > > > > > or 3 seconds.
> > > >> > > > > > > > > that is to say, its as if the laptop is doing everything (e.g. if i
> > > >> > > > > > > > > open a terminal, the terminal will open, but it will take 2 seconds
> > > >> > > > > > > > > for me to see it)
> > > >> > > > > > > > >
> > > >> > > > > > > > > its almost as if all the frames and everything are being drawn, and
> > > >> > > > > > > > > the laptop is running fine and everything, but i simply just dont get
> > > >> > > > > > > > > to see it on the monitor, except for one time every 2 seconds.
> > > >> > > > > > > > >
> > > >> > > > > > > > > its hard to articulate, because its so bizarre. its not like, a "low
> > > >> > > > > > > > > fps" per se, because the cursor is totally smooth. but its that
> > > >> > > > > > > > > _everything else_ is only updated once every couple seconds.
> > > >> > > > > > > >
> > > >> > > > > > > > This might also be related to which GPU is the primary.  It still may
> > > >> > > > > > > > be the integrated GPU since that is what is attached to the laptop
> > > >> > > > > > > > panel.  Also the platform and some drivers may do certain things when
> > > >> > > > > > > > the lid is closed.  E.g., for thermal reasons, the integrated GPU or
> > > >> > > > > > > > CPU may have a more limited TDP because the laptop cannot cool as
> > > >> > > > > > > > efficiently.
> > > >> > > > > > > >
> > > >> > > > > > > > Alex

[-- Attachment #2: dmesg --]
[-- Type: application/octet-stream, Size: 187347 bytes --]

[-- kernel page-table debug dump trimmed for length: x86 direct-map PTEs, EFI Runtime Services, High Kernel Mapping, Modules, Fixmap Area --]
[    0.215149] LSM: Security Framework initializing
[    0.215254] Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    0.215292] Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    0.215631] mce: CPU0: Thermal monitoring enabled (TM1)
[    0.215665] process: using mwait in idle threads
[    0.215669] Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
[    0.215672] Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
[    0.215681] Speculative Store Bypass: Vulnerable
[    0.215683] TAA: Mitigation: TSX disabled
[    0.216059] Freeing SMP alternatives memory: 32K
[    0.219069] TSC deadline timer enabled
[    0.219081] smpboot: CPU0: Intel(R) Core(TM) i7-8665U CPU @ 1.90GHz (family: 0x6, model: 0x8e, stepping: 0xc)
[    0.219185] Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
[    0.219196] ... version:                4
[    0.219198] ... bit width:              48
[    0.219200] ... generic registers:      4
[    0.219202] ... value mask:             0000ffffffffffff
[    0.219204] ... max period:             00007fffffffffff
[    0.219206] ... fixed-purpose events:   3
[    0.219208] ... event mask:             000000070000000f
[    0.219258] rcu: Hierarchical SRCU implementation.
[    0.219760] watchdog: Disabling watchdog on nohz_full cores by default
[    0.220186] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
[    0.220277] smp: Bringing up secondary CPUs ...
[    0.220409] x86: Booting SMP configuration:
[    0.220412] .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
[    0.233545] smp: Brought up 1 node, 8 CPUs
[    0.233545] smpboot: Max logical packages: 1
[    0.233545] smpboot: Total of 8 processors activated (33599.10 BogoMIPS)
[    0.234769] devtmpfs: initialized
[    0.234769] x86/mm: Memory block size: 128MB
[    0.236919] PM: Registering ACPI NVS region [mem 0x6fac2000-0x6fca9fff] (1998848 bytes)
[    0.237116] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
[    0.237123] futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
[    0.237157] pinctrl core: initialized pinctrl subsystem
[    0.237402] PM: RTC time: 21:09:29, date: 2020-05-21
[    0.237410] thermal_sys: Registered thermal governor 'fair_share'
[    0.237411] thermal_sys: Registered thermal governor 'bang_bang'
[    0.237414] thermal_sys: Registered thermal governor 'step_wise'
[    0.237416] thermal_sys: Registered thermal governor 'user_space'
[    0.237418] thermal_sys: Registered thermal governor 'power_allocator'
[    0.237562] NET: Registered protocol family 16
[    0.237706] audit: initializing netlink subsys (disabled)
[    0.237719] audit: type=2000 audit(1590095369.036:1): state=initialized audit_enabled=0 res=1
[    0.237719] EISA bus registered
[    0.237719] cpuidle: using governor ladder
[    0.238057] cpuidle: using governor menu
[    0.238106] Simple Boot Flag at 0x47 set to 0x1
[    0.238106] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[    0.238106] ACPI: bus type PCI registered
[    0.238107] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    0.238340] PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
[    0.238360] PCI: not using MMCONFIG
[    0.238362] PCI: Using configuration type 1 for base access
[    0.239049] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    0.241098] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
[    0.241098] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[    0.243107] ACPI: Added _OSI(Module Device)
[    0.243111] ACPI: Added _OSI(Processor Device)
[    0.243114] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.243116] ACPI: Added _OSI(Processor Aggregator Device)
[    0.243119] ACPI: Added _OSI(Linux-Dell-Video)
[    0.243121] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
[    0.243124] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
[    0.337797] ACPI: 12 ACPI AML tables successfully acquired and loaded
[    0.340073] ACPI: EC: EC started
[    0.340076] ACPI: EC: interrupt blocked
[    0.343037] ACPI: EC: EC_CMD/EC_SC=0x66, EC_DATA=0x62
[    0.343040] ACPI: EC: Boot ECDT EC used to handle transactions
[    0.345510] ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
[    0.417483] ACPI: Dynamic OEM Table Load:
[    0.417507] ACPI: SSDT 0xFFFF88846C3C5D00 0000F4 (v02 PmRef  Cpu0Psd  00003000 INTL 20160527)
[    0.419645] ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
[    0.421285] ACPI: Dynamic OEM Table Load:
[    0.421299] ACPI: SSDT 0xFFFF88846C3DE400 000400 (v02 PmRef  Cpu0Cst  00003001 INTL 20160527)
[    0.423599] ACPI: Dynamic OEM Table Load:
[    0.423612] ACPI: SSDT 0xFFFF88846C3FC800 000560 (v02 PmRef  Cpu0Ist  00003000 INTL 20160527)
[    0.426042] ACPI: Dynamic OEM Table Load:
[    0.426055] ACPI: SSDT 0xFFFF88846C3E9400 000149 (v02 PmRef  Cpu0Hwp  00003000 INTL 20160527)
[    0.428184] ACPI: Dynamic OEM Table Load:
[    0.428198] ACPI: SSDT 0xFFFF88846C3FE800 000724 (v02 PmRef  HwpLvt   00003000 INTL 20160527)
[    0.430845] ACPI: Dynamic OEM Table Load:
[    0.430859] ACPI: SSDT 0xFFFF88846C3F8800 0005FC (v02 PmRef  ApIst    00003000 INTL 20160527)
[    0.433315] ACPI: Dynamic OEM Table Load:
[    0.433328] ACPI: SSDT 0xFFFF88846C3DB400 000317 (v02 PmRef  ApHwp    00003000 INTL 20160527)
[    0.435771] ACPI: Dynamic OEM Table Load:
[    0.435785] ACPI: SSDT 0xFFFF88846C3E0000 000AB0 (v02 PmRef  ApPsd    00003000 INTL 20160527)
[    0.439220] ACPI: Dynamic OEM Table Load:
[    0.439233] ACPI: SSDT 0xFFFF88846C3DD800 00030A (v02 PmRef  ApCst    00003000 INTL 20160527)
[    0.446271] ACPI: Interpreter enabled
[    0.446350] ACPI: (supports S0 S3 S4 S5)
[    0.446352] ACPI: Using IOAPIC for interrupt routing
[    0.446415] PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
[    0.448518] PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in ACPI motherboard resources
[    0.448534] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.449396] ACPI: Enabled 8 GPEs in block 00 to 7F
[    0.455175] ACPI: Power Resource [PUBS] (on)
[    0.459721] ACPI: Power Resource [BTPR] (on)
[    0.461772] ACPI: Power Resource [USBC] (on)
[    0.462562] ACPI: Power Resource [PXP] (off)
[    0.469817] ACPI: Power Resource [PXP] (on)
[    0.479243] ACPI: Power Resource [V0PR] (on)
[    0.479561] ACPI: Power Resource [V1PR] (on)
[    0.479867] ACPI: Power Resource [V2PR] (on)
[    0.486593] ACPI: Power Resource [WRST] (on)
[    0.491540] ACPI: Power Resource [PIN] (off)
[    0.491579] ACPI: Power Resource [PINP] (off)
[    0.492479] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7e])
[    0.492490] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
[    0.494511] acpi PNP0A08:00: _OSC: platform does not support [AER]
[    0.498174] acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug SHPCHotplug PME PCIeCapability LTR]
[    0.498178] acpi PNP0A08:00: FADT indicates ASPM is unsupported, using BIOS configuration
[    0.502026] PCI host bridge to bus 0000:00
[    0.502032] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
[    0.502036] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
[    0.502039] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[    0.502042] pci_bus 0000:00: root bus resource [mem 0x7d800000-0xefffffff window]
[    0.502045] pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
[    0.502050] pci_bus 0000:00: root bus resource [bus 00-7e]
[    0.502058] pci_bus 0000:00: scanning bus
[    0.502068] pci 0000:00:00.0: [8086:3e34] type 00 class 0x060000
[    0.503530] pci 0000:00:02.0: [8086:3ea0] type 00 class 0x030000
[    0.503548] pci 0000:00:02.0: reg 0x10: [mem 0xe9000000-0xe9ffffff 64bit]
[    0.503558] pci 0000:00:02.0: reg 0x18: [mem 0xc0000000-0xcfffffff 64bit pref]
[    0.503565] pci 0000:00:02.0: reg 0x20: [io  0x3000-0x303f]
[    0.505058] pci 0000:00:04.0: [8086:1903] type 00 class 0x118000
[    0.505077] pci 0000:00:04.0: reg 0x10: [mem 0xea230000-0xea237fff 64bit]
[    0.506662] pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
[    0.506682] pci 0000:00:08.0: reg 0x10: [mem 0xea242000-0xea242fff 64bit]
[    0.508189] pci 0000:00:12.0: [8086:9df9] type 00 class 0x118000
[    0.508217] pci 0000:00:12.0: reg 0x10: [mem 0xea243000-0xea243fff 64bit]
[    0.509729] pci 0000:00:14.0: [8086:9ded] type 00 class 0x0c0330
[    0.509754] pci 0000:00:14.0: reg 0x10: [mem 0xea220000-0xea22ffff 64bit]
[    0.509831] pci 0000:00:14.0: PME# supported from D3hot D3cold
[    0.509837] pci 0000:00:14.0: PME# disabled
[    0.511401] pci 0000:00:14.2: [8086:9def] type 00 class 0x050000
[    0.511426] pci 0000:00:14.2: reg 0x10: [mem 0xea240000-0xea241fff 64bit]
[    0.511441] pci 0000:00:14.2: reg 0x18: [mem 0xea244000-0xea244fff 64bit]
[    0.512980] pci 0000:00:14.3: [8086:9df0] type 00 class 0x028000
[    0.513116] pci 0000:00:14.3: reg 0x10: [mem 0xea238000-0xea23bfff 64bit]
[    0.513662] pci 0000:00:14.3: PME# supported from D0 D3hot D3cold
[    0.513678] pci 0000:00:14.3: PME# disabled
[    0.515242] pci 0000:00:15.0: [8086:9de8] type 00 class 0x0c8000
[    0.515308] pci 0000:00:15.0: reg 0x10: [mem 0xea245000-0xea245fff 64bit]
[    0.517053] pci 0000:00:15.1: [8086:9de9] type 00 class 0x0c8000
[    0.517118] pci 0000:00:15.1: reg 0x10: [mem 0xea246000-0xea246fff 64bit]
[    0.518880] pci 0000:00:16.0: [8086:9de0] type 00 class 0x078000
[    0.518911] pci 0000:00:16.0: reg 0x10: [mem 0xea247000-0xea247fff 64bit]
[    0.518999] pci 0000:00:16.0: PME# supported from D3hot
[    0.519005] pci 0000:00:16.0: PME# disabled
[    0.520555] pci 0000:00:16.3: [8086:9de3] type 00 class 0x070002
[    0.520582] pci 0000:00:16.3: reg 0x10: [io  0x3060-0x3067]
[    0.520593] pci 0000:00:16.3: reg 0x14: [mem 0xea24a000-0xea24afff]
[    0.522143] pci 0000:00:1d.0: [8086:9db0] type 01 class 0x060400
[    0.522260] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[    0.522265] pci 0000:00:1d.0: PME# disabled
[    0.522292] pci 0000:00:1d.0: PTM enabled (root), 4ns granularity
[    0.523889] pci 0000:00:1d.4: [8086:9db4] type 01 class 0x060400
[    0.525350] pci 0000:00:1d.4: PME# supported from D0 D3hot D3cold
[    0.525357] pci 0000:00:1d.4: PME# disabled
[    0.525378] pci 0000:00:1d.4: PTM enabled (root), 4ns granularity
[    0.526991] pci 0000:00:1f.0: [8086:9d84] type 00 class 0x060100
[    0.528799] pci 0000:00:1f.3: [8086:9dc8] type 00 class 0x040380
[    0.528875] pci 0000:00:1f.3: reg 0x10: [mem 0xea23c000-0xea23ffff 64bit]
[    0.528961] pci 0000:00:1f.3: reg 0x20: [mem 0xea000000-0xea0fffff 64bit]
[    0.529102] pci 0000:00:1f.3: PME# supported from D3hot D3cold
[    0.529110] pci 0000:00:1f.3: PME# disabled
[    0.530707] pci 0000:00:1f.4: [8086:9da3] type 00 class 0x0c0500
[    0.530735] pci 0000:00:1f.4: reg 0x10: [mem 0xea248000-0xea2480ff 64bit]
[    0.530761] pci 0000:00:1f.4: reg 0x20: [io  0xefa0-0xefbf]
[    0.532345] pci 0000:00:1f.5: [8086:9da4] type 00 class 0x0c8000
[    0.532368] pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
[    0.533858] pci 0000:00:1f.6: [8086:15bd] type 00 class 0x020000
[    0.533918] pci 0000:00:1f.6: reg 0x10: [mem 0xea200000-0xea21ffff]
[    0.534158] pci 0000:00:1f.6: PME# supported from D0 D3hot D3cold
[    0.534167] pci 0000:00:1f.6: PME# disabled
[    0.535616] pci_bus 0000:00: fixups for bus
[    0.535621] pci 0000:00:1d.0: scanning [bus 03-03] behind bridge, pass 0
[    0.535816] pci_bus 0000:03: scanning bus
[    0.535831] pci 0000:03:00.0: [144d:a808] type 00 class 0x010802
[    0.535867] pci 0000:03:00.0: reg 0x10: [mem 0xea100000-0xea103fff 64bit]
[    0.536306] pci_bus 0000:03: fixups for bus
[    0.536308] pci 0000:00:1d.0: PCI bridge to [bus 03]
[    0.536315] pci 0000:00:1d.0:   bridge window [mem 0xea100000-0xea1fffff]
[    0.536322] pci_bus 0000:03: bus scan returning with max=03
[    0.536326] pci 0000:00:1d.4: scanning [bus 05-52] behind bridge, pass 0
[    0.536384] pci_bus 0000:05: scanning bus
[    0.536401] pci 0000:05:00.0: [8086:15d3] type 01 class 0x060400
[    0.536470] pci 0000:05:00.0: enabling Extended Tags
[    0.536568] pci 0000:05:00.0: supports D1 D2
[    0.536571] pci 0000:05:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.536577] pci 0000:05:00.0: PME# disabled
[    0.536741] pci_bus 0000:05: fixups for bus
[    0.536742] pci 0000:00:1d.4: PCI bridge to [bus 05-52]
[    0.536747] pci 0000:00:1d.4:   bridge window [io  0x2000-0x2fff]
[    0.536752] pci 0000:00:1d.4:   bridge window [mem 0xd0000000-0xe81fffff]
[    0.536759] pci 0000:00:1d.4:   bridge window [mem 0x80000000-0xbfffffff 64bit pref]
[    0.536763] pci 0000:05:00.0: scanning [bus 06-52] behind bridge, pass 0
[    0.536818] pci_bus 0000:06: scanning bus
[    0.536835] pci 0000:06:00.0: [8086:15d3] type 01 class 0x060400
[    0.536909] pci 0000:06:00.0: enabling Extended Tags
[    0.537010] pci 0000:06:00.0: supports D1 D2
[    0.537012] pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.537018] pci 0000:06:00.0: PME# disabled
[    0.537132] pci 0000:06:01.0: [8086:15d3] type 01 class 0x060400
[    0.537206] pci 0000:06:01.0: enabling Extended Tags
[    0.537304] pci 0000:06:01.0: supports D1 D2
[    0.537307] pci 0000:06:01.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.537313] pci 0000:06:01.0: PME# disabled
[    0.537427] pci 0000:06:02.0: [8086:15d3] type 01 class 0x060400
[    0.537501] pci 0000:06:02.0: enabling Extended Tags
[    0.537597] pci 0000:06:02.0: supports D1 D2
[    0.537600] pci 0000:06:02.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.537605] pci 0000:06:02.0: PME# disabled
[    0.537732] pci 0000:06:04.0: [8086:15d3] type 01 class 0x060400
[    0.537805] pci 0000:06:04.0: enabling Extended Tags
[    0.537907] pci 0000:06:04.0: supports D1 D2
[    0.537909] pci 0000:06:04.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.537915] pci 0000:06:04.0: PME# disabled
[    0.538045] pci_bus 0000:06: fixups for bus
[    0.538046] pci 0000:05:00.0: PCI bridge to [bus 06-52]
[    0.538057] pci 0000:05:00.0:   bridge window [mem 0xd0000000-0xe81fffff]
[    0.538066] pci 0000:05:00.0:   bridge window [mem 0x80000000-0xbfffffff 64bit pref]
[    0.538071] pci 0000:06:00.0: scanning [bus 07-07] behind bridge, pass 0
[    0.538116] pci_bus 0000:07: scanning bus
[    0.538135] pci 0000:07:00.0: [8086:15d2] type 00 class 0x088000
[    0.538172] pci 0000:07:00.0: reg 0x10: [mem 0xe8100000-0xe813ffff]
[    0.538186] pci 0000:07:00.0: reg 0x14: [mem 0xe8140000-0xe8140fff]
[    0.538254] pci 0000:07:00.0: enabling Extended Tags
[    0.538370] pci 0000:07:00.0: supports D1 D2
[    0.538373] pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.538379] pci 0000:07:00.0: PME# disabled
[    0.538548] pci_bus 0000:07: fixups for bus
[    0.538549] pci 0000:06:00.0: PCI bridge to [bus 07]
[    0.538560] pci 0000:06:00.0:   bridge window [mem 0xe8100000-0xe81fffff]
[    0.538568] pci_bus 0000:07: bus scan returning with max=07
[    0.538573] pci 0000:06:01.0: scanning [bus 08-2c] behind bridge, pass 0
[    0.538615] pci_bus 0000:08: scanning bus
[    0.538646] pci 0000:08:00.0: [8086:15d3] type 01 class 0x060400
[    0.538781] pci 0000:08:00.0: enabling Extended Tags
[    0.538965] pci 0000:08:00.0: supports D1 D2
[    0.538968] pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.538976] pci 0000:08:00.0: PME# disabled
[    0.539105] pci 0000:08:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x4 link at 0000:06:01.0 (capable of 31.504 Gb/s with 8 GT/s x4 link)
[    0.539233] pci_bus 0000:08: fixups for bus
[    0.539234] pci 0000:06:01.0: PCI bridge to [bus 08-2c]
[    0.539245] pci 0000:06:01.0:   bridge window [mem 0xd0000000-0xdbffffff]
[    0.539253] pci 0000:06:01.0:   bridge window [mem 0x80000000-0x9fffffff 64bit pref]
[    0.539259] pci 0000:08:00.0: scanning [bus 09-2c] behind bridge, pass 0
[    0.539329] pci_bus 0000:09: scanning bus
[    0.539363] pci 0000:09:01.0: [8086:15d3] type 01 class 0x060400
[    0.539506] pci 0000:09:01.0: enabling Extended Tags
[    0.539694] pci 0000:09:01.0: supports D1 D2
[    0.539696] pci 0000:09:01.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.539704] pci 0000:09:01.0: PME# disabled
[    0.539895] pci 0000:09:04.0: [8086:15d3] type 01 class 0x060400
[    0.540038] pci 0000:09:04.0: enabling Extended Tags
[    0.540223] pci 0000:09:04.0: supports D1 D2
[    0.540226] pci 0000:09:04.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.540234] pci 0000:09:04.0: PME# disabled
[    0.540462] pci_bus 0000:09: fixups for bus
[    0.540463] pci 0000:08:00.0: PCI bridge to [bus 09-2c]
[    0.540481] pci 0000:08:00.0:   bridge window [mem 0xd0000000-0xdbffffff]
[    0.540495] pci 0000:08:00.0:   bridge window [mem 0x80000000-0x9fffffff 64bit pref]
[    0.540501] pci 0000:09:01.0: scanning [bus 0a-0c] behind bridge, pass 0
[    0.540569] pci_bus 0000:0a: scanning bus
[    0.540605] pci 0000:0a:00.0: [1002:1478] type 01 class 0x060400
[    0.540686] pci 0000:0a:00.0: reg 0x10: [mem 0xd0000000-0xd0003fff]
[    0.541007] pci 0000:0a:00.0: PME# supported from D0 D3hot D3cold
[    0.541016] pci 0000:0a:00.0: PME# disabled
[    0.541186] pci 0000:0a:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x4 link at 0000:06:01.0 (capable of 252.048 Gb/s with 16 GT/s x16 link)
[    0.541354] pci_bus 0000:0a: fixups for bus
[    0.541355] pci 0000:09:01.0: PCI bridge to [bus 0a-0c]
[    0.541369] pci 0000:09:01.0:   bridge window [io  0x2000-0x2fff]
[    0.541377] pci 0000:09:01.0:   bridge window [mem 0xd0000000-0xd03fffff]
[    0.541390] pci 0000:09:01.0:   bridge window [mem 0x80000000-0x901fffff 64bit pref]
[    0.541397] pci 0000:0a:00.0: scanning [bus 0b-0c] behind bridge, pass 0
[    0.541480] pci_bus 0000:0b: scanning bus
[    0.541513] pci 0000:0b:00.0: [1002:1479] type 01 class 0x060400
[    0.541912] pci 0000:0b:00.0: PME# supported from D0 D3hot D3cold
[    0.541921] pci 0000:0b:00.0: PME# disabled
[    0.542223] pci_bus 0000:0b: fixups for bus
[    0.542225] pci 0000:0a:00.0: PCI bridge to [bus 0b-0c]
[    0.542240] pci 0000:0a:00.0:   bridge window [io  0x2000-0x2fff]
[    0.542250] pci 0000:0a:00.0:   bridge window [mem 0xd0000000-0xd03fffff]
[    0.542265] pci 0000:0a:00.0:   bridge window [mem 0x80000000-0x901fffff 64bit pref]
[    0.542272] pci 0000:0b:00.0: scanning [bus 0c-0c] behind bridge, pass 0
[    0.542355] pci_bus 0000:0c: scanning bus
[    0.542390] pci 0000:0c:00.0: [1002:731f] type 00 class 0x030000
[    0.542501] pci 0000:0c:00.0: reg 0x10: [mem 0x80000000-0x8fffffff 64bit pref]
[    0.542539] pci 0000:0c:00.0: reg 0x18: [mem 0x90000000-0x901fffff 64bit pref]
[    0.542565] pci 0000:0c:00.0: reg 0x20: [io  0x0000-0x00ff]
[    0.542590] pci 0000:0c:00.0: reg 0x24: [mem 0xd0100000-0xd017ffff]
[    0.542616] pci 0000:0c:00.0: reg 0x30: [mem 0xfffe0000-0xffffffff pref]
[    0.542919] pci 0000:0c:00.0: PME# supported from D1 D2 D3hot D3cold
[    0.542929] pci 0000:0c:00.0: PME# disabled
[    0.543150] pci 0000:0c:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x4 link at 0000:06:01.0 (capable of 252.048 Gb/s with 16 GT/s x16 link)
[    0.543243] pci 0000:0c:00.1: [1002:ab38] type 00 class 0x040300
[    0.543311] pci 0000:0c:00.1: reg 0x10: [mem 0xd0180000-0xd0183fff]
[    0.543647] pci 0000:0c:00.1: PME# supported from D1 D2 D3hot D3cold
[    0.543656] pci 0000:0c:00.1: PME# disabled
[    0.543988] pci_bus 0000:0c: fixups for bus
[    0.543990] pci 0000:0b:00.0: PCI bridge to [bus 0c]
[    0.544005] pci 0000:0b:00.0:   bridge window [io  0x2000-0x2fff]
[    0.544014] pci 0000:0b:00.0:   bridge window [mem 0xd0100000-0xd01fffff]
[    0.544030] pci 0000:0b:00.0:   bridge window [mem 0x80000000-0x901fffff 64bit pref]
[    0.544034] pci_bus 0000:0c: bus scan returning with max=0c
[    0.544042] pci 0000:0b:00.0: scanning [bus 0c-0c] behind bridge, pass 1
[    0.544052] pci_bus 0000:0b: bus scan returning with max=0c
[    0.544060] pci 0000:0a:00.0: scanning [bus 0b-0c] behind bridge, pass 1
[    0.544071] pci_bus 0000:0a: bus scan returning with max=0c
[    0.544078] pci 0000:09:04.0: scanning [bus 0d-2c] behind bridge, pass 0
[    0.544148] pci_bus 0000:0d: scanning bus
[    0.544194] pci 0000:0d:00.0: [8086:15d3] type 01 class 0x060400
[    0.544395] pci 0000:0d:00.0: enabling Extended Tags
[    0.544668] pci 0000:0d:00.0: supports D1 D2
[    0.544671] pci 0000:0d:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.544682] pci 0000:0d:00.0: PME# disabled
[    0.544884] pci 0000:0d:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x4 link at 0000:06:01.0 (capable of 31.504 Gb/s with 8 GT/s x4 link)
[    0.545072] pci_bus 0000:0d: fixups for bus
[    0.545073] pci 0000:09:04.0: PCI bridge to [bus 0d-2c]
[    0.545092] pci 0000:09:04.0:   bridge window [mem 0xd0400000-0xdbffffff]
[    0.545105] pci 0000:09:04.0:   bridge window [mem 0x90200000-0x9fffffff 64bit pref]
[    0.545112] pci 0000:0d:00.0: scanning [bus 0e-2c] behind bridge, pass 0
[    0.545212] pci_bus 0000:0e: scanning bus
[    0.545258] pci 0000:0e:00.0: [8086:15d3] type 01 class 0x060400
[    0.545469] pci 0000:0e:00.0: enabling Extended Tags
[    0.545738] pci 0000:0e:00.0: supports D1 D2
[    0.545741] pci 0000:0e:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.545751] pci 0000:0e:00.0: PME# disabled
[    0.546024] pci 0000:0e:01.0: [8086:15d3] type 01 class 0x060400
[    0.546236] pci 0000:0e:01.0: enabling Extended Tags
[    0.546504] pci 0000:0e:01.0: supports D1 D2
[    0.546507] pci 0000:0e:01.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.546517] pci 0000:0e:01.0: PME# disabled
[    0.546784] pci 0000:0e:02.0: [8086:15d3] type 01 class 0x060400
[    0.546995] pci 0000:0e:02.0: enabling Extended Tags
[    0.547260] pci 0000:0e:02.0: supports D1 D2
[    0.547262] pci 0000:0e:02.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.547273] pci 0000:0e:02.0: PME# disabled
[    0.547609] pci_bus 0000:0e: fixups for bus
[    0.547611] pci 0000:0d:00.0: PCI bridge to [bus 0e-2c]
[    0.547636] pci 0000:0d:00.0:   bridge window [mem 0xd0400000-0xdbffffff]
[    0.547655] pci 0000:0d:00.0:   bridge window [mem 0x90200000-0x9fffffff 64bit pref]
[    0.547662] pci 0000:0e:00.0: scanning [bus 0f-0f] behind bridge, pass 0
[    0.547758] pci_bus 0000:0f: scanning bus
[    0.547813] pci 0000:0f:00.0: [1b21:1242] type 00 class 0x0c0330
[    0.547926] pci 0000:0f:00.0: reg 0x10: [mem 0xd0400000-0xd0407fff 64bit]
[    0.548117] pci 0000:0f:00.0: enabling Extended Tags
[    0.548446] pci 0000:0f:00.0: PME# supported from D3hot D3cold
[    0.548458] pci 0000:0f:00.0: PME# disabled
[    0.548675] pci 0000:0f:00.0: 7.876 Gb/s available PCIe bandwidth, limited by 8 GT/s x1 link at 0000:0e:00.0 (capable of 15.752 Gb/s with 8 GT/s x2 link)
[    0.548936] pci_bus 0000:0f: fixups for bus
[    0.548937] pci 0000:0e:00.0: PCI bridge to [bus 0f]
[    0.548963] pci 0000:0e:00.0:   bridge window [mem 0xd0400000-0xd04fffff]
[    0.548981] pci_bus 0000:0f: bus scan returning with max=0f
[    0.548991] pci 0000:0e:01.0: scanning [bus 10-10] behind bridge, pass 0
[    0.549088] pci_bus 0000:10: scanning bus
[    0.549142] pci 0000:10:00.0: [1b21:1242] type 00 class 0x0c0330
[    0.549256] pci 0000:10:00.0: reg 0x10: [mem 0xd0500000-0xd0507fff 64bit]
[    0.549447] pci 0000:10:00.0: enabling Extended Tags
[    0.549748] pci 0000:10:00.0: PME# supported from D3hot D3cold
[    0.549760] pci 0000:10:00.0: PME# disabled
[    0.549977] pci 0000:10:00.0: 7.876 Gb/s available PCIe bandwidth, limited by 8 GT/s x1 link at 0000:0e:01.0 (capable of 15.752 Gb/s with 8 GT/s x2 link)
[    0.550208] pci_bus 0000:10: fixups for bus
[    0.550209] pci 0000:0e:01.0: PCI bridge to [bus 10]
[    0.550235] pci 0000:0e:01.0:   bridge window [mem 0xd0500000-0xd05fffff]
[    0.550253] pci_bus 0000:10: bus scan returning with max=10
[    0.550263] pci 0000:0e:02.0: scanning [bus 11-11] behind bridge, pass 0
[    0.550361] pci_bus 0000:11: scanning bus
[    0.550416] pci 0000:11:00.0: [1b21:1242] type 00 class 0x0c0330
[    0.550554] pci 0000:11:00.0: reg 0x10: [mem 0xd0600000-0xd0607fff 64bit]
[    0.550797] pci 0000:11:00.0: enabling Extended Tags
[    0.551147] pci 0000:11:00.0: PME# supported from D3hot D3cold
[    0.551159] pci 0000:11:00.0: PME# disabled
[    0.551375] pci 0000:11:00.0: 7.876 Gb/s available PCIe bandwidth, limited by 8 GT/s x1 link at 0000:0e:02.0 (capable of 15.752 Gb/s with 8 GT/s x2 link)
[    0.551633] pci_bus 0000:11: fixups for bus
[    0.551634] pci 0000:0e:02.0: PCI bridge to [bus 11]
[    0.551660] pci 0000:0e:02.0:   bridge window [mem 0xd0600000-0xd06fffff]
[    0.551678] pci_bus 0000:11: bus scan returning with max=11
[    0.551688] pci 0000:0e:00.0: scanning [bus 0f-0f] behind bridge, pass 1
[    0.551705] pci 0000:0e:01.0: scanning [bus 10-10] behind bridge, pass 1
[    0.551721] pci 0000:0e:02.0: scanning [bus 11-11] behind bridge, pass 1
[    0.551734] pci_bus 0000:0e: bus scan returning with max=11
[    0.551744] pci 0000:0d:00.0: scanning [bus 0e-2c] behind bridge, pass 1
[    0.551756] pci_bus 0000:0d: bus scan returning with max=2c
[    0.551763] pci 0000:09:01.0: scanning [bus 0a-0c] behind bridge, pass 1
[    0.551775] pci 0000:09:04.0: scanning [bus 0d-2c] behind bridge, pass 1
[    0.551783] pci_bus 0000:09: bus scan returning with max=2c
[    0.551790] pci 0000:08:00.0: scanning [bus 09-2c] behind bridge, pass 1
[    0.551799] pci_bus 0000:08: bus scan returning with max=2c
[    0.551803] pci 0000:06:02.0: scanning [bus 2d-2d] behind bridge, pass 0
[    0.551867] pci_bus 0000:2d: scanning bus
[    0.551887] pci 0000:2d:00.0: [8086:15d4] type 00 class 0x0c0330
[    0.551927] pci 0000:2d:00.0: reg 0x10: [mem 0xdc000000-0xdc00ffff]
[    0.552012] pci 0000:2d:00.0: enabling Extended Tags
[    0.552133] pci 0000:2d:00.0: supports D1 D2
[    0.552136] pci 0000:2d:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.552142] pci 0000:2d:00.0: PME# disabled
[    0.552227] pci 0000:2d:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x4 link at 0000:06:02.0 (capable of 31.504 Gb/s with 8 GT/s x4 link)
[    0.552355] pci_bus 0000:2d: fixups for bus
[    0.552357] pci 0000:06:02.0: PCI bridge to [bus 2d]
[    0.552368] pci 0000:06:02.0:   bridge window [mem 0xdc000000-0xdc0fffff]
[    0.552376] pci_bus 0000:2d: bus scan returning with max=2d
[    0.552380] pci 0000:06:04.0: scanning [bus 2e-52] behind bridge, pass 0
[    0.552422] pci_bus 0000:2e: scanning bus
[    0.552426] pci_bus 0000:2e: fixups for bus
[    0.552428] pci 0000:06:04.0: PCI bridge to [bus 2e-52]
[    0.552438] pci 0000:06:04.0:   bridge window [mem 0xdc100000-0xe80fffff]
[    0.552447] pci 0000:06:04.0:   bridge window [mem 0xa0000000-0xbfffffff 64bit pref]
[    0.552450] pci_bus 0000:2e: bus scan returning with max=2e
[    0.552454] pci 0000:06:00.0: scanning [bus 07-07] behind bridge, pass 1
[    0.552461] pci 0000:06:01.0: scanning [bus 08-2c] behind bridge, pass 1
[    0.552467] pci 0000:06:02.0: scanning [bus 2d-2d] behind bridge, pass 1
[    0.552474] pci 0000:06:04.0: scanning [bus 2e-52] behind bridge, pass 1
[    0.552479] pci_bus 0000:06: bus scan returning with max=52
[    0.552483] pci 0000:05:00.0: scanning [bus 06-52] behind bridge, pass 1
[    0.552488] pci_bus 0000:05: bus scan returning with max=52
[    0.552493] pci 0000:00:1d.0: scanning [bus 03-03] behind bridge, pass 1
[    0.552498] pci 0000:00:1d.4: scanning [bus 05-52] behind bridge, pass 1
[    0.552503] pci_bus 0000:00: bus scan returning with max=52
[    0.558411] ACPI: EC: interrupt unblocked
[    0.558438] ACPI: EC: event unblocked
[    0.558456] ACPI: EC: EC_CMD/EC_SC=0x66, EC_DATA=0x62
[    0.558459] ACPI: EC: GPE=0x16
[    0.558462] ACPI: \_SB_.PCI0.LPCB.EC__: Boot DSDT EC used to handle transactions and events
[    0.558582] iommu: Default domain type: Translated 
[    0.558582] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=mem,locks=none
[    0.558582] pci 0000:0c:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none
[    0.558582] pci 0000:00:02.0: vgaarb: no bridge control possible
[    0.558582] pci 0000:0c:00.0: vgaarb: bridge control possible
[    0.558582] pci 0000:00:02.0: vgaarb: setting as boot device
[    0.558582] vgaarb: loaded
[    0.558582] ACPI: bus type USB registered
[    0.558582] usbcore: registered new interface driver usbfs
[    0.558582] usbcore: registered new interface driver hub
[    0.559050] usbcore: registered new device driver usb
[    0.559050] pps_core: LinuxPPS API ver. 1 registered
[    0.559050] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[    0.559050] PTP clock support registered
[    0.559050] Registered efivars operations
[    0.559063] PCI: Using ACPI for IRQ routing
[    0.572216] PCI: pci_cache_line_size set to 64 bytes
[    0.572222] pci 0000:09:01.0: can't claim BAR 13 [io  0x2000-0x2fff]: no compatible bridge window
[    0.572226] pci 0000:0a:00.0: can't claim BAR 13 [io  0x2000-0x2fff]: no compatible bridge window
[    0.572229] pci 0000:0b:00.0: can't claim BAR 13 [io  0x2000-0x2fff]: no compatible bridge window
[    0.572237] pci 0000:00:02.0: BAR 0: reserving [mem 0xe9000000-0xe9ffffff flags 0x140204] (d=0, p=0)
[    0.572239] pci 0000:00:02.0: BAR 2: reserving [mem 0xc0000000-0xcfffffff flags 0x14220c] (d=0, p=0)
[    0.572251] pci 0000:00:14.0: BAR 0: reserving [mem 0xea220000-0xea22ffff flags 0x140204] (d=0, p=0)
[    0.572262] pci 0000:00:15.0: BAR 0: reserving [mem 0xea245000-0xea245fff flags 0x140204] (d=0, p=0)
[    0.572265] pci 0000:00:15.1: BAR 0: reserving [mem 0xea246000-0xea246fff flags 0x140204] (d=0, p=0)
[    0.572272] pci 0000:00:16.3: BAR 0: reserving [io  0x3060-0x3067 flags 0x40101] (d=0, p=0)
[    0.572274] pci 0000:00:16.3: BAR 1: reserving [mem 0xea24a000-0xea24afff flags 0x40200] (d=0, p=0)
[    0.572421] pci 0000:03:00.0: BAR 0: reserving [mem 0xea100000-0xea103fff flags 0x140204] (d=0, p=0)
[    0.572428] pci 0000:07:00.0: BAR 0: reserving [mem 0xe8100000-0xe813ffff flags 0x40200] (d=0, p=0)
[    0.572430] pci 0000:07:00.0: BAR 1: reserving [mem 0xe8140000-0xe8140fff flags 0x40200] (d=0, p=0)
[    0.572441] pci 0000:0a:00.0: BAR 0: reserving [mem 0xd0000000-0xd0003fff flags 0x40200] (d=0, p=0)
[    0.572443] pci 0000:0a:00.0: can't claim BAR 0 [mem 0xd0000000-0xd0003fff]: address conflict with PCI Bus 0000:0b [mem 0xd0000000-0xd03fffff]
[    0.572455] pci 0000:0c:00.0: BAR 0: reserving [mem 0x80000000-0x8fffffff flags 0x14220c] (d=0, p=0)
[    0.572457] pci 0000:0c:00.0: BAR 2: reserving [mem 0x90000000-0x901fffff flags 0x14220c] (d=0, p=0)
[    0.572458] pci 0000:0c:00.0: BAR 5: reserving [mem 0xd0100000-0xd017ffff flags 0x40200] (d=0, p=0)
[    0.572463] pci 0000:0c:00.1: BAR 0: reserving [mem 0xd0180000-0xd0183fff flags 0x40200] (d=0, p=0)
[    0.572480] pci 0000:0f:00.0: BAR 0: reserving [mem 0xd0400000-0xd0407fff flags 0x140204] (d=0, p=0)
[    0.572490] pci 0000:10:00.0: BAR 0: reserving [mem 0xd0500000-0xd0507fff flags 0x140204] (d=0, p=0)
[    0.572500] pci 0000:11:00.0: BAR 0: reserving [mem 0xd0600000-0xd0607fff flags 0x140204] (d=0, p=0)
[    0.572505] pci 0000:2d:00.0: BAR 0: reserving [mem 0xdc000000-0xdc00ffff flags 0x40200] (d=0, p=0)
[    0.572512] pci 0000:00:1f.3: BAR 0: reserving [mem 0xea23c000-0xea23ffff flags 0x140204] (d=0, p=0)
[    0.572514] pci 0000:00:1f.3: BAR 4: reserving [mem 0xea000000-0xea0fffff flags 0x140204] (d=0, p=0)
[    0.572526] pci 0000:00:1f.5: BAR 0: reserving [mem 0xfe010000-0xfe010fff flags 0x40200] (d=0, p=0)
[    0.572530] pci 0000:00:1f.6: BAR 0: reserving [mem 0xea200000-0xea21ffff flags 0x40200] (d=0, p=0)
[    0.572533] pci 0000:00:02.0: BAR 4: reserving [io  0x3000-0x303f flags 0x40101] (d=1, p=1)
[    0.572535] pci 0000:00:04.0: BAR 0: reserving [mem 0xea230000-0xea237fff flags 0x140204] (d=1, p=1)
[    0.572537] pci 0000:00:08.0: BAR 0: reserving [mem 0xea242000-0xea242fff flags 0x140204] (d=1, p=1)
[    0.572542] pci 0000:00:12.0: BAR 0: reserving [mem 0xea243000-0xea243fff flags 0x140204] (d=1, p=1)
[    0.572550] pci 0000:00:14.2: BAR 0: reserving [mem 0xea240000-0xea241fff flags 0x140204] (d=1, p=1)
[    0.572552] pci 0000:00:14.2: BAR 2: reserving [mem 0xea244000-0xea244fff flags 0x140204] (d=1, p=1)
[    0.572559] pci 0000:00:14.3: BAR 0: reserving [mem 0xea238000-0xea23bfff flags 0x140204] (d=1, p=1)
[    0.572565] pci 0000:00:16.0: BAR 0: reserving [mem 0xea247000-0xea247fff flags 0x140204] (d=1, p=1)
[    0.572788] pci 0000:00:1f.4: BAR 0: reserving [mem 0xea248000-0xea2480ff flags 0x140204] (d=1, p=1)
[    0.572790] pci 0000:00:1f.4: BAR 4: reserving [io  0xefa0-0xefbf flags 0x40101] (d=1, p=1)
[    0.572798] e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
[    0.572800] e820: reserve RAM buffer [mem 0x65ec4000-0x67ffffff]
[    0.572800] e820: reserve RAM buffer [mem 0x6b72a000-0x6bffffff]
[    0.572801] e820: reserve RAM buffer [mem 0x6fd10000-0x6fffffff]
[    0.572802] e820: reserve RAM buffer [mem 0x47e800000-0x47fffffff]
[    0.573047] NetLabel: Initializing
[    0.573047] NetLabel:  domain hash size = 128
[    0.573047] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
[    0.573047] NetLabel:  unlabeled traffic allowed by default
[    0.574046] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
[    0.574054] hpet0: 8 comparators, 64-bit 24.000000 MHz counter
[    0.576076] clocksource: Switched to clocksource tsc-early
[    0.590217] VFS: Disk quotas dquot_6.6.0
[    0.590239] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.590298] pnp: PnP ACPI init
[    0.590446] system 00:00: [mem 0x40000000-0x403fffff] could not be reserved
[    0.590456] system 00:00: Plug and Play ACPI device, IDs PNP0c02 (active)
[    0.590968] system 00:01: [io  0x1800-0x18fe] has been reserved
[    0.590973] system 00:01: [mem 0xfd000000-0xfd69ffff] has been reserved
[    0.590977] system 00:01: [mem 0xfd6b0000-0xfd6cffff] has been reserved
[    0.590980] system 00:01: [mem 0xfd6f0000-0xfdffffff] has been reserved
[    0.590983] system 00:01: [mem 0xfe000000-0xfe01ffff] could not be reserved
[    0.590987] system 00:01: [mem 0xfe200000-0xfe7fffff] has been reserved
[    0.590991] system 00:01: [mem 0xff000000-0xffffffff] has been reserved
[    0.590998] system 00:01: Plug and Play ACPI device, IDs PNP0c02 (active)
[    0.591515] system 00:02: [io  0xff00-0xfffe] has been reserved
[    0.591522] system 00:02: Plug and Play ACPI device, IDs PNP0c02 (active)
[    0.592097] pnp 00:03: disabling [io  0x002e-0x002f] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592102] pnp 00:03: disabling [io  0x004e-0x004f] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592106] pnp 00:03: disabling [io  0x0061] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592110] pnp 00:03: disabling [io  0x0063] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592113] pnp 00:03: disabling [io  0x0065] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592117] pnp 00:03: disabling [io  0x0067] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592121] pnp 00:03: disabling [io  0x0070] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592124] pnp 00:03: disabling [io  0x0080] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592128] pnp 00:03: disabling [io  0x0092] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592131] pnp 00:03: disabling [io  0x00b2-0x00b3] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592174] system 00:03: [io  0x0680-0x069f] has been reserved
[    0.592178] system 00:03: [io  0x164e-0x164f] has been reserved
[    0.592184] system 00:03: Plug and Play ACPI device, IDs PNP0c02 (active)
[    0.592388] system 00:04: [io  0x1854-0x1857] has been reserved
[    0.592395] system 00:04: Plug and Play ACPI device, IDs INT3f0d PNP0c02 (active)
[    0.592422] pnp 00:05: Plug and Play ACPI device, IDs LEN0071 PNP0303 (active)
[    0.592445] pnp 00:06: Plug and Play ACPI device, IDs LEN0300 PNP0f13 (active)
[    0.592528] pnp 00:07: disabling [io  0x0010-0x001f] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592532] pnp 00:07: disabling [io  0x0090-0x009f] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592536] pnp 00:07: disabling [io  0x0024-0x0025] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592540] pnp 00:07: disabling [io  0x0028-0x0029] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592543] pnp 00:07: disabling [io  0x002c-0x002d] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592547] pnp 00:07: disabling [io  0x0030-0x0031] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592551] pnp 00:07: disabling [io  0x0034-0x0035] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592554] pnp 00:07: disabling [io  0x0038-0x0039] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592558] pnp 00:07: disabling [io  0x003c-0x003d] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592561] pnp 00:07: disabling [io  0x00a4-0x00a5] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592565] pnp 00:07: disabling [io  0x00a8-0x00a9] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592569] pnp 00:07: disabling [io  0x00ac-0x00ad] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592572] pnp 00:07: disabling [io  0x00b0-0x00b5] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592576] pnp 00:07: disabling [io  0x00b8-0x00b9] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592579] pnp 00:07: disabling [io  0x00bc-0x00bd] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592583] pnp 00:07: disabling [io  0x0050-0x0053] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592587] pnp 00:07: disabling [io  0x0072-0x0077] because it overlaps 0000:0c:00.0 BAR 4 [io  0x0000-0x00ff]
[    0.592631] system 00:07: [io  0x1800-0x189f] could not be reserved
[    0.592634] system 00:07: [io  0x0800-0x087f] has been reserved
[    0.592638] system 00:07: [io  0x0880-0x08ff] has been reserved
[    0.592641] system 00:07: [io  0x0900-0x097f] has been reserved
[    0.592644] system 00:07: [io  0x0980-0x09ff] has been reserved
[    0.592647] system 00:07: [io  0x0a00-0x0a7f] has been reserved
[    0.592650] system 00:07: [io  0x0a80-0x0aff] has been reserved
[    0.592653] system 00:07: [io  0x0b00-0x0b7f] has been reserved
[    0.592656] system 00:07: [io  0x0b80-0x0bff] has been reserved
[    0.592659] system 00:07: [io  0x15e0-0x15ef] has been reserved
[    0.592662] system 00:07: [io  0x1600-0x167f] could not be reserved
[    0.592665] system 00:07: [io  0x1640-0x165f] could not be reserved
[    0.592669] system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
[    0.592672] system 00:07: [mem 0xfed10000-0xfed13fff] has been reserved
[    0.592676] system 00:07: [mem 0xfed18000-0xfed18fff] has been reserved
[    0.592679] system 00:07: [mem 0xfed19000-0xfed19fff] has been reserved
[    0.592683] system 00:07: [mem 0xfeb00000-0xfebfffff] has been reserved
[    0.592686] system 00:07: [mem 0xfed20000-0xfed3ffff] has been reserved
[    0.592689] system 00:07: [mem 0xfed90000-0xfed93fff] could not be reserved
[    0.592695] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
[    0.594692] system 00:08: Plug and Play ACPI device, IDs PNP0c02 (active)
[    0.596180] system 00:09: [mem 0xfed10000-0xfed17fff] could not be reserved
[    0.596185] system 00:09: [mem 0xfed18000-0xfed18fff] has been reserved
[    0.596188] system 00:09: [mem 0xfed19000-0xfed19fff] has been reserved
[    0.596192] system 00:09: [mem 0xf0000000-0xf7ffffff] has been reserved
[    0.596195] system 00:09: [mem 0xfed20000-0xfed3ffff] has been reserved
[    0.596198] system 00:09: [mem 0xfed90000-0xfed93fff] could not be reserved
[    0.596202] system 00:09: [mem 0xfed45000-0xfed8ffff] has been reserved
[    0.596205] system 00:09: [mem 0xfee00000-0xfeefffff] has been reserved
[    0.596211] system 00:09: Plug and Play ACPI device, IDs PNP0c02 (active)
[    0.596754] system 00:0a: [mem 0x00000000-0x0009ffff] could not be reserved
[    0.596759] system 00:0a: [mem 0x000c0000-0x000c3fff] could not be reserved
[    0.596762] system 00:0a: [mem 0x000c8000-0x000cbfff] could not be reserved
[    0.596766] system 00:0a: [mem 0x000d0000-0x000d3fff] could not be reserved
[    0.596769] system 00:0a: [mem 0x000d8000-0x000dbfff] could not be reserved
[    0.596772] system 00:0a: [mem 0x000e0000-0x000e3fff] could not be reserved
[    0.596775] system 00:0a: [mem 0x000e8000-0x000ebfff] could not be reserved
[    0.596779] system 00:0a: [mem 0x000f0000-0x000fffff] could not be reserved
[    0.596782] system 00:0a: [mem 0x00100000-0x7d7fffff] could not be reserved
[    0.596786] system 00:0a: [mem 0xfec00000-0xfed3ffff] could not be reserved
[    0.596789] system 00:0a: [mem 0xfed4c000-0xffffffff] could not be reserved
[    0.596795] system 00:0a: Plug and Play ACPI device, IDs PNP0c01 (active)
[    0.596959] pnp: PnP ACPI: found 11 devices
[    0.602877] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[    0.602916] pci 0000:0c:00.0: can't claim BAR 6 [mem 0xfffe0000-0xffffffff pref]: no compatible bridge window
[    0.602929] pci 0000:09:04.0: bridge window [io  0x1000-0x0fff] to [bus 0d-2c] add_size 1000
[    0.602934] pci 0000:08:00.0: bridge window [io  0x1000-0x1fff] to [bus 09-2c] add_size 1000
[    0.602937] pci 0000:06:01.0: bridge window [io  0x1000-0x1fff] to [bus 08-2c] add_size 1000
[    0.602942] pci 0000:06:04.0: bridge window [io  0x1000-0x0fff] to [bus 2e-52] add_size 1000
[    0.602946] pci 0000:05:00.0: bridge window [io  0x1000-0x1fff] to [bus 06-52] add_size 2000
[    0.602953] pci 0000:00:1d.0: PCI bridge to [bus 03]
[    0.602963] pci 0000:00:1d.0:   bridge window [mem 0xea100000-0xea1fffff]
[    0.602974] pci 0000:05:00.0: BAR 13: no space for [io  size 0x3000]
[    0.602977] pci 0000:05:00.0: BAR 13: failed to assign [io  size 0x3000]
[    0.602980] pci 0000:05:00.0: BAR 13: assigned [io  0x2000-0x2fff]
[    0.602984] pci 0000:05:00.0: BAR 13: [io  0x2000-0x2fff] (failed to expand by 0x2000)
[    0.602987] pci 0000:05:00.0: failed to add 2000 res[13]=[io  0x2000-0x2fff]
[    0.602992] pci 0000:06:01.0: BAR 13: no space for [io  size 0x2000]
[    0.602995] pci 0000:06:01.0: BAR 13: failed to assign [io  size 0x2000]
[    0.602998] pci 0000:06:04.0: BAR 13: assigned [io  0x2000-0x2fff]
[    0.603001] pci 0000:06:01.0: BAR 13: assigned [io  0x2000-0x2fff]
[    0.603004] pci 0000:06:04.0: BAR 13: no space for [io  size 0x1000]
[    0.603007] pci 0000:06:04.0: BAR 13: failed to assign [io  size 0x1000]
[    0.603011] pci 0000:06:01.0: BAR 13: [io  0x2000-0x2fff] (failed to expand by 0x1000)
[    0.603014] pci 0000:06:01.0: failed to add 1000 res[13]=[io  0x2000-0x2fff]
[    0.603017] pci 0000:06:00.0: PCI bridge to [bus 07]
[    0.603025] pci 0000:06:00.0:   bridge window [mem 0xe8100000-0xe81fffff]
[    0.603037] pci 0000:08:00.0: BAR 13: no space for [io  size 0x2000]
[    0.603040] pci 0000:08:00.0: BAR 13: failed to assign [io  size 0x2000]
[    0.603043] pci 0000:08:00.0: BAR 13: assigned [io  0x2000-0x2fff]
[    0.603046] pci 0000:08:00.0: BAR 13: [io  0x2000-0x2fff] (failed to expand by 0x1000)
[    0.603054] pci 0000:08:00.0: failed to add 1000 res[13]=[io  0x2000-0x2fff]
[    0.603058] pci 0000:09:01.0: BAR 13: assigned [io  0x2000-0x2fff]
[    0.603061] pci 0000:09:04.0: BAR 13: no space for [io  size 0x1000]
[    0.603064] pci 0000:09:04.0: BAR 13: failed to assign [io  size 0x1000]
[    0.603067] pci 0000:09:01.0: BAR 13: assigned [io  0x2000-0x2fff]
[    0.603070] pci 0000:09:04.0: BAR 13: no space for [io  size 0x1000]
[    0.603073] pci 0000:09:04.0: BAR 13: failed to assign [io  size 0x1000]
[    0.603077] pci 0000:0a:00.0: BAR 0: no space for [mem size 0x00004000]
[    0.603080] pci 0000:0a:00.0: BAR 0: trying firmware assignment [mem 0xd0000000-0xd0003fff]
[    0.603084] pci 0000:0a:00.0: BAR 0: [mem 0xd0000000-0xd0003fff] conflicts with PCI Bus 0000:0b [mem 0xd0000000-0xd03fffff]
[    0.603088] pci 0000:0a:00.0: BAR 0: failed to assign [mem size 0x00004000]
[    0.603091] pci 0000:0a:00.0: BAR 13: assigned [io  0x2000-0x2fff]
[    0.603095] pci 0000:0b:00.0: BAR 13: assigned [io  0x2000-0x2fff]
[    0.603099] pci 0000:0c:00.0: BAR 6: assigned [mem 0xd01a0000-0xd01bffff pref]
[    0.603103] pci 0000:0c:00.0: BAR 4: assigned [io  0x2000-0x20ff]
[    0.603113] pci 0000:0b:00.0: PCI bridge to [bus 0c]
[    0.603119] pci 0000:0b:00.0:   bridge window [io  0x2000-0x2fff]
[    0.603132] pci 0000:0b:00.0:   bridge window [mem 0xd0100000-0xd01fffff]
[    0.603141] pci 0000:0b:00.0:   bridge window [mem 0x80000000-0x901fffff 64bit pref]
[    0.603157] pci 0000:0a:00.0: PCI bridge to [bus 0b-0c]
[    0.603163] pci 0000:0a:00.0:   bridge window [io  0x2000-0x2fff]
[    0.603175] pci 0000:0a:00.0:   bridge window [mem 0xd0000000-0xd03fffff]
[    0.603185] pci 0000:0a:00.0:   bridge window [mem 0x80000000-0x901fffff 64bit pref]
[    0.603201] pci 0000:09:01.0: PCI bridge to [bus 0a-0c]
[    0.603206] pci 0000:09:01.0:   bridge window [io  0x2000-0x2fff]
[    0.603216] pci 0000:09:01.0:   bridge window [mem 0xd0000000-0xd03fffff]
[    0.603225] pci 0000:09:01.0:   bridge window [mem 0x80000000-0x901fffff 64bit pref]
[    0.603239] pci 0000:0e:00.0: PCI bridge to [bus 0f]
[    0.603253] pci 0000:0e:00.0:   bridge window [mem 0xd0400000-0xd04fffff]
[    0.603280] pci 0000:0e:01.0: PCI bridge to [bus 10]
[    0.603294] pci 0000:0e:01.0:   bridge window [mem 0xd0500000-0xd05fffff]
[    0.603320] pci 0000:0e:02.0: PCI bridge to [bus 11]
[    0.603334] pci 0000:0e:02.0:   bridge window [mem 0xd0600000-0xd06fffff]
[    0.603360] pci 0000:0d:00.0: PCI bridge to [bus 0e-2c]
[    0.603374] pci 0000:0d:00.0:   bridge window [mem 0xd0400000-0xdbffffff]
[    0.603385] pci 0000:0d:00.0:   bridge window [mem 0x90200000-0x9fffffff 64bit pref]
[    0.603403] pci 0000:09:04.0: PCI bridge to [bus 0d-2c]
[    0.603413] pci 0000:09:04.0:   bridge window [mem 0xd0400000-0xdbffffff]
[    0.603422] pci 0000:09:04.0:   bridge window [mem 0x90200000-0x9fffffff 64bit pref]
[    0.603435] pci 0000:08:00.0: PCI bridge to [bus 09-2c]
[    0.603440] pci 0000:08:00.0:   bridge window [io  0x2000-0x2fff]
[    0.603450] pci 0000:08:00.0:   bridge window [mem 0xd0000000-0xdbffffff]
[    0.603458] pci 0000:08:00.0:   bridge window [mem 0x80000000-0x9fffffff 64bit pref]
[    0.603472] pci 0000:06:01.0: PCI bridge to [bus 08-2c]
[    0.603476] pci 0000:06:01.0:   bridge window [io  0x2000-0x2fff]
[    0.603482] pci 0000:06:01.0:   bridge window [mem 0xd0000000-0xdbffffff]
[    0.603488] pci 0000:06:01.0:   bridge window [mem 0x80000000-0x9fffffff 64bit pref]
[    0.603496] pci 0000:06:02.0: PCI bridge to [bus 2d]
[    0.603503] pci 0000:06:02.0:   bridge window [mem 0xdc000000-0xdc0fffff]
[    0.603514] pci 0000:06:04.0: PCI bridge to [bus 2e-52]
[    0.603520] pci 0000:06:04.0:   bridge window [mem 0xdc100000-0xe80fffff]
[    0.603526] pci 0000:06:04.0:   bridge window [mem 0xa0000000-0xbfffffff 64bit pref]
[    0.603534] pci 0000:05:00.0: PCI bridge to [bus 06-52]
[    0.603538] pci 0000:05:00.0:   bridge window [io  0x2000-0x2fff]
[    0.603545] pci 0000:05:00.0:   bridge window [mem 0xd0000000-0xe81fffff]
[    0.603551] pci 0000:05:00.0:   bridge window [mem 0x80000000-0xbfffffff 64bit pref]
[    0.603559] pci 0000:00:1d.4: PCI bridge to [bus 05-52]
[    0.603562] pci 0000:00:1d.4:   bridge window [io  0x2000-0x2fff]
[    0.603567] pci 0000:00:1d.4:   bridge window [mem 0xd0000000-0xe81fffff]
[    0.603572] pci 0000:00:1d.4:   bridge window [mem 0x80000000-0xbfffffff 64bit pref]
[    0.603579] pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
[    0.603583] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
[    0.603586] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
[    0.603589] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
[    0.603592] pci_bus 0000:00: resource 7 [mem 0x7d800000-0xefffffff window]
[    0.603595] pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
[    0.603598] pci_bus 0000:03: resource 1 [mem 0xea100000-0xea1fffff]
[    0.603601] pci_bus 0000:05: resource 0 [io  0x2000-0x2fff]
[    0.603604] pci_bus 0000:05: resource 1 [mem 0xd0000000-0xe81fffff]
[    0.603607] pci_bus 0000:05: resource 2 [mem 0x80000000-0xbfffffff 64bit pref]
[    0.603610] pci_bus 0000:06: resource 0 [io  0x2000-0x2fff]
[    0.603613] pci_bus 0000:06: resource 1 [mem 0xd0000000-0xe81fffff]
[    0.603616] pci_bus 0000:06: resource 2 [mem 0x80000000-0xbfffffff 64bit pref]
[    0.603619] pci_bus 0000:07: resource 1 [mem 0xe8100000-0xe81fffff]
[    0.603622] pci_bus 0000:08: resource 0 [io  0x2000-0x2fff]
[    0.603624] pci_bus 0000:08: resource 1 [mem 0xd0000000-0xdbffffff]
[    0.603627] pci_bus 0000:08: resource 2 [mem 0x80000000-0x9fffffff 64bit pref]
[    0.603630] pci_bus 0000:09: resource 0 [io  0x2000-0x2fff]
[    0.603633] pci_bus 0000:09: resource 1 [mem 0xd0000000-0xdbffffff]
[    0.603636] pci_bus 0000:09: resource 2 [mem 0x80000000-0x9fffffff 64bit pref]
[    0.603639] pci_bus 0000:0a: resource 0 [io  0x2000-0x2fff]
[    0.603641] pci_bus 0000:0a: resource 1 [mem 0xd0000000-0xd03fffff]
[    0.603644] pci_bus 0000:0a: resource 2 [mem 0x80000000-0x901fffff 64bit pref]
[    0.603647] pci_bus 0000:0b: resource 0 [io  0x2000-0x2fff]
[    0.603650] pci_bus 0000:0b: resource 1 [mem 0xd0000000-0xd03fffff]
[    0.603652] pci_bus 0000:0b: resource 2 [mem 0x80000000-0x901fffff 64bit pref]
[    0.603655] pci_bus 0000:0c: resource 0 [io  0x2000-0x2fff]
[    0.603658] pci_bus 0000:0c: resource 1 [mem 0xd0100000-0xd01fffff]
[    0.603661] pci_bus 0000:0c: resource 2 [mem 0x80000000-0x901fffff 64bit pref]
[    0.603664] pci_bus 0000:0d: resource 1 [mem 0xd0400000-0xdbffffff]
[    0.603667] pci_bus 0000:0d: resource 2 [mem 0x90200000-0x9fffffff 64bit pref]
[    0.603670] pci_bus 0000:0e: resource 1 [mem 0xd0400000-0xdbffffff]
[    0.603673] pci_bus 0000:0e: resource 2 [mem 0x90200000-0x9fffffff 64bit pref]
[    0.603676] pci_bus 0000:0f: resource 1 [mem 0xd0400000-0xd04fffff]
[    0.603679] pci_bus 0000:10: resource 1 [mem 0xd0500000-0xd05fffff]
[    0.603681] pci_bus 0000:11: resource 1 [mem 0xd0600000-0xd06fffff]
[    0.603684] pci_bus 0000:2d: resource 1 [mem 0xdc000000-0xdc0fffff]
[    0.603687] pci_bus 0000:2e: resource 1 [mem 0xdc100000-0xe80fffff]
[    0.603690] pci_bus 0000:2e: resource 2 [mem 0xa0000000-0xbfffffff 64bit pref]
[    0.603928] NET: Registered protocol family 2
[    0.604083] tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
[    0.604110] TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    0.604292] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
[    0.604436] TCP: Hash tables configured (established 131072 bind 65536)
[    0.604468] UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
[    0.604527] UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
[    0.604608] NET: Registered protocol family 1
[    0.604615] NET: Registered protocol family 44
[    0.604629] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[    0.605340] pci 0000:0c:00.1: D0 power state depends on 0000:0c:00.0
[    0.605391] pci 0000:0c:00.1: saving config space at offset 0x0 (reading 0xab381002)
[    0.605395] pci 0000:0c:00.1: saving config space at offset 0x4 (reading 0x100006)
[    0.605400] pci 0000:0c:00.1: saving config space at offset 0x8 (reading 0x4030000)
[    0.605404] pci 0000:0c:00.1: saving config space at offset 0xc (reading 0x800020)
[    0.605409] pci 0000:0c:00.1: saving config space at offset 0x10 (reading 0xd0180000)
[    0.605413] pci 0000:0c:00.1: saving config space at offset 0x14 (reading 0x0)
[    0.605417] pci 0000:0c:00.1: saving config space at offset 0x18 (reading 0x0)
[    0.605421] pci 0000:0c:00.1: saving config space at offset 0x1c (reading 0x0)
[    0.605426] pci 0000:0c:00.1: saving config space at offset 0x20 (reading 0x0)
[    0.605430] pci 0000:0c:00.1: saving config space at offset 0x24 (reading 0x0)
[    0.605434] pci 0000:0c:00.1: saving config space at offset 0x28 (reading 0x0)
[    0.605438] pci 0000:0c:00.1: saving config space at offset 0x2c (reading 0xab381002)
[    0.605443] pci 0000:0c:00.1: saving config space at offset 0x30 (reading 0x0)
[    0.605447] pci 0000:0c:00.1: saving config space at offset 0x34 (reading 0x48)
[    0.605451] pci 0000:0c:00.1: saving config space at offset 0x38 (reading 0x0)
[    0.605455] pci 0000:0c:00.1: saving config space at offset 0x3c (reading 0x2ff)
[    0.607751] PCI: CLS 128 bytes, default 64
[    0.607790] Trying to unpack rootfs image as initramfs...
[    1.543047] Freeing initrd memory: 448304K
[    1.952187] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    1.952192] software IO TLB: mapped [mem 0x61437000-0x65437000] (64MB)
[    1.952295] platform rtc_cmos: registered platform RTC device (no PNP device found)
[    1.952442] check: Scanning for low memory corruption every 60 seconds
[    1.952916] Initialise system trusted keyrings
[    1.952930] Key type blacklist registered
[    1.952988] workingset: timestamp_bits=36 max_order=22 bucket_order=0
[    1.954048] zbud: loaded
[    1.960904] Key type asymmetric registered
[    1.960909] Asymmetric key parser 'x509' registered
[    1.960918] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 244)
[    1.961149] pcieport 0000:00:1d.0: runtime IRQ mapping not provided by arch
[    1.961479] pcieport 0000:00:1d.0: PME: Signaling with IRQ 122
[    1.961543] pcieport 0000:00:1d.0: saving config space at offset 0x0 (reading 0x9db08086)
[    1.961546] pcieport 0000:00:1d.0: saving config space at offset 0x4 (reading 0x100407)
[    1.961548] pcieport 0000:00:1d.0: saving config space at offset 0x8 (reading 0x60400f1)
[    1.961551] pcieport 0000:00:1d.0: saving config space at offset 0xc (reading 0x810000)
[    1.961553] pcieport 0000:00:1d.0: saving config space at offset 0x10 (reading 0x0)
[    1.961555] pcieport 0000:00:1d.0: saving config space at offset 0x14 (reading 0x0)
[    1.961557] pcieport 0000:00:1d.0: saving config space at offset 0x18 (reading 0x30300)
[    1.961559] pcieport 0000:00:1d.0: saving config space at offset 0x1c (reading 0x200000f0)
[    1.961561] pcieport 0000:00:1d.0: saving config space at offset 0x20 (reading 0xea10ea10)
[    1.961563] pcieport 0000:00:1d.0: saving config space at offset 0x24 (reading 0x1fff1)
[    1.961565] pcieport 0000:00:1d.0: saving config space at offset 0x28 (reading 0x0)
[    1.961567] pcieport 0000:00:1d.0: saving config space at offset 0x2c (reading 0x0)
[    1.961569] pcieport 0000:00:1d.0: saving config space at offset 0x30 (reading 0x0)
[    1.961571] pcieport 0000:00:1d.0: saving config space at offset 0x34 (reading 0x40)
[    1.961573] pcieport 0000:00:1d.0: saving config space at offset 0x38 (reading 0x0)
[    1.961575] pcieport 0000:00:1d.0: saving config space at offset 0x3c (reading 0x201ff)
[    1.961614] pcieport 0000:00:1d.4: runtime IRQ mapping not provided by arch
[    1.961684] pcieport 0000:00:1d.4: PME: Signaling with IRQ 123
[    1.961715] pcieport 0000:00:1d.4: pciehp: Slot #12 AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise+ Interlock- NoCompl+ LLActRep+
[    1.961723] pci_bus 0000:05: dev 00, created physical slot 12
[    1.961803] pcieport 0000:00:1d.4: saving config space at offset 0x0 (reading 0x9db48086)
[    1.961805] pcieport 0000:00:1d.4: saving config space at offset 0x4 (reading 0x100407)
[    1.961807] pcieport 0000:00:1d.4: saving config space at offset 0x8 (reading 0x60400f1)
[    1.961809] pcieport 0000:00:1d.4: saving config space at offset 0xc (reading 0x810000)
[    1.961811] pcieport 0000:00:1d.4: saving config space at offset 0x10 (reading 0x0)
[    1.961813] pcieport 0000:00:1d.4: saving config space at offset 0x14 (reading 0x0)
[    1.961815] pcieport 0000:00:1d.4: saving config space at offset 0x18 (reading 0x520500)
[    1.961817] pcieport 0000:00:1d.4: saving config space at offset 0x1c (reading 0x20002020)
[    1.961819] pcieport 0000:00:1d.4: saving config space at offset 0x20 (reading 0xe810d000)
[    1.961821] pcieport 0000:00:1d.4: saving config space at offset 0x24 (reading 0xbff18001)
[    1.961822] pcieport 0000:00:1d.4: saving config space at offset 0x28 (reading 0x0)
[    1.961824] pcieport 0000:00:1d.4: saving config space at offset 0x2c (reading 0x0)
[    1.961826] pcieport 0000:00:1d.4: saving config space at offset 0x30 (reading 0x0)
[    1.961828] pcieport 0000:00:1d.4: saving config space at offset 0x34 (reading 0x40)
[    1.961830] pcieport 0000:00:1d.4: saving config space at offset 0x38 (reading 0x0)
[    1.961832] pcieport 0000:00:1d.4: saving config space at offset 0x3c (reading 0x201ff)
[    1.961868] pcieport 0000:05:00.0: runtime IRQ mapping not provided by arch
[    1.961881] pcieport 0000:05:00.0: saving config space at offset 0x0 (reading 0x15d38086)
[    1.961883] pcieport 0000:05:00.0: saving config space at offset 0x4 (reading 0x100007)
[    1.961886] pcieport 0000:05:00.0: saving config space at offset 0x8 (reading 0x6040002)
[    1.961888] pcieport 0000:05:00.0: saving config space at offset 0xc (reading 0x10020)
[    1.961890] pcieport 0000:05:00.0: saving config space at offset 0x10 (reading 0x0)
[    1.961893] pcieport 0000:05:00.0: saving config space at offset 0x14 (reading 0x0)
[    1.961895] pcieport 0000:05:00.0: saving config space at offset 0x18 (reading 0x520605)
[    1.961897] pcieport 0000:05:00.0: saving config space at offset 0x1c (reading 0x2121)
[    1.961900] pcieport 0000:05:00.0: saving config space at offset 0x20 (reading 0xe810d000)
[    1.961902] pcieport 0000:05:00.0: saving config space at offset 0x24 (reading 0xbff18001)
[    1.961904] pcieport 0000:05:00.0: saving config space at offset 0x28 (reading 0x0)
[    1.961907] pcieport 0000:05:00.0: saving config space at offset 0x2c (reading 0x0)
[    1.961909] pcieport 0000:05:00.0: saving config space at offset 0x30 (reading 0x0)
[    1.961911] pcieport 0000:05:00.0: saving config space at offset 0x34 (reading 0x80)
[    1.961914] pcieport 0000:05:00.0: saving config space at offset 0x38 (reading 0x0)
[    1.961916] pcieport 0000:05:00.0: saving config space at offset 0x3c (reading 0x201ff)
[    1.961975] pcieport 0000:06:00.0: runtime IRQ mapping not provided by arch
[    1.962238] pcieport 0000:06:00.0: saving config space at offset 0x0 (reading 0x15d38086)
[    1.962240] pcieport 0000:06:00.0: saving config space at offset 0x4 (reading 0x100407)
[    1.962243] pcieport 0000:06:00.0: saving config space at offset 0x8 (reading 0x6040002)
[    1.962245] pcieport 0000:06:00.0: saving config space at offset 0xc (reading 0x10020)
[    1.962248] pcieport 0000:06:00.0: saving config space at offset 0x10 (reading 0x0)
[    1.962250] pcieport 0000:06:00.0: saving config space at offset 0x14 (reading 0x0)
[    1.962252] pcieport 0000:06:00.0: saving config space at offset 0x18 (reading 0x70706)
[    1.962255] pcieport 0000:06:00.0: saving config space at offset 0x1c (reading 0x1f1)
[    1.962257] pcieport 0000:06:00.0: saving config space at offset 0x20 (reading 0xe810e810)
[    1.962260] pcieport 0000:06:00.0: saving config space at offset 0x24 (reading 0x1fff1)
[    1.962262] pcieport 0000:06:00.0: saving config space at offset 0x28 (reading 0x0)
[    1.962264] pcieport 0000:06:00.0: saving config space at offset 0x2c (reading 0x0)
[    1.962267] pcieport 0000:06:00.0: saving config space at offset 0x30 (reading 0x0)
[    1.962269] pcieport 0000:06:00.0: saving config space at offset 0x34 (reading 0x80)
[    1.962271] pcieport 0000:06:00.0: saving config space at offset 0x38 (reading 0x0)
[    1.962274] pcieport 0000:06:00.0: saving config space at offset 0x3c (reading 0x201ff)
[    1.962334] pcieport 0000:06:01.0: runtime IRQ mapping not provided by arch
[    1.962401] pcieport 0000:06:01.0: pciehp: Slot #1 AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise+ Interlock- NoCompl+ LLActRep+
[    1.962410] pci_bus 0000:08: dev 00, created physical slot 1
[    1.962497] pcieport 0000:06:01.0: saving config space at offset 0x0 (reading 0x15d38086)
[    1.962499] pcieport 0000:06:01.0: saving config space at offset 0x4 (reading 0x100407)
[    1.962502] pcieport 0000:06:01.0: saving config space at offset 0x8 (reading 0x6040002)
[    1.962504] pcieport 0000:06:01.0: saving config space at offset 0xc (reading 0x10020)
[    1.962507] pcieport 0000:06:01.0: saving config space at offset 0x10 (reading 0x0)
[    1.962509] pcieport 0000:06:01.0: saving config space at offset 0x14 (reading 0x0)
[    1.962511] pcieport 0000:06:01.0: saving config space at offset 0x18 (reading 0x2c0806)
[    1.962514] pcieport 0000:06:01.0: saving config space at offset 0x1c (reading 0x2121)
[    1.962516] pcieport 0000:06:01.0: saving config space at offset 0x20 (reading 0xdbf0d000)
[    1.962519] pcieport 0000:06:01.0: saving config space at offset 0x24 (reading 0x9ff18001)
[    1.962521] pcieport 0000:06:01.0: saving config space at offset 0x28 (reading 0x0)
[    1.962523] pcieport 0000:06:01.0: saving config space at offset 0x2c (reading 0x0)
[    1.962526] pcieport 0000:06:01.0: saving config space at offset 0x30 (reading 0x0)
[    1.962528] pcieport 0000:06:01.0: saving config space at offset 0x34 (reading 0x80)
[    1.962531] pcieport 0000:06:01.0: saving config space at offset 0x38 (reading 0x0)
[    1.962533] pcieport 0000:06:01.0: saving config space at offset 0x3c (reading 0x201ff)
[    1.962588] pcieport 0000:06:02.0: runtime IRQ mapping not provided by arch
[    1.962690] pcieport 0000:06:02.0: saving config space at offset 0x0 (reading 0x15d38086)
[    1.962692] pcieport 0000:06:02.0: saving config space at offset 0x4 (reading 0x100407)
[    1.962695] pcieport 0000:06:02.0: saving config space at offset 0x8 (reading 0x6040002)
[    1.962697] pcieport 0000:06:02.0: saving config space at offset 0xc (reading 0x10020)
[    1.962699] pcieport 0000:06:02.0: saving config space at offset 0x10 (reading 0x0)
[    1.962702] pcieport 0000:06:02.0: saving config space at offset 0x14 (reading 0x0)
[    1.962704] pcieport 0000:06:02.0: saving config space at offset 0x18 (reading 0x2d2d06)
[    1.962706] pcieport 0000:06:02.0: saving config space at offset 0x1c (reading 0x1f1)
[    1.962709] pcieport 0000:06:02.0: saving config space at offset 0x20 (reading 0xdc00dc00)
[    1.962711] pcieport 0000:06:02.0: saving config space at offset 0x24 (reading 0x1fff1)
[    1.962714] pcieport 0000:06:02.0: saving config space at offset 0x28 (reading 0x0)
[    1.962716] pcieport 0000:06:02.0: saving config space at offset 0x2c (reading 0x0)
[    1.962718] pcieport 0000:06:02.0: saving config space at offset 0x30 (reading 0x0)
[    1.962721] pcieport 0000:06:02.0: saving config space at offset 0x34 (reading 0x80)
[    1.962723] pcieport 0000:06:02.0: saving config space at offset 0x38 (reading 0x0)
[    1.962725] pcieport 0000:06:02.0: saving config space at offset 0x3c (reading 0x201ff)
[    1.962779] pcieport 0000:06:04.0: runtime IRQ mapping not provided by arch
[    1.962965] pcieport 0000:06:04.0: pciehp: Slot #4 AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise+ Interlock- NoCompl+ LLActRep+
[    1.962975] pci_bus 0000:2e: dev 00, created physical slot 4
[    1.963070] pcieport 0000:06:04.0: saving config space at offset 0x0 (reading 0x15d38086)
[    1.963072] pcieport 0000:06:04.0: saving config space at offset 0x4 (reading 0x100407)
[    1.963075] pcieport 0000:06:04.0: saving config space at offset 0x8 (reading 0x6040002)
[    1.963077] pcieport 0000:06:04.0: saving config space at offset 0xc (reading 0x10020)
[    1.963079] pcieport 0000:06:04.0: saving config space at offset 0x10 (reading 0x0)
[    1.963082] pcieport 0000:06:04.0: saving config space at offset 0x14 (reading 0x0)
[    1.963084] pcieport 0000:06:04.0: saving config space at offset 0x18 (reading 0x522e06)
[    1.963087] pcieport 0000:06:04.0: saving config space at offset 0x1c (reading 0x1f1)
[    1.963089] pcieport 0000:06:04.0: saving config space at offset 0x20 (reading 0xe800dc10)
[    1.963091] pcieport 0000:06:04.0: saving config space at offset 0x24 (reading 0xbff1a001)
[    1.963094] pcieport 0000:06:04.0: saving config space at offset 0x28 (reading 0x0)
[    1.963096] pcieport 0000:06:04.0: saving config space at offset 0x2c (reading 0x0)
[    1.963098] pcieport 0000:06:04.0: saving config space at offset 0x30 (reading 0x0)
[    1.963101] pcieport 0000:06:04.0: saving config space at offset 0x34 (reading 0x80)
[    1.963103] pcieport 0000:06:04.0: saving config space at offset 0x38 (reading 0x0)
[    1.963106] pcieport 0000:06:04.0: saving config space at offset 0x3c (reading 0x201ff)
[    1.963173] pcieport 0000:08:00.0: runtime IRQ mapping not provided by arch
[    1.963196] pcieport 0000:08:00.0: saving config space at offset 0x0 (reading 0x15d38086)
[    1.963199] pcieport 0000:08:00.0: saving config space at offset 0x4 (reading 0x100007)
[    1.963203] pcieport 0000:08:00.0: saving config space at offset 0x8 (reading 0x6040002)
[    1.963207] pcieport 0000:08:00.0: saving config space at offset 0xc (reading 0x10020)
[    1.963210] pcieport 0000:08:00.0: saving config space at offset 0x10 (reading 0x0)
[    1.963214] pcieport 0000:08:00.0: saving config space at offset 0x14 (reading 0x0)
[    1.963217] pcieport 0000:08:00.0: saving config space at offset 0x18 (reading 0x2c0908)
[    1.963221] pcieport 0000:08:00.0: saving config space at offset 0x1c (reading 0x2121)
[    1.963225] pcieport 0000:08:00.0: saving config space at offset 0x20 (reading 0xdbf0d000)
[    1.963228] pcieport 0000:08:00.0: saving config space at offset 0x24 (reading 0x9ff18001)
[    1.963232] pcieport 0000:08:00.0: saving config space at offset 0x28 (reading 0x0)
[    1.963235] pcieport 0000:08:00.0: saving config space at offset 0x2c (reading 0x0)
[    1.963239] pcieport 0000:08:00.0: saving config space at offset 0x30 (reading 0x0)
[    1.963243] pcieport 0000:08:00.0: saving config space at offset 0x34 (reading 0x80)
[    1.963246] pcieport 0000:08:00.0: saving config space at offset 0x38 (reading 0x0)
[    1.963250] pcieport 0000:08:00.0: saving config space at offset 0x3c (reading 0x201ff)
[    1.963361] pcieport 0000:09:01.0: runtime IRQ mapping not provided by arch
[    1.963620] pcieport 0000:09:01.0: saving config space at offset 0x0 (reading 0x15d38086)
[    1.963624] pcieport 0000:09:01.0: saving config space at offset 0x4 (reading 0x100407)
[    1.963627] pcieport 0000:09:01.0: saving config space at offset 0x8 (reading 0x6040002)
[    1.963631] pcieport 0000:09:01.0: saving config space at offset 0xc (reading 0x10020)
[    1.963634] pcieport 0000:09:01.0: saving config space at offset 0x10 (reading 0x0)
[    1.963638] pcieport 0000:09:01.0: saving config space at offset 0x14 (reading 0x0)
[    1.963642] pcieport 0000:09:01.0: saving config space at offset 0x18 (reading 0xc0a09)
[    1.963645] pcieport 0000:09:01.0: saving config space at offset 0x1c (reading 0x2121)
[    1.963649] pcieport 0000:09:01.0: saving config space at offset 0x20 (reading 0xd030d000)
[    1.963653] pcieport 0000:09:01.0: saving config space at offset 0x24 (reading 0x90118001)
[    1.963656] pcieport 0000:09:01.0: saving config space at offset 0x28 (reading 0x0)
[    1.963660] pcieport 0000:09:01.0: saving config space at offset 0x2c (reading 0x0)
[    1.963663] pcieport 0000:09:01.0: saving config space at offset 0x30 (reading 0x0)
[    1.963667] pcieport 0000:09:01.0: saving config space at offset 0x34 (reading 0x80)
[    1.963671] pcieport 0000:09:01.0: saving config space at offset 0x38 (reading 0x0)
[    1.963674] pcieport 0000:09:01.0: saving config space at offset 0x3c (reading 0x201ff)
[    1.963793] pcieport 0000:09:04.0: runtime IRQ mapping not provided by arch
[    1.963901] pcieport 0000:09:04.0: pciehp: Slot #4 AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise+ Interlock- NoCompl+ LLActRep+
[    1.963910] pci_bus 0000:0d: dev 00, created physical slot 4-1
[    1.964027] pcieport 0000:09:04.0: saving config space at offset 0x0 (reading 0x15d38086)
[    1.964031] pcieport 0000:09:04.0: saving config space at offset 0x4 (reading 0x100407)
[    1.964041] pcieport 0000:09:04.0: saving config space at offset 0x8 (reading 0x6040002)
[    1.964045] pcieport 0000:09:04.0: saving config space at offset 0xc (reading 0x10020)
[    1.964054] pcieport 0000:09:04.0: saving config space at offset 0x10 (reading 0x0)
[    1.964058] pcieport 0000:09:04.0: saving config space at offset 0x14 (reading 0x0)
[    1.964062] pcieport 0000:09:04.0: saving config space at offset 0x18 (reading 0x2c0d09)
[    1.964066] pcieport 0000:09:04.0: saving config space at offset 0x1c (reading 0x1f1)
[    1.964070] pcieport 0000:09:04.0: saving config space at offset 0x20 (reading 0xdbf0d040)
[    1.964073] pcieport 0000:09:04.0: saving config space at offset 0x24 (reading 0x9ff19021)
[    1.964077] pcieport 0000:09:04.0: saving config space at offset 0x28 (reading 0x0)
[    1.964080] pcieport 0000:09:04.0: saving config space at offset 0x2c (reading 0x0)
[    1.964084] pcieport 0000:09:04.0: saving config space at offset 0x30 (reading 0x0)
[    1.964088] pcieport 0000:09:04.0: saving config space at offset 0x34 (reading 0x80)
[    1.964091] pcieport 0000:09:04.0: saving config space at offset 0x38 (reading 0x0)
[    1.964095] pcieport 0000:09:04.0: saving config space at offset 0x3c (reading 0x201ff)
[    1.964213] pcieport 0000:0a:00.0: runtime IRQ mapping not provided by arch
[    1.964389] pcieport 0000:0a:00.0: saving config space at offset 0x0 (reading 0x14781002)
[    1.964393] pcieport 0000:0a:00.0: saving config space at offset 0x4 (reading 0x100007)
[    1.964398] pcieport 0000:0a:00.0: saving config space at offset 0x8 (reading 0x60400ca)
[    1.964402] pcieport 0000:0a:00.0: saving config space at offset 0xc (reading 0x10020)
[    1.964406] pcieport 0000:0a:00.0: saving config space at offset 0x10 (reading 0xd0000000)
[    1.964411] pcieport 0000:0a:00.0: saving config space at offset 0x14 (reading 0x0)
[    1.964415] pcieport 0000:0a:00.0: saving config space at offset 0x18 (reading 0xc0b0a)
[    1.964419] pcieport 0000:0a:00.0: saving config space at offset 0x1c (reading 0x2121)
[    1.964423] pcieport 0000:0a:00.0: saving config space at offset 0x20 (reading 0xd030d000)
[    1.964428] pcieport 0000:0a:00.0: saving config space at offset 0x24 (reading 0x90118001)
[    1.964432] pcieport 0000:0a:00.0: saving config space at offset 0x28 (reading 0x0)
[    1.964436] pcieport 0000:0a:00.0: saving config space at offset 0x2c (reading 0x0)
[    1.964440] pcieport 0000:0a:00.0: saving config space at offset 0x30 (reading 0x0)
[    1.964444] pcieport 0000:0a:00.0: saving config space at offset 0x34 (reading 0x48)
[    1.964449] pcieport 0000:0a:00.0: saving config space at offset 0x38 (reading 0x0)
[    1.964453] pcieport 0000:0a:00.0: saving config space at offset 0x3c (reading 0x201ff)
[    1.964587] pcieport 0000:0b:00.0: runtime IRQ mapping not provided by arch
[    1.964881] pcieport 0000:0b:00.0: saving config space at offset 0x0 (reading 0x14791002)
[    1.964886] pcieport 0000:0b:00.0: saving config space at offset 0x4 (reading 0x100407)
[    1.964890] pcieport 0000:0b:00.0: saving config space at offset 0x8 (reading 0x6040000)
[    1.964894] pcieport 0000:0b:00.0: saving config space at offset 0xc (reading 0x10020)
[    1.964898] pcieport 0000:0b:00.0: saving config space at offset 0x10 (reading 0x0)
[    1.964903] pcieport 0000:0b:00.0: saving config space at offset 0x14 (reading 0x0)
[    1.964907] pcieport 0000:0b:00.0: saving config space at offset 0x18 (reading 0xc0c0b)
[    1.964911] pcieport 0000:0b:00.0: saving config space at offset 0x1c (reading 0x2121)
[    1.964916] pcieport 0000:0b:00.0: saving config space at offset 0x20 (reading 0xd010d010)
[    1.964920] pcieport 0000:0b:00.0: saving config space at offset 0x24 (reading 0x90118001)
[    1.964924] pcieport 0000:0b:00.0: saving config space at offset 0x28 (reading 0x0)
[    1.964928] pcieport 0000:0b:00.0: saving config space at offset 0x2c (reading 0x0)
[    1.964932] pcieport 0000:0b:00.0: saving config space at offset 0x30 (reading 0x0)
[    1.964937] pcieport 0000:0b:00.0: saving config space at offset 0x34 (reading 0x50)
[    1.964941] pcieport 0000:0b:00.0: saving config space at offset 0x38 (reading 0x0)
[    1.964945] pcieport 0000:0b:00.0: saving config space at offset 0x3c (reading 0x201ff)
[    1.965085] pcieport 0000:0d:00.0: runtime IRQ mapping not provided by arch
[    1.965120] pcieport 0000:0d:00.0: saving config space at offset 0x0 (reading 0x15d38086)
[    1.965125] pcieport 0000:0d:00.0: saving config space at offset 0x4 (reading 0x100007)
[    1.965130] pcieport 0000:0d:00.0: saving config space at offset 0x8 (reading 0x6040002)
[    1.965135] pcieport 0000:0d:00.0: saving config space at offset 0xc (reading 0x10020)
[    1.965139] pcieport 0000:0d:00.0: saving config space at offset 0x10 (reading 0x0)
[    1.965144] pcieport 0000:0d:00.0: saving config space at offset 0x14 (reading 0x0)
[    1.965149] pcieport 0000:0d:00.0: saving config space at offset 0x18 (reading 0x2c0e0d)
[    1.965154] pcieport 0000:0d:00.0: saving config space at offset 0x1c (reading 0x1f1)
[    1.965159] pcieport 0000:0d:00.0: saving config space at offset 0x20 (reading 0xdbf0d040)
[    1.965164] pcieport 0000:0d:00.0: saving config space at offset 0x24 (reading 0x9ff19021)
[    1.965169] pcieport 0000:0d:00.0: saving config space at offset 0x28 (reading 0x0)
[    1.965173] pcieport 0000:0d:00.0: saving config space at offset 0x2c (reading 0x0)
[    1.965178] pcieport 0000:0d:00.0: saving config space at offset 0x30 (reading 0x0)
[    1.965183] pcieport 0000:0d:00.0: saving config space at offset 0x34 (reading 0x80)
[    1.965188] pcieport 0000:0d:00.0: saving config space at offset 0x38 (reading 0x0)
[    1.965193] pcieport 0000:0d:00.0: saving config space at offset 0x3c (reading 0x201ff)
[    1.965361] pcieport 0000:0e:00.0: runtime IRQ mapping not provided by arch
[    1.965534] pcieport 0000:0e:00.0: saving config space at offset 0x0 (reading 0x15d38086)
[    1.965539] pcieport 0000:0e:00.0: saving config space at offset 0x4 (reading 0x100407)
[    1.965544] pcieport 0000:0e:00.0: saving config space at offset 0x8 (reading 0x6040002)
[    1.965549] pcieport 0000:0e:00.0: saving config space at offset 0xc (reading 0x10020)
[    1.965553] pcieport 0000:0e:00.0: saving config space at offset 0x10 (reading 0x0)
[    1.965558] pcieport 0000:0e:00.0: saving config space at offset 0x14 (reading 0x0)
[    1.965563] pcieport 0000:0e:00.0: saving config space at offset 0x18 (reading 0xf0f0e)
[    1.965568] pcieport 0000:0e:00.0: saving config space at offset 0x1c (reading 0x1f1)
[    1.965573] pcieport 0000:0e:00.0: saving config space at offset 0x20 (reading 0xd040d040)
[    1.965578] pcieport 0000:0e:00.0: saving config space at offset 0x24 (reading 0x1fff1)
[    1.965583] pcieport 0000:0e:00.0: saving config space at offset 0x28 (reading 0x0)
[    1.965588] pcieport 0000:0e:00.0: saving config space at offset 0x2c (reading 0x0)
[    1.965593] pcieport 0000:0e:00.0: saving config space at offset 0x30 (reading 0x0)
[    1.965598] pcieport 0000:0e:00.0: saving config space at offset 0x34 (reading 0x80)
[    1.965602] pcieport 0000:0e:00.0: saving config space at offset 0x38 (reading 0x0)
[    1.965607] pcieport 0000:0e:00.0: saving config space at offset 0x3c (reading 0x201ff)
[    1.965768] pcieport 0000:0e:01.0: runtime IRQ mapping not provided by arch
[    1.965937] pcieport 0000:0e:01.0: saving config space at offset 0x0 (reading 0x15d38086)
[    1.965942] pcieport 0000:0e:01.0: saving config space at offset 0x4 (reading 0x100407)
[    1.965947] pcieport 0000:0e:01.0: saving config space at offset 0x8 (reading 0x6040002)
[    1.965952] pcieport 0000:0e:01.0: saving config space at offset 0xc (reading 0x10020)
[    1.965956] pcieport 0000:0e:01.0: saving config space at offset 0x10 (reading 0x0)
[    1.965961] pcieport 0000:0e:01.0: saving config space at offset 0x14 (reading 0x0)
[    1.965966] pcieport 0000:0e:01.0: saving config space at offset 0x18 (reading 0x10100e)
[    1.965971] pcieport 0000:0e:01.0: saving config space at offset 0x1c (reading 0x1f1)
[    1.965976] pcieport 0000:0e:01.0: saving config space at offset 0x20 (reading 0xd050d050)
[    1.965981] pcieport 0000:0e:01.0: saving config space at offset 0x24 (reading 0x1fff1)
[    1.965986] pcieport 0000:0e:01.0: saving config space at offset 0x28 (reading 0x0)
[    1.965991] pcieport 0000:0e:01.0: saving config space at offset 0x2c (reading 0x0)
[    1.965996] pcieport 0000:0e:01.0: saving config space at offset 0x30 (reading 0x0)
[    1.966000] pcieport 0000:0e:01.0: saving config space at offset 0x34 (reading 0x80)
[    1.966005] pcieport 0000:0e:01.0: saving config space at offset 0x38 (reading 0x0)
[    1.966010] pcieport 0000:0e:01.0: saving config space at offset 0x3c (reading 0x201ff)
[    1.966179] pcieport 0000:0e:02.0: runtime IRQ mapping not provided by arch
[    1.966350] pcieport 0000:0e:02.0: saving config space at offset 0x0 (reading 0x15d38086)
[    1.966355] pcieport 0000:0e:02.0: saving config space at offset 0x4 (reading 0x100407)
[    1.966360] pcieport 0000:0e:02.0: saving config space at offset 0x8 (reading 0x6040002)
[    1.966365] pcieport 0000:0e:02.0: saving config space at offset 0xc (reading 0x10020)
[    1.966370] pcieport 0000:0e:02.0: saving config space at offset 0x10 (reading 0x0)
[    1.966375] pcieport 0000:0e:02.0: saving config space at offset 0x14 (reading 0x0)
[    1.966380] pcieport 0000:0e:02.0: saving config space at offset 0x18 (reading 0x11110e)
[    1.966384] pcieport 0000:0e:02.0: saving config space at offset 0x1c (reading 0x1f1)
[    1.966389] pcieport 0000:0e:02.0: saving config space at offset 0x20 (reading 0xd060d060)
[    1.966394] pcieport 0000:0e:02.0: saving config space at offset 0x24 (reading 0x1fff1)
[    1.966399] pcieport 0000:0e:02.0: saving config space at offset 0x28 (reading 0x0)
[    1.966404] pcieport 0000:0e:02.0: saving config space at offset 0x2c (reading 0x0)
[    1.966409] pcieport 0000:0e:02.0: saving config space at offset 0x30 (reading 0x0)
[    1.966414] pcieport 0000:0e:02.0: saving config space at offset 0x34 (reading 0x80)
[    1.966419] pcieport 0000:0e:02.0: saving config space at offset 0x38 (reading 0x0)
[    1.966424] pcieport 0000:0e:02.0: saving config space at offset 0x3c (reading 0x201ff)
[    1.966644] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
[    1.966719] intel_idle: MWAIT substates: 0x11142120
[    1.966721] intel_idle: v0.4.1 model 0x8E
[    1.967273] intel_idle: lapic_timer_reliable_states 0xffffffff
[    1.970043] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[    1.972020] serial 0000:00:16.3: runtime IRQ mapping not provided by arch
[    1.972222] serial 0000:00:16.3: saving config space at offset 0x0 (reading 0x9de38086)
[    1.972224] serial 0000:00:16.3: saving config space at offset 0x4 (reading 0xb00003)
[    1.972227] serial 0000:00:16.3: saving config space at offset 0x8 (reading 0x7000211)
[    1.972229] serial 0000:00:16.3: saving config space at offset 0xc (reading 0x800000)
[    1.972231] serial 0000:00:16.3: saving config space at offset 0x10 (reading 0x3061)
[    1.972234] serial 0000:00:16.3: saving config space at offset 0x14 (reading 0xea24a000)
[    1.972236] serial 0000:00:16.3: saving config space at offset 0x18 (reading 0x0)
[    1.972239] serial 0000:00:16.3: saving config space at offset 0x1c (reading 0x0)
[    1.972241] serial 0000:00:16.3: saving config space at offset 0x20 (reading 0x0)
[    1.972243] serial 0000:00:16.3: saving config space at offset 0x24 (reading 0x0)
[    1.972245] serial 0000:00:16.3: saving config space at offset 0x28 (reading 0x0)
[    1.972248] serial 0000:00:16.3: saving config space at offset 0x2c (reading 0x229217aa)
[    1.972250] serial 0000:00:16.3: saving config space at offset 0x30 (reading 0x0)
[    1.972252] serial 0000:00:16.3: saving config space at offset 0x34 (reading 0x40)
[    1.972255] serial 0000:00:16.3: saving config space at offset 0x38 (reading 0x0)
[    1.972257] serial 0000:00:16.3: saving config space at offset 0x3c (reading 0x4ff)
[    1.973306] 0000:00:16.3: ttyS4 at I/O 0x3060 (irq = 19, base_baud = 115200) is a 16550A
[    1.995340] tpm_tis STM7308:00: 2.0 TPM (device-id 0x0, rev-id 78)
[    2.012951] loop: module loaded
[    2.013168] PPP generic driver version 2.4.2
[    2.013244] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    2.013249] ehci-pci: EHCI PCI platform driver
[    2.013305] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[    2.015146] serio: i8042 KBD port at 0x60,0x64 irq 1
[    2.015146] serio: i8042 AUX port at 0x60,0x64 irq 12
[    2.015146] rtc_cmos rtc_cmos: RTC can wake from S4
[    2.016552] rtc_cmos rtc_cmos: registered as rtc0
[    2.016569] rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram, hpet irqs
[    2.016577] i2c /dev entries driver
[    2.016642] platform eisa.0: Probing EISA bus 0
[    2.016645] platform eisa.0: EISA: Cannot allocate resource for mainboard
[    2.016664] intel_pstate: Intel P-state driver initializing
[    2.017377] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
[    2.017658] intel_pstate: HWP enabled
[    2.017784] ledtrig-cpu: registered to indicate activity on CPUs
[    2.017791] EFI Variables Facility v0.08 2004-May-17
[    2.037469] NET: Registered protocol family 10
[    2.041613] Segment Routing with IPv6
[    2.041628] NET: Registered protocol family 17
[    2.041691] Key type dns_resolver registered
[    2.042066] RAS: Correctable Errors collector initialized.
[    2.042136] microcode: sig=0x806ec, pf=0x80, revision=0xca
[    2.042286] microcode: Microcode Update Driver: v2.2.
[    2.042288] IPI shorthand broadcast: enabled
[    2.042294] sched_clock: Marking stable (2035846182, 6201016)->(2047909003, -5861805)
[    2.042542] registered taskstats version 1
[    2.042550] Loading compiled-in X.509 certificates
[    2.043795] Loaded X.509 cert 'Build time autogenerated kernel key: 639fa2e8e92aa1bc1185be9e78f2bcd2c6be00ec'
[    2.043810] zswap: loaded using pool lzo/zbud
[    2.043952] Key type ._fscrypt registered
[    2.043954] Key type .fscrypt registered
[    2.043955] Key type fscrypt-provisioning registered
[    2.048139] Key type big_key registered
[    2.048143] Key type trusted registered
[    2.050298] Key type encrypted registered
[    2.050306] ima: Allocated hash algorithm: sha1
[    2.066018] ima: No architecture policies found
[    2.066026] evm: Initialising EVM extended attributes:
[    2.066027] evm: security.ima
[    2.066027] evm: security.capability
[    2.066028] evm: HMAC attrs: 0x1
[    2.066799] PM:   Magic number: 4:417:197
[    2.066839] acpi device:52: hash matches
[    2.066846] acpi PNP0100:00: hash matches
[    2.067062] rtc_cmos rtc_cmos: setting system clock to 2020-05-21T21:09:31 UTC (1590095371)
[    2.067703] Freeing unused kernel image (initmem) memory: 2236K
[    2.073260] Write protecting the kernel read-only data: 22528k
[    2.073752] Freeing unused kernel image (text/rodata gap) memory: 2044K
[    2.073893] Freeing unused kernel image (rodata/data gap) memory: 380K
[    2.074038] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[    2.074040] Run /init as init process
[    2.074042]   with arguments:
[    2.074042]     /init
[    2.074042]   with environment:
[    2.074043]     HOME=/
[    2.074043]     TERM=linux
[    2.088072] pcieport 0000:06:04.0: saving config space at offset 0x0 (reading 0x15d38086)
[    2.088074] pcieport 0000:06:04.0: saving config space at offset 0x4 (reading 0x100407)
[    2.088076] pcieport 0000:06:04.0: saving config space at offset 0x8 (reading 0x6040002)
[    2.088078] pcieport 0000:06:04.0: saving config space at offset 0xc (reading 0x10020)
[    2.088080] pcieport 0000:06:04.0: saving config space at offset 0x10 (reading 0x0)
[    2.088082] pcieport 0000:06:04.0: saving config space at offset 0x14 (reading 0x0)
[    2.088084] pcieport 0000:06:04.0: saving config space at offset 0x18 (reading 0x522e06)
[    2.088085] pcieport 0000:06:04.0: saving config space at offset 0x1c (reading 0x1f1)
[    2.088087] pcieport 0000:06:04.0: saving config space at offset 0x20 (reading 0xe800dc10)
[    2.088088] pcieport 0000:06:04.0: saving config space at offset 0x24 (reading 0xbff1a001)
[    2.088090] pcieport 0000:06:04.0: saving config space at offset 0x28 (reading 0x0)
[    2.088092] pcieport 0000:06:04.0: saving config space at offset 0x2c (reading 0x0)
[    2.088093] pcieport 0000:06:04.0: saving config space at offset 0x30 (reading 0x0)
[    2.088095] pcieport 0000:06:04.0: saving config space at offset 0x34 (reading 0x80)
[    2.088096] pcieport 0000:06:04.0: saving config space at offset 0x38 (reading 0x0)
[    2.088098] pcieport 0000:06:04.0: saving config space at offset 0x3c (reading 0x201ff)
[    2.088145] pcieport 0000:06:04.0: PME# enabled
[    2.122746] acpi PNP0C14:02: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[    2.122811] acpi PNP0C14:03: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[    2.122916] acpi PNP0C14:04: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[    2.122979] acpi PNP0C14:05: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[    2.124021] hid: raw HID events driver (C) Jiri Kosina
[    2.128018] intel-lpss 0000:00:15.0: runtime IRQ mapping not provided by arch
[    2.128259] intel-lpss 0000:00:15.0: enabling bus mastering
[    2.128287] intel-lpss 0000:00:15.0: enabling Mem-Wr-Inval
[    2.128414] idma64 idma64.0: Found Intel integrated DMA 64-bit
[    2.128719] intel-lpss 0000:00:15.0: saving config space at offset 0x0 (reading 0x9de88086)
[    2.128722] intel-lpss 0000:00:15.0: saving config space at offset 0x4 (reading 0x100006)
[    2.128725] intel-lpss 0000:00:15.0: saving config space at offset 0x8 (reading 0xc800011)
[    2.128729] intel-lpss 0000:00:15.0: saving config space at offset 0xc (reading 0x800020)
[    2.128733] intel-lpss 0000:00:15.0: saving config space at offset 0x10 (reading 0xea245004)
[    2.128736] intel-lpss 0000:00:15.0: saving config space at offset 0x14 (reading 0x0)
[    2.128740] intel-lpss 0000:00:15.0: saving config space at offset 0x18 (reading 0x0)
[    2.128744] intel-lpss 0000:00:15.0: saving config space at offset 0x1c (reading 0x0)
[    2.128747] intel-lpss 0000:00:15.0: saving config space at offset 0x20 (reading 0x0)
[    2.128751] intel-lpss 0000:00:15.0: saving config space at offset 0x24 (reading 0x0)
[    2.128755] intel-lpss 0000:00:15.0: saving config space at offset 0x28 (reading 0x0)
[    2.128758] intel-lpss 0000:00:15.0: saving config space at offset 0x2c (reading 0x229217aa)
[    2.128762] intel-lpss 0000:00:15.0: saving config space at offset 0x30 (reading 0x0)
[    2.128766] intel-lpss 0000:00:15.0: saving config space at offset 0x34 (reading 0x80)
[    2.128769] intel-lpss 0000:00:15.0: saving config space at offset 0x38 (reading 0x0)
[    2.128773] intel-lpss 0000:00:15.0: saving config space at offset 0x3c (reading 0x1ff)
[    2.128807] intel-lpss 0000:00:15.1: runtime IRQ mapping not provided by arch
[    2.128924] intel-lpss 0000:00:15.1: enabling bus mastering
[    2.128953] intel-lpss 0000:00:15.1: enabling Mem-Wr-Inval
[    2.129055] idma64 idma64.1: Found Intel integrated DMA 64-bit
[    2.129374] intel-lpss 0000:00:15.1: saving config space at offset 0x0 (reading 0x9de98086)
[    2.129389] intel-lpss 0000:00:15.1: saving config space at offset 0x4 (reading 0x100006)
[    2.129394] intel-lpss 0000:00:15.1: saving config space at offset 0x8 (reading 0xc800011)
[    2.129398] intel-lpss 0000:00:15.1: saving config space at offset 0xc (reading 0x800020)
[    2.129402] intel-lpss 0000:00:15.1: saving config space at offset 0x10 (reading 0xea246004)
[    2.129405] intel-lpss 0000:00:15.1: saving config space at offset 0x14 (reading 0x0)
[    2.129409] intel-lpss 0000:00:15.1: saving config space at offset 0x18 (reading 0x0)
[    2.129413] intel-lpss 0000:00:15.1: saving config space at offset 0x1c (reading 0x0)
[    2.129416] intel-lpss 0000:00:15.1: saving config space at offset 0x20 (reading 0x0)
[    2.129420] intel-lpss 0000:00:15.1: saving config space at offset 0x24 (reading 0x0)
[    2.129424] intel-lpss 0000:00:15.1: saving config space at offset 0x28 (reading 0x0)
[    2.129427] intel-lpss 0000:00:15.1: saving config space at offset 0x2c (reading 0x229217aa)
[    2.129431] intel-lpss 0000:00:15.1: saving config space at offset 0x30 (reading 0x0)
[    2.129435] intel-lpss 0000:00:15.1: saving config space at offset 0x34 (reading 0x80)
[    2.129438] intel-lpss 0000:00:15.1: saving config space at offset 0x38 (reading 0x0)
[    2.129442] intel-lpss 0000:00:15.1: saving config space at offset 0x3c (reading 0x2ff)
[    2.132937] thermal LNXTHERM:00: registered as thermal_zone0
[    2.132940] ACPI: Thermal Zone [THM0] (56 C)
[    2.138932] Linux agpgart interface v0.103
[    2.148851] xhci_hcd 0000:00:14.0: runtime IRQ mapping not provided by arch
[    2.149020] xhci_hcd 0000:00:14.0: enabling bus mastering
[    2.149024] xhci_hcd 0000:00:14.0: xHCI Host Controller
[    2.149030] xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
[    2.150090] xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
[    2.150094] xhci_hcd 0000:00:14.0: cache line size of 128 is not supported
[    2.150263] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.06
[    2.150265] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    2.150267] usb usb1: Product: xHCI Host Controller
[    2.150268] usb usb1: Manufacturer: Linux 5.6.14-karabijavad xhci-hcd
[    2.150269] usb usb1: SerialNumber: 0000:00:14.0
[    2.151755] thunderbolt 0000:07:00.0: runtime IRQ mapping not provided by arch
[    2.156909] nvme 0000:03:00.0: runtime IRQ mapping not provided by arch
[    2.156986] nvme nvme0: pci function 0000:03:00.0
[    2.170171] i801_smbus 0000:00:1f.4: runtime IRQ mapping not provided by arch
[    2.170186] i801_smbus 0000:00:1f.4: enabling device (0000 -> 0003)
[    2.176847] cryptd: max_cpu_qlen set to 1000
[    2.182282] AVX2 version of gcm_enc/dec engaged.
[    2.182286] AES CTR mode by8 optimization enabled
[    2.191472] e1000e: Intel(R) PRO/1000 Network Driver - 3.2.6-k
[    2.191477] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[    2.191498] e1000e 0000:00:1f.6: runtime IRQ mapping not provided by arch
[    2.206770] hub 1-0:1.0: USB hub found
[    2.206797] hub 1-0:1.0: 12 ports detected
[    2.207184] e1000e 0000:00:1f.6: saving config space at offset 0x0 (reading 0x15bd8086)
[    2.207187] e1000e 0000:00:1f.6: saving config space at offset 0x4 (reading 0x100006)
[    2.207190] e1000e 0000:00:1f.6: saving config space at offset 0x8 (reading 0x2000011)
[    2.207193] e1000e 0000:00:1f.6: saving config space at offset 0xc (reading 0x0)
[    2.207197] e1000e 0000:00:1f.6: saving config space at offset 0x10 (reading 0xea200000)
[    2.207200] e1000e 0000:00:1f.6: saving config space at offset 0x14 (reading 0x0)
[    2.207203] e1000e 0000:00:1f.6: saving config space at offset 0x18 (reading 0x0)
[    2.207207] e1000e 0000:00:1f.6: saving config space at offset 0x1c (reading 0x0)
[    2.207210] e1000e 0000:00:1f.6: saving config space at offset 0x20 (reading 0x0)
[    2.207213] e1000e 0000:00:1f.6: saving config space at offset 0x24 (reading 0x0)
[    2.207217] e1000e 0000:00:1f.6: saving config space at offset 0x28 (reading 0x0)
[    2.207220] e1000e 0000:00:1f.6: saving config space at offset 0x2c (reading 0x229217aa)
[    2.207223] e1000e 0000:00:1f.6: saving config space at offset 0x30 (reading 0x0)
[    2.207227] e1000e 0000:00:1f.6: saving config space at offset 0x34 (reading 0xc8)
[    2.207230] e1000e 0000:00:1f.6: saving config space at offset 0x38 (reading 0x0)
[    2.207233] e1000e 0000:00:1f.6: saving config space at offset 0x3c (reading 0x1ff)
[    2.207255] e1000e 0000:00:1f.6: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[    2.209797] nvme 0000:03:00.0: saving config space at offset 0x0 (reading 0xa808144d)
[    2.209799] nvme 0000:03:00.0: saving config space at offset 0x4 (reading 0x100406)
[    2.209800] nvme 0000:03:00.0: saving config space at offset 0x8 (reading 0x1080200)
[    2.209802] nvme 0000:03:00.0: saving config space at offset 0xc (reading 0x0)
[    2.209804] nvme 0000:03:00.0: saving config space at offset 0x10 (reading 0xea100004)
[    2.209806] nvme 0000:03:00.0: saving config space at offset 0x14 (reading 0x0)
[    2.209807] nvme 0000:03:00.0: saving config space at offset 0x18 (reading 0x0)
[    2.209809] nvme 0000:03:00.0: saving config space at offset 0x1c (reading 0x0)
[    2.209811] nvme 0000:03:00.0: saving config space at offset 0x20 (reading 0x0)
[    2.209812] nvme 0000:03:00.0: saving config space at offset 0x24 (reading 0x0)
[    2.209814] nvme 0000:03:00.0: saving config space at offset 0x28 (reading 0x0)
[    2.209815] nvme 0000:03:00.0: saving config space at offset 0x2c (reading 0xa801144d)
[    2.209817] nvme 0000:03:00.0: saving config space at offset 0x30 (reading 0x0)
[    2.209820] nvme 0000:03:00.0: saving config space at offset 0x34 (reading 0x40)
[    2.209821] nvme 0000:03:00.0: saving config space at offset 0x38 (reading 0x0)
[    2.209823] nvme 0000:03:00.0: saving config space at offset 0x3c (reading 0x1ff)
[    2.209922] intel-lpss 0000:00:15.0: power state changed by ACPI to D3cold
[    2.210113] i801_smbus 0000:00:1f.4: SPD Write Disable is set
[    2.210200] i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
[    2.221508] intel-lpss 0000:00:15.1: power state changed by ACPI to D3cold
[    2.221635] intel-lpss 0000:00:15.0: power state changed by ACPI to D0
[    2.221705] intel-lpss 0000:00:15.0: restoring config space at offset 0x10 (was 0x4, writing 0xea245004)
[    2.226330] battery: ACPI: Deprecated procfs I/F for battery is loaded, please retry with CONFIG_ACPI_PROCFS_POWER cleared
[    2.226337] battery: ACPI: Battery Slot [BAT0] (battery present)
[    2.228144] xhci_hcd 0000:00:14.0: xHCI Host Controller
[    2.228149] xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
[    2.228153] xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
[    2.228194] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.06
[    2.228197] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    2.228200] usb usb2: Product: xHCI Host Controller
[    2.228202] usb usb2: Manufacturer: Linux 5.6.14-karabijavad xhci-hcd
[    2.228203] usb usb2: SerialNumber: 0000:00:14.0
[    2.228289] hub 2-0:1.0: USB hub found
[    2.228299] hub 2-0:1.0: 6 ports detected
[    2.228638] usb: port power management may be unreliable
[    2.228811] i801_smbus 0000:00:1f.4: saving config space at offset 0x0 (reading 0x9da38086)
[    2.228821] i801_smbus 0000:00:1f.4: saving config space at offset 0x4 (reading 0x2800003)
[    2.228823] i801_smbus 0000:00:1f.4: saving config space at offset 0x8 (reading 0xc050011)
[    2.228833] i801_smbus 0000:00:1f.4: saving config space at offset 0xc (reading 0x0)
[    2.228844] i801_smbus 0000:00:1f.4: saving config space at offset 0x10 (reading 0xea248004)
[    2.228846] i801_smbus 0000:00:1f.4: saving config space at offset 0x14 (reading 0x0)
[    2.228855] i801_smbus 0000:00:1f.4: saving config space at offset 0x18 (reading 0x0)
[    2.228866] i801_smbus 0000:00:1f.4: saving config space at offset 0x1c (reading 0x0)
[    2.228868] i801_smbus 0000:00:1f.4: saving config space at offset 0x20 (reading 0xefa1)
[    2.228871] i801_smbus 0000:00:1f.4: saving config space at offset 0x24 (reading 0x0)
[    2.228873] i801_smbus 0000:00:1f.4: saving config space at offset 0x28 (reading 0x0)
[    2.228883] i801_smbus 0000:00:1f.4: saving config space at offset 0x2c (reading 0x229217aa)
[    2.228887] i801_smbus 0000:00:1f.4: saving config space at offset 0x30 (reading 0x0)
[    2.228888] i801_smbus 0000:00:1f.4: saving config space at offset 0x34 (reading 0x0)
[    2.228892] i801_smbus 0000:00:1f.4: saving config space at offset 0x38 (reading 0x0)
[    2.228894] i801_smbus 0000:00:1f.4: saving config space at offset 0x3c (reading 0x1ff)
[    2.229344] xhci_hcd 0000:0f:00.0: runtime IRQ mapping not provided by arch
[    2.229443] xhci_hcd 0000:0f:00.0: enabling bus mastering
[    2.229448] xhci_hcd 0000:0f:00.0: xHCI Host Controller
[    2.229452] xhci_hcd 0000:0f:00.0: new USB bus registered, assigned bus number 3
[    2.232907] intel-lpss 0000:00:15.0: saving config space at offset 0x0 (reading 0x9de88086)
[    2.232911] intel-lpss 0000:00:15.0: saving config space at offset 0x4 (reading 0x100006)
[    2.232918] intel-lpss 0000:00:15.0: saving config space at offset 0x8 (reading 0xc800011)
[    2.232924] intel-lpss 0000:00:15.0: saving config space at offset 0xc (reading 0x800020)
[    2.232930] intel-lpss 0000:00:15.0: saving config space at offset 0x10 (reading 0xea245004)
[    2.232936] intel-lpss 0000:00:15.0: saving config space at offset 0x14 (reading 0x0)
[    2.232943] intel-lpss 0000:00:15.0: saving config space at offset 0x18 (reading 0x0)
[    2.232948] intel-lpss 0000:00:15.0: saving config space at offset 0x1c (reading 0x0)
[    2.232954] intel-lpss 0000:00:15.0: saving config space at offset 0x20 (reading 0x0)
[    2.232960] intel-lpss 0000:00:15.0: saving config space at offset 0x24 (reading 0x0)
[    2.232967] intel-lpss 0000:00:15.0: saving config space at offset 0x28 (reading 0x0)
[    2.232973] intel-lpss 0000:00:15.0: saving config space at offset 0x2c (reading 0x229217aa)
[    2.232978] intel-lpss 0000:00:15.0: saving config space at offset 0x30 (reading 0x0)
[    2.232984] intel-lpss 0000:00:15.0: saving config space at offset 0x34 (reading 0x80)
[    2.232991] intel-lpss 0000:00:15.0: saving config space at offset 0x38 (reading 0x0)
[    2.232997] intel-lpss 0000:00:15.0: saving config space at offset 0x3c (reading 0x1ff)
[    2.270219] AMD-Vi: AMD IOMMUv2 driver by Joerg Roedel <jroedel@suse.de>
[    2.270223] AMD-Vi: AMD IOMMUv2 functionality not available on this system
[    2.282327] e1000e 0000:00:1f.6 0000:00:1f.6 (uninitialized): registered PHC clock
[    2.288803] xhci_hcd 0000:0f:00.0: hcc params 0x0200eec1 hci version 0x110 quirks 0x0000000000000010
[    2.288819] xhci_hcd 0000:0f:00.0: enabling Mem-Wr-Inval
[    2.288890] intel-lpss 0000:00:15.1: power state changed by ACPI to D0
[    2.288919] intel-lpss 0000:00:15.1: restoring config space at offset 0x10 (was 0x4, writing 0xea246004)
[    2.289376] intel-lpss 0000:00:15.0: power state changed by ACPI to D3cold
[    2.289412] usb usb3: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.06
[    2.289415] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    2.289416] usb usb3: Product: xHCI Host Controller
[    2.289418] usb usb3: Manufacturer: Linux 5.6.14-karabijavad xhci-hcd
[    2.289419] usb usb3: SerialNumber: 0000:0f:00.0
[    2.289503] hub 3-0:1.0: USB hub found
[    2.289516] hub 3-0:1.0: 2 ports detected
[    2.289688] xhci_hcd 0000:0f:00.0: xHCI Host Controller
[    2.289690] xhci_hcd 0000:0f:00.0: new USB bus registered, assigned bus number 4
[    2.289693] xhci_hcd 0000:0f:00.0: Host supports USB 3.1 Enhanced SuperSpeed
[    2.289773] usb usb4: We don't know the algorithms for LPM for this host, disabling LPM.
[    2.289787] usb usb4: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.06
[    2.289790] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    2.289791] usb usb4: Product: xHCI Host Controller
[    2.289793] usb usb4: Manufacturer: Linux 5.6.14-karabijavad xhci-hcd
[    2.289794] usb usb4: SerialNumber: 0000:0f:00.0
[    2.289839] hub 4-0:1.0: USB hub found
[    2.289851] hub 4-0:1.0: 2 ports detected
[    2.289941] xhci_hcd 0000:10:00.0: runtime IRQ mapping not provided by arch
[    2.290249] xhci_hcd 0000:10:00.0: enabling bus mastering
[    2.290255] xhci_hcd 0000:10:00.0: xHCI Host Controller
[    2.290260] xhci_hcd 0000:10:00.0: new USB bus registered, assigned bus number 5
[    2.294734] i2c_hid i2c-SYNA8004:00: i2c-SYNA8004:00 supply vdd not found, using dummy regulator
[    2.294746] i2c_hid i2c-SYNA8004:00: i2c-SYNA8004:00 supply vddl not found, using dummy regulator
[    2.302409] random: fast init done
[    2.346652] e1000e 0000:00:1f.6 eth0: (PCI Express:2.5GT/s:Width x1) 98:fa:9b:40:45:85
[    2.346656] e1000e 0000:00:1f.6 eth0: Intel(R) PRO/1000 Network Connection
[    2.346797] e1000e 0000:00:1f.6 eth0: MAC: 13, PHY: 12, PBA No: FFFFFF-0FF
[    2.347482] e1000e 0000:00:1f.6 enp0s31f6: renamed from eth0
[    2.349606] xhci_hcd 0000:10:00.0: hcc params 0x0200eec1 hci version 0x110 quirks 0x0000000000000010
[    2.349618] xhci_hcd 0000:10:00.0: enabling Mem-Wr-Inval
[    2.350025] usb usb5: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.06
[    2.350027] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    2.350029] usb usb5: Product: xHCI Host Controller
[    2.350030] usb usb5: Manufacturer: Linux 5.6.14-karabijavad xhci-hcd
[    2.350031] usb usb5: SerialNumber: 0000:10:00.0
[    2.350165] hub 5-0:1.0: USB hub found
[    2.350178] hub 5-0:1.0: 2 ports detected
[    2.350335] xhci_hcd 0000:10:00.0: xHCI Host Controller
[    2.350338] xhci_hcd 0000:10:00.0: new USB bus registered, assigned bus number 6
[    2.350341] xhci_hcd 0000:10:00.0: Host supports USB 3.1 Enhanced SuperSpeed
[    2.350423] usb usb6: We don't know the algorithms for LPM for this host, disabling LPM.
[    2.350435] usb usb6: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.06
[    2.350437] usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    2.350438] usb usb6: Product: xHCI Host Controller
[    2.350439] usb usb6: Manufacturer: Linux 5.6.14-karabijavad xhci-hcd
[    2.350440] usb usb6: SerialNumber: 0000:10:00.0
[    2.350482] hub 6-0:1.0: USB hub found
[    2.350493] hub 6-0:1.0: 2 ports detected
[    2.350611] xhci_hcd 0000:11:00.0: runtime IRQ mapping not provided by arch
[    2.350707] xhci_hcd 0000:11:00.0: enabling bus mastering
[    2.350728] xhci_hcd 0000:11:00.0: xHCI Host Controller
[    2.350731] xhci_hcd 0000:11:00.0: new USB bus registered, assigned bus number 7
[    2.410087] xhci_hcd 0000:11:00.0: hcc params 0x0200eec1 hci version 0x110 quirks 0x0000000000000010
[    2.410113] xhci_hcd 0000:11:00.0: enabling Mem-Wr-Inval
[    2.410507] usb usb7: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.06
[    2.410509] usb usb7: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    2.410510] usb usb7: Product: xHCI Host Controller
[    2.410511] usb usb7: Manufacturer: Linux 5.6.14-karabijavad xhci-hcd
[    2.410512] usb usb7: SerialNumber: 0000:11:00.0
[    2.410592] hub 7-0:1.0: USB hub found
[    2.410604] hub 7-0:1.0: 2 ports detected
[    2.410744] xhci_hcd 0000:11:00.0: xHCI Host Controller
[    2.410746] xhci_hcd 0000:11:00.0: new USB bus registered, assigned bus number 8
[    2.410748] xhci_hcd 0000:11:00.0: Host supports USB 3.1 Enhanced SuperSpeed
[    2.410825] usb usb8: We don't know the algorithms for LPM for this host, disabling LPM.
[    2.410836] usb usb8: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.06
[    2.410837] usb usb8: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    2.410838] usb usb8: Product: xHCI Host Controller
[    2.410839] usb usb8: Manufacturer: Linux 5.6.14-karabijavad xhci-hcd
[    2.410841] usb usb8: SerialNumber: 0000:11:00.0
[    2.410920] hub 8-0:1.0: USB hub found
[    2.410934] hub 8-0:1.0: 2 ports detected
[    2.411045] xhci_hcd 0000:2d:00.0: runtime IRQ mapping not provided by arch
[    2.411116] xhci_hcd 0000:2d:00.0: enabling bus mastering
[    2.411118] xhci_hcd 0000:2d:00.0: xHCI Host Controller
[    2.411121] xhci_hcd 0000:2d:00.0: new USB bus registered, assigned bus number 9
[    2.412222] xhci_hcd 0000:2d:00.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000200009810
[    2.412227] xhci_hcd 0000:2d:00.0: enabling Mem-Wr-Inval
[    2.412376] usb usb9: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.06
[    2.412378] usb usb9: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    2.412380] usb usb9: Product: xHCI Host Controller
[    2.412381] usb usb9: Manufacturer: Linux 5.6.14-karabijavad xhci-hcd
[    2.412382] usb usb9: SerialNumber: 0000:2d:00.0
[    2.412459] hub 9-0:1.0: USB hub found
[    2.412464] hub 9-0:1.0: 2 ports detected
[    2.413054] xhci_hcd 0000:2d:00.0: xHCI Host Controller
[    2.413057] xhci_hcd 0000:2d:00.0: new USB bus registered, assigned bus number 10
[    2.413059] xhci_hcd 0000:2d:00.0: Host supports USB 3.1 Enhanced SuperSpeed
[    2.413081] usb usb10: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.06
[    2.413083] usb usb10: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    2.413084] usb usb10: Product: xHCI Host Controller
[    2.413085] usb usb10: Manufacturer: Linux 5.6.14-karabijavad xhci-hcd
[    2.413087] usb usb10: SerialNumber: 0000:2d:00.0
[    2.413182] hub 10-0:1.0: USB hub found
[    2.413188] hub 10-0:1.0: 2 ports detected
[    2.417840] nvme nvme0: missing or invalid SUBNQN field.
[    2.417862] nvme nvme0: Shutdown timeout set to 8 seconds
[    2.431137] nvme nvme0: 8/0/0 default/read/poll queues
[    2.437986]  nvme0n1: p1 p2 p3 p4 p5 p6 p7
[    2.440979] input: SYNA8004:00 06CB:CD8B Mouse as /devices/pci0000:00/0000:00:15.1/i2c_designware.1/i2c-2/i2c-SYNA8004:00/0018:06CB:CD8B.0001/input/input3
[    2.440995] input: SYNA8004:00 06CB:CD8B Touchpad as /devices/pci0000:00/0000:00:15.1/i2c_designware.1/i2c-2/i2c-SYNA8004:00/0018:06CB:CD8B.0001/input/input4
[    2.441099] hid-generic 0018:06CB:CD8B.0001: input,hidraw0: I2C HID v1.00 Mouse [SYNA8004:00 06CB:CD8B] on i2c-SYNA8004:00
[    2.445804] random: lvm: uninitialized urandom read (4 bytes read)
[    2.475539] device-mapper: uevent: version 1.0.3
[    2.475611] device-mapper: ioctl: 4.42.0-ioctl (2020-02-27) initialised: dm-devel@redhat.com
[    2.476074] random: lvm: uninitialized urandom read (2 bytes read)
[    2.521253] xhci_hcd 0000:2d:00.0: saving config space at offset 0x0 (reading 0x15d48086)
[    2.521264] xhci_hcd 0000:2d:00.0: saving config space at offset 0x4 (reading 0x100403)
[    2.521279] xhci_hcd 0000:2d:00.0: saving config space at offset 0x8 (reading 0xc033002)
[    2.521293] xhci_hcd 0000:2d:00.0: saving config space at offset 0xc (reading 0x20)
[    2.521308] xhci_hcd 0000:2d:00.0: saving config space at offset 0x10 (reading 0xdc000000)
[    2.521310] xhci_hcd 0000:2d:00.0: saving config space at offset 0x14 (reading 0x0)
[    2.521312] xhci_hcd 0000:2d:00.0: saving config space at offset 0x18 (reading 0x0)
[    2.521315] xhci_hcd 0000:2d:00.0: saving config space at offset 0x1c (reading 0x0)
[    2.521317] xhci_hcd 0000:2d:00.0: saving config space at offset 0x20 (reading 0x0)
[    2.521319] xhci_hcd 0000:2d:00.0: saving config space at offset 0x24 (reading 0x0)
[    2.521321] xhci_hcd 0000:2d:00.0: saving config space at offset 0x28 (reading 0x0)
[    2.521323] xhci_hcd 0000:2d:00.0: saving config space at offset 0x2c (reading 0x229217aa)
[    2.521326] xhci_hcd 0000:2d:00.0: saving config space at offset 0x30 (reading 0x0)
[    2.521328] xhci_hcd 0000:2d:00.0: saving config space at offset 0x34 (reading 0x80)
[    2.521330] xhci_hcd 0000:2d:00.0: saving config space at offset 0x38 (reading 0x0)
[    2.521332] xhci_hcd 0000:2d:00.0: saving config space at offset 0x3c (reading 0x1ff)
[    2.521399] xhci_hcd 0000:2d:00.0: PME# enabled
[    2.533059] pcieport 0000:06:02.0: saving config space at offset 0x0 (reading 0x15d38086)
[    2.533061] pcieport 0000:06:02.0: saving config space at offset 0x4 (reading 0x100407)
[    2.533062] pcieport 0000:06:02.0: saving config space at offset 0x8 (reading 0x6040002)
[    2.533064] pcieport 0000:06:02.0: saving config space at offset 0xc (reading 0x10020)
[    2.533066] pcieport 0000:06:02.0: saving config space at offset 0x10 (reading 0x0)
[    2.533067] pcieport 0000:06:02.0: saving config space at offset 0x14 (reading 0x0)
[    2.533069] pcieport 0000:06:02.0: saving config space at offset 0x18 (reading 0x2d2d06)
[    2.533071] pcieport 0000:06:02.0: saving config space at offset 0x1c (reading 0x1f1)
[    2.533073] pcieport 0000:06:02.0: saving config space at offset 0x20 (reading 0xdc00dc00)
[    2.533074] pcieport 0000:06:02.0: saving config space at offset 0x24 (reading 0x1fff1)
[    2.533076] pcieport 0000:06:02.0: saving config space at offset 0x28 (reading 0x0)
[    2.533078] pcieport 0000:06:02.0: saving config space at offset 0x2c (reading 0x0)
[    2.533079] pcieport 0000:06:02.0: saving config space at offset 0x30 (reading 0x0)
[    2.533081] pcieport 0000:06:02.0: saving config space at offset 0x34 (reading 0x80)
[    2.533083] pcieport 0000:06:02.0: saving config space at offset 0x38 (reading 0x0)
[    2.533085] pcieport 0000:06:02.0: saving config space at offset 0x3c (reading 0x201ff)
[    2.533129] pcieport 0000:06:02.0: PME# enabled
[    2.547053] usb 1-8: new high-speed USB device number 2 using xhci_hcd
[    2.666369] psmouse serio1: trackpoint: Elan TrackPoint firmware: 0x47, buttons: 3/3
[    2.680325] input: TPPS/2 Elan TrackPoint as /devices/platform/i8042/serio1/input/input2
[    2.728853] usb 1-8: New USB device found, idVendor=04f2, idProduct=b67c, bcdDevice=67.26
[    2.728856] usb 1-8: New USB device strings: Mfr=3, Product=1, SerialNumber=2
[    2.728858] usb 1-8: Product: Integrated Camera
[    2.728860] usb 1-8: Manufacturer: Chicony Electronics Co.,Ltd.
[    2.728862] usb 1-8: SerialNumber: 6726
[    2.731588] usb 8-1: new SuperSpeed Gen 1 USB device number 2 using xhci_hcd
[    2.754394] usb 8-1: New USB device found, idVendor=0b95, idProduct=1790, bcdDevice= 1.00
[    2.754397] usb 8-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[    2.754399] usb 8-1: Product: AX88179
[    2.754401] usb 8-1: Manufacturer: ASIX Elec. Corp.
[    2.754403] usb 8-1: SerialNumber: 0098BB1E1C0B89
[    2.832660] [drm] amdgpu kernel modesetting enabled.
[    2.832813] CRAT table not found
[    2.832816] Virtual CRAT table created for CPU
[    2.832817] Parsing CRAT table with 1 nodes
[    2.832819] Creating topology SYSFS entries
[    2.832844] Topology: Add CPU node
[    2.832845] Finished initializing topology
[    2.832984] amdgpu 0000:0c:00.0: runtime IRQ mapping not provided by arch
[    2.833009] amdgpu 0000:0c:00.0: enabling device (0006 -> 0007)
[    2.833397] [drm] initializing kernel modesetting (NAVI10 0x1002:0x731F 0x1682:0x5710 0xCA).
[    2.833411] [drm] register mmio base: 0xD0100000
[    2.833412] [drm] register mmio size: 524288
[    2.833426] [drm] PCIE atomic ops is not supported
[    2.886145] [drm] set register base offset for ATHUB
[    2.886147] [drm] set register base offset for CLKA
[    2.886148] [drm] set register base offset for CLKA
[    2.886148] [drm] set register base offset for CLKA
[    2.886149] [drm] set register base offset for CLKA
[    2.886150] [drm] set register base offset for CLKA
[    2.886151] [drm] set register base offset for DF
[    2.886152] [drm] set register base offset for DMU
[    2.886153] [drm] set register base offset for GC
[    2.886154] [drm] set register base offset for HDP
[    2.886155] [drm] set register base offset for MMHUB
[    2.886156] [drm] set register base offset for MP0
[    2.886156] [drm] set register base offset for MP1
[    2.886157] [drm] set register base offset for NBIF
[    2.886158] [drm] set register base offset for NBIF
[    2.886159] [drm] set register base offset for OSSSYS
[    2.886160] [drm] set register base offset for SDMA0
[    2.886161] [drm] set register base offset for SDMA1
[    2.886162] [drm] set register base offset for SMUIO
[    2.886163] [drm] set register base offset for THM
[    2.886164] [drm] set register base offset for UVD
[    2.886168] [drm] add ip block number 0 <nv_common>
[    2.886169] [drm] add ip block number 1 <gmc_v10_0>
[    2.886170] [drm] add ip block number 2 <navi10_ih>
[    2.886171] [drm] add ip block number 3 <psp>
[    2.886172] [drm] add ip block number 4 <smu>
[    2.886173] [drm] add ip block number 5 <dm>
[    2.886174] [drm] add ip block number 6 <gfx_v10_0>
[    2.886175] [drm] add ip block number 7 <sdma_v5_0>
[    2.886176] [drm] add ip block number 8 <vcn_v2_0>
[    2.886177] [drm] add ip block number 9 <jpeg_v2_0>
[    2.886210] usb 1-9: new full-speed USB device number 3 using xhci_hcd
[    3.040241] ATOM BIOS: 113-150WO2ANAVIXLE6GB_MIC_191122_W8
[    3.040256] [drm] VCN decode is enabled in VM mode
[    3.040258] [drm] VCN encode is enabled in VM mode
[    3.040259] [drm] JPEG decode is enabled in VM mode
[    3.040275] [drm] GPU posting now...
[    3.040285] usb 7-2: new full-speed USB device number 2 using xhci_hcd
[    3.040314] tsc: Refined TSC clocksource calibration: 2112.000 MHz
[    3.040318] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1e71785e5dd, max_idle_ns: 440795244814 ns
[    3.040360] clocksource: Switched to clocksource tsc
[    3.040378] [drm] vm size is 262144 GB, 4 levels, block size is 9-bit, fragment size is 9-bit
[    3.040389] amdgpu 0000:0c:00.0: VRAM: 6128M 0x0000008000000000 - 0x000000817EFFFFFF (6128M used)
[    3.040391] amdgpu 0000:0c:00.0: GART: 512M 0x0000000000000000 - 0x000000001FFFFFFF
[    3.040407] [drm] Detected VRAM RAM=6128M, BAR=256M
[    3.040409] [drm] RAM width 192bits GDDR6
[    3.040455] [TTM] Zone  kernel: Available graphics memory: 8017244 KiB
[    3.040457] [TTM] Zone   dma32: Available graphics memory: 2097152 KiB
[    3.040458] [TTM] Initializing pool allocator
[    3.040463] [TTM] Initializing DMA pool allocator
[    3.040498] [drm] amdgpu: 6128M of VRAM memory ready
[    3.040501] [drm] amdgpu: 6128M of GTT memory ready.
[    3.040510] [drm] GART: num cpu pages 131072, num gpu pages 131072
[    3.040823] [drm] PCIE GART of 512M enabled (table at 0x0000008000000000).
[    3.042480] [drm] use_doorbell being set to: [true]
[    3.042626] [drm] use_doorbell being set to: [true]
[    3.042787] [drm] Found VCN firmware Version ENC: 1.7 DEC: 4 VEP: 0 Revision: 17
[    3.042791] [drm] PSP loading VCN firmware
[    3.166959] usb 1-9: New USB device found, idVendor=06cb, idProduct=00bd, bcdDevice= 0.00
[    3.166963] usb 1-9: New USB device strings: Mfr=0, Product=0, SerialNumber=1
[    3.166965] usb 1-9: SerialNumber: 922937fbc025
[    3.281098] usb 1-10: new full-speed USB device number 4 using xhci_hcd
[    3.408113] usb 1-10: New USB device found, idVendor=8087, idProduct=0aaa, bcdDevice= 0.02
[    3.408116] usb 1-10: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[    3.408605] usb 7-2: New USB device found, idVendor=1532, idProduct=0f1a, bcdDevice= 2.00
[    3.408608] usb 7-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[    3.408609] usb 7-2: Product: Razer Core X Chroma
[    3.408610] usb 7-2: Manufacturer: Razer
[    3.422458] input: Razer Razer Core X Chroma as /devices/pci0000:00/0000:00:1d.4/0000:05:00.0/0000:06:01.0/0000:08:00.0/0000:09:04.0/0000:0d:00.0/0000:0e:02.0/0000:11:00.0/usb7/7-2/7-2:1.0/0003:1532:0F1A.0002/input/input6
[    3.474193] hid-generic 0003:1532:0F1A.0002: input,hidraw1: USB HID v1.11 Keyboard [Razer Razer Core X Chroma] on usb-0000:11:00.0-2/input0
[    3.480513] input: Razer Razer Core X Chroma Keyboard as /devices/pci0000:00/0000:00:1d.4/0000:05:00.0/0000:06:01.0/0000:08:00.0/0000:09:04.0/0000:0d:00.0/0000:0e:02.0/0000:11:00.0/usb7/7-2/7-2:1.1/0003:1532:0F1A.0003/input/input7
[    3.532089] input: Razer Razer Core X Chroma Consumer Control as /devices/pci0000:00/0000:00:1d.4/0000:05:00.0/0000:06:01.0/0000:08:00.0/0000:09:04.0/0000:0d:00.0/0000:0e:02.0/0000:11:00.0/usb7/7-2/7-2:1.1/0003:1532:0F1A.0003/input/input8
[    3.532105] input: Razer Razer Core X Chroma System Control as /devices/pci0000:00/0000:00:1d.4/0000:05:00.0/0000:06:01.0/0000:08:00.0/0000:09:04.0/0000:0d:00.0/0000:0e:02.0/0000:11:00.0/usb7/7-2/7-2:1.1/0003:1532:0F1A.0003/input/input9
[    3.532118] input: Razer Razer Core X Chroma as /devices/pci0000:00/0000:00:1d.4/0000:05:00.0/0000:06:01.0/0000:08:00.0/0000:09:04.0/0000:0d:00.0/0000:0e:02.0/0000:11:00.0/usb7/7-2/7-2:1.1/0003:1532:0F1A.0003/input/input10
[    3.532231] hid-generic 0003:1532:0F1A.0003: input,hidraw2: USB HID v1.11 Keyboard [Razer Razer Core X Chroma] on usb-0000:11:00.0-2/input1
[    3.537445] input: Razer Razer Core X Chroma as /devices/pci0000:00/0000:00:1d.4/0000:05:00.0/0000:06:01.0/0000:08:00.0/0000:09:04.0/0000:0d:00.0/0000:0e:02.0/0000:11:00.0/usb7/7-2/7-2:1.2/0003:1532:0F1A.0004/input/input11
[    3.537551] hid-generic 0003:1532:0F1A.0004: input,hidraw3: USB HID v1.11 Mouse [Razer Razer Core X Chroma] on usb-0000:11:00.0-2/input2
[    3.537567] usbcore: registered new interface driver usbhid
[    3.537568] usbhid: USB HID core driver
[    3.569337] intel-lpss 0000:00:15.1: saving config space at offset 0x0 (reading 0x9de98086)
[    3.569340] intel-lpss 0000:00:15.1: saving config space at offset 0x4 (reading 0x100006)
[    3.569344] intel-lpss 0000:00:15.1: saving config space at offset 0x8 (reading 0xc800011)
[    3.569349] intel-lpss 0000:00:15.1: saving config space at offset 0xc (reading 0x800020)
[    3.569353] intel-lpss 0000:00:15.1: saving config space at offset 0x10 (reading 0xea246004)
[    3.569357] intel-lpss 0000:00:15.1: saving config space at offset 0x14 (reading 0x0)
[    3.569361] intel-lpss 0000:00:15.1: saving config space at offset 0x18 (reading 0x0)
[    3.569366] intel-lpss 0000:00:15.1: saving config space at offset 0x1c (reading 0x0)
[    3.569370] intel-lpss 0000:00:15.1: saving config space at offset 0x20 (reading 0x0)
[    3.569375] intel-lpss 0000:00:15.1: saving config space at offset 0x24 (reading 0x0)
[    3.569379] intel-lpss 0000:00:15.1: saving config space at offset 0x28 (reading 0x0)
[    3.569383] intel-lpss 0000:00:15.1: saving config space at offset 0x2c (reading 0x229217aa)
[    3.569388] intel-lpss 0000:00:15.1: saving config space at offset 0x30 (reading 0x0)
[    3.569392] intel-lpss 0000:00:15.1: saving config space at offset 0x34 (reading 0x80)
[    3.569396] intel-lpss 0000:00:15.1: saving config space at offset 0x38 (reading 0x0)
[    3.569401] intel-lpss 0000:00:15.1: saving config space at offset 0x3c (reading 0x2ff)
[    3.777660] random: crng init done
[    3.960875] thunderbolt 0-1: new device found, vendor=0x127 device=0x2
[    3.960878] thunderbolt 0-1: Razer Core X Chroma
[    4.069713] intel-lpss 0000:00:15.1: power state changed by ACPI to D3cold
[    4.072702] [drm] reserve 0x900000 from 0x817e400000 for PSP TMR
[    4.147100] amdgpu 0000:0c:00.0: RAS: ras ta ucode is not available
[    4.153155] amdgpu: [powerplay] use vbios provided pptable
[    4.153278] amdgpu: [powerplay] smu driver if version = 0x00000033, smu fw if version = 0x00000035, smu fw version = 0x002a3200 (42.50.0)
[    4.153279] amdgpu: [powerplay] SMU driver if version not matched
[    4.196528] amdgpu: [powerplay] SMU is initialized successfully!
[    4.197953] [drm] Display Core initialized with v3.2.69!
[    4.298463] thunderbolt 0-101: new device found, vendor=0x127 device=0x3
[    4.298466] thunderbolt 0-101: Razer Core X Chroma
[    4.311409] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[    4.311411] [drm] Driver supports precise vblank timestamp query.
[    4.314301] [drm] kiq ring mec 2 pipe 1 q 0
[    4.319501] [drm] VCN decode and encode initialized successfully(under DPG Mode).
[    4.319930] [drm] JPEG decode initialized successfully.
[    4.321089] kfd kfd: Allocated 3969056 bytes on gart
[    4.321877] Virtual CRAT table created for GPU
[    4.321879] Parsing CRAT table with 1 nodes
[    4.321885] Creating topology SYSFS entries
[    4.321939] Topology: Add dGPU node [0x731f:0x1002]
[    4.321941] kfd kfd: added device 1002:731f
[    4.323421] [drm] fb mappable at 0x801C9000
[    4.323423] [drm] vram apper at 0x80000000
[    4.323424] [drm] size 8294400
[    4.323425] [drm] fb depth is 24
[    4.323426] [drm]    pitch is 7680
[    4.372101] Console: switching to colour frame buffer device 240x67
[    4.432031] amdgpu 0000:0c:00.0: fb0: amdgpudrmfb frame buffer device
[    4.441487] amdgpu 0000:0c:00.0: ring gfx_0.0.0 uses VM inv eng 0 on hub 0
[    4.441721] amdgpu 0000:0c:00.0: ring comp_1.0.0 uses VM inv eng 1 on hub 0
[    4.441965] amdgpu 0000:0c:00.0: ring comp_1.1.0 uses VM inv eng 4 on hub 0
[    4.442207] amdgpu 0000:0c:00.0: ring comp_1.2.0 uses VM inv eng 5 on hub 0
[    4.442449] amdgpu 0000:0c:00.0: ring comp_1.3.0 uses VM inv eng 6 on hub 0
[    4.442687] amdgpu 0000:0c:00.0: ring comp_1.0.1 uses VM inv eng 7 on hub 0
[    4.442933] amdgpu 0000:0c:00.0: ring comp_1.1.1 uses VM inv eng 8 on hub 0
[    4.443180] amdgpu 0000:0c:00.0: ring comp_1.2.1 uses VM inv eng 9 on hub 0
[    4.443426] amdgpu 0000:0c:00.0: ring comp_1.3.1 uses VM inv eng 10 on hub 0
[    4.443678] amdgpu 0000:0c:00.0: ring kiq_2.1.0 uses VM inv eng 11 on hub 0
[    4.443922] amdgpu 0000:0c:00.0: ring sdma0 uses VM inv eng 12 on hub 0
[    4.444156] amdgpu 0000:0c:00.0: ring sdma1 uses VM inv eng 13 on hub 0
[    4.444393] amdgpu 0000:0c:00.0: ring vcn_dec uses VM inv eng 0 on hub 1
[    4.444634] amdgpu 0000:0c:00.0: ring vcn_enc0 uses VM inv eng 1 on hub 1
[    4.444873] amdgpu 0000:0c:00.0: ring vcn_enc1 uses VM inv eng 4 on hub 1
[    4.445121] amdgpu 0000:0c:00.0: ring jpeg_dec uses VM inv eng 5 on hub 1
[    4.445771] [drm] Initialized amdgpu 3.36.0 20150101 for 0000:0c:00.0 on minor 0
[    4.649081] raid6: avx2x4   gen() 21407 MB/s
[    4.666091] raid6: avx2x4   xor() 11858 MB/s
[    4.683085] raid6: avx2x2   gen() 41097 MB/s
[    4.700087] raid6: avx2x2   xor() 29200 MB/s
[    4.717086] raid6: avx2x1   gen() 39029 MB/s
[    4.734084] raid6: avx2x1   xor() 22488 MB/s
[    4.751088] raid6: sse2x4   gen() 19051 MB/s
[    4.768089] raid6: sse2x4   xor() 10062 MB/s
[    4.785086] raid6: sse2x2   gen() 19363 MB/s
[    4.802088] raid6: sse2x2   xor() 12292 MB/s
[    4.819084] raid6: sse2x1   gen() 16381 MB/s
[    4.836085] raid6: sse2x1   xor()  8315 MB/s
[    4.836227] raid6: using algorithm avx2x2 gen() 41097 MB/s
[    4.836425] raid6: .... xor() 29200 MB/s, rmw enabled
[    4.836610] raid6: using avx2x2 recovery algorithm
[    4.837888] xor: automatically using best checksumming function   avx       
[    4.943482] Btrfs loaded, crc32c=crc32c-intel
[    4.969341] BTRFS: device fsid f924d81f-ca73-42ea-b47a-d3a73e89313d devid 1 transid 34706 /dev/mapper/beta-ubuntu scanned by btrfs (420)
[    5.019532] BTRFS info (device dm-1): disk space caching is enabled
[    5.028624] BTRFS info (device dm-1): has skinny extents
[    5.073677] BTRFS info (device dm-1): enabling ssd optimizations
[    5.234574] systemd[1]: Inserted module 'autofs4'
[    5.290311] systemd[1]: systemd 245.4-4ubuntu3.1 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
[    5.321242] systemd[1]: Detected architecture x86-64.
[    5.371678] systemd[1]: Set hostname to <alpha>.
[    5.451113] systemd[1]: /lib/systemd/system/dbus.socket:5: ListenStream= references a path below legacy directory /var/run/, updating /var/run/dbus/system_bus_socket → /run/dbus/system_bus_socket; please update the unit file accordingly.
[    5.474016] [drm:dc_link_detect_helper [amdgpu]] *ERROR* No EDID read.
[    5.497362] systemd[1]: /lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
[    5.518609] systemd[1]: Created slice Virtual Machine and Container Slice.
[    5.543030] systemd[1]: Created slice system-modprobe.slice.
[    5.567486] systemd[1]: Created slice User and Session Slice.
[    5.592082] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
[    5.617675] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
[    5.644417] systemd[1]: Reached target User and Group Name Lookups.
[    5.671861] systemd[1]: Reached target Remote File Systems.
[    5.699859] systemd[1]: Reached target Slices.
[    5.728090] systemd[1]: Reached target Libvirt guests shutdown.
[    5.756872] systemd[1]: Listening on Device-mapper event daemon FIFOs.
[    5.786442] systemd[1]: Listening on LVM2 poll daemon socket.
[    5.819013] systemd[1]: Listening on Syslog Socket.
[    5.848195] systemd[1]: Listening on fsck to fsckd communication Socket.
[    5.877231] systemd[1]: Listening on initctl Compatibility Named Pipe.
[    5.906432] systemd[1]: Listening on Journal Audit Socket.
[    5.939497] systemd[1]: Listening on Journal Socket (/dev/log).
[    5.969738] systemd[1]: Listening on Journal Socket.
[    6.000575] systemd[1]: Listening on udev Control Socket.
[    6.031965] systemd[1]: Listening on udev Kernel Socket.
[    6.064422] systemd[1]: Mounting Huge Pages File System...
[    6.097172] systemd[1]: Mounting POSIX Message Queue File System...
[    6.130148] systemd[1]: Mounting Kernel Debug File System...
[    6.162933] systemd[1]: Mounting Kernel Trace File System...
[    6.195667] systemd[1]: Starting Journal Service...
[    6.227630] systemd[1]: Starting Availability of block devices...
[    6.259851] systemd[1]: Starting Set the console keyboard layout...
[    6.292583] systemd[1]: Starting Create list of static device nodes for the current kernel...
[    6.325716] systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
[    6.358963] systemd[1]: Condition check resulted in Load Kernel Module drm being skipped.
[    6.374975] systemd[1]: Condition check resulted in Set Up Additional Binary Formats being skipped.
[    6.390857] systemd[1]: Condition check resulted in File System Check on Root Device being skipped.
[    6.407371] systemd[1]: Starting Load Kernel Modules...
[    6.437746] lp: driver loaded but no devices found
[    6.456697] systemd[1]: Starting Remount Root and Kernel File Systems...
[    6.456905] ppdev: user-space parallel port driver
[    6.465795] BTRFS info (device dm-1): disk space caching is enabled
[    6.518913] systemd[1]: Starting udev Coldplug all Devices...
[    6.542146] fuse: init (API version 7.31)
[    6.564312] systemd[1]: Mounted Huge Pages File System.
[    6.593088] systemd[1]: Mounted POSIX Message Queue File System.
[    6.621833] systemd[1]: Mounted Kernel Debug File System.
[    6.650227] systemd[1]: Mounted Kernel Trace File System.
[    6.678483] systemd[1]: Finished Availability of block devices.
[    6.706730] systemd[1]: Finished Set the console keyboard layout.
[    6.735062] systemd[1]: Finished Create list of static device nodes for the current kernel.
[    6.763890] systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
[    6.793272] systemd[1]: Finished Load Kernel Modules.
[    6.822461] systemd[1]: Finished Remount Root and Kernel File Systems.
[    6.852128] systemd[1]: Mounting FUSE Control File System...
[    6.882023] [drm] amdgpu_dm_irq_schedule_work FAILED src 1
[    6.882840] systemd[1]: Mounting Kernel Configuration File System...
[    6.925675] systemd[1]: Condition check resulted in Rebuild Hardware Database being skipped.
[    6.940142] systemd[1]: Condition check resulted in Platform Persistent Storage Archival being skipped.
[    6.955175] systemd[1]: Starting Load/Save Random Seed...
[    6.984633] systemd[1]: Starting Apply Kernel Variables...
[    7.014226] systemd[1]: Starting Create System Users...
[    7.029237] systemd[1]: Started Journal Service.
[    7.126199] systemd-journald[513]: Received client request to flush runtime journal.
[    7.389208] input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input12
[    7.389251] ACPI: Sleep Button [SLPB]
[    7.389299] input: Lid Switch as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0D:00/input/input13
[    7.389324] ACPI: Lid Switch [LID]
[    7.389361] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input14
[    7.389385] ACPI: Power Button [PWRF]
[    7.397822] ACPI: Deprecated procfs I/F for AC is loaded, please retry with CONFIG_ACPI_PROCFS_POWER cleared
[    7.397873] ACPI: AC Adapter [AC] (on-line)
[    7.410198] pci 0000:00:08.0: saving config space at offset 0x0 (reading 0x19118086)
[    7.410202] pci 0000:00:08.0: saving config space at offset 0x4 (reading 0x100000)
[    7.410203] pci 0000:00:08.0: saving config space at offset 0x8 (reading 0x8800000)
[    7.410204] pci 0000:00:08.0: saving config space at offset 0xc (reading 0x0)
[    7.410205] pci 0000:00:08.0: saving config space at offset 0x10 (reading 0xea242004)
[    7.410206] pci 0000:00:08.0: saving config space at offset 0x14 (reading 0x0)
[    7.410207] pci 0000:00:08.0: saving config space at offset 0x18 (reading 0x0)
[    7.410208] pci 0000:00:08.0: saving config space at offset 0x1c (reading 0x0)
[    7.410209] pci 0000:00:08.0: saving config space at offset 0x20 (reading 0x0)
[    7.410210] pci 0000:00:08.0: saving config space at offset 0x24 (reading 0x0)
[    7.410211] pci 0000:00:08.0: saving config space at offset 0x28 (reading 0x0)
[    7.410216] pci 0000:00:08.0: saving config space at offset 0x2c (reading 0x229217aa)
[    7.410217] pci 0000:00:08.0: saving config space at offset 0x30 (reading 0x0)
[    7.410218] pci 0000:00:08.0: saving config space at offset 0x34 (reading 0x90)
[    7.410219] pci 0000:00:08.0: saving config space at offset 0x38 (reading 0x0)
[    7.410220] pci 0000:00:08.0: saving config space at offset 0x3c (reading 0x1ff)
[    7.410905] pci 0000:00:04.0: saving config space at offset 0x0 (reading 0x19038086)
[    7.410910] pci 0000:00:04.0: saving config space at offset 0x4 (reading 0x900000)
[    7.410911] pci 0000:00:04.0: saving config space at offset 0x8 (reading 0x1180000c)
[    7.410912] pci 0000:00:04.0: saving config space at offset 0xc (reading 0x0)
[    7.410914] pci 0000:00:04.0: saving config space at offset 0x10 (reading 0xea230004)
[    7.410915] pci 0000:00:04.0: saving config space at offset 0x14 (reading 0x0)
[    7.410916] pci 0000:00:04.0: saving config space at offset 0x18 (reading 0x0)
[    7.410917] pci 0000:00:04.0: saving config space at offset 0x1c (reading 0x0)
[    7.410918] pci 0000:00:04.0: saving config space at offset 0x20 (reading 0x0)
[    7.410926] pci 0000:00:04.0: saving config space at offset 0x24 (reading 0x0)
[    7.410927] pci 0000:00:04.0: saving config space at offset 0x28 (reading 0x0)
[    7.410928] pci 0000:00:04.0: saving config space at offset 0x2c (reading 0x229217aa)
[    7.410930] pci 0000:00:04.0: saving config space at offset 0x30 (reading 0x0)
[    7.410931] pci 0000:00:04.0: saving config space at offset 0x34 (reading 0x90)
[    7.410932] pci 0000:00:04.0: saving config space at offset 0x38 (reading 0x0)
[    7.410933] pci 0000:00:04.0: saving config space at offset 0x3c (reading 0x1ff)
[    7.421908] intel_pch_thermal 0000:00:12.0: runtime IRQ mapping not provided by arch
[    7.421914] intel_pch_thermal 0000:00:12.0: enabling device (0000 -> 0002)
[    7.457035] proc_thermal 0000:00:04.0: runtime IRQ mapping not provided by arch
[    7.457070] proc_thermal 0000:00:04.0: enabling device (0000 -> 0002)
[    7.467187] mei_me 0000:00:16.0: runtime IRQ mapping not provided by arch
[    7.467198] mei_me 0000:00:16.0: enabling device (0000 -> 0002)
[    7.504611] cfg80211: Loading compiled-in X.509 certificates for regulatory database
[    7.518465] Bluetooth: Core ver 2.22
[    7.518474] NET: Registered protocol family 31
[    7.518474] Bluetooth: HCI device and connection manager initialized
[    7.518477] Bluetooth: HCI socket layer initialized
[    7.518478] Bluetooth: L2CAP socket layer initialized
[    7.518480] Bluetooth: SCO socket layer initialized
[    7.519992] mei_me 0000:00:16.0: enabling bus mastering
[    7.520835] intel-lpss 0000:00:15.1: power state changed by ACPI to D0
[    7.520875] intel-lpss 0000:00:15.1: restoring config space at offset 0x10 (was 0x4, writing 0xea246004)
[    7.527752] cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
[    7.529771] input: SYNA8004:00 06CB:CD8B Mouse as /devices/pci0000:00/0000:00:15.1/i2c_designware.1/i2c-2/i2c-SYNA8004:00/0018:06CB:CD8B.0001/input/input15
[    7.529794] input: SYNA8004:00 06CB:CD8B Touchpad as /devices/pci0000:00/0000:00:15.1/i2c_designware.1/i2c-2/i2c-SYNA8004:00/0018:06CB:CD8B.0001/input/input16
[    7.530805] hid-multitouch 0018:06CB:CD8B.0001: input,hidraw0: I2C HID v1.00 Mouse [SYNA8004:00 06CB:CD8B] on i2c-SYNA8004:00
[    7.544872] intel_rapl_common: Found RAPL domain package
[    7.544872] intel_rapl_common: Found RAPL domain dram
[    7.547644] proc_thermal 0000:00:04.0: Creating sysfs group for PROC_THERMAL_PCI
[    7.554163] Non-volatile memory driver v1.3
[    7.580723] mousedev: PS/2 mouse device common for all mice
[    7.581559] mc: Linux media interface: v0.10
[    7.678113] usbcore: registered new interface driver btusb
[    7.681240] Bluetooth: hci0: Bootloader revision 0.1 build 42 week 52 2015
[    7.682241] Bluetooth: hci0: Device revision is 2
[    7.682242] Bluetooth: hci0: Secure boot is enabled
[    7.682242] Bluetooth: hci0: OTP lock is enabled
[    7.682243] Bluetooth: hci0: API lock is enabled
[    7.682243] Bluetooth: hci0: Debug lock is disabled
[    7.682244] Bluetooth: hci0: Minimum firmware build 1 week 10 2014
[    7.683645] Bluetooth: hci0: Found device firmware: intel/ibt-17-16-1.sfi
[    7.685040] thinkpad_acpi: ThinkPad ACPI Extras v0.26
[    7.685041] thinkpad_acpi: http://ibm-acpi.sf.net/
[    7.685043] thinkpad_acpi: ThinkPad BIOS N2HET50W (1.33 ), EC N2HHT34W
[    7.685043] thinkpad_acpi: Lenovo ThinkPad X1 Carbon 7th, model 20QD000SUS
[    7.686277] videodev: Linux video capture interface: v2.00
[    7.686650] Intel(R) Wireless WiFi driver for Linux
[    7.686651] Copyright(c) 2003- 2015 Intel Corporation
[    7.686685] iwlwifi 0000:00:14.3: runtime IRQ mapping not provided by arch
[    7.686703] iwlwifi 0000:00:14.3: enabling device (0000 -> 0002)
[    7.686955] iwlwifi 0000:00:14.3: enabling bus mastering
[    7.688930] thinkpad_acpi: radio switch found; radios are enabled
[    7.689022] thinkpad_acpi: Tablet mode switch found (type: GMMS), currently in laptop mode
[    7.689352] thinkpad_acpi: This ThinkPad has standard ACPI backlight brightness control, supported by the ACPI video driver
[    7.689353] thinkpad_acpi: Disabling thinkpad-acpi brightness events by default...
[    7.691706] snd_hda_intel 0000:00:1f.3: runtime IRQ mapping not provided by arch
[    7.691711] snd_hda_intel 0000:00:1f.3: DSP detected with PCI class/subclass/prog-if info 0x040380
[    7.693538] iwlwifi 0000:00:14.3: Found debug destination: EXTERNAL_DRAM
[    7.693539] iwlwifi 0000:00:14.3: Found debug configuration: 0
[    7.693740] iwlwifi 0000:00:14.3: loaded firmware version 46.6bf1df06.0 9000-pu-b0-jf-b0-46.ucode op_mode iwlmvm
[    7.709416] snd_hda_intel 0000:00:1f.3: Digital mics found on Skylake+ platform, using SOF driver
[    7.709465] snd_hda_intel 0000:0c:00.1: runtime IRQ mapping not provided by arch
[    7.752036] snd_hda_intel 0000:0c:00.1: Force to non-snoop mode
[    7.880778] thinkpad_acpi: rfkill switch tpacpi_bluetooth_sw: radio is unblocked
[    7.880817] RAPL PMU: API unit is 2^-32 Joules, 5 fixed counters, 655360 ms ovfl timer
[    7.880818] RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
[    7.880818] RAPL PMU: hw unit of domain package 2^-14 Joules
[    7.880819] RAPL PMU: hw unit of domain dram 2^-14 Joules
[    7.880819] RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules
[    7.880820] RAPL PMU: hw unit of domain psys 2^-14 Joules
[    7.930772] uvcvideo: Found UVC 1.50 device Integrated Camera (04f2:b67c)
[    7.937475] uvcvideo 1-8:1.0: Entity type for entity Realtek Extended Controls Unit was not initialized!
[    7.937476] uvcvideo 1-8:1.0: Entity type for entity Extension 4 was not initialized!
[    7.937477] uvcvideo 1-8:1.0: Entity type for entity Processing 2 was not initialized!
[    7.937478] uvcvideo 1-8:1.0: Entity type for entity Camera 1 was not initialized!
[    7.937530] input: Integrated Camera: Integrated C as /devices/pci0000:00/0000:00:14.0/usb1/1-8/1-8:1.0/input/input19
[    7.939131] uvcvideo: Found UVC 1.50 device Integrated Camera (04f2:b67c)
[    7.940673] uvcvideo 1-8:1.2: Entity type for entity Realtek Extended Controls Unit was not initialized!
[    7.940674] uvcvideo 1-8:1.2: Entity type for entity Microsoft Extended Controls Uni was not initialized!
[    7.940674] uvcvideo 1-8:1.2: Entity type for entity Extension 9 was not initialized!
[    7.940675] uvcvideo 1-8:1.2: Entity type for entity Extension 11 was not initialized!
[    7.940676] uvcvideo 1-8:1.2: Entity type for entity Processing 15 was not initialized!
[    7.940676] uvcvideo 1-8:1.2: Entity type for entity Camera 8 was not initialized!
[    7.940712] input: Integrated Camera: Integrated I as /devices/pci0000:00/0000:00:14.0/usb1/1-8/1-8:1.2/input/input20
[    7.940755] usbcore: registered new interface driver uvcvideo
[    7.940756] USB Video Class driver (1.1.1)
[    7.941341] skl_uncore 0000:00:00.0: runtime IRQ mapping not provided by arch
[    7.941361] resource sanity check: requesting [mem 0xfed10000-0xfed15fff], which spans more than pnp 00:07 [mem 0xfed10000-0xfed13fff]
[    7.941368] caller snb_uncore_imc_init_box+0x5d/0x80 [intel_uncore] mapping multiple BARs
[    7.947297] thinkpad_acpi: Standard ACPI backlight interface available, not loading native one
[    7.956015] thinkpad_acpi: battery 1 registered (start 0, stop 100)
[    7.956041] battery: new extension: ThinkPad Battery Extension
[    7.958441] input: ThinkPad Extra Buttons as /devices/platform/thinkpad_acpi/input/input18
[    7.960392] snd_hda_intel 0000:0c:00.1: bound 0000:0c:00.0 (ops amdgpu_dm_audio_component_bind_ops [amdgpu])
[    8.093846] intel_rapl_common: Found RAPL domain package
[    8.093847] intel_rapl_common: Found RAPL domain core
[    8.093848] intel_rapl_common: Found RAPL domain uncore
[    8.093849] intel_rapl_common: Found RAPL domain dram
[    8.099905] BTRFS info (device dm-1): device fsid f924d81f-ca73-42ea-b47a-d3a73e89313d devid 1 moved old:/dev/mapper/beta-ubuntu new:/dev/dm-1
[    8.100359] BTRFS info (device dm-1): device fsid f924d81f-ca73-42ea-b47a-d3a73e89313d devid 1 moved old:/dev/dm-1 new:/dev/mapper/beta-ubuntu
[    8.477095] ax88179_178a 8-1:1.0 eth0: register 'ax88179_178a' at usb-0000:11:00.0-1, ASIX AX88179 USB 3.0 Gigabit Ethernet, 98:bb:1e:1c:0b:89
[    8.477124] usbcore: registered new interface driver ax88179_178a
[    8.479878] input: HDA ATI HDMI HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:1d.4/0000:05:00.0/0000:06:01.0/0000:08:00.0/0000:09:01.0/0000:0a:00.0/0000:0b:00.0/0000:0c:00.1/sound/card0/input21
[    8.479957] input: HDA ATI HDMI HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:1d.4/0000:05:00.0/0000:06:01.0/0000:08:00.0/0000:09:01.0/0000:0a:00.0/0000:0b:00.0/0000:0c:00.1/sound/card0/input22
[    8.480037] input: HDA ATI HDMI HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:1d.4/0000:05:00.0/0000:06:01.0/0000:08:00.0/0000:09:01.0/0000:0a:00.0/0000:0b:00.0/0000:0c:00.1/sound/card0/input23
[    8.480127] input: HDA ATI HDMI HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:1d.4/0000:05:00.0/0000:06:01.0/0000:08:00.0/0000:09:01.0/0000:0a:00.0/0000:0b:00.0/0000:0c:00.1/sound/card0/input24
[    8.480205] input: HDA ATI HDMI HDMI/DP,pcm=10 as /devices/pci0000:00/0000:00:1d.4/0000:05:00.0/0000:06:01.0/0000:08:00.0/0000:09:01.0/0000:0a:00.0/0000:0b:00.0/0000:0c:00.1/sound/card0/input25
[    8.480240] input: HDA ATI HDMI HDMI/DP,pcm=11 as /devices/pci0000:00/0000:00:1d.4/0000:05:00.0/0000:06:01.0/0000:08:00.0/0000:09:01.0/0000:0a:00.0/0000:0b:00.0/0000:0c:00.1/sound/card0/input26
[    8.485103] EXT4-fs (nvme0n1p6): mounted filesystem with ordered data mode. Opts: (null)
[    8.492370] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x0 (reading 0xab381002)
[    8.492404] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x4 (reading 0x100406)
[    8.492429] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x8 (reading 0x4030000)
[    8.492454] snd_hda_intel 0000:0c:00.1: saving config space at offset 0xc (reading 0x800020)
[    8.492477] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x10 (reading 0xd0180000)
[    8.492503] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x14 (reading 0x0)
[    8.492551] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x18 (reading 0x0)
[    8.492630] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x1c (reading 0x0)
[    8.492658] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x20 (reading 0x0)
[    8.492726] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x24 (reading 0x0)
[    8.492752] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x28 (reading 0x0)
[    8.492777] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x2c (reading 0xab381002)
[    8.492806] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x30 (reading 0x0)
[    8.492849] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x34 (reading 0x48)
[    8.492897] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x38 (reading 0x0)
[    8.492967] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x3c (reading 0x2ff)
[    8.493906] snd_hda_intel 0000:0c:00.1: PME# enabled
[    8.541563] iwlwifi 0000:00:14.3: Detected Intel(R) Wireless-AC 9560 160MHz, REV=0x318
[    8.590650] Adding 16777212k swap on /dev/mapper/beta-swap.  Priority:-2 extents:1 across:16777212k SSFS
[    8.593369] iwlwifi 0000:00:14.3: Applying debug destination EXTERNAL_DRAM
[    8.593714] iwlwifi 0000:00:14.3: Allocated 0x00400000 bytes for firmware monitor.
[    8.606785] FAT-fs (nvme0n1p5): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
[    8.666818] iwlwifi 0000:00:14.3: base HW address: dc:fb:48:03:94:86
[    8.700016] ax88179_178a 8-1:1.0 enx98bb1e1c0b89: renamed from eth0
[    8.736445] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs'
[    8.736763] thermal thermal_zone5: failed to read out thermal zone (-61)
[    8.773244] intel-lpss 0000:00:15.1: saving config space at offset 0x0 (reading 0x9de98086)
[    8.773275] intel-lpss 0000:00:15.1: saving config space at offset 0x4 (reading 0x100006)
[    8.773307] intel-lpss 0000:00:15.1: saving config space at offset 0x8 (reading 0xc800011)
[    8.773340] intel-lpss 0000:00:15.1: saving config space at offset 0xc (reading 0x800020)
[    8.773374] intel-lpss 0000:00:15.1: saving config space at offset 0x10 (reading 0xea246004)
[    8.773433] intel-lpss 0000:00:15.1: saving config space at offset 0x14 (reading 0x0)
[    8.773493] intel-lpss 0000:00:15.1: saving config space at offset 0x18 (reading 0x0)
[    8.773529] intel-lpss 0000:00:15.1: saving config space at offset 0x1c (reading 0x0)
[    8.773569] intel-lpss 0000:00:15.1: saving config space at offset 0x20 (reading 0x0)
[    8.773607] intel-lpss 0000:00:15.1: saving config space at offset 0x24 (reading 0x0)
[    8.773649] intel-lpss 0000:00:15.1: saving config space at offset 0x28 (reading 0x0)
[    8.773685] intel-lpss 0000:00:15.1: saving config space at offset 0x2c (reading 0x229217aa)
[    8.773721] intel-lpss 0000:00:15.1: saving config space at offset 0x30 (reading 0x0)
[    8.773758] intel-lpss 0000:00:15.1: saving config space at offset 0x34 (reading 0x80)
[    8.773817] intel-lpss 0000:00:15.1: saving config space at offset 0x38 (reading 0x0)
[    8.773865] intel-lpss 0000:00:15.1: saving config space at offset 0x3c (reading 0x2ff)
[    8.791816] intel-lpss 0000:00:15.1: power state changed by ACPI to D3cold
[    8.821145] iwlwifi 0000:00:14.3 wlp0s20f3: renamed from wlan0
[    8.949253] intel_pmc_core intel_pmc_core.0:  initialized
[    8.977514] snd_soc_skl 0000:00:1f.3: runtime IRQ mapping not provided by arch
[    8.977517] snd_soc_skl 0000:00:1f.3: DSP detected with PCI class/subclass/prog-if info 0x040380
[    8.982579] snd_soc_skl 0000:00:1f.3: Digital mics found on Skylake+ platform, using SOF driver
[    9.135391] sof-audio-pci 0000:00:1f.3: runtime IRQ mapping not provided by arch
[    9.135398] sof-audio-pci 0000:00:1f.3: DSP detected with PCI class/subclass/prog-if info 0x040380
[    9.135500] sof-audio-pci 0000:00:1f.3: Digital mics found on Skylake+ platform, using SOF driver
[    9.135604] sof-audio-pci 0000:00:1f.3: DSP detected with PCI class/subclass/prog-if 0x040380
[    9.275518] sof-audio-pci 0000:00:1f.3: couldn't bind with audio component
[    9.275527] sof-audio-pci 0000:00:1f.3: error: init i915 and HDMI codec failed
[    9.288478] sof-audio-pci 0000:00:1f.3: error: failed to probe DSP -19
[    9.299801] sof-audio-pci 0000:00:1f.3: error: sof_probe_work failed err: -19
[    9.884531] typec_displayport port1-partner.0: failed to enter mode
[   10.270992] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[   10.411833] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
[   10.411834] Bluetooth: BNEP filters: protocol multicast
[   10.411837] Bluetooth: BNEP socket layer initialized
[   10.439430] Bluetooth: hci0: Waiting for firmware download to complete
[   10.440289] Bluetooth: hci0: Firmware loaded in 2697292 usecs
[   10.440369] Bluetooth: hci0: Waiting for device to boot
[   10.454326] Bluetooth: hci0: Device booted in 13649 usecs
[   10.454400] Bluetooth: hci0: Found Intel DDC parameters: intel/ibt-17-16-1.ddc
[   10.456326] Bluetooth: hci0: Applying Intel DDC parameters completed
[   10.458439] Bluetooth: hci0: Firmware revision 0.1 build 12 week 13 2020
[   10.525997] NET: Registered protocol family 38
[   11.185542] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
[   11.213164] tun: Universal TUN/TAP device driver, 1.6
[   11.566594] virbr0: port 1(virbr0-nic) entered blocking state
[   11.566599] virbr0: port 1(virbr0-nic) entered disabled state
[   11.566741] device virbr0-nic entered promiscuous mode
[   11.581105] iwlwifi 0000:00:14.3: Applying debug destination EXTERNAL_DRAM
[   11.583295] bpfilter: Loaded bpfilter_umh pid 1140
[   11.583494] Started bpfilter
[   11.695298] iwlwifi 0000:00:14.3: Applying debug destination EXTERNAL_DRAM
[   11.760765] iwlwifi 0000:00:14.3: FW already configured (0) - re-configuring
[   11.967372] virbr0: port 1(virbr0-nic) entered blocking state
[   11.967374] virbr0: port 1(virbr0-nic) entered listening state
[   11.985924] virbr0: port 1(virbr0-nic) entered disabled state
[   12.093348] pcieport 0000:06:02.0: restoring config space at offset 0x2c (was 0x0, writing 0x0)
[   12.093352] pcieport 0000:06:02.0: restoring config space at offset 0x28 (was 0x0, writing 0x0)
[   12.093354] pcieport 0000:06:02.0: restoring config space at offset 0x24 (was 0x1fff1, writing 0x1fff1)
[   12.093405] pcieport 0000:06:02.0: PME# disabled
[   12.218162] pcieport 0000:06:02.0: saving config space at offset 0x0 (reading 0x15d38086)
[   12.218168] pcieport 0000:06:02.0: saving config space at offset 0x4 (reading 0x100407)
[   12.218173] pcieport 0000:06:02.0: saving config space at offset 0x8 (reading 0x6040002)
[   12.218177] pcieport 0000:06:02.0: saving config space at offset 0xc (reading 0x10020)
[   12.218181] pcieport 0000:06:02.0: saving config space at offset 0x10 (reading 0x0)
[   12.218185] pcieport 0000:06:02.0: saving config space at offset 0x14 (reading 0x0)
[   12.218189] pcieport 0000:06:02.0: saving config space at offset 0x18 (reading 0x2d2d06)
[   12.218193] pcieport 0000:06:02.0: saving config space at offset 0x1c (reading 0x1f1)
[   12.218197] pcieport 0000:06:02.0: saving config space at offset 0x20 (reading 0xdc00dc00)
[   12.218201] pcieport 0000:06:02.0: saving config space at offset 0x24 (reading 0x1fff1)
[   12.218205] pcieport 0000:06:02.0: saving config space at offset 0x28 (reading 0x0)
[   12.218209] pcieport 0000:06:02.0: saving config space at offset 0x2c (reading 0x0)
[   12.218212] pcieport 0000:06:02.0: saving config space at offset 0x30 (reading 0x0)
[   12.218216] pcieport 0000:06:02.0: saving config space at offset 0x34 (reading 0x80)
[   12.218220] pcieport 0000:06:02.0: saving config space at offset 0x38 (reading 0x0)
[   12.218224] pcieport 0000:06:02.0: saving config space at offset 0x3c (reading 0x201ff)
[   12.218294] pcieport 0000:06:02.0: PME# enabled
[   13.440662] xhci_hcd 0000:00:14.0: saving config space at offset 0x0 (reading 0x9ded8086)
[   13.440674] xhci_hcd 0000:00:14.0: saving config space at offset 0x4 (reading 0x2900402)
[   13.440678] xhci_hcd 0000:00:14.0: saving config space at offset 0x8 (reading 0xc033011)
[   13.440681] xhci_hcd 0000:00:14.0: saving config space at offset 0xc (reading 0x800000)
[   13.440685] xhci_hcd 0000:00:14.0: saving config space at offset 0x10 (reading 0xea220004)
[   13.440689] xhci_hcd 0000:00:14.0: saving config space at offset 0x14 (reading 0x0)
[   13.440693] xhci_hcd 0000:00:14.0: saving config space at offset 0x18 (reading 0x0)
[   13.440696] xhci_hcd 0000:00:14.0: saving config space at offset 0x1c (reading 0x0)
[   13.440700] xhci_hcd 0000:00:14.0: saving config space at offset 0x20 (reading 0x0)
[   13.440703] xhci_hcd 0000:00:14.0: saving config space at offset 0x24 (reading 0x0)
[   13.440707] xhci_hcd 0000:00:14.0: saving config space at offset 0x28 (reading 0x0)
[   13.440711] xhci_hcd 0000:00:14.0: saving config space at offset 0x2c (reading 0x229217aa)
[   13.440714] xhci_hcd 0000:00:14.0: saving config space at offset 0x30 (reading 0x0)
[   13.440718] xhci_hcd 0000:00:14.0: saving config space at offset 0x34 (reading 0x70)
[   13.440721] xhci_hcd 0000:00:14.0: saving config space at offset 0x38 (reading 0x0)
[   13.440725] xhci_hcd 0000:00:14.0: saving config space at offset 0x3c (reading 0x1ff)
[   13.440831] xhci_hcd 0000:00:14.0: PME# enabled
[   13.452235] xhci_hcd 0000:00:14.0: power state changed by ACPI to D3hot
[   14.012707] ax88179_178a 8-1:1.0 enx98bb1e1c0b89: ax88179 - Link status is: 1
[   14.032309] IPv6: ADDRCONF(NETDEV_CHANGE): enx98bb1e1c0b89: link becomes ready
[   14.765885] wlp0s20f3: authenticate with c4:41:1e:3f:f0:60
[   14.776348] wlp0s20f3: send auth to c4:41:1e:3f:f0:60 (try 1/3)
[   14.816625] wlp0s20f3: authenticated
[   14.817068] wlp0s20f3: associate with c4:41:1e:3f:f0:60 (try 1/3)
[   14.819270] wlp0s20f3: RX AssocResp from c4:41:1e:3f:f0:60 (capab=0x1511 status=0 aid=3)
[   14.822038] wlp0s20f3: associated
[   14.890845] IPv6: ADDRCONF(NETDEV_CHANGE): wlp0s20f3: link becomes ready
[   14.895836] wlp0s20f3: Limiting TX power to 30 (30 - 0) dBm as advertised by c4:41:1e:3f:f0:60
[   17.662760] Bridge firewalling registered
[   17.703885] Initializing XFRM netlink socket
[   17.881944] xhci_hcd 0000:00:14.0: power state changed by ACPI to D0
[   17.893209] xhci_hcd 0000:00:14.0: PME# disabled
[   17.893214] xhci_hcd 0000:00:14.0: enabling bus mastering
[   18.246766] Bluetooth: RFCOMM TTY layer initialized
[   18.246771] Bluetooth: RFCOMM socket layer initialized
[   18.246774] Bluetooth: RFCOMM ver 1.11
[   18.580328] snd_hda_intel 0000:0c:00.1: PME# disabled
[   18.850804] process 'docker/tmp/qemu-check252012808/check' started with executable stack
[   19.568447] rfkill: input handler disabled
[   19.933554] usb 1-9: reset full-speed USB device number 3 using xhci_hcd
[   22.585678] xhci_hcd 0000:00:14.0: saving config space at offset 0x0 (reading 0x9ded8086)
[   22.585688] xhci_hcd 0000:00:14.0: saving config space at offset 0x4 (reading 0x2900402)
[   22.585692] xhci_hcd 0000:00:14.0: saving config space at offset 0x8 (reading 0xc033011)
[   22.585696] xhci_hcd 0000:00:14.0: saving config space at offset 0xc (reading 0x800000)
[   22.585700] xhci_hcd 0000:00:14.0: saving config space at offset 0x10 (reading 0xea220004)
[   22.585703] xhci_hcd 0000:00:14.0: saving config space at offset 0x14 (reading 0x0)
[   22.585707] xhci_hcd 0000:00:14.0: saving config space at offset 0x18 (reading 0x0)
[   22.585711] xhci_hcd 0000:00:14.0: saving config space at offset 0x1c (reading 0x0)
[   22.585714] xhci_hcd 0000:00:14.0: saving config space at offset 0x20 (reading 0x0)
[   22.585718] xhci_hcd 0000:00:14.0: saving config space at offset 0x24 (reading 0x0)
[   22.585721] xhci_hcd 0000:00:14.0: saving config space at offset 0x28 (reading 0x0)
[   22.585725] xhci_hcd 0000:00:14.0: saving config space at offset 0x2c (reading 0x229217aa)
[   22.585728] xhci_hcd 0000:00:14.0: saving config space at offset 0x30 (reading 0x0)
[   22.585732] xhci_hcd 0000:00:14.0: saving config space at offset 0x34 (reading 0x70)
[   22.585735] xhci_hcd 0000:00:14.0: saving config space at offset 0x38 (reading 0x0)
[   22.585739] xhci_hcd 0000:00:14.0: saving config space at offset 0x3c (reading 0x1ff)
[   22.585844] xhci_hcd 0000:00:14.0: PME# enabled
[   22.597252] xhci_hcd 0000:00:14.0: power state changed by ACPI to D3hot
[   24.980395] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x0 (reading 0xab381002)
[   24.980405] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x4 (reading 0x100406)
[   24.980412] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x8 (reading 0x4030000)
[   24.980419] snd_hda_intel 0000:0c:00.1: saving config space at offset 0xc (reading 0x800020)
[   24.980426] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x10 (reading 0xd0180000)
[   24.980433] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x14 (reading 0x0)
[   24.980440] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x18 (reading 0x0)
[   24.980446] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x1c (reading 0x0)
[   24.980453] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x20 (reading 0x0)
[   24.980460] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x24 (reading 0x0)
[   24.980466] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x28 (reading 0x0)
[   24.980473] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x2c (reading 0xab381002)
[   24.980480] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x30 (reading 0x0)
[   24.980487] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x34 (reading 0x48)
[   24.980494] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x38 (reading 0x0)
[   24.980501] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x3c (reading 0x2ff)
[   24.980623] snd_hda_intel 0000:0c:00.1: PME# enabled
[   28.230289] xhci_hcd 0000:00:14.0: power state changed by ACPI to D0
[   28.242214] xhci_hcd 0000:00:14.0: PME# disabled
[   28.242223] xhci_hcd 0000:00:14.0: enabling bus mastering
[   29.542539] thunderbolt 0000:07:00.0: saving config space at offset 0x0 (reading 0x15d28086)
[   29.542542] thunderbolt 0000:07:00.0: saving config space at offset 0x4 (reading 0x100406)
[   29.542545] thunderbolt 0000:07:00.0: saving config space at offset 0x8 (reading 0x8800002)
[   29.542547] thunderbolt 0000:07:00.0: saving config space at offset 0xc (reading 0x20)
[   29.542550] thunderbolt 0000:07:00.0: saving config space at offset 0x10 (reading 0xe8100000)
[   29.542552] thunderbolt 0000:07:00.0: saving config space at offset 0x14 (reading 0xe8140000)
[   29.542554] thunderbolt 0000:07:00.0: saving config space at offset 0x18 (reading 0x0)
[   29.542557] thunderbolt 0000:07:00.0: saving config space at offset 0x1c (reading 0x0)
[   29.542559] thunderbolt 0000:07:00.0: saving config space at offset 0x20 (reading 0x0)
[   29.542561] thunderbolt 0000:07:00.0: saving config space at offset 0x24 (reading 0x0)
[   29.542563] thunderbolt 0000:07:00.0: saving config space at offset 0x28 (reading 0x0)
[   29.542566] thunderbolt 0000:07:00.0: saving config space at offset 0x2c (reading 0x229217aa)
[   29.542568] thunderbolt 0000:07:00.0: saving config space at offset 0x30 (reading 0x0)
[   29.542570] thunderbolt 0000:07:00.0: saving config space at offset 0x34 (reading 0x80)
[   29.542572] thunderbolt 0000:07:00.0: saving config space at offset 0x38 (reading 0x0)
[   29.542575] thunderbolt 0000:07:00.0: saving config space at offset 0x3c (reading 0x1ff)
[   29.542639] thunderbolt 0000:07:00.0: PME# enabled
[   29.554202] pcieport 0000:06:00.0: saving config space at offset 0x0 (reading 0x15d38086)
[   29.554205] pcieport 0000:06:00.0: saving config space at offset 0x4 (reading 0x100407)
[   29.554207] pcieport 0000:06:00.0: saving config space at offset 0x8 (reading 0x6040002)
[   29.554209] pcieport 0000:06:00.0: saving config space at offset 0xc (reading 0x10020)
[   29.554211] pcieport 0000:06:00.0: saving config space at offset 0x10 (reading 0x0)
[   29.554213] pcieport 0000:06:00.0: saving config space at offset 0x14 (reading 0x0)
[   29.554215] pcieport 0000:06:00.0: saving config space at offset 0x18 (reading 0x70706)
[   29.554217] pcieport 0000:06:00.0: saving config space at offset 0x1c (reading 0x1f1)
[   29.554219] pcieport 0000:06:00.0: saving config space at offset 0x20 (reading 0xe810e810)
[   29.554222] pcieport 0000:06:00.0: saving config space at offset 0x24 (reading 0x1fff1)
[   29.554223] pcieport 0000:06:00.0: saving config space at offset 0x28 (reading 0x0)
[   29.554225] pcieport 0000:06:00.0: saving config space at offset 0x2c (reading 0x0)
[   29.554227] pcieport 0000:06:00.0: saving config space at offset 0x30 (reading 0x0)
[   29.554229] pcieport 0000:06:00.0: saving config space at offset 0x34 (reading 0x80)
[   29.554231] pcieport 0000:06:00.0: saving config space at offset 0x38 (reading 0x0)
[   29.554233] pcieport 0000:06:00.0: saving config space at offset 0x3c (reading 0x201ff)
[   29.554289] pcieport 0000:06:00.0: PME# enabled
[   30.862542] xhci_hcd 0000:00:14.0: saving config space at offset 0x0 (reading 0x9ded8086)
[   30.862554] xhci_hcd 0000:00:14.0: saving config space at offset 0x4 (reading 0x2900402)
[   30.862558] xhci_hcd 0000:00:14.0: saving config space at offset 0x8 (reading 0xc033011)
[   30.862563] xhci_hcd 0000:00:14.0: saving config space at offset 0xc (reading 0x800000)
[   30.862567] xhci_hcd 0000:00:14.0: saving config space at offset 0x10 (reading 0xea220004)
[   30.862571] xhci_hcd 0000:00:14.0: saving config space at offset 0x14 (reading 0x0)
[   30.862581] xhci_hcd 0000:00:14.0: saving config space at offset 0x18 (reading 0x0)
[   30.862592] xhci_hcd 0000:00:14.0: saving config space at offset 0x1c (reading 0x0)
[   30.862596] xhci_hcd 0000:00:14.0: saving config space at offset 0x20 (reading 0x0)
[   30.862600] xhci_hcd 0000:00:14.0: saving config space at offset 0x24 (reading 0x0)
[   30.862604] xhci_hcd 0000:00:14.0: saving config space at offset 0x28 (reading 0x0)
[   30.862608] xhci_hcd 0000:00:14.0: saving config space at offset 0x2c (reading 0x229217aa)
[   30.862612] xhci_hcd 0000:00:14.0: saving config space at offset 0x30 (reading 0x0)
[   30.862616] xhci_hcd 0000:00:14.0: saving config space at offset 0x34 (reading 0x70)
[   30.862620] xhci_hcd 0000:00:14.0: saving config space at offset 0x38 (reading 0x0)
[   30.862624] xhci_hcd 0000:00:14.0: saving config space at offset 0x3c (reading 0x1ff)
[   30.862741] xhci_hcd 0000:00:14.0: PME# enabled
[   30.874228] xhci_hcd 0000:00:14.0: power state changed by ACPI to D3hot
[   32.058257] intel-lpss 0000:00:15.1: power state changed by ACPI to D0
[   32.058348] intel-lpss 0000:00:15.1: restoring config space at offset 0x10 (was 0x4, writing 0xea246004)
[   35.314110] rfkill: input handler enabled
[   38.167331] intel-lpss 0000:00:15.1: saving config space at offset 0x0 (reading 0x9de98086)
[   38.167341] intel-lpss 0000:00:15.1: saving config space at offset 0x4 (reading 0x100006)
[   38.167346] intel-lpss 0000:00:15.1: saving config space at offset 0x8 (reading 0xc800011)
[   38.167351] intel-lpss 0000:00:15.1: saving config space at offset 0xc (reading 0x800020)
[   38.167357] intel-lpss 0000:00:15.1: saving config space at offset 0x10 (reading 0xea246004)
[   38.167362] intel-lpss 0000:00:15.1: saving config space at offset 0x14 (reading 0x0)
[   38.167367] intel-lpss 0000:00:15.1: saving config space at offset 0x18 (reading 0x0)
[   38.167372] intel-lpss 0000:00:15.1: saving config space at offset 0x1c (reading 0x0)
[   38.167377] intel-lpss 0000:00:15.1: saving config space at offset 0x20 (reading 0x0)
[   38.167382] intel-lpss 0000:00:15.1: saving config space at offset 0x24 (reading 0x0)
[   38.167387] intel-lpss 0000:00:15.1: saving config space at offset 0x28 (reading 0x0)
[   38.167392] intel-lpss 0000:00:15.1: saving config space at offset 0x2c (reading 0x229217aa)
[   38.167397] intel-lpss 0000:00:15.1: saving config space at offset 0x30 (reading 0x0)
[   38.167403] intel-lpss 0000:00:15.1: saving config space at offset 0x34 (reading 0x80)
[   38.167407] intel-lpss 0000:00:15.1: saving config space at offset 0x38 (reading 0x0)
[   38.167412] intel-lpss 0000:00:15.1: saving config space at offset 0x3c (reading 0x2ff)
[   38.179728] intel-lpss 0000:00:15.1: power state changed by ACPI to D3cold
[   41.575096] intel-lpss 0000:00:15.1: power state changed by ACPI to D0
[   41.575191] intel-lpss 0000:00:15.1: restoring config space at offset 0x10 (was 0x4, writing 0xea246004)
[   42.873286] intel-lpss 0000:00:15.1: saving config space at offset 0x0 (reading 0x9de98086)
[   42.873295] intel-lpss 0000:00:15.1: saving config space at offset 0x4 (reading 0x100006)
[   42.873301] intel-lpss 0000:00:15.1: saving config space at offset 0x8 (reading 0xc800011)
[   42.873306] intel-lpss 0000:00:15.1: saving config space at offset 0xc (reading 0x800020)
[   42.873311] intel-lpss 0000:00:15.1: saving config space at offset 0x10 (reading 0xea246004)
[   42.873316] intel-lpss 0000:00:15.1: saving config space at offset 0x14 (reading 0x0)
[   42.873321] intel-lpss 0000:00:15.1: saving config space at offset 0x18 (reading 0x0)
[   42.873326] intel-lpss 0000:00:15.1: saving config space at offset 0x1c (reading 0x0)
[   42.873331] intel-lpss 0000:00:15.1: saving config space at offset 0x20 (reading 0x0)
[   42.873336] intel-lpss 0000:00:15.1: saving config space at offset 0x24 (reading 0x0)
[   42.873341] intel-lpss 0000:00:15.1: saving config space at offset 0x28 (reading 0x0)
[   42.873346] intel-lpss 0000:00:15.1: saving config space at offset 0x2c (reading 0x229217aa)
[   42.873351] intel-lpss 0000:00:15.1: saving config space at offset 0x30 (reading 0x0)
[   42.873356] intel-lpss 0000:00:15.1: saving config space at offset 0x34 (reading 0x80)
[   42.873361] intel-lpss 0000:00:15.1: saving config space at offset 0x38 (reading 0x0)
[   42.873367] intel-lpss 0000:00:15.1: saving config space at offset 0x3c (reading 0x2ff)
[   42.885985] intel-lpss 0000:00:15.1: power state changed by ACPI to D3cold
[   43.496665] intel-lpss 0000:00:15.1: power state changed by ACPI to D0
[   43.496729] intel-lpss 0000:00:15.1: restoring config space at offset 0x10 (was 0x4, writing 0xea246004)
[   44.784438] intel-lpss 0000:00:15.1: saving config space at offset 0x0 (reading 0x9de98086)
[   44.784447] intel-lpss 0000:00:15.1: saving config space at offset 0x4 (reading 0x100006)
[   44.784453] intel-lpss 0000:00:15.1: saving config space at offset 0x8 (reading 0xc800011)
[   44.784458] intel-lpss 0000:00:15.1: saving config space at offset 0xc (reading 0x800020)
[   44.784463] intel-lpss 0000:00:15.1: saving config space at offset 0x10 (reading 0xea246004)
[   44.784468] intel-lpss 0000:00:15.1: saving config space at offset 0x14 (reading 0x0)
[   44.784473] intel-lpss 0000:00:15.1: saving config space at offset 0x18 (reading 0x0)
[   44.784478] intel-lpss 0000:00:15.1: saving config space at offset 0x1c (reading 0x0)
[   44.784483] intel-lpss 0000:00:15.1: saving config space at offset 0x20 (reading 0x0)
[   44.784488] intel-lpss 0000:00:15.1: saving config space at offset 0x24 (reading 0x0)
[   44.784493] intel-lpss 0000:00:15.1: saving config space at offset 0x28 (reading 0x0)
[   44.784499] intel-lpss 0000:00:15.1: saving config space at offset 0x2c (reading 0x229217aa)
[   44.784503] intel-lpss 0000:00:15.1: saving config space at offset 0x30 (reading 0x0)
[   44.784509] intel-lpss 0000:00:15.1: saving config space at offset 0x34 (reading 0x80)
[   44.784514] intel-lpss 0000:00:15.1: saving config space at offset 0x38 (reading 0x0)
[   44.784519] intel-lpss 0000:00:15.1: saving config space at offset 0x3c (reading 0x2ff)
[   44.796780] intel-lpss 0000:00:15.1: power state changed by ACPI to D3cold
[   45.209522] xhci_hcd 0000:00:14.0: power state changed by ACPI to D0
[   45.221767] xhci_hcd 0000:00:14.0: PME# disabled
[   45.221772] xhci_hcd 0000:00:14.0: enabling bus mastering
[   45.420933] snd_hda_intel 0000:0c:00.1: PME# disabled
[   48.277516] xhci_hcd 0000:00:14.0: saving config space at offset 0x0 (reading 0x9ded8086)
[   48.277522] xhci_hcd 0000:00:14.0: saving config space at offset 0x4 (reading 0x2900402)
[   48.277525] xhci_hcd 0000:00:14.0: saving config space at offset 0x8 (reading 0xc033011)
[   48.277528] xhci_hcd 0000:00:14.0: saving config space at offset 0xc (reading 0x800000)
[   48.277530] xhci_hcd 0000:00:14.0: saving config space at offset 0x10 (reading 0xea220004)
[   48.277533] xhci_hcd 0000:00:14.0: saving config space at offset 0x14 (reading 0x0)
[   48.277535] xhci_hcd 0000:00:14.0: saving config space at offset 0x18 (reading 0x0)
[   48.277537] xhci_hcd 0000:00:14.0: saving config space at offset 0x1c (reading 0x0)
[   48.277540] xhci_hcd 0000:00:14.0: saving config space at offset 0x20 (reading 0x0)
[   48.277542] xhci_hcd 0000:00:14.0: saving config space at offset 0x24 (reading 0x0)
[   48.277544] xhci_hcd 0000:00:14.0: saving config space at offset 0x28 (reading 0x0)
[   48.277547] xhci_hcd 0000:00:14.0: saving config space at offset 0x2c (reading 0x229217aa)
[   48.277549] xhci_hcd 0000:00:14.0: saving config space at offset 0x30 (reading 0x0)
[   48.277551] xhci_hcd 0000:00:14.0: saving config space at offset 0x34 (reading 0x70)
[   48.277553] xhci_hcd 0000:00:14.0: saving config space at offset 0x38 (reading 0x0)
[   48.277556] xhci_hcd 0000:00:14.0: saving config space at offset 0x3c (reading 0x1ff)
[   48.277646] xhci_hcd 0000:00:14.0: PME# enabled
[   48.289519] xhci_hcd 0000:00:14.0: power state changed by ACPI to D3hot
[   51.966237] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x0 (reading 0xab381002)
[   51.966241] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x4 (reading 0x100406)
[   51.966245] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x8 (reading 0x4030000)
[   51.966249] snd_hda_intel 0000:0c:00.1: saving config space at offset 0xc (reading 0x800020)
[   51.966254] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x10 (reading 0xd0180000)
[   51.966258] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x14 (reading 0x0)
[   51.966262] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x18 (reading 0x0)
[   51.966265] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x1c (reading 0x0)
[   51.966269] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x20 (reading 0x0)
[   51.966273] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x24 (reading 0x0)
[   51.966277] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x28 (reading 0x0)
[   51.966281] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x2c (reading 0xab381002)
[   51.966285] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x30 (reading 0x0)
[   51.966289] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x34 (reading 0x48)
[   51.966293] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x38 (reading 0x0)
[   51.966297] snd_hda_intel 0000:0c:00.1: saving config space at offset 0x3c (reading 0x2ff)
[   51.966399] snd_hda_intel 0000:0c:00.1: PME# enabled
[   81.292605] intel-lpss 0000:00:15.1: power state changed by ACPI to D0
[   81.292698] intel-lpss 0000:00:15.1: restoring config space at offset 0x10 (was 0x4, writing 0xea246004)
[   83.116490] intel-lpss 0000:00:15.1: saving config space at offset 0x0 (reading 0x9de98086)
[   83.116499] intel-lpss 0000:00:15.1: saving config space at offset 0x4 (reading 0x100006)
[   83.116505] intel-lpss 0000:00:15.1: saving config space at offset 0x8 (reading 0xc800011)
[   83.116510] intel-lpss 0000:00:15.1: saving config space at offset 0xc (reading 0x800020)
[   83.116515] intel-lpss 0000:00:15.1: saving config space at offset 0x10 (reading 0xea246004)
[   83.116521] intel-lpss 0000:00:15.1: saving config space at offset 0x14 (reading 0x0)
[   83.116526] intel-lpss 0000:00:15.1: saving config space at offset 0x18 (reading 0x0)
[   83.116531] intel-lpss 0000:00:15.1: saving config space at offset 0x1c (reading 0x0)
[   83.116536] intel-lpss 0000:00:15.1: saving config space at offset 0x20 (reading 0x0)
[   83.116541] intel-lpss 0000:00:15.1: saving config space at offset 0x24 (reading 0x0)
[   83.116546] intel-lpss 0000:00:15.1: saving config space at offset 0x28 (reading 0x0)
[   83.116551] intel-lpss 0000:00:15.1: saving config space at offset 0x2c (reading 0x229217aa)
[   83.116556] intel-lpss 0000:00:15.1: saving config space at offset 0x30 (reading 0x0)
[   83.116561] intel-lpss 0000:00:15.1: saving config space at offset 0x34 (reading 0x80)
[   83.116566] intel-lpss 0000:00:15.1: saving config space at offset 0x38 (reading 0x0)
[   83.116571] intel-lpss 0000:00:15.1: saving config space at offset 0x3c (reading 0x2ff)
[   83.128574] intel-lpss 0000:00:15.1: power state changed by ACPI to D3cold
[   83.808240] intel-lpss 0000:00:15.1: power state changed by ACPI to D0
[   83.808310] intel-lpss 0000:00:15.1: restoring config space at offset 0x10 (was 0x4, writing 0xea246004)

[-- Attachment #3: lspci --]
[-- Type: application/octet-stream, Size: 2746 bytes --]

-[0000:00]-+-00.0  Intel Corporation Coffee Lake HOST and DRAM Controller
           +-02.0  Intel Corporation UHD Graphics 620 (Whiskey Lake)
           +-04.0  Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Thermal Subsystem
           +-08.0  Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model
           +-12.0  Intel Corporation Cannon Point-LP Thermal Controller
           +-14.0  Intel Corporation Cannon Point-LP USB 3.1 xHCI Controller
           +-14.2  Intel Corporation Cannon Point-LP Shared SRAM
           +-14.3  Intel Corporation Cannon Point-LP CNVi [Wireless-AC]
           +-15.0  Intel Corporation Cannon Point-LP Serial IO I2C Controller #0
           +-15.1  Intel Corporation Cannon Point-LP Serial IO I2C Controller #1
           +-16.0  Intel Corporation Cannon Point-LP MEI Controller #1
           +-16.3  Intel Corporation Cannon Point-LP Keyboard and Text (KT) Redirection
           +-1d.0-[03]----00.0  Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983
           +-1d.4-[05-52]----00.0-[06-52]--+-00.0-[07]----00.0  Intel Corporation JHL6540 Thunderbolt 3 NHI (C step) [Alpine Ridge 4C 2016]
           |                               +-01.0-[08-2c]----00.0-[09-2c]--+-01.0-[0a-0c]----00.0-[0b-0c]----00.0-[0c]--+-00.0  Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT]
           |                               |                               |                                            \-00.1  Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 HDMI Audio
           |                               |                               \-04.0-[0d-2c]----00.0-[0e-2c]--+-00.0-[0f]----00.0  ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
           |                               |                                                               +-01.0-[10]----00.0  ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
           |                               |                                                               \-02.0-[11]----00.0  ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
           |                               +-02.0-[2d]----00.0  Intel Corporation JHL6540 Thunderbolt 3 USB Controller (C step) [Alpine Ridge 4C 2016]
           |                               \-04.0-[2e-52]--
           +-1f.0  Intel Corporation Cannon Point-LP LPC Controller
           +-1f.3  Intel Corporation Cannon Point-LP High Definition Audio Controller
           +-1f.4  Intel Corporation Cannon Point-LP SMBus Controller
           +-1f.5  Intel Corporation Cannon Point-LP SPI Controller
           \-1f.6  Intel Corporation Ethernet Connection (6) I219-LM

[-- Attachment #4: Xorg.0.log --]
[-- Type: text/x-log, Size: 8158 bytes --]

[    17.845] (--) Log file renamed from "/home/karabijavad/.local/share/xorg/Xorg.pid-1697.log" to "/home/karabijavad/.local/share/xorg/Xorg.0.log"
[    17.846] 
X.Org X Server 1.20.8
X Protocol Version 11, Revision 0
[    17.846] Build Operating System: Linux 4.4.0-177-generic x86_64 Ubuntu
[    17.846] Current Operating System: Linux alpha 5.6.14-karabijavad #2 SMP Thu May 21 00:05:01 CDT 2020 x86_64
[    17.846] Kernel command line: root=/dev/mapper/beta-ubuntu ro rootflags=subvol=@ mitigations=off module_blacklist=i915 modprobe.blacklist=i915 nohz_full=1,2,3,4,5,6,7 rcu_nocb_poll initrd=\initrd.img-5.6.14-karabijavad
[    17.846] Build Date: 06 April 2020  09:39:29AM
[    17.846] xorg-server 2:1.20.8-2ubuntu2 (For technical support please see http://www.ubuntu.com/support) 
[    17.846] Current version of pixman: 0.38.4
[    17.846] 	Before reporting problems, check http://wiki.x.org
	to make sure that you have the latest version.
[    17.846] Markers: (--) probed, (**) from config file, (==) default setting,
	(++) from command line, (!!) notice, (II) informational,
	(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
[    17.846] (==) Log file: "/home/karabijavad/.local/share/xorg/Xorg.0.log", Time: Thu May 21 16:09:47 2020
[    17.848] (==) Using config directory: "/etc/X11/xorg.conf.d"
[    17.848] (==) Using system config directory "/usr/share/X11/xorg.conf.d"
[    17.848] (==) No Layout section.  Using the first Screen section.
[    17.848] (==) No screen section available. Using defaults.
[    17.848] (**) |-->Screen "Default Screen Section" (0)
[    17.848] (**) |   |-->Monitor "<default monitor>"
[    17.848] (==) No monitor specified for screen "Default Screen Section".
	Using a default monitor configuration.
[    17.849] (==) Automatically adding devices
[    17.849] (==) Automatically enabling devices
[    17.849] (==) Automatically adding GPU devices
[    17.849] (==) Automatically binding GPU devices
[    17.849] (==) Max clients allowed: 256, resource mask: 0x1fffff
[    17.851] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist.
[    17.851] 	Entry deleted from font path.
[    17.853] (==) FontPath set to:
	/usr/share/fonts/X11/misc,
	/usr/share/fonts/X11/100dpi/:unscaled,
	/usr/share/fonts/X11/75dpi/:unscaled,
	/usr/share/fonts/X11/Type1,
	/usr/share/fonts/X11/100dpi,
	/usr/share/fonts/X11/75dpi,
	built-ins
[    17.853] (==) ModulePath set to "/usr/lib/xorg/modules"
[    17.853] (II) The server relies on udev to provide the list of input devices.
	If no devices become available, reconfigure udev or disable AutoAddDevices.
[    17.853] (II) Loader magic: 0x564095d50020
[    17.853] (II) Module ABI versions:
[    17.853] 	X.Org ANSI C Emulation: 0.4
[    17.853] 	X.Org Video Driver: 24.1
[    17.853] 	X.Org XInput driver : 24.1
[    17.853] 	X.Org Server Extension : 10.0
[    17.854] (++) using VT number 2

[    17.855] (II) systemd-logind: took control of session /org/freedesktop/login1/session/_31
[    17.855] (II) xfree86: Adding drm device (/dev/dri/card0)
[    17.856] (II) systemd-logind: got fd for /dev/dri/card0 226:0 fd 12 paused 0
[    17.860] (--) PCI:*(0@0:2:0) 8086:3ea0:17aa:2292 rev 2, Mem @ 0xe9000000/16777216, 0xc0000000/268435456, I/O @ 0x00003000/64, BIOS @ 0x????????/131072
[    17.860] (--) PCI: (12@0:0:0) 1002:731f:1682:5710 rev 202, Mem @ 0x80000000/268435456, 0x90000000/2097152, 0xd0100000/524288, I/O @ 0x00002000/256, BIOS @ 0x????????/131072
[    17.860] (II) LoadModule: "glx"
[    17.861] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
[    17.867] (II) Module glx: vendor="X.Org Foundation"
[    17.867] 	compiled for 1.20.8, module version = 1.0.0
[    17.867] 	ABI class: X.Org Server Extension, version 10.0
[    17.867] (II) Applying OutputClass "AMDgpu" to /dev/dri/card0
[    17.867] 	loading driver: amdgpu
[    17.867] (==) Matched amdgpu as autoconfigured driver 0
[    17.867] (==) Matched ati as autoconfigured driver 1
[    17.867] (==) Matched modesetting as autoconfigured driver 2
[    17.867] (==) Matched fbdev as autoconfigured driver 3
[    17.867] (==) Matched vesa as autoconfigured driver 4
[    17.867] (==) Assigned the driver to the xf86ConfigLayout
[    17.867] (II) LoadModule: "amdgpu"
[    17.867] (II) Loading /usr/lib/xorg/modules/drivers/amdgpu_drv.so
[    17.869] (II) Module amdgpu: vendor="X.Org Foundation"
[    17.869] 	compiled for 1.20.5, module version = 19.1.0
[    17.869] 	Module class: X.Org Video Driver
[    17.869] 	ABI class: X.Org Video Driver, version 24.0
[    17.869] (II) LoadModule: "ati"
[    17.870] (WW) Warning, couldn't open module ati
[    17.870] (EE) Failed to load module "ati" (module does not exist, 0)
[    17.870] (II) LoadModule: "modesetting"
[    17.870] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so
[    17.870] (II) Module modesetting: vendor="X.Org Foundation"
[    17.870] 	compiled for 1.20.8, module version = 1.20.8
[    17.870] 	Module class: X.Org Video Driver
[    17.870] 	ABI class: X.Org Video Driver, version 24.1
[    17.870] (II) LoadModule: "fbdev"
[    17.870] (WW) Warning, couldn't open module fbdev
[    17.870] (EE) Failed to load module "fbdev" (module does not exist, 0)
[    17.870] (II) LoadModule: "vesa"
[    17.870] (WW) Warning, couldn't open module vesa
[    17.870] (EE) Failed to load module "vesa" (module does not exist, 0)
[    17.870] (II) Applying OutputClass "AMDgpu" to /dev/dri/card0
[    17.870] 	loading driver: amdgpu
[    17.870] (==) Matched amdgpu as autoconfigured driver 0
[    17.870] (==) Matched ati as autoconfigured driver 1
[    17.870] (==) Matched modesetting as autoconfigured driver 2
[    17.870] (==) Matched fbdev as autoconfigured driver 3
[    17.870] (==) Matched vesa as autoconfigured driver 4
[    17.870] (==) Assigned the driver to the xf86ConfigLayout
[    17.870] (II) LoadModule: "amdgpu"
[    17.870] (II) Loading /usr/lib/xorg/modules/drivers/amdgpu_drv.so
[    17.871] (II) Module amdgpu: vendor="X.Org Foundation"
[    17.871] 	compiled for 1.20.5, module version = 19.1.0
[    17.871] 	Module class: X.Org Video Driver
[    17.871] 	ABI class: X.Org Video Driver, version 24.0
[    17.871] (II) LoadModule: "ati"
[    17.871] (WW) Warning, couldn't open module ati
[    17.871] (EE) Failed to load module "ati" (module does not exist, 0)
[    17.871] (II) LoadModule: "modesetting"
[    17.871] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so
[    17.871] (II) Module modesetting: vendor="X.Org Foundation"
[    17.871] 	compiled for 1.20.8, module version = 1.20.8
[    17.871] 	Module class: X.Org Video Driver
[    17.871] 	ABI class: X.Org Video Driver, version 24.1
[    17.871] (II) UnloadModule: "modesetting"
[    17.871] (II) Unloading modesetting
[    17.871] (II) Failed to load module "modesetting" (already loaded, 0)
[    17.871] (II) LoadModule: "fbdev"
[    17.871] (WW) Warning, couldn't open module fbdev
[    17.871] (EE) Failed to load module "fbdev" (module does not exist, 0)
[    17.871] (II) LoadModule: "vesa"
[    17.871] (WW) Warning, couldn't open module vesa
[    17.871] (EE) Failed to load module "vesa" (module does not exist, 0)
[    17.871] (II) AMDGPU: Driver for AMD Radeon:
	All GPUs supported by the amdgpu kernel driver
[    17.871] (II) modesetting: Driver for Modesetting Kernel Drivers: kms
[    17.871] (WW) Falling back to old probe method for modesetting
[    17.871] (II) modeset(1): using default device
[    17.872] (WW) VGA arbiter: cannot open kernel arbiter, no multi-card support
[    17.872] (EE) Screen 0 deleted because of no matching config section.
[    17.872] (II) UnloadModule: "modesetting"
[    17.872] (EE) 
Fatal server error:
[    17.872] (EE) Cannot run in framebuffer mode. Please specify busIDs        for all framebuffer devices
[    17.872] (EE) 
[    17.872] (EE) 
Please consult the The X.Org Foundation support 
	 at http://wiki.x.org
 for help. 
[    17.872] (EE) Please also check the log file at "/home/karabijavad/.local/share/xorg/Xorg.0.log" for additional information.
[    17.872] (EE) 
[    17.935] (EE) Server terminated with error (1). Closing log file.

[-- Attachment #5: Type: text/plain, Size: 154 bytes --]

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: slow rx 5600 xt fps
  2020-05-21 21:21                         ` Javad Karabi
@ 2020-05-22 22:48                           ` Javad Karabi
  2020-05-23 10:17                             ` Michel Dänzer
  0 siblings, 1 reply; 24+ messages in thread
From: Javad Karabi @ 2020-05-22 22:48 UTC (permalink / raw)
  To: Alex Deucher; +Cc: amd-gfx list

so yea, looks like the compositing wasn't happening on the amdgpu,
which is why i would see ~300fps for glxgears etc.

also, the whole thing about "monitor updating once every 3 seconds"
when i close the lid is because mutter will go down to 1fps when it
detects that the lid is closed.
i set up the compositor to use the graphics card (by simply using a
custom xorg.conf with the Screen's Device section pointing at the amd
device) and now it runs perfectly. i'll write up a lil blog post or
something to explain it, and will link it in this thread if yall are
curious. but really it boils down to "yall are right"

so the fix for now is simply to use a single xorg.conf which
specifically says to use the device for the X screen
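
for anyone finding this later, here is a minimal sketch of the kind of
xorg.conf being described (the BusID matches the Navi 10 address from the
lspci/Xorg output earlier in the thread, bus 0x0c = 12 decimal; section
names are arbitrary, adjust the BusID for your own system):

```
Section "Device"
    Identifier "AMD"
    Driver     "amdgpu"
    BusID      "PCI:12:0:0"   # bus 0x0c from lspci, written in decimal
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "AMD"
EndSection
```

note that X expects decimal bus numbers in BusID while lspci prints hex,
which is a common source of the "no matching config section" error.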

thanks a lot for yalls help

On Thu, May 21, 2020 at 4:21 PM Javad Karabi <karabijavad@gmail.com> wrote:
>
> the files i attached are using the amdgpu ddx
>
> also, one thing to note: i just switched to modesetting but it seems
> it has the same issue.
> i got it working last night; i forgot exactly what i changed, but that was one
> of the things i changed. but here are the files for when i use the amdgpu
> ddx
>
> On Thu, May 21, 2020 at 2:15 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> >
> > Please provide your dmesg output and xorg log.
> >
> > Alex
> >
> > On Thu, May 21, 2020 at 3:03 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > >
> > > Alex,
> > > yea, you're totally right, i was overcomplicating it lol
> > > so i was able to get the radeon to run super fast, by doing as you
> > > suggested and blacklisting i915.
> > > (had to use module_blacklist= though, because modprobe.blacklist still
> > > lets i915 load if a dependency pulls it in)
> > > but with one caveat:
> > > using the amdgpu driver, there was some error telling me that i need
> > > to add a BusID to my device section or something.
> > > maybe amdgpu wasn't able to find the card, i don't
> > > remember. so i used modesetting instead and it seemed to work.
> > > i will try going back to amdgpu and seeing what that error message was.
> > > i recall you saying that modesetting doesnt have some features that
> > > amdgpu provides.
> > > what are some examples of that?
> > > is the direction that graphics drivers are going, to be simply used as
> > > "modesetting" via xorg?
> > >
> > > On Wed, May 20, 2020 at 10:12 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > >
> > > > I think you are overcomplicating things.  Just try and get X running
> > > > on just the AMD GPU on bare metal.  Introducing virtualization is just
> > > > adding more uncertainty.  If you can't configure X to not use the
> > > > integrated GPU, just blacklist the i915 driver (append
> > > > modprobe.blacklist=i915 to the kernel command line in grub) and X
> > > > should come up on the dGPU.
> > > >
> > > > Alex
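
a sketch of the kernel-command-line change being suggested, shown against a
sample file (on a real Debian/Ubuntu-style system you would edit
/etc/default/grub itself and re-run update-grub; as noted elsewhere in the
thread, module_blacklist= is stricter than modprobe.blacklist= because it
also blocks the module when another module pulls it in as a dependency):

```shell
# Demonstrated against a sample file; on a real system apply the same
# edit to /etc/default/grub, then run: sudo update-grub && reboot
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > /tmp/grub.test

# Prepend module_blacklist=i915 to the default kernel command line.
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&module_blacklist=i915 /' /tmp/grub.test

cat /tmp/grub.test
# -> GRUB_CMDLINE_LINUX_DEFAULT="module_blacklist=i915 quiet splash"
```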
> > > >
> > > > On Wed, May 20, 2020 at 6:05 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > >
> > > > > Thanks Alex,
> > > > > Here's my plan:
> > > > >
> > > > > since my laptop's os is pretty customized (compiling my own kernel, building the latest xorg, the latest xorg-driver-amdgpu, etc.),
> > > > > i'm going to use the intel iommu and pass my rx 5600 through to a virtual machine, which will be a 100% stock ubuntu installation.
> > > > > then, inside that vm, i will continue to debug
> > > > >
> > > > > does that sound like it would make sense for testing? that scenario does add the iommu into the mix, so who knows if that causes performance issues. but i think it's worth a shot, to see if a stock kernel will handle it better
> > > > >
> > > > > also, quick question:
> > > > > from what i understand, a thunderbolt 3 pci express connection should handle 8 GT/s x4; however, along the chain of bridges to my device, i notice that the bridge closest to the graphics card is at 2.5 GT/s x4, and lspci also reports it as "downgraded"
> > > > >
> > > > > now, when i boot into windows, it _also_ says 2.5 GT/s x4, and it runs extremely well. no issues at all.
> > > > >
> > > > > so my question is: the fact that the bridge is at 2.5 GT/s x4, and not at its theoretical "full link speed" of 8 GT/s x4, do you suppose that _could_ be an issue?
> > > > > i do not think so, because, like i said, windows also reports that link speed.
> > > > > i would assume you want the fastest link speed possible, since of _all_ tb3 pci express devices, a GPU would be the #1 most demanding on the link
> > > > >
> > > > > just curious if you think 2.5 GT/s could be the bottleneck
> > > > >
> > > > > i will pass through the device into a ubuntu vm and let you know how it goes. thanks
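
for reference, the negotiated link speed can also be read from sysfs
without parsing lspci; a sketch (paths assume a reasonably modern kernel
exposing these attributes, and the 0000:0c:00.0 address in this thread is
the Navi 10 card, so substitute your own device):

```shell
# List current vs. maximum PCIe link speed for every device that
# exposes the attributes; a downgraded link shows current < max.
for dev in /sys/bus/pci/devices/*; do
    [ -r "$dev/current_link_speed" ] && [ -r "$dev/max_link_speed" ] || continue
    printf '%s: current=%s, max=%s\n' "${dev##*/}" \
        "$(cat "$dev/current_link_speed")" "$(cat "$dev/max_link_speed")"
done
```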
> > > > >
> > > > >
> > > > >
> > > > > On Tue, May 19, 2020 at 9:29 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > > >>
> > > > >> On Tue, May 19, 2020 at 9:16 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > >> >
> > > > >> > thanks for the answers alex.
> > > > >> >
> > > > >> > so, i went ahead and got a displayport cable to see if that changes
> > > > >> > anything. and now, when i run monitor only, and the monitor connected
> > > > >> > to the card, it has no issues like before! so i am thinking that
> > > > >> > somethings up with either the hdmi cable, or some hdmi related setting
> > > > >> > in my system? who knows, but im just gonna roll with only using
> > > > >> > displayport cables now.
> > > > >> > the previous hdmi cable was actually pretty long, because i was
> > > > >> > extending it with an hdmi extension cable, so maybe the signal was
> > > > >> > really bad or something :/
> > > > >> >
> > > > >> > but yea, i guess the only real issue now is maybe something simple
> > > > >> > related to some sysfs entry about enabling some powermode, voltage,
> > > > >> > clock frequency, or something, so that glxgears will give me more than
> > > > >> > 300 fps. but at least now i can use a single monitor configuration with
> > > > >> > the monitor displayported up to the card.
> > > > >> >
> > > > >>
> > > > >> The GPU dynamically adjusts the clocks and voltages based on load.  No
> > > > >> manual configuration is required.
> > > > >>
> > > > >> At this point, we probably need to see you xorg log and dmesg output
> > > > >> to try and figure out exactly what is going on.  I still suspect there
> > > > >> is some interaction going on with both GPUs and the integrated GPU
> > > > >> being the primary, so as I mentioned before, you should try and run X
> > > > >> on just the amdgpu rather than trying to use both of them.
> > > > >>
> > > > >> Alex
> > > > >>
> > > > >>
> > > > >> > also, one other thing that was happening before which i think you
> > > > >> > might be interested in.
> > > > >> >
> > > > >> > so, previously, with laptop -tb3-> egpu-hdmi> monitor, there was a
> > > > >> > funny thing happening which i never could figure out.
> > > > >> > when i would look at the X logs, i would see that "modesetting" (for
> > > > >> > the intel integrated graphics) was reporting that MonitorA was used
> > > > >> > with "eDP-1",  which is correct and what i expected.
> > > > >> > when i scrolled further down, i then saw that "HDMI-A-1-2" was being
> > > > >> > used for another MonitorB, which also is what i expected (albeit i
> > > > >> > have no idea why it's saying A-1-2)
> > > > >> > but amdgpu was _also_ saying that DisplayPort-1-2 (a port on the
> > > > >> > radeon card) was being used for MonitorA, which is the same Monitor
> > > > >> > that the modesetting driver had claimed to be using with eDP-1!
> > > > >> >
> > > > >> > so the point is that amdgpu was "using" Monitor0 with DisplayPort-1-2,
> > > > >> > although that is what modesetting was using for eDP-1.
> > > > >> >
> > > > >> > anyway, that's a little aside, and i doubt it was related to the terrible
> > > > >> > hdmi experience i was getting, since it's about display port and stuff,
> > > > >> > but i thought id let you know about that.
> > > > >> >
> > > > >> > if you think that is a possible issue, im more than happy to plug the
> > > > >> > hdmi setup back in and create an issue on gitlab with the logs and
> > > > >> > everything
> > > > >> >
> > > > >> > On Tue, May 19, 2020 at 4:42 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > > >> > >
> > > > >> > > On Tue, May 19, 2020 at 5:22 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > >> > > >
> > > > >> > > > lol youre quick!
> > > > >> > > >
> > > > >> > > > "Windows has supported peer to peer DMA for years so it already has a
> > > > >> > > > number of optimizations that are only now becoming possible on Linux"
> > > > >> > > >
> > > > >> > > > whoa, i figured linux would be ahead of windows when it comes to
> > > > >> > > > things like that. so peer-to-peer dma has only recently become
> > > > >> > > > possible on linux, but has long been possible on windows? what
> > > > >> > > > changed recently that allows for peer to peer dma in linux?
> > > > >> > > >
> > > > >> > >
> > > > >> > > A few things that made this more complicated on Linux:
> > > > >> > > 1. Linux uses IOMMUs more extensively than windows so you can't just
> > > > >> > > pass around physical bus addresses.
> > > > >> > > 2. Linux supports lots of strange architectures that have a lot of
> > > > >> > > limitations with respect to peer to peer transactions
> > > > >> > >
> > > > >> > > It just took years to get all the necessary bits in place in Linux and
> > > > >> > > make everyone happy.
> > > > >> > >
> > > > >> > > > also, in the context of a game running opengl on some gpu, is the
> > > > >> > > > "peer-to-peer" dma transfer something like: the game draws to some
> > > > >> > > > memory it has allocated, then a DMA transfer gets that and moves it
> > > > >> > > > into the graphics card output?
> > > > >> > >
> > > > >> > > Peer to peer DMA just lets devices access another devices local memory
> > > > >> > > directly.  So if you have a buffer in vram on one device, you can
> > > > >> > > share that directly with another device rather than having to copy it
> > > > >> > > to system memory first.  For example, if you have two GPUs, you can
> > > > >> > > have one of them copy its contents directly to a buffer in the other
> > > > >> > > GPU's vram rather than having to go through system memory first.
> > > > >> > >
> > > > >> > > >
> > > > >> > > > also, i know it can be super annoying trying to debug an issue like
> > > > >> > > > this, with someone like me who has all types of differences from a
> > > > >> > > > normal setup (e.g. using it via egpu, using a kernel with custom
> > > > >> > > > configs and stuff) so as a token of my appreciation i donated $50 to
> > > > >> > > > the red cross' corona virus outbreak charity thing, on behalf of
> > > > >> > > > amd-gfx.
> > > > >> > >
> > > > >> > > Thanks,
> > > > >> > >
> > > > >> > > Alex
> > > > >> > >
> > > > >> > > >
> > > > >> > > > On Tue, May 19, 2020 at 4:13 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > > >> > > > >
> > > > >> > > > > On Tue, May 19, 2020 at 3:44 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > >> > > > > >
> > > > >> > > > > > just a couple more questions:
> > > > >> > > > > >
> > > > >> > > > > > - based on what you are aware of, the technical details such as
> > > > >> > > > > > "shared buffers go through system memory", and all that, do you see
> > > > >> > > > > > any issues that might exist that i might be missing in my setup? i
> > > > >> > > > > > cant imagine this being the case because the card works great in
> > > > >> > > > > > windows, unless the windows driver does something different?
> > > > >> > > > > >
> > > > >> > > > >
> > > > >> > > > > Windows has supported peer to peer DMA for years so it already has a
> > > > >> > > > > numbers of optimizations that are only now becoming possible on Linux.
> > > > >> > > > >
> > > > >> > > > > > - as far as kernel config, is there anything in particular which
> > > > >> > > > > > _should_ or _should not_ be enabled/disabled?
> > > > >> > > > >
> > > > >> > > > > You'll need the GPU drivers for your devices and dma-buf support.
> > > > >> > > > >
> > > > >> > > > > >
> > > > >> > > > > > - does the vendor matter? for instance, this is an xfx card. when it
> > > > >> > > > > > comes to different vendors, are there interface changes that might
> > > > >> > > > > > make one vendor work better for linux than another? i dont really
> > > > >> > > > > > understand the differences in vendors, but i imagine that the vbios
> > > > >> > > > > > differs between vendors, and as such, the linux compatibility would
> > > > >> > > > > > maybe change?
> > > > >> > > > >
> > > > >> > > > > board vendor shouldn't matter.
> > > > >> > > > >
> > > > >> > > > > >
> > > > >> > > > > > - is the pcie bandwidth possible an issue? the pcie_bw file changes
> > > > >> > > > > > between values like this:
> > > > >> > > > > > 18446683600662707640 18446744071581623085 128
> > > > >> > > > > > and sometimes i see this:
> > > > >> > > > > > 4096 0 128
> > > > >> > > > > > as you can see, the second value seems significantly lower. is that
> > > > >> > > > > > possibly an issue? possibly due to aspm?
> > > > >> > > > >
> > > > >> > > > > pcie_bw is not implemented for navi yet so you are just seeing
> > > > >> > > > > uninitialized data.  This patch set should clear that up.
> > > > >> > > > > https://patchwork.freedesktop.org/patch/366262/
> > > > >> > > > >
> > > > >> > > > > Alex
> > > > >> > > > >
> > > > >> > > > > >
> > > > >> > > > > > On Tue, May 19, 2020 at 2:20 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > >> > > > > > >
> > > > >> > > > > > > im using Driver "amdgpu" in my xorg conf
> > > > >> > > > > > >
> > > > >> > > > > > > how does one verify which gpu is the primary? im assuming my intel
> > > > >> > > > > > > card is the primary, since i have not done anything to change that.
> > > > >> > > > > > >
> > > > >> > > > > > > also, if all shared buffers have to go through system memory, then
> > > > >> > > > > > > that means an eGPU amdgpu wont work very well in general right?
> > > > >> > > > > > > because going through system memory for the egpu means going over the
> > > > >> > > > > > > thunderbolt connection
> > > > >> > > > > > >
> > > > >> > > > > > > and what are the shared buffers youre referring to? for example, if an
> > > > >> > > > > > > application is drawing to a buffer, is that an example of a shared
> > > > >> > > > > > > buffer that has to go through system memory? if so, thats fine, right?
> > > > >> > > > > > > because the application's memory is in system memory, so that copy
> > > > >> > > > > > > wouldnt be an issue.
> > > > >> > > > > > >
> > > > >> > > > > > > in general, do you think the "copy buffer across system memory might
> > > > >> > > > > > > be a hindrance for thunderbolt? im trying to figure out which
> > > > >> > > > > > > directions to go to debug and im totally lost, so maybe i can do some
> > > > >> > > > > > > testing that direction?
> > > > >> > > > > > >
> > > > >> > > > > > > and for what its worth, when i turn the display "off" via the gnome
> > > > >> > > > > > > display settings, its the same issue as when the laptop lid is closed,
> > > > >> > > > > > > so unless the motherboard reads the "closed lid" the same as "display
> > > > >> > > > > > > off", then im not sure if its thermal issues.
> > > > >> > > > > > >
> > > > >> > > > > > > On Tue, May 19, 2020 at 2:14 PM Alex Deucher <alexdeucher@gmail.com> wrote:
> > > > >> > > > > > > >
> > > > >> > > > > > > > On Tue, May 19, 2020 at 2:59 PM Javad Karabi <karabijavad@gmail.com> wrote:
> > > > >> > > > > > > > >
> > > > >> > > > > > > > > given this setup:
> > > > >> > > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
> > > > >> > > > > > > > > DRI_PRIME=1 glxgears gears gives me ~300fps
> > > > >> > > > > > > > >
> > > > >> > > > > > > > > given this setup:
> > > > >> > > > > > > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> > > > >> > > > > > > > > laptop -hdmi-> monitor
> > > > >> > > > > > > > >
> > > > >> > > > > > > > > glx gears gives me ~1800fps
> > > > >> > > > > > > > >
> > > > >> > > > > > > > > this doesnt make sense to me because i thought that having the monitor
> > > > >> > > > > > > > > plugged directly into the card should give best performance.
> > > > >> > > > > > > > >
> > > > >> > > > > > > >
> > > > >> > > > > > > > Do you have displays connected to both GPUs?  If you are using X which
> > > > >> > > > > > > > ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
> > > > >> > > > > > > > IIRC, xf86-video-amdgpu has some optimizations for prime which are not
> > > > >> > > > > > > > yet in xf86-video-modesetting.  Which GPU is set up as the primary?
> > > > >> > > > > > > > Note that the GPU which does the rendering is not necessarily the one
> > > > >> > > > > > > > that the displays are attached to.  The render GPU renders to it's
> > > > >> > > > > > > > render buffer and then that data may end up being copied other GPUs
> > > > >> > > > > > > > for display.  Also, at this point, all shared buffers have to go
> > > > >> > > > > > > > through system memory (this will be changing eventually now that we
> > > > >> > > > > > > > support device memory via dma-buf), so there is often an extra copy
> > > > >> > > > > > > > involved.
> > > > >> > > > > > > >
> > > > >> > > > > > > > > theres another really weird issue...
> > > > >> > > > > > > > >
> > > > >> > > > > > > > > given setup 1, where the monitor is plugged in to the card:
> > > > >> > > > > > > > > when i close the laptop lid, my monitor is "active" and whatnot, and i
> > > > >> > > > > > > > > can "use it" in a sense
> > > > >> > > > > > > > >
> > > > >> > > > > > > > > however, heres the weirdness:
> > > > >> > > > > > > > > the mouse cursor will move along the monitor perfectly smooth and
> > > > >> > > > > > > > > fine, but all the other updates to the screen are delayed by about 2
> > > > >> > > > > > > > > or 3 seconds.
> > > > >> > > > > > > > > that is to say, its as if the laptop is doing everything (e.g. if i
> > > > >> > > > > > > > > open a terminal, the terminal will open, but it will take 2 seconds
> > > > >> > > > > > > > > for me to see it)
> > > > >> > > > > > > > >
> > > > >> > > > > > > > > its almost as if all the frames and everything are being drawn, and
> > > > >> > > > > > > > > the laptop is running fine and everything, but i simply just dont get
> > > > >> > > > > > > > > to see it on the monitor, except for one time every 2 seconds.
> > > > >> > > > > > > > >
> > > > >> > > > > > > > > its hard to articulate, because its so bizarre. its not like, a "low
> > > > >> > > > > > > > > fps" per se, because the cursor is totally smooth. but its that
> > > > >> > > > > > > > > _everything else_ is only updated once every couple seconds.
> > > > >> > > > > > > >
> > > > >> > > > > > > > This might also be related to which GPU is the primary.  It still may
> > > > >> > > > > > > > be the integrated GPU since that is what is attached to the laptop
> > > > >> > > > > > > > panel.  Also the platform and some drivers may do certain things when
> > > > >> > > > > > > > the lid is closed.  E.g., for thermal reasons, the integrated GPU or
> > > > >> > > > > > > > CPU may have a more limited TDP because the laptop cannot cool as
> > > > >> > > > > > > > efficiently.
> > > > >> > > > > > > >
> > > > >> > > > > > > > Alex
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: slow rx 5600 xt fps
  2020-05-22 22:48                           ` Javad Karabi
@ 2020-05-23 10:17                             ` Michel Dänzer
  2020-05-25  1:03                               ` Javad Karabi
  0 siblings, 1 reply; 24+ messages in thread
From: Michel Dänzer @ 2020-05-23 10:17 UTC (permalink / raw)
  To: Javad Karabi, Alex Deucher; +Cc: amd-gfx list

On 2020-05-23 12:48 a.m., Javad Karabi wrote:
> 
> also, the whole thing about "monitor updating once every 3 seconds"
> when i close the lid is because mutter will go down to 1fps when it
> detects that the lid is closed.

Xorg's Present extension code ends up doing that (because it has no
support for secondary GPUs), not mutter.


-- 
Earthling Michel Dänzer               |               https://redhat.com
Libre software enthusiast             |             Mesa and X developer

* Re: slow rx 5600 xt fps
  2020-05-23 10:17                             ` Michel Dänzer
@ 2020-05-25  1:03                               ` Javad Karabi
  2020-05-25  1:29                                 ` Javad Karabi
  0 siblings, 1 reply; 24+ messages in thread
From: Javad Karabi @ 2020-05-25  1:03 UTC (permalink / raw)
  To: Michel Dänzer; +Cc: Alex Deucher, amd-gfx list

Michel, ah my bad! thank you. sorry, thought it was mutter

also, one other thing. so i have been messing around with all types of
xorg configuration blah blah blah, but i just had an epiphany, and it
works!

so, all i ever needed to do was add Option "PrimaryGPU" "true" to
/usr/share/X11/xorg.conf.d/10-amdgpu.conf
with that _one_ change, i dont need any other xorg configs, and when i
boot without the amdgpu, it should work just fine, and when the amdgpu
is present it will automatically become the primary due to the
outputclass matching it!

that "PrimaryGPU" option being added was exactly the thing. im so glad it works now
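for reference, the whole file then ends up looking something like this (the Identifier/MatchDriver lines here are from the stock debian 10-amdgpu.conf as i remember it, so double check against your own copy):

```
Section "OutputClass"
        Identifier "AMDgpu"
        MatchDriver "amdgpu"
        Driver "amdgpu"
        Option "PrimaryGPU" "true"
EndSection
```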

So, these are my thoughts:
theres no telling what other graphics cards might be installed, so
xorg defaults to using whatever linux was booted with as the primary,
in my case the intel graphics i guess.

now, on a regular desktop, thats totally fine because the graphics
card has direct access to ram much easier, and with fancy things like
dma and whatnot, its no problem at all for a graphics card to act as a
render offload since the card can simply dma the results into main
memory or something

but when you've got the graphics card in an eGPU, across a thunderbolt
connection, it essentially becomes NUMA, since that memory access has
way more latency

so the fact that the debian package isnt setting "PrimaryGPU" "true" i
guess makes sense, because who knows what you want the primary to be.

but yea, just thought yall might be interested to know that the
solution for running an egpu in linux is simply to add "PrimaryGPU" to
the output class that matches your gpu.
and when you boot without the gpu, the outputclass wont match, so it
will default to normal behavior

also, lets say you have N number of gpus, each of which may or may not
be present. from what i understand, you can still enforce a level of
precedence over which one becomes the primary, like this:

"If multiple output devices match an OutputClass section with the
PrimaryGPU option set, the first one enumerated becomes the primary
GPU."

so one can simply define a file in which you define N number of
outputclasses, in order from highest to lowest precedence for being
the primary gpu, then simply put Option "PrimaryGPU" "true"
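for example, something like this in a single file (the path, identifiers, and match rules here are hypothetical, and note the quoted wording says the first *enumerated* matching device wins, so im not 100% sure file order alone establishes precedence between two devices that are both present):

```
# hypothetical /etc/X11/xorg.conf.d/20-gpu-priority.conf
# prefer the amdgpu egpu when present, otherwise fall back to the intel igpu
Section "OutputClass"
        Identifier "egpu-primary"
        MatchDriver "amdgpu"
        Option "PrimaryGPU" "true"
EndSection

Section "OutputClass"
        Identifier "igpu-primary"
        MatchDriver "i915"
        Option "PrimaryGPU" "true"
EndSection
```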

i realize this isnt an xorg list, and doesnt have much to do with
amdgpu, but would love to hear yalls thoughts. theres a lot of
discussion online in forums and whatnot, and people coming up with all
kinds of "automatic xorg configuration startup scripts" and stuff to
manage egpus, but if my hypothesis is correct, this is the cleanest,
simplest and most elegant solution


On Sat, May 23, 2020 at 5:17 AM Michel Dänzer <michel@daenzer.net> wrote:
>
> On 2020-05-23 12:48 a.m., Javad Karabi wrote:
> >
> > also, the whole thing about "monitor updating once every 3 seconds"
> > when i close the lid is because mutter will go down to 1fps when it
> > detects that the lid is closed.
>
> Xorg's Present extension code ends up doing that (because it has no
> support for secondary GPUs), not mutter.
>
>
> --
> Earthling Michel Dänzer               |               https://redhat.com
> Libre software enthusiast             |             Mesa and X developer

* Re: slow rx 5600 xt fps
  2020-05-25  1:03                               ` Javad Karabi
@ 2020-05-25  1:29                                 ` Javad Karabi
  0 siblings, 0 replies; 24+ messages in thread
From: Javad Karabi @ 2020-05-25  1:29 UTC (permalink / raw)
  To: Michel Dänzer; +Cc: Alex Deucher, amd-gfx list

wow, i totally just realized that this is what you meant by talking
about primary gpu, early on in this email chain.
ive come full circle! you were totally right and even knew exactly
what the easiest change was lol.
my bad!

On Sun, May 24, 2020 at 8:03 PM Javad Karabi <karabijavad@gmail.com> wrote:
>
> Michel, ah my bad! thank you. sorry, thought it was mutter
>
> also, one other thing. so i have been messing around with all types of
> xorg configuration blah blah blah, but i just had an epiphany, and it
> works!
>
> so, all i ever needed to do was add Option "PrimaryGpu" "true" to
> /usr/share/X11/xorg.conf.d/10-amdgpu.conf
> with that _one_ change, i dont need any other xorg configs, and when i
> boot without the amdgpu, it should work just fine, and when the amdgpu
> is present it will automatically become the primary due to the
> outputclass matching it!
>
> that PrimaryGpu being added was exactly the thing. im so glad it works now
>
> So, these are my thoughts:
> theres no telling what other graphics cards might be installed, so
> xorg defaults to using whatever linux was booted with as the primary,
> in my case the intel graphics i guess.
>
> now, on a regular desktop, thats totally fine because the graphics
> card has direct access to ram much easier, and with fancy things like
> dma and whatnot, its no problem at all for a graphics card to act as a
> render offload since the card  can simply dma the results into main
> memory or something
>
> but when you've got the graphics card in an eGPU, across a thunderbolt
> connection, it essentially becomes NUMA, since that memory access has
> way more latency
>
> so the fact that the debian package isnt setting "PrimaryGPU" "true" i
> guess makes sense, because who knows what you want the primary to be.
>
> but yea, just thought yall might be interested to know that the
> solution for running an egpu in linux is simply to add "PrimaryGpu" to
> the output class that matches your gpu.
> and when you boot without the gpu, the outputclass wont match, so it
> will default to normal behavior
>
> also, lets say you have N number of gpus, each of which may or may not
> be present. from what i understand, you can still enforce a level of
> precedence about picking which one to be primary like this:
>
> "If multiple output devices match an OutputClass section with the
> PrimaryGPU option set, the first one enumerated becomes the primary
> GPU."
>
> so one can simply define a file in which you define N number of
> outputclasses, in order from highest to lowest precedence for being
> the primary gpu, then simply put Option "PrimaryGpu" "true"
>
> i realize this isnt an xorg list, and doesnt have much to do with
> amdgpu, but would love to hear yalls thoughts. theres a lot of
> discussion online in forums and whatnot, and people coming up with all
> kinds of "automatic xorg configuration startup scripts" and stuff to
> manage egpus, but if my hypothesis is correct, this is the cleanest,
> simplest and most elegant solution
>
>
> On Sat, May 23, 2020 at 5:17 AM Michel Dänzer <michel@daenzer.net> wrote:
> >
> > On 2020-05-23 12:48 a.m., Javad Karabi wrote:
> > >
> > > also, the whole thing about "monitor updating once every 3 seconds"
> > > when i close the lid is because mutter will go down to 1fps when it
> > > detects that the lid is closed.
> >
> > Xorg's Present extension code ends up doing that (because it has no
> > support for secondary GPUs), not mutter.
> >
> >
> > --
> > Earthling Michel Dänzer               |               https://redhat.com
> > Libre software enthusiast             |             Mesa and X developer

end of thread, other threads:[~2020-05-25  1:30 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-05-19 18:59 slow rx 5600 xt fps Javad Karabi
2020-05-19 19:13 ` Alex Deucher
2020-05-19 19:20   ` Javad Karabi
2020-05-19 19:44     ` Javad Karabi
2020-05-19 20:01       ` Javad Karabi
2020-05-19 21:34         ` Alex Deucher
2020-05-19 21:13       ` Alex Deucher
2020-05-19 21:22         ` Javad Karabi
2020-05-19 21:42           ` Alex Deucher
2020-05-20  1:16             ` Javad Karabi
2020-05-20  1:19               ` Javad Karabi
2020-05-20  1:20               ` Bridgman, John
2020-05-20  1:35                 ` Javad Karabi
2020-05-20  2:29               ` Alex Deucher
2020-05-20 22:04                 ` Javad Karabi
2020-05-21  3:11                   ` Alex Deucher
2020-05-21 19:03                     ` Javad Karabi
2020-05-21 19:15                       ` Alex Deucher
2020-05-21 21:21                         ` Javad Karabi
2020-05-22 22:48                           ` Javad Karabi
2020-05-23 10:17                             ` Michel Dänzer
2020-05-25  1:03                               ` Javad Karabi
2020-05-25  1:29                                 ` Javad Karabi
2020-05-19 21:32     ` Alex Deucher

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).