From: Daniel Vetter <daniel@ffwll.ch>
To: Dave Airlie <airlied@gmail.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>, LKP <lkp@01.org>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	Thomas Zimmermann <tzimmermann@suse.de>,
	kernel test robot <rong.a.chen@intel.com>
Subject: Re: [drm/mgag200] 90f479ae51: vm-scalability.median -18.8% regression
Date: Wed, 31 Jul 2019 10:13:52 +0200	[thread overview]
Message-ID: <CAKMK7uEbLP7j38VhdX9qniwqLfSc0_LrcrCD1R8m4kihbxePUA@mail.gmail.com> (raw)
In-Reply-To: <CAPM=9txDY7ROKkoLsc1bEaTnEZ+y5p7=EFoibcuy9uoTvsE75g@mail.gmail.com>

On Tue, Jul 30, 2019 at 10:27 PM Dave Airlie <airlied@gmail.com> wrote:
>
> On Wed, 31 Jul 2019 at 05:00, Daniel Vetter <daniel@ffwll.ch> wrote:
> >
> > On Tue, Jul 30, 2019 at 8:50 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> > >
> > > Hi
> > >
> > > Am 30.07.19 um 20:12 schrieb Daniel Vetter:
> > > > On Tue, Jul 30, 2019 at 7:50 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> > > >> Am 29.07.19 um 11:51 schrieb kernel test robot:
> > > >>> Greetings,
> > > >>>
> > > >>> FYI, we noticed a -18.8% regression of vm-scalability.median due to commit:
> > > >>>
> > > >>> commit: 90f479ae51afa45efab97afdde9b94b9660dd3e4 ("drm/mgag200: Replace struct mga_fbdev with generic framebuffer emulation")
> > > >>> https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next.git master
> > > >>
> > > >> Daniel, Noralf, we may have to revert this patch.
> > > >>
> > > >> I expected some change in display performance, but not in VM. Since it's
> > > >> a server chipset, probably no one cares much about display performance.
> > > >> So that seemed like a good trade-off for re-using shared code.
> > > >>
> > > >> Part of the patch set is that the generic fb emulation now maps and
> > > >> unmaps the fbdev BO when updating the screen. I guess that's the cause
> > > >> of the performance regression. And it should be visible with other
> > > >> drivers as well if they use a shadow FB for fbdev emulation.
> > > >
> > > > For fbcon we shouldn't need to do any maps/unmaps at all; this is for the
> > > > fbdev mmap support only. If the testcase mentioned here tests fbdev
> > > > mmap handling, it's pretty badly misnamed :-) And as long as you don't
> > > > have an fbdev mmap, there shouldn't be any impact at all.
> > >
> > > The ast and mgag200 have only a few MiB of VRAM, so we have to get the
> > > fbdev BO out of VRAM when it's not being displayed. If it's not mapped, it
> > > can be evicted to make room for X, etc.
> > >
> > > To make this work, the BO's memory is mapped and unmapped in
> > > drm_fb_helper_dirty_work() before being updated from the shadow FB. [1]
> > > That fbdev mapping is established on each screen update, more or less.
> > > From my (yet unverified) understanding, this causes the performance
> > > regression in the VM code.
> > >
> > > The original code in mgag200 used to kmap the fbdev BO while it was being
> > > displayed; [2] and the drawing code only mapped it when necessary (i.e.,
> > > when it was not being displayed). [3]
> >
> > Hm yeah, this vmap/vunmap is going to be pretty bad. We indeed should
> > cache this.
> >
> > > I think this could be added for VRAM helpers as well, but it's still a
> > > workaround and non-VRAM drivers might also run into such a performance
> > > regression if they use the fbdev's shadow fb.
> >
> > Yeah agreed, fbdev emulation should try to cache the vmap.
> >
> > > Noralf mentioned that there are plans for other DRM clients besides the
> > > console. They would as well run into similar problems.
> > >
> > > >> The thing is that we'd need another generic fbdev emulation for ast and
> > > >> mgag200 that handles this issue properly.
> > > >
> > > > Yeah, I don't think we want to jump the gun here. If you can try to
> > > > repro locally and profile where we're wasting CPU time, I hope that
> > > > should shed some light on what's going wrong here.
> > >
> > > I don't have much time ATM and I'm not even officially at work until
> > > late Aug. I'd send you the revert and investigate later. I agree that
> > > using generic fbdev emulation would be preferable.
> >
> > Still not sure that's the right thing to do, really. Yes, it's a
> > regression, but vm testcases shouldn't run a single line of fbcon or drm
> > code. So why this is impacted so heavily by a silly drm change is very
> > confusing to me. We might be papering over a deeper and much more
> > serious issue ...
>
> It's a regression, the right thing is to revert first and then work
> out the right thing to do.

Sure, but I have no idea whether the testcase is doing something
reasonable. If it's accidentally testing vm scalability of fbdev and
no one else is doing something this pointless, then it's not a
real bug. Plus I think we're shooting the messenger here.
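
For context, what the quoted discussion above boils down to is roughly the
following per-update pattern (a simplified sketch, not the actual
drm_fb_helper code; the example_* helpers and the struct are made up for
illustration):

  /* hypothetical helpers standing in for whatever maps/unmaps a BO into
   * kernel address space */
  void *example_bo_vmap(struct drm_gem_object *bo);
  void example_bo_vunmap(struct drm_gem_object *bo, void *vaddr);

  struct shadow_fbdev {
  	struct drm_gem_object *bo;	/* scanout BO in VRAM */
  	void *shadow;			/* shadow FB in system RAM */
  	size_t size;
  };

  /* What the dirty worker conceptually does on *every* screen update. */
  static void shadow_fbdev_flush(struct shadow_fbdev *sfb)
  {
  	void *vaddr;

  	vaddr = example_bo_vmap(sfb->bo);
  	if (IS_ERR(vaddr))
  		return;

  	memcpy(vaddr, sfb->shadow, sfb->size);	/* copy shadow -> VRAM */

  	example_bo_vunmap(sfb->bo, vaddr);	/* torn down again right away */
  }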

> It's likely the test runs on the console and prints stuff out while running.

But why did we not regress the world if a few prints on the console
have such a huge impact? We didn't get an entire stream of mails about
breaking stuff ...
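
And "caching the vmap", as suggested above, would look roughly like this
(again just a sketch with the same made-up helpers; the cached mapping
would only be dropped on fbdev teardown or when the BO has to be evicted):

  struct shadow_fbdev_cached {
  	struct drm_gem_object *bo;
  	void *shadow;
  	size_t size;
  	void *vaddr;			/* mapping kept across updates */
  };

  static void shadow_fbdev_flush_cached(struct shadow_fbdev_cached *sfb)
  {
  	if (!sfb->vaddr) {
  		void *vaddr = example_bo_vmap(sfb->bo);

  		if (IS_ERR(vaddr))
  			return;
  		sfb->vaddr = vaddr;
  	}

  	memcpy(sfb->vaddr, sfb->shadow, sfb->size);
  	/* no vunmap here; drop the mapping only on teardown or eviction */
  }
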
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

Thread overview:
2019-07-29  9:51 [drm/mgag200] 90f479ae51: vm-scalability.median -18.8% regression kernel test robot
2019-07-30 17:50 ` Thomas Zimmermann
2019-07-30 18:12   ` Daniel Vetter
2019-07-30 18:50     ` Thomas Zimmermann
2019-07-30 18:59       ` Daniel Vetter
2019-07-30 20:26         ` Dave Airlie
2019-07-31  8:13           ` Daniel Vetter [this message]
2019-07-31  9:25             ` Huang, Ying
2019-07-31 10:12               ` Thomas Zimmermann
2019-07-31 10:21               ` Michel Dänzer
2019-08-01  6:19                 ` Rong Chen
2019-08-01  8:37                   ` Feng Tang
2019-08-01  9:59                     ` Thomas Zimmermann
2019-08-01 11:25                       ` Feng Tang
2019-08-01 11:58                         ` Thomas Zimmermann
2019-08-02  7:11                           ` Rong Chen
2019-08-02  8:23                             ` Thomas Zimmermann
2019-08-02  9:20                             ` Thomas Zimmermann
2019-08-01  9:57                   ` Thomas Zimmermann
2019-08-01 13:30                   ` Michel Dänzer
2019-08-02  8:17                     ` Thomas Zimmermann
2019-07-31 10:10             ` Thomas Zimmermann
2019-08-02  9:11               ` Daniel Vetter
2019-08-02  9:26                 ` Thomas Zimmermann
2019-08-04 18:39   ` Thomas Zimmermann
2019-08-05  7:02     ` Feng Tang
2019-08-05  7:28       ` Rong Chen
2019-08-05 10:25         ` Thomas Zimmermann
2019-08-06 12:59           ` Chen, Rong A
2019-08-07 10:42             ` Thomas Zimmermann
2019-08-09  8:12               ` Rong Chen
2019-08-12  7:25                 ` Feng Tang
2019-08-13  9:36                   ` Feng Tang
2019-08-16  6:55                     ` Feng Tang
2019-08-22 17:25                     ` Thomas Zimmermann
2019-08-22 20:02                       ` Dave Airlie
2019-08-23  9:54                         ` Thomas Zimmermann
2019-08-24  5:16                       ` Feng Tang
2019-08-26 10:50                         ` Thomas Zimmermann
2019-08-27 12:33                           ` Chen, Rong A
2019-08-27 17:16                             ` Thomas Zimmermann
2019-08-28  9:37                               ` Rong Chen
2019-08-28 10:51                                 ` Thomas Zimmermann
2019-09-04  6:27                                   ` Feng Tang
2019-09-04  6:53                                     ` Thomas Zimmermann
2019-09-04  8:11                                       ` Daniel Vetter
2019-09-04  8:35                                         ` Feng Tang
2019-09-04  8:43                                           ` Thomas Zimmermann
2019-09-04 14:30                                             ` Chen, Rong A
2019-09-04  9:17                                           ` Daniel Vetter
2019-09-04 11:15                                             ` Dave Airlie
2019-09-04 11:20                                               ` Daniel Vetter
2019-09-05  6:59                                                 ` Feng Tang
2019-09-05 10:37                                                   ` Daniel Vetter
2019-09-05 10:48                                                     ` Feng Tang
2019-09-09 14:12                                     ` Thomas Zimmermann
2019-09-16  9:06                                       ` Feng Tang
2019-09-17  8:48                                         ` Thomas Zimmermann
2019-08-05 10:22       ` Thomas Zimmermann
2019-08-05 12:52         ` Feng Tang
2020-01-06 13:19           ` Thomas Zimmermann
2020-01-08  2:25             ` Rong Chen
2020-01-08  2:28               ` Rong Chen
2020-01-08  5:20               ` Thomas Zimmermann
