From: Daniel Vetter <daniel@ffwll.ch>
To: Dave Airlie <airlied@gmail.com>
Cc: "Feng Tang" <feng.tang@intel.com>,
	"Stephen Rothwell" <sfr@canb.auug.org.au>,
	"Rong Chen" <rong.a.chen@intel.com>,
	"Michel Dänzer" <michel@daenzer.net>,
	"Linux Kernel Mailing List" <linux-kernel@vger.kernel.org>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	"Thomas Zimmermann" <tzimmermann@suse.de>, LKP <lkp@01.org>
Subject: Re: [LKP] [drm/mgag200] 90f479ae51: vm-scalability.median -18.8% regression
Date: Wed, 4 Sep 2019 13:20:29 +0200	[thread overview]
Message-ID: <CAKMK7uGtNu0M74+Ag5-7HJhuHDVv1HoMPz=2XjU6tCkfMScQnA@mail.gmail.com> (raw)
In-Reply-To: <CAPM=9tzDMfRf_VKaiHmnb_KKVwqW3=y=09JO0SJrG6ySe=DbfQ@mail.gmail.com>

On Wed, Sep 4, 2019 at 1:15 PM Dave Airlie <airlied@gmail.com> wrote:
>
> On Wed, 4 Sep 2019 at 19:17, Daniel Vetter <daniel@ffwll.ch> wrote:
> >
> > On Wed, Sep 4, 2019 at 10:35 AM Feng Tang <feng.tang@intel.com> wrote:
> > >
> > > Hi Daniel,
> > >
> > > On Wed, Sep 04, 2019 at 10:11:11AM +0200, Daniel Vetter wrote:
> > > > On Wed, Sep 4, 2019 at 8:53 AM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> > > > >
> > > > > Hi
> > > > >
> > > > > On 04.09.19 at 08:27, Feng Tang wrote:
> > > > > >> Thank you for testing. But don't get too excited, because the patch
> > > > > >> simulates a bug that was present in the original mgag200 code. A
> > > > > >> significant number of frames are simply skipped. That is apparently the
> > > > > >> reason why it's faster.
> > > > > >
> > > > > > Thanks for the detailed info, so the original code skips time-consuming
> > > > > > work inside atomic context on purpose. Is there any space to optimise it?
> > > > > > If two scheduled update workers are handled at almost the same time, can
> > > > > > one be skipped?
> > > > >
> > > > > To my knowledge, there's only one instance of the worker. Re-scheduling
> > > > > the worker before a previous instance has started will not create a second
> > > > > instance. The worker instance will complete all pending updates. So in
> > > > > some way, skipping workers already happens.
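
Below is a minimal sketch of the coalescing behaviour described above. It is
not the actual mgag200/fbcon code: struct shadow_fb and copy_region_to_vram()
are made-up placeholders. The point it illustrates is that schedule_work()
queues nothing while the work item is still pending, so back-to-back damage
reports collapse into a single pass of the worker.

#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/kernel.h>
#include <linux/limits.h>

struct shadow_fb {
	struct work_struct damage_work;
	spinlock_t damage_lock;
	int x1, y1, x2, y2;		/* accumulated dirty rectangle */
};

/* Stand-in for the slow copy from the shadow buffer to VRAM over the BAR. */
static void copy_region_to_vram(struct shadow_fb *fb, int x1, int y1,
				int x2, int y2);

static void damage_worker(struct work_struct *work)
{
	struct shadow_fb *fb = container_of(work, struct shadow_fb, damage_work);
	int x1, y1, x2, y2;

	spin_lock_irq(&fb->damage_lock);
	x1 = fb->x1; y1 = fb->y1;	/* grab everything reported so far */
	x2 = fb->x2; y2 = fb->y2;
	fb->x1 = fb->y1 = INT_MAX;	/* reset the accumulator to "empty" */
	fb->x2 = fb->y2 = 0;
	spin_unlock_irq(&fb->damage_lock);

	copy_region_to_vram(fb, x1, y1, x2, y2);
}

/* May be called from atomic context, e.g. the cursor-blink timer. */
static void report_damage(struct shadow_fb *fb, int x1, int y1, int x2, int y2)
{
	unsigned long flags;

	spin_lock_irqsave(&fb->damage_lock, flags);
	fb->x1 = min(fb->x1, x1);
	fb->y1 = min(fb->y1, y1);
	fb->x2 = max(fb->x2, x2);
	fb->y2 = max(fb->y2, y2);
	spin_unlock_irqrestore(&fb->damage_lock, flags);

	/* No-op (returns false) if the worker is already pending. */
	schedule_work(&fb->damage_work);
}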
> > > >
> > > > So I think the most frequent fbcon update from atomic context is the
> > > > blinking cursor. If you disable that one you should be back to the old
> > > > performance level, I think, since writing to dmesg happens from process
> > > > context and so shouldn't change.
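
As a quick way to test that theory (an aside, not from the original mail), the
blink timer can usually be switched off through fbcon's sysfs attribute. A
minimal sketch, assuming the standard /sys/class/graphics/fbcon/cursor_blink
knob is present; it is equivalent to echoing 0 into that file as root:

#include <stdio.h>

int main(void)
{
	/* Disable the fbcon cursor blink timer; needs root and fbcon active. */
	FILE *f = fopen("/sys/class/graphics/fbcon/cursor_blink", "w");

	if (!f) {
		perror("cursor_blink");
		return 1;
	}
	fputs("0\n", f);
	fclose(f);
	return 0;
}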
> > >
> > > Hmm, then the old driver should also have done most of its updates in
> > > non-atomic context?
> > >
> > > One other thing: I profiled that updating a 3 MB shadow buffer takes
> > > 20 ms, which translates to 150 MB/s of bandwidth. Could it be related to
> > > the cache settings of the DRM shadow buffer? Say, does the original code
> > > use a cacheable buffer?
> >
> > Hm, that would indicate the write-combining got broken somewhere. This
> > should definitely be faster. Also, we shouldn't transfer the whole
> > thing, except when scrolling ...
>
> First rule of fbcon usage: you are always effectively scrolling.
>
> Also, these devices might be on a PCIe 1x piece of wet string; not sure
> if the numbers reflect that.

PCIe 1x 1.0 is 250 MB/s, so yeah, with a bit of inefficiency and overhead
it's not entirely out of the question that 150 MB/s is actually the hw
limit. If it's really PCIe 1x 1.0, though, I have no idea where to check
that. It might also be worth double-checking that the GPU PCI BAR is
listed as wc in debugfs/x86/pat_memtype_list.
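
Both points can be checked from user space. The sketch below is not from the
original thread; it assumes the usual sysfs/debugfs layout, and 0000:03:00.0
is a placeholder for the mgag200 device's PCI address. For reference, PCIe
1.0 x1 is 2.5 GT/s with 8b/10b encoding, roughly 250 MB/s raw, while the
measured 3 MB / 20 ms works out to 150 MB/s.

#include <stdio.h>

static void dump_file(const char *path)
{
	char buf[256];
	FILE *f = fopen(path, "r");

	if (!f) {
		printf("%s: not available\n", path);
		return;
	}
	while (fgets(buf, sizeof(buf), f))
		printf("%s: %s", path, buf);
	fclose(f);
}

int main(void)
{
	/* Negotiated link: "2.5 GT/s" and width "1" would match the x1 gen1 theory. */
	dump_file("/sys/bus/pci/devices/0000:03:00.0/current_link_speed");
	dump_file("/sys/bus/pci/devices/0000:03:00.0/current_link_width");

	/*
	 * PAT entries (needs root and debugfs mounted): the GPU BAR's physical
	 * range should show up as write-combining.
	 */
	dump_file("/sys/kernel/debug/x86/pat_memtype_list");
	return 0;
}

If the BAR range shows up as uncached rather than write-combining in the PAT
list, that would point at the broken-WC theory rather than the link limit.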
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
