Hi Vetter,

On Wed, Sep 04, 2019 at 01:20:29PM +0200, Daniel Vetter wrote:
> On Wed, Sep 4, 2019 at 1:15 PM Dave Airlie wrote:
> >
> > On Wed, 4 Sep 2019 at 19:17, Daniel Vetter wrote:
> > >
> > > On Wed, Sep 4, 2019 at 10:35 AM Feng Tang wrote:
> > > >
> > > > Hi Daniel,
> > > >
> > > > On Wed, Sep 04, 2019 at 10:11:11AM +0200, Daniel Vetter wrote:
> > > > > On Wed, Sep 4, 2019 at 8:53 AM Thomas Zimmermann wrote:
> > > > > >
> > > > > > Hi
> > > > > >
> > > > > > Am 04.09.19 um 08:27 schrieb Feng Tang:
> > > > > > >> Thank you for testing. But don't get too excited, because the patch
> > > > > > >> simulates a bug that was present in the original mgag200 code. A
> > > > > > >> significant number of frames are simply skipped. That is apparently
> > > > > > >> the reason why it's faster.
> > > > > > >
> > > > > > > Thanks for the detailed info, so the original code skips time-consuming
> > > > > > > work inside atomic context on purpose. Is there any room to optimise it?
> > > > > > > If 2 scheduled update workers are handled at almost the same time, can
> > > > > > > one be skipped?
> > > > > >
> > > > > > To my knowledge, there's only one instance of the worker. Re-scheduling
> > > > > > the worker before a previous instance has started will not create a
> > > > > > second instance. The worker's instance will complete all pending
> > > > > > updates. So in some way, skipping workers already happens.
> > > > >
> > > > > So I think that the most frequent fbcon update from atomic context is
> > > > > the blinking cursor. If you disable that one you should be back to the
> > > > > old performance level, I think, since just writing to dmesg is from
> > > > > process context, so shouldn't change.
> > > >
> > > > Hmm, then the old driver should also be doing most of its updates in
> > > > non-atomic context?
> > > >
> > > > One other thing: I profiled that updating a 3 MB shadow buffer needs
> > > > 20 ms, which translates to 150 MB/s of bandwidth. Could it be related
> > > > to the cache setting of the DRM shadow buffer? Say, did the original
> > > > code use a cacheable buffer?
> > >
> > > Hm, that would indicate the write-combining got broken somewhere. This
> > > should definitely be faster. Also we shouldn't transfer the whole
> > > thing, except when scrolling ...
> >
> > First rule of fbcon usage: you are always effectively scrolling.
> >
> > Also these devices might be on a PCIe 1x piece of wet string, not sure
> > if the numbers reflect that.
>
> PCIe 1x 1.0 is 250 MB/s, so yeah, with a bit of inefficiency and overhead
> it's not entirely out of the question that 150 MB/s is actually the hw
> limit. If it's really PCIe 1x 1.0 -- no idea where to check that.
> Also might be worth double-checking that the gpu pci bar is listed as
> wc in debugfs/x86/pat_memtype_list.
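
Thanks for all the hints! Two quick notes before the dump.

On Thomas's point about the worker above: IIUC the coalescing falls out
of the workqueue API itself -- schedule_work() on a work item that is
already queued is a no-op, so any number of dirty-fb calls before the
worker runs collapse into a single update pass. A minimal sketch of the
pattern (illustrative only, made-up names, not the actual drm/mgag200
code):

#include <linux/workqueue.h>

static void damage_worker(struct work_struct *work)
{
	/* copy all lines currently marked dirty from the shadow
	 * buffer into VRAM; damage that arrives while we run simply
	 * re-queues the work item and is handled on the next pass */
}

static DECLARE_WORK(damage_work, damage_worker);

static void mark_damage(void)
{
	/* no-op if damage_work is already queued and not yet running,
	 * so the skipping of redundant updates happens here for free */
	schedule_work(&damage_work);
}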
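
And on the write-combining question: my understanding is that the
driver should be mapping the VRAM BAR with ioremap_wc(), roughly like
this (again only a sketch with made-up names; BAR 0 as the VRAM
aperture is my assumption):

#include <linux/pci.h>
#include <linux/io.h>

static void __iomem *map_vram(struct pci_dev *pdev)
{
	resource_size_t base = pci_resource_start(pdev, 0);
	resource_size_t size = pci_resource_len(pdev, 0);

	/* with a WC mapping this range should be listed as
	 * "write-combining" in debugfs/x86/pat_memtype_list; if it
	 * got downgraded to uncached, CPU blits to VRAM will crawl */
	return ioremap_wc(base, size);
}

IIRC the pat_memtype_list entries look like "write-combining @
0x<start>-0x<end>", so the BAR range showing up there as uncached-minus
instead would explain the low throughput. Also, 150 MB/s is only ~60%
of the 250 MB/s a 1x gen1 link can do in theory, and "lspci -vv" should
show the negotiated width/speed in LnkSta, which would tell whether the
link itself is the limit.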
Here is a dump of the device info and the pat_memtype_list, taken while
the box is running another 0day task:

controller info
=================
03:00.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200e [Pilot] ServerEngines (SEP1) (rev 05) (prog-if 00 [VGA controller])
	Subsystem: Intel Corporation MGA G200e [Pilot] ServerEngines (SEP1)
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR- TAbort+ SERR-
	Capabilities: [144 v1] Vendor Specific Information: ID=0004 Rev=1 Len=03c <?>
	Capabilities: [1d0 v1] Vendor Specific Information: ID=0003 Rev=1 Len=00a <?>
	Capabilities: [250 v1] #19
	Capabilities: [280 v1] Vendor Specific Information: ID=0005 Rev=3 Len=018 <?>
	Capabilities: [298 v1] Vendor Specific Information: ID=0007 Rev=0 Len=024 <?>

Thanks,
Feng

>
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch