From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1756001AbdDGNOI (ORCPT ); Fri, 7 Apr 2017 09:14:08 -0400
Received: from mx1.redhat.com ([209.132.183.28]:57552 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753505AbdDGNOB (ORCPT ); Fri, 7 Apr 2017 09:14:01 -0400
DMARC-Filter: OpenDMARC Filter v1.3.2 mx1.redhat.com CD9417F7B1
Authentication-Results: ext-mx04.extmail.prod.ext.phx2.redhat.com;
	dmarc=none (p=none dis=none) header.from=redhat.com
Authentication-Results: ext-mx04.extmail.prod.ext.phx2.redhat.com;
	spf=pass smtp.mailfrom=aarcange@redhat.com
DKIM-Filter: OpenDKIM Filter v2.11.0 mx1.redhat.com CD9417F7B1
Date: Fri, 7 Apr 2017 15:13:58 +0200
From: Andrea Arcangeli
To: Chris Wilson , Martin Kepplinger , Thorsten Leemhuis ,
	daniel.vetter@intel.com, Dave Airlie , intel-gfx@lists.freedesktop.org,
	linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: Re: [PATCH 5/5] i915: fence workqueue optimization
Message-ID: <20170407131358.GB5035@redhat.com>
References: <87pogtplxr.fsf@intel.com>
	<20170406232347.988-1-aarcange@redhat.com>
	<20170406232347.988-6-aarcange@redhat.com>
	<20170407095838.GF10496@nuc-i3427.alporthouse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20170407095838.GF10496@nuc-i3427.alporthouse.com>
User-Agent: Mutt/1.8.0 (2017-02-23)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16
	(mx1.redhat.com [10.5.110.28]); Fri, 07 Apr 2017 13:14:01 +0000 (UTC)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Apr 07, 2017 at 10:58:38AM +0100, Chris Wilson wrote:
> On Fri, Apr 07, 2017 at 01:23:47AM +0200, Andrea Arcangeli wrote:
> > Insist to run llist_del_all() until the free_list is found empty, this
> > may avoid having to schedule more workqueues.
>
> The work will already be scheduled (everytime we add the first element,
> the work is scheduled, and the scheduled bit is cleared before the work
> is executed). So we aren't saving the kworker from having to process
> another work, but we may make that having nothing to do. The question is
> whether we want to trap the kworker here, and presumably you will also want
> to add a cond_resched() between passes.

Yes, it is somewhat dubious in the two-event-only case, but it will
save the kworker in case of more events if there is a flood of
llist_add. It just looked fast enough, but it's up to you: it's one
more cmpxchg for each intel_atomic_helper_free_state. If it's unlikely
that more work is added, it's better to drop it. Agreed about
cond_resched() if we keep it.

The same issue exists in __i915_gem_free_work, but there I guess it's
more likely that by the time __i915_gem_free_objects returns the
free_list isn't empty anymore, because __i915_gem_free_objects has a
longer runtime. But then you may want to re-evaluate that loop too, as
it's slower in the case of two llist_adds in a row and only pays off
from the third onward.
	while ((freed = llist_del_all(&i915->mm.free_list)))
		__i915_gem_free_objects(i915, freed);