Date: Fri, 22 Jun 2018 17:57:16 +0200
From: Michal Hocko
To: Chris Wilson
Cc: LKML, Michal Hocko <mhocko@suse.com>, kvm@vger.kernel.org,
 Radim Krčmář <rkrcmar@redhat.com>, David Airlie, Sudeep Dutt,
 dri-devel@lists.freedesktop.org, linux-mm@kvack.org, Andrea Arcangeli,
 "David (ChunMing) Zhou", Dimitri Sivanich, linux-rdma@vger.kernel.org,
 amd-gfx@lists.freedesktop.org, Jason Gunthorpe, Doug Ledford,
 David Rientjes, xen-devel@lists.xenproject.org,
 intel-gfx@lists.freedesktop.org, Jérôme Glisse, Rodrigo Vivi,
 Boris Ostrovsky, Juergen Gross, Mike Marciniszyn, Dennis Dalessandro,
 Ashutosh Dixit, Alex Deucher, Paolo Bonzini
Subject: Re: [Intel-gfx] [RFC PATCH] mm, oom: distinguish blockable mode for mmu notifiers
Message-ID: <20180622155716.GE10465@dhcp22.suse.cz>
References: <20180622150242.16558-1-mhocko@kernel.org>
 <152968180950.11773.3374981930722769733@mail.alporthouse.com>
In-Reply-To: <152968180950.11773.3374981930722769733@mail.alporthouse.com>

On Fri 22-06-18 16:36:49, Chris Wilson wrote:
> Quoting Michal Hocko (2018-06-22 16:02:42)
> > Hi,
> > this is an RFC and not tested at all. I am not very familiar with the
> > mmu notifier semantics, so this is a crude attempt to achieve what I
> > basically need. It might be completely wrong, but I would like to
> > discuss what a better way would be if that is the case.
> > 
> > get_maintainers gave me quite a large list of people to CC so I had to
> > trim it down.
> > If you think I have forgotten somebody, please let me know.
> 
> > diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
> > index 854bd51b9478..5285df9331fa 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_userptr.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
> > @@ -112,10 +112,11 @@ static void del_object(struct i915_mmu_object *mo)
> >         mo->attached = false;
> >  }
> >  
> > -static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
> > +static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
> >                                                        struct mm_struct *mm,
> >                                                        unsigned long start,
> > -                                                      unsigned long end)
> > +                                                      unsigned long end,
> > +                                                      bool blockable)
> >  {
> >         struct i915_mmu_notifier *mn =
> >                 container_of(_mn, struct i915_mmu_notifier, mn);
> > @@ -124,7 +125,7 @@ static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
> >         LIST_HEAD(cancelled);
> >  
> >         if (RB_EMPTY_ROOT(&mn->objects.rb_root))
> > -               return;
> > +               return 0;
> 
> The principal wait here is for the HW (even after fixing all the locks
> to be not so coarse, we still have to wait for the HW to finish its
> access).

Is this wait bounded, or can it take a basically arbitrary amount of time?

> The first pass would then be to not do anything here if
> !blockable.

Something like this? (incremental diff)

diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 5285df9331fa..e9ed0d2cfabc 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -122,6 +122,7 @@ static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
 		container_of(_mn, struct i915_mmu_notifier, mn);
 	struct i915_mmu_object *mo;
 	struct interval_tree_node *it;
+	int ret = 0;
 	LIST_HEAD(cancelled);
 
 	if (RB_EMPTY_ROOT(&mn->objects.rb_root))
@@ -133,6 +134,10 @@ static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
 	spin_lock(&mn->lock);
 	it = interval_tree_iter_first(&mn->objects, start, end);
 	while (it) {
+		if (!blockable) {
+			ret = -EAGAIN;
+			goto out_unlock;
+		}
 		/* The mmu_object is released late when destroying the
 		 * GEM object so it is entirely possible to gain a
 		 * reference on an object in the process of being freed
@@ -154,8 +159,11 @@ static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
+out_unlock:
 	spin_unlock(&mn->lock);
 
 	/* TODO: can we skip waiting here? */
-	if (!list_empty(&cancelled) && blockable)
+	if (!list_empty(&cancelled))
 		flush_workqueue(mn->wq);
+
+	return ret;
 }
 
 static const struct mmu_notifier_ops i915_gem_userptr_notifier = {
-- 
Michal Hocko
SUSE Labs
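
The hunks above convert only the i915 callback; the notifier core still has to
propagate the new return value to callers that are not allowed to sleep, such
as the oom reaper. Below is a minimal sketch of that plumbing, assuming the
srcu-protected per-mm notifier list used by mm/mmu_notifier.c; the helper name
and the exact back-off handling are illustrative assumptions, not part of the
posted patch.

/*
 * Sketch only: assumed core-side plumbing for the RFC's int-returning
 * ->invalidate_range_start(bool blockable) hook.  Written as if it lived
 * in mm/mmu_notifier.c, reusing that file's static 'srcu' domain and the
 * per-mm notifier list; not taken from the posted patch.
 */
static int __mmu_notifier_invalidate_range_start_sketch(struct mm_struct *mm,
							unsigned long start,
							unsigned long end,
							bool blockable)
{
	struct mmu_notifier *mn;
	int ret = 0;
	int id;

	id = srcu_read_lock(&srcu);
	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
		if (!mn->ops->invalidate_range_start)
			continue;
		/*
		 * With the RFC's signature, a notifier which would have to
		 * sleep reports -EAGAIN in !blockable mode, exactly like
		 * the i915 callback above.
		 */
		ret = mn->ops->invalidate_range_start(mn, mm, start, end,
						      blockable);
		if (ret) {
			/* a blockable call is expected to always succeed */
			WARN_ON(blockable);
			break;
		}
	}
	srcu_read_unlock(&srcu, id);

	return ret;
}

A non-blocking caller would then treat a non-zero return as "this mm cannot be
handled right now without sleeping" and retry on a later pass instead of
waiting in notifier context.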