From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 25 Apr 2019 08:29:35 -0500
From: Josh Poimboeuf
To: Thomas Gleixner
Cc: LKML, x86@kernel.org, Andy Lutomirski, Steven Rostedt, Alexander Potapenko,
 Alexey Dobriyan, Andrew Morton, Christoph Lameter, Pekka Enberg,
 linux-mm@kvack.org, David Rientjes, Catalin Marinas, Dmitry Vyukov,
 Andrey Ryabinin, kasan-dev@googlegroups.com, Mike Rapoport, Akinobu Mita,
 Christoph Hellwig, iommu@lists.linux-foundation.org, Robin Murphy,
 Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
 Josef Bacik, linux-btrfs@vger.kernel.org, dm-devel@redhat.com, Mike Snitzer,
 Alasdair Kergon, Daniel Vetter, intel-gfx@lists.freedesktop.org,
 Joonas Lahtinen, Maarten Lankhorst, dri-devel@lists.freedesktop.org,
 David Airlie, Jani Nikula, Rodrigo Vivi, Tom Zanussi, Miroslav Benes,
 linux-arch@vger.kernel.org
Subject: Re: [patch V3 21/29] tracing: Use percpu stack trace buffer more intelligently
Message-ID: <20190425132935.ae35l5oybby5ddgl@treble>
References: <20190425094453.875139013@linutronix.de>
 <20190425094803.066064076@linutronix.de>
In-Reply-To: <20190425094803.066064076@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
User-Agent: NeoMutt/20180716

On Thu, Apr 25, 2019 at 11:45:14AM +0200, Thomas Gleixner wrote:
> @@ -2788,29 +2798,32 @@ static void __ftrace_trace_stack(struct
>  	 */
>  	preempt_disable_notrace();
>  
> -	use_stack = __this_cpu_inc_return(ftrace_stack_reserve);
> +	stackidx = __this_cpu_inc_return(ftrace_stack_reserve);
> +
> +	/* This should never happen. If it does, yell once and skip */
> +	if (WARN_ON_ONCE(stackidx >= FTRACE_KSTACK_NESTING))
> +		goto out;
> +
>  	/*
> -	 * We don't need any atomic variables, just a barrier.
> -	 * If an interrupt comes in, we don't care, because it would
> -	 * have exited and put the counter back to what we want.
> -	 * We just need a barrier to keep gcc from moving things
> -	 * around.
> +	 * The above __this_cpu_inc_return() is 'atomic' cpu local. An
> +	 * interrupt will either see the value pre increment or post
> +	 * increment. If the interrupt happens pre increment it will have
> +	 * restored the counter when it returns.  We just need a barrier to
> +	 * keep gcc from moving things around.
>  	 */
>  	barrier();
> -	if (use_stack == 1) {
> -		trace.entries		= this_cpu_ptr(ftrace_stack.calls);
> -		trace.max_entries	= FTRACE_STACK_MAX_ENTRIES;
> -
> -		if (regs)
> -			save_stack_trace_regs(regs, &trace);
> -		else
> -			save_stack_trace(&trace);
> -
> -		if (trace.nr_entries > size)
> -			size = trace.nr_entries;
> -	} else
> -		/* From now on, use_stack is a boolean */
> -		use_stack = 0;
> +
> +	fstack = this_cpu_ptr(ftrace_stacks.stacks) + (stackidx - 1);

nit: it would be slightly less surprising if stackidx were 0-based; the
overflow check then stays `>=` as-is, since valid indices run from 0 to
FTRACE_KSTACK_NESTING - 1:

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index d3f6ec7eb729..4fc93004feab 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2798,10 +2798,10 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
 	 */
 	preempt_disable_notrace();
 
-	stackidx = __this_cpu_inc_return(ftrace_stack_reserve);
+	stackidx = __this_cpu_inc_return(ftrace_stack_reserve) - 1;
 
 	/* This should never happen. If it does, yell once and skip */
 	if (WARN_ON_ONCE(stackidx >= FTRACE_KSTACK_NESTING))
 		goto out;
 
 	/*
@@ -2813,7 +2813,7 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
 	 */
 	barrier();
 
-	fstack = this_cpu_ptr(ftrace_stacks.stacks) + (stackidx - 1);
+	fstack = this_cpu_ptr(ftrace_stacks.stacks) + stackidx;
 	trace.entries		= fstack->calls;
 	trace.max_entries	= FTRACE_KSTACK_ENTRIES;
 