From: John Hubbard <jhubbard@nvidia.com>
To: Leon Romanovsky <leon@kernel.org>
Cc: "Jason Gunthorpe" <jgg@ziepe.ca>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"Al Viro" <viro@zeniv.linux.org.uk>,
	"Alex Williamson" <alex.williamson@redhat.com>,
	"Benjamin Herrenschmidt" <benh@kernel.crashing.org>,
	"Björn Töpel" <bjorn.topel@intel.com>,
	"Christoph Hellwig" <hch@infradead.org>,
	"Dan Williams" <dan.j.williams@intel.com>,
	"Daniel Vetter" <daniel@ffwll.ch>,
	"Dave Chinner" <david@fromorbit.com>,
	"David Airlie" <airlied@linux.ie>,
	"David S . Miller" <davem@davemloft.net>,
	"Ira Weiny" <ira.weiny@intel.com>, "Jan Kara" <jack@suse.cz>,
	"Jens Axboe" <axboe@kernel.dk>,
	"Jonathan Corbet" <corbet@lwn.net>,
	"Jérôme Glisse" <jglisse@redhat.com>,
	"Magnus Karlsson" <magnus.karlsson@intel.com>,
	"Mauro Carvalho Chehab" <mchehab@kernel.org>,
	"Michael Ellerman" <mpe@ellerman.id.au>,
	"Michal Hocko" <mhocko@suse.com>,
	"Mike Kravetz" <mike.kravetz@oracle.com>,
	"Paul Mackerras" <paulus@samba.org>,
	"Shuah Khan" <shuah@kernel.org>,
	"Vlastimil Babka" <vbabka@suse.cz>,
	bpf@vger.kernel.org, dri-devel@lists.freedesktop.org,
	kvm@vger.kernel.org, linux-block@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
	linux-rdma@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	netdev@vger.kernel.org, linux-mm@kvack.org,
	LKML <linux-kernel@vger.kernel.org>,
	"Maor Gottlieb" <maorg@mellanox.com>,
	"Ran Rozenstein" <ranro@mellanox.com>
Subject: Re: [PATCH v11 00/25] mm/gup: track dma-pinned pages: FOLL_PIN
Date: Fri, 27 Dec 2019 13:56:48 -0800
Message-ID: <b879d191-a07c-e808-e48f-2b9bd8ba4fa3@nvidia.com>
In-Reply-To: <20191225052612.GA212002@unreal>

On 12/24/19 9:26 PM, Leon Romanovsky wrote:
...
>>>> The branch is here (I just tested it and it seems healthy):
>>>>
>>>> git@github.com:johnhubbard/linux.git  pin_user_pages_tracking_v11_with_diags
>>>
>>> Hi,
>>>
>>> We tested the following branch, and here are the results:
>>
>> Thanks for this testing run!
>>
>>> [root@server consume_mtts]# (master) $ grep foll_pin /proc/vmstat
>>> nr_foll_pin_requested 0
>>> nr_foll_pin_returned 0
>>>
>>
>> Zero pinned pages!
> 
> Maybe we are missing some CONFIG_* option?
> https://lore.kernel.org/linux-rdma/12a28917-f8c9-5092-2f01-92bb74714cae@nvidia.com/T/#mf900896f5dfc86cdee9246219990c632ed77115f


Ah OK, it must be that CONFIG_DEBUG_VM is not set, thanks!
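
(For quick comparisons between runs, here is a trivial user-space check,
equivalent to "grep foll_pin /proc/vmstat". It only assumes the two counter
names shown in your output above, nothing else about the kernel side:)

/* dump_foll_pin.c: print the *foll_pin* counters from /proc/vmstat. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		if (strstr(line, "foll_pin"))
			fputs(line, stdout);
	fclose(f);
	return 0;
}

On a kernel built with CONFIG_DEBUG_VM (plus the diags patches), those
counters should move away from zero once pin_user_pages*() is actually
being exercised.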


>> ...now I'm confused. Somehow FOLL_PIN and pin_user_pages*() calls are
>> not happening. And although the backtraces below show some of my new
>> routines (like try_grab_page), they also confirm the above: there is no
>> pin_user_page*() call in the stack.
>>
>> In particular, it looks like ib_umem_get() is calling through to
>> get_user_pages*(), rather than pin_user_pages*(). I don't see how this
>> is possible, because the code on my screen shows ib_umem_get() calling
>> pin_user_pages_fast().
>>

It must be that pin_user_pages() is in the call stack, but just not getting
printed. There's no other way to explain this.
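
(A plausible mechanism for the missing stack frame: in this series,
ib_umem_get() calls pin_user_pages_fast(), which is little more than a
wrapper that sets FOLL_PIN and hands off to the internal fast-GUP path. A
wrapper that thin is easily inlined or tail-called, in which case only
internal_get_user_pages_fast() shows up in the backtrace. Below is a
simplified sketch, not the literal code in the branch:)

int pin_user_pages_fast(unsigned long start, int nr_pages,
			unsigned int gup_flags, struct page **pages)
{
	/* The real routine also carries a couple of sanity checks here. */
	gup_flags |= FOLL_PIN;
	return internal_get_user_pages_fast(start, nr_pages, gup_flags,
					    pages);
}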

>> Any thoughts or ideas are welcome here.
>>
>> However, glossing over all of that and assuming that the new
>> GUP_PIN_COUNTING_BIAS of 256 is applied, it's interesting that we still
>> see any overflow. I'm less confident now that this is a true refcount
>> overflow.
> 
> Earlier in this email thread, I posted a possible function call chain which
> doesn't involve a refcount overflow, but for some reason the refcount
> overflow was chosen as the avenue to explore.
> 

Well, both of the WARN() calls are asserting that the refcount has gone bad:
one fires on a negative count, the other on a count of <= 0. So it's pretty
hard to talk our way out of a refcount overflow here.
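
(For reference, the two checks in question look roughly like this; this is a
paraphrase keyed to the line numbers in the traces below, not a verbatim copy
of the branch:)

	/* mm/gup.c:61, reached via try_grab_compound_head(): */
	if (WARN_ON_ONCE(page_ref_count(head) < 0))
		return NULL;

	/* mm/gup.c:150, in try_grab_page(), on the FOLL_PIN path: */
	if (WARN_ON_ONCE(page_ref_count(page) <= 0))
		return false;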

>>
>> Also, any information that would get me closer to being able to attempt
>> my own reproduction of the problem is *very* welcome. :)
> 
> It is an ancient verification test (~10 years old), and it is not an easy
> task to make it understandable and standalone :).
> 

Is this the only test that fails, btw? No other test failures or hints of
problems?

(Also, maybe hopeless, but can *anyone* on the RDMA list provide some
characterization of the test, such as how many pins per page, what page
sizes are used? I'm still hoping to write a test to trigger something
close to this...)

I do have a couple more ideas for test runs:

1. Reduce GUP_PIN_COUNTING_BIAS to 1. That would turn the whole override of
page->_refcount into a no-op, so if all is well (it may not be!) with the
rest of the patch, we'd expect this problem not to reappear. (A rough sketch
of the arithmetic follows this list.)

2. Activate the /proc/vmstat *foll_pin* statistics unconditionally (just for
these tests, of course), so we can see whether there is a get/put mismatch.
However, that will change the timing, so it must be attempted independently
of (1), in order to see whether it ends up hiding the repro.

I've updated this branch to implement (1), but not (2). I'm hoping you can
give this one a spin:

     git@github.com:johnhubbard/linux.git  pin_user_pages_tracking_v11_with_diags


thanks,
-- 
John Hubbard
NVIDIA


>>
>>> [root@serer consume_mtts]# (master) $ dmesg
>>> [  425.221459] ------------[ cut here ]------------
>>> [  425.225894] WARNING: CPU: 1 PID: 6738 at mm/gup.c:61 try_grab_compound_head+0x90/0xa0
>>> [  425.228021] Modules linked in: mlx5_ib mlx5_core mlxfw mlx4_ib mlx4_en ptp pps_core mlx4_core bonding ip6_gre ip6_tunnel tunnel6 ip_gre gre ip_tunnel rdma_rxe ip6_udp_tunnel udp_tunnel rdma_ucm ib_uverbs ib_ipoib ib_umad ib_srp scsi_transport_srp rpcrdma ib_iser libiscsi scsi_transport_iscsi rdma_cm iw_cm ib_cm ib_core [last unloaded: mlxfw]
>>> [  425.235266] CPU: 1 PID: 6738 Comm: consume_mtts Tainted: G           O      5.5.0-rc2+ #1
>>> [  425.237480] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
>>> [  425.239738] RIP: 0010:try_grab_compound_head+0x90/0xa0
>>> [  425.241170] Code: 06 48 8d 4f 34 f0 0f b1 57 34 74 cd 85 c0 74 cf 8d 14 06 f0 0f b1 11 74 c0 eb f1 8d 14 06 f0 0f b1 11 74 b5 85 c0 75 f3 eb b5 <0f> 0b 31 c0 c3 90 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 41
>>> [  425.245739] RSP: 0018:ffffc900006878a8 EFLAGS: 00010082
>>> [  425.247124] RAX: 0000000080000001 RBX: 00007f780488a000 RCX: 0000000000000bb0
>>> [  425.248956] RDX: ffffea000e031087 RSI: 0000000000008a00 RDI: ffffea000dc58000
>>> [  425.250761] RBP: ffffea000e031080 R08: ffffc90000687974 R09: 000fffffffe00000
>>> [  425.252661] R10: 0000000000000000 R11: ffff888362560000 R12: 000000000000008a
>>> [  425.254487] R13: 80000003716000e7 R14: 00007f780488a000 R15: ffffc90000687974
>>> [  425.256309] FS:  00007f780d9d3740(0000) GS:ffff8883b1c80000(0000) knlGS:0000000000000000
>>> [  425.258401] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> [  425.259949] CR2: 0000000002334048 CR3: 000000039c68c001 CR4: 00000000001606a0
>>> [  425.261884] Call Trace:
>>> [  425.262735]  gup_pgd_range+0x517/0x5a0
>>> [  425.263819]  internal_get_user_pages_fast+0x210/0x250
>>> [  425.265193]  ib_umem_get+0x298/0x550 [ib_uverbs]
>>> [  425.266476]  mr_umem_get+0xc9/0x260 [mlx5_ib]
>>> [  425.267699]  mlx5_ib_reg_user_mr+0xcc/0x7e0 [mlx5_ib]
>>> [  425.269134]  ? xas_load+0x8/0x80
>>> [  425.270074]  ? xa_load+0x48/0x90
>>> [  425.271038]  ? lookup_get_idr_uobject.part.10+0x12/0x70 [ib_uverbs]
>>> [  425.272757]  ib_uverbs_reg_mr+0x127/0x280 [ib_uverbs]
>>> [  425.274120]  ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0xc2/0xf0 [ib_uverbs]
>>> [  425.276058]  ib_uverbs_cmd_verbs.isra.6+0x5be/0xbe0 [ib_uverbs]
>>> [  425.277657]  ? uverbs_disassociate_api+0xd0/0xd0 [ib_uverbs]
>>> [  425.279155]  ? __alloc_pages_nodemask+0x148/0x2b0
>>> [  425.280445]  ib_uverbs_ioctl+0xc0/0x120 [ib_uverbs]
>>> [  425.281755]  do_vfs_ioctl+0x9d/0x650
>>> [  425.282766]  ksys_ioctl+0x70/0x80
>>> [  425.283745]  __x64_sys_ioctl+0x16/0x20
>>> [  425.284912]  do_syscall_64+0x42/0x130
>>> [  425.285973]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>>> [  425.287377] RIP: 0033:0x7f780d2df267
>>> [  425.288449] Code: b3 66 90 48 8b 05 19 3c 2c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d e9 3b 2c 00 f7 d8 64 89 01 48
>>> [  425.293073] RSP: 002b:00007ffce49a88a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
>>> [  425.295034] RAX: ffffffffffffffda RBX: 00007ffce49a8938 RCX: 00007f780d2df267
>>> [  425.296895] RDX: 00007ffce49a8920 RSI: 00000000c0181b01 RDI: 0000000000000003
>>> [  425.298689] RBP: 00007ffce49a8900 R08: 0000000000000003 R09: 00007f780d9a1010
>>> [  425.300480] R10: 00000000ffffffff R11: 0000000000000246 R12: 00007f780d9a1150
>>> [  425.302290] R13: 00007ffce49a8900 R14: 00007ffce49a8ad8 R15: 00007f780468a000
>>> [  425.304113] ---[ end trace 1ecbefdb403190dd ]---
>>> [  425.305434] ------------[ cut here ]------------
>>> [  425.307147] WARNING: CPU: 1 PID: 6738 at mm/gup.c:150 try_grab_page+0x56/0x60
>>> [  425.309111] Modules linked in: mlx5_ib mlx5_core mlxfw mlx4_ib mlx4_en ptp pps_core mlx4_core bonding ip6_gre ip6_tunnel tunnel6 ip_gre gre ip_tunnel rdma_rxe ip6_udp_tunnel udp_tunnel rdma_ucm ib_uverbs ib_ipoib ib_umad ib_srp scsi_transport_srp rpcrdma ib_iser libiscsi scsi_transport_iscsi rdma_cm iw_cm ib_cm ib_core [last unloaded: mlxfw]
>>> [  425.316461] CPU: 1 PID: 6738 Comm: consume_mtts Tainted: G        W  O      5.5.0-rc2+ #1
>>> [  425.318582] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
>>> [  425.320958] RIP: 0010:try_grab_page+0x56/0x60
>>> [  425.322167] Code: 7e 28 f0 81 47 34 00 01 00 00 c3 48 8b 47 08 48 8d 50 ff a8 01 48 0f 45 fa 8b 47 34 85 c0 7e 0f f0 ff 47 34 b8 01 00 00 00 c3 <0f> 0b 31 c0 c3 0f 0b 31 c0 c3 0f 1f 44 00 00 41 57 41 56 41 55 41
>>> [  425.326814] RSP: 0018:ffffc90000687830 EFLAGS: 00010282
>>> [  425.328226] RAX: 0000000000000001 RBX: ffffea000dc58000 RCX: ffffea000e031087
>>> [  425.330104] RDX: 0000000080000001 RSI: 0000000000040000 RDI: ffffea000dc58000
>>> [  425.331980] RBP: 00007f7804800000 R08: 000ffffffffff000 R09: 80000003716000e7
>>> [  425.333898] R10: ffff88834af80120 R11: ffff8883ac16f000 R12: ffff88834af80120
>>> [  425.335704] R13: ffff88837c0915c0 R14: 0000000000050201 R15: 00007f7804800000
>>> [  425.337638] FS:  00007f780d9d3740(0000) GS:ffff8883b1c80000(0000) knlGS:0000000000000000
>>> [  425.339734] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> [  425.341369] CR2: 0000000002334048 CR3: 000000039c68c001 CR4: 00000000001606a0
>>> [  425.343160] Call Trace:
>>> [  425.343967]  follow_trans_huge_pmd+0x16f/0x2e0
>>> [  425.345263]  follow_p4d_mask+0x51c/0x630
>>> [  425.346344]  __get_user_pages+0x1a1/0x6c0
>>> [  425.347463]  internal_get_user_pages_fast+0x17b/0x250
>>> [  425.348918]  ib_umem_get+0x298/0x550 [ib_uverbs]
>>> [  425.350174]  mr_umem_get+0xc9/0x260 [mlx5_ib]
>>> [  425.351383]  mlx5_ib_reg_user_mr+0xcc/0x7e0 [mlx5_ib]
>>> [  425.352849]  ? xas_load+0x8/0x80
>>> [  425.353776]  ? xa_load+0x48/0x90
>>> [  425.354730]  ? lookup_get_idr_uobject.part.10+0x12/0x70 [ib_uverbs]
>>> [  425.356410]  ib_uverbs_reg_mr+0x127/0x280 [ib_uverbs]
>>> [  425.357843]  ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0xc2/0xf0 [ib_uverbs]
>>> [  425.359749]  ib_uverbs_cmd_verbs.isra.6+0x5be/0xbe0 [ib_uverbs]
>>> [  425.361405]  ? uverbs_disassociate_api+0xd0/0xd0 [ib_uverbs]
>>> [  425.362898]  ? __alloc_pages_nodemask+0x148/0x2b0
>>> [  425.364206]  ib_uverbs_ioctl+0xc0/0x120 [ib_uverbs]
>>> [  425.365564]  do_vfs_ioctl+0x9d/0x650
>>> [  425.366567]  ksys_ioctl+0x70/0x80
>>> [  425.367537]  __x64_sys_ioctl+0x16/0x20
>>> [  425.368698]  do_syscall_64+0x42/0x130
>>> [  425.369782]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>>> [  425.371117] RIP: 0033:0x7f780d2df267
>>> [  425.372159] Code: b3 66 90 48 8b 05 19 3c 2c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d e9 3b 2c 00 f7 d8 64 89 01 48
>>> [  425.376774] RSP: 002b:00007ffce49a88a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
>>> [  425.378740] RAX: ffffffffffffffda RBX: 00007ffce49a8938 RCX: 00007f780d2df267
>>> [  425.380598] RDX: 00007ffce49a8920 RSI: 00000000c0181b01 RDI: 0000000000000003
>>> [  425.382411] RBP: 00007ffce49a8900 R08: 0000000000000003 R09: 00007f780d9a1010
>>> [  425.384312] R10: 00000000ffffffff R11: 0000000000000246 R12: 00007f780d9a1150
>>> [  425.386132] R13: 00007ffce49a8900 R14: 00007ffce49a8ad8 R15: 00007f780468a000
>>> [  425.387964] ---[ end trace 1ecbefdb403190de ]---
>>>


