From: Michal Hocko <mhocko@suse.com>
To: "Christian König" <christian.koenig@amd.com>
Cc: Peter.Enderborg@sony.com, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, sumit.semwal@linaro.org, adobriyan@gmail.com, akpm@linux-foundation.org, songmuchun@bytedance.com, guro@fb.com, shakeelb@google.com, neilb@suse.de, samitolvanen@google.com, rppt@kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, willy@infradead.org
Subject: Re: [PATCH v4] dma-buf: Add DmaBufTotal counter in meminfo
Date: Tue, 20 Apr 2021 09:46:17 +0200
Message-ID: <YH6GyThr2mPrM6h5@dhcp22.suse.cz>
In-Reply-To: <b89c84da-65d2-35df-7249-ea8edc0bee9b@amd.com>

On Tue 20-04-21 09:32:14, Christian König wrote:
> Am 20.04.21 um 09:04 schrieb Michal Hocko:
> > On Mon 19-04-21 18:37:13, Christian König wrote:
> > > Am 19.04.21 um 18:11 schrieb Michal Hocko:
[...]
> > What I am trying to bring up with the NUMA side is that the same problem
> > can happen on a per-node basis. Let's say that some user consumes an
> > unexpectedly large amount of dma-buf on a certain node. This can lead to
> > an observable performance impact on anybody allocating from that node
> > and, even worse, cause an OOM for node-bound consumers. How do I find
> > out that it was dma-buf that caused the problem?
>
> Yes, that is the direction my thinking goes as well, but also even further.
>
> See, DMA-buf is also used to share device-local memory between processes
> as well. In other words, VRAM on graphics hardware.
>
> On my test system here I have 32GB of system memory and 16GB of VRAM. I
> can use DMA-buf to allocate that 16GB of VRAM quite easily, which then
> shows up under /proc/meminfo as used memory.

This is something that would be really interesting in the changelog. I mean
the expected and extreme memory consumption of this memory. Ideally with
some hints on what to do when the number is really high (e.g. mount debugfs
and have a look here and there to check whether this is just too many users
or an unexpected pattern to be reported).

> But that isn't really system memory at all, it's just allocated device
> memory.

OK, that was not really clear to me. So this is not really accounted to
MemTotal? If that is really the case, then reporting it in the OOM report
is completely pointless, and I am not even sure /proc/meminfo is the right
interface either. It would just add more confusion, I am afraid.

> > See where I am heading?
>
> Yeah, totally. Thanks for pointing this out.
>
> Suggestions how to handle that?

As I've pointed out in a previous reply, we do have an API to account
per-node memory, but now that you have brought up that this is not
something we account as regular memory, it doesn't really fit into that
model. But maybe I am just confused.
--
Michal Hocko
SUSE Labs
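[Editorial note: the "mount debugfs and have a look here and there" suggestion above can be sketched concretely. The script below is a hedged illustration: the `dma_buf/bufinfo` file is provided by kernels built with dma-buf debugfs support, and the `DmaBufTotal` meminfo line is what the patch under discussion would add; neither is guaranteed to exist on a given system, and reading debugfs typically requires root.]

```shell
#!/bin/sh
# Sketch: investigating a suspiciously high dma-buf memory footprint.
# Assumes a kernel with dma-buf debugfs support; paths may be absent.

# Make sure debugfs is mounted at the conventional location.
grep -qs debugfs /proc/mounts || \
    mount -t debugfs none /sys/kernel/debug 2>/dev/null || true

# bufinfo lists every live dma-buf with its size, exporter name, and
# attached devices -- enough to tell "many small users" apart from one
# runaway allocator.
if [ -r /sys/kernel/debug/dma_buf/bufinfo ]; then
    head -n 20 /sys/kernel/debug/dma_buf/bufinfo
else
    echo "dma-buf debugfs info not readable (need root / debugfs support)"
fi

# The aggregate counter the patch under discussion would expose.
grep -i dmabuf /proc/meminfo || echo "no DmaBufTotal line in this kernel"
```

Note that, per the thread, the aggregate number alone cannot distinguish system-memory dma-bufs from device-local VRAM exports, which is exactly the accounting ambiguity being debated.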
Thread overview (58 messages; each was archived on two lists, shown once here):

2021-04-17 10:40 [PATCH v4] dma-buf: Add DmaBufTotal counter in meminfo -- Peter Enderborg
2021-04-17 10:59 ` Christian König
2021-04-17 11:20 ` Peter.Enderborg
2021-04-17 11:54 ` Christian König
2021-04-17 12:13 ` Peter.Enderborg
2021-04-20  8:39 ` Daniel Vetter
2021-04-17 13:07 ` [External] " Muchun Song
2021-04-17 13:43 ` Peter.Enderborg
2021-04-17 14:21 ` Muchun Song
2021-04-17 15:03 ` Christian König
2021-04-19 12:16 ` Michal Hocko
2021-04-19 12:41 ` Peter.Enderborg
2021-04-19 15:00 ` Michal Hocko
2021-04-19 15:19 ` Peter.Enderborg
2021-04-19 15:44 ` Christian König
2021-04-19 16:11 ` Michal Hocko
2021-04-19 16:37 ` Christian König
2021-04-20  7:04 ` Michal Hocko
2021-04-20  7:20 ` Mike Rapoport
2021-04-20  7:47 ` Michal Hocko
2021-04-20  7:32 ` Christian König
2021-04-20  7:46 ` Michal Hocko [this message]
2021-04-20  8:00 ` Christian König
2021-04-20  8:28 ` Michal Hocko
2021-04-20  9:02 ` Peter.Enderborg
2021-04-20  9:12 ` Michal Hocko
2021-04-20  9:25 ` Peter.Enderborg
2021-04-20 11:04 ` Michal Hocko
2021-04-20 11:24 ` Peter.Enderborg