From: Michal Hocko <mhocko@kernel.org>
To: coverity-bot <keescook@chromium.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
"David S. Miller" <davem@davemloft.net>,
Andrew Morton <akpm@linux-foundation.org>,
Vladimir Davydov <vdavydov@virtuozzo.com>,
Linus Torvalds <torvalds@linux-foundation.org>,
"Gustavo A. R. Silva" <gustavo@embeddedor.com>,
linux-next@vger.kernel.org
Subject: Re: Coverity: reclaim_high(): Error handling issues
Date: Tue, 29 Oct 2019 14:13:58 +0100
Message-ID: <20191029131358.GM31513@dhcp22.suse.cz>
In-Reply-To: <201910281604.EC4A108@keescook>
On Mon 28-10-19 16:04:23, Kees Cook wrote:
> Hello!
>
> This is an experimental automated report about issues detected by Coverity
> from a scan of next-20191025 as part of the linux-next weekly scan project:
> https://scan.coverity.com/projects/linux-next-weekly-scan
>
> You're getting this email because you were associated with the identified
> lines of code (noted below) that were touched by recent commits:
>
> f7e1cb6ec51b ("mm: memcontrol: account socket memory in unified hierarchy memory controller")
JFTR this seems misleading wrt the issue reported here AFAICS. The
code was there even before this commit.
> Coverity reported the following:
>
> *** CID 1487368: Error handling issues (CHECKED_RETURN)
> /mm/memcontrol.c: 2343 in reclaim_high()
> 2337 gfp_t gfp_mask)
> 2338 {
> 2339 do {
> 2340 if (page_counter_read(&memcg->memory) <= memcg->high)
> 2341 continue;
> 2342 memcg_memory_event(memcg, MEMCG_HIGH);
> vvv CID 1487368: Error handling issues (CHECKED_RETURN)
> vvv Calling "try_to_free_mem_cgroup_pages" without checking return value (as is done elsewhere 5 out of 6 times).
> 2343 try_to_free_mem_cgroup_pages(memcg, nr_pages, gfp_mask, true);
Yes, we do not check the return value here because the high limit is
a best-effort feature. A more high-level throttling, which is not based
directly on the reclaim, is implemented in mem_cgroup_handle_over_high().
> 2344 } while ((memcg = parent_mem_cgroup(memcg)));
> 2345 }
> 2346
> 2347 static void high_work_func(struct work_struct *work)
> 2348 {
>
> If this is a false positive, please let us know so we can mark it as
> such, or teach the Coverity rules to be smarter. If not, please make
> sure fixes get into linux-next. :) For patches fixing this, please
> include:
>
> Reported-by: coverity-bot <keescook+coverity-bot@chromium.org>
> Addresses-Coverity-ID: 1487368 ("Error handling issues")
> Fixes: f7e1cb6ec51b ("mm: memcontrol: account socket memory in unified hierarchy memory controller")
>
>
> Thanks for your attention!
>
> --
> Coverity-bot
--
Michal Hocko
SUSE Labs