From: David Hildenbrand <david@redhat.com>
To: Luis Chamberlain <mcgrof@kernel.org>
Cc: Prarit Bhargava <prarit@redhat.com>,
pmladek@suse.com, Petr Pavlu <petr.pavlu@suse.com>,
linux-modules@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/2] module: Merge same-name module load requests
Date: Fri, 18 Nov 2022 18:32:54 +0100 [thread overview]
Message-ID: <8c826c96-62ec-2f72-c4cb-30139d5639d1@redhat.com> (raw)
In-Reply-To: <Y3Pol5H4kJioAV9W@bombadil.infradead.org>
On 15.11.22 20:29, Luis Chamberlain wrote:
> On Mon, Nov 14, 2022 at 04:45:05PM +0100, David Hildenbrand wrote:
>> Note that I don't think the issue I raised is due to 6e6de3dee51a.
>> I don't have the machine at hand right now. But, again, I doubt this will
>> fix it.
>
> There are *more* modules processed after that commit. That's all. So
> testing would be appreciated.
I just tested that change on top of 6.1.0-rc5+ on that large system
with CONFIG_KASAN_INLINE=y. No change.
[ 207.955184] vmap allocation for size 2490368 failed: use vmalloc=<size> to increase size
[ 207.955891] vmap allocation for size 2490368 failed: use vmalloc=<size> to increase size
[ 207.956253] vmap allocation for size 2490368 failed: use vmalloc=<size> to increase size
[ 207.956461] systemd-udevd: vmalloc error: size 2486272, vm_struct allocation failed, mode:0xcc0(GFP_KERNEL), nodemask=(null),cpuset=/,mems_allowed=1-7
[ 207.956573] CPU: 88 PID: 4925 Comm: systemd-udevd Not tainted 6.1.0-rc5+ #4
[ 207.956580] Hardware name: Lenovo ThinkSystem SR950 -[7X12ABC1WW]-/-[7X12ABC1WW]-, BIOS -[PSE130O-1.81]- 05/20/2020
[ 207.956584] Call Trace:
[ 207.956588] <TASK>
[ 207.956593] vmap allocation for size 2490368 failed: use vmalloc=<size> to increase size
[ 207.956593] dump_stack_lvl+0x5b/0x77
[ 207.956613] warn_alloc.cold+0x86/0x195
[ 207.956632] ? zone_watermark_ok_safe+0x2b0/0x2b0
[ 207.956641] ? slab_free_freelist_hook+0x11e/0x1d0
[ 207.956672] ? __get_vm_area_node+0x2a4/0x340
[ 207.956694] __vmalloc_node_range+0xad6/0x11b0
[ 207.956699] ? trace_contention_end+0xda/0x140
[ 207.956715] ? __mutex_lock+0x254/0x1360
[ 207.956740] ? __mutex_unlock_slowpath+0x154/0x600
[ 207.956752] ? bit_wait_io_timeout+0x170/0x170
[ 207.956761] ? vfree_atomic+0xa0/0xa0
[ 207.956775] ? load_module+0x1d8f/0x7ff0
[ 207.956786] module_alloc+0xe7/0x170
[ 207.956802] ? load_module+0x1d8f/0x7ff0
[ 207.956822] load_module+0x1d8f/0x7ff0
[ 207.956876] ? module_frob_arch_sections+0x20/0x20
[ 207.956888] ? ima_post_read_file+0x15a/0x180
[ 207.956904] ? ima_read_file+0x140/0x140
[ 207.956918] ? kernel_read+0x5c/0x140
[ 207.956931] ? security_kernel_post_read_file+0x6d/0xb0
[ 207.956950] ? kernel_read_file+0x21d/0x7d0
[ 207.956971] ? __x64_sys_fspick+0x270/0x270
[ 207.956999] ? __do_sys_finit_module+0xfc/0x180
[ 207.957005] __do_sys_finit_module+0xfc/0x180
[ 207.957012] ? __ia32_sys_init_module+0xa0/0xa0
[ 207.957023] ? __seccomp_filter+0x15e/0xc20
[ 207.957066] ? syscall_trace_enter.constprop.0+0x98/0x230
[ 207.957078] do_syscall_64+0x58/0x80
[ 207.957085] ? asm_exc_page_fault+0x22/0x30
[ 207.957095] ? lockdep_hardirqs_on+0x7d/0x100
[ 207.957103] entry_SYSCALL_64_after_hwframe+0x63/0xcd
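For anyone trying to reproduce or monitor this, a rough way to watch vmalloc
address-space pressure while udev loads modules is to read the standard procfs
counters (a hedged sketch, not part of the original report; the procfs paths
are standard Linux, but whether the relevant mappings show up under a
`module_alloc` caller in /proc/vmallocinfo depends on kernel version and
config):

```shell
# Show total vmalloc address space and how much is currently mapped.
# With CONFIG_KASAN_INLINE=y each module mapping is substantially larger
# than the module's text+data, so concurrent loads inflate this quickly.
grep -E 'Vmalloc(Total|Used)' /proc/meminfo

# /proc/vmallocinfo (readable by root only) lists individual mappings
# with their caller; module allocations typically appear with
# module_alloc/load_module as the caller, e.g.:
#   sudo grep module_alloc /proc/vmallocinfo
```

This only observes the symptom; it doesn't say whether the failures come from
genuine exhaustion or from many same-name loads allocating in parallel before
the duplicate requests are merged.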
I have access to the system for a couple more days, so let me know if there
is anything else I should test.
--
Thanks,
David / dhildenb
Thread overview: 43+ messages
2022-09-19 12:32 [PATCH v2 0/2] module: Merge same-name module load requests Petr Pavlu
2022-09-19 12:32 ` [PATCH v2 1/2] module: Correct wake up of module_wq Petr Pavlu
2022-09-30 20:22 ` Luis Chamberlain
2022-10-14 8:40 ` Petr Mladek
2022-09-19 12:32 ` [PATCH v2 2/2] module: Merge same-name module load requests Petr Pavlu
2022-09-30 20:30 ` Luis Chamberlain
2022-10-15 9:27 ` Petr Pavlu
2022-10-18 18:33 ` Luis Chamberlain
2022-10-18 19:19 ` Prarit Bhargava
2022-10-18 19:53 ` Prarit Bhargava
2022-10-20 7:19 ` Petr Mladek
2022-10-24 13:22 ` Prarit Bhargava
2022-10-24 17:08 ` Luis Chamberlain
2022-10-24 12:37 ` Petr Pavlu
2022-10-24 14:00 ` Prarit Bhargava
2022-11-13 16:44 ` Petr Pavlu
2022-10-19 12:00 ` Petr Pavlu
2022-10-20 7:03 ` Petr Mladek
2022-10-24 17:53 ` Luis Chamberlain
2022-11-12 1:47 ` Luis Chamberlain
2022-11-14 8:57 ` David Hildenbrand
2022-11-14 15:38 ` Luis Chamberlain
2022-11-14 15:45 ` David Hildenbrand
2022-11-15 19:29 ` Luis Chamberlain
2022-11-16 16:03 ` Prarit Bhargava
2022-11-21 16:00 ` Petr Pavlu
2022-11-21 19:03 ` Luis Chamberlain
2022-11-21 19:50 ` David Hildenbrand
2022-11-21 20:27 ` Luis Chamberlain
2022-11-22 13:59 ` Petr Pavlu
2022-11-22 17:58 ` Luis Chamberlain
2022-11-16 16:04 ` David Hildenbrand
2022-11-18 17:32 ` David Hildenbrand [this message]
2022-11-28 16:29 ` Prarit Bhargava
2022-11-29 13:13 ` Petr Pavlu
2022-12-02 16:36 ` Petr Mladek
2022-12-06 12:31 ` Prarit Bhargava
2022-12-07 13:23 ` Petr Pavlu
2022-12-04 19:58 ` Prarit Bhargava
2022-10-14 7:54 ` David Hildenbrand
2022-10-15 9:49 ` Petr Pavlu
2022-10-14 13:52 ` Petr Mladek
2022-10-16 12:25 ` Petr Pavlu