From: Joe Lawrence <joe.lawrence@redhat.com>
To: Johannes Erdfelt <johannes@erdfelt.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>,
Jessica Yu <jeyu@kernel.org>, Jiri Kosina <jikos@kernel.org>,
Miroslav Benes <mbenes@suse.cz>,
Steven Rostedt <rostedt@goodmis.org>,
Ingo Molnar <mingo@redhat.com>,
live-patching@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: Oops caused by race between livepatch and ftrace
Date: Mon, 20 May 2019 17:19:59 -0400 [thread overview]
Message-ID: <1802c0d2-702f-08ec-6a85-c7f887eb6d14@redhat.com> (raw)
In-Reply-To: <20190520210905.GC1646@sventech.com>
On 5/20/19 5:09 PM, Johannes Erdfelt wrote:
> On Mon, May 20, 2019, Joe Lawrence <joe.lawrence@redhat.com> wrote:
>> [ fixed jeyu's email address ]
>
> Thank you, the bounce message made it seem like my mail server was
> blocked and not that the address didn't exist.
>
> I think MAINTAINERS needs an update since it still has the @redhat.com
> address.
>
Here's how it looks on my end:
% git describe HEAD
v5.1-12317-ga6a4b66bd8f4
% grep M:.*jeyu MAINTAINERS
M: Jessica Yu <jeyu@kernel.org>
>> On 5/20/19 3:49 PM, Johannes Erdfelt wrote:
>>> [ ... snip ... ]
>>>
>>> I have put together a test case that can reproduce the crash using
>>> KVM. The tarball includes a minimal kernel and initramfs, along with
>>> a script to run qemu and the .config used to build the kernel. By
>>> default it will attempt to reproduce by loading multiple livepatches
>>> at the same time. Passing 'test=ftrace' to the script will attempt to
>>> reproduce by racing with ftrace.
>>>
>>> My test setup reproduces the race and oops more reliably by loading
>>> multiple livepatches at the same time than with the ftrace method. It's
>>> not 100% reproducible, so the test case may need to be run multiple
>>> times.
>>>
>>> It can be found here (not attached because of its size):
>>> http://johannes.erdfelt.com/5.2.0-rc1-a188339ca5-livepatch-race.tar.gz
>>
>> Hi Johannes,
>>
>> This is a cool way to distribute the repro kernel, modules, etc.!
>
> This oops was common in our production environment and was particularly
> annoying since livepatches would load at boot and early enough to happen
> before networking and SSH were started.
>
> Unfortunately it was difficult to reproduce on other hardware (changing
> the timing just enough) and our production environment is very
> complicated.
>
> I spent more time than I'd like to admit trying to reproduce this fairly
> reliably. I knew that I needed to help make it as easy as possible to
> reproduce to root cause it and for others to take a look at it as well.
>
Thanks for building this test image -- it repro'd on the first try for me.

Hmm, I wonder how reproducible it would be if we simply extracted the
.ko's and test scripts from your initramfs and ran them on arbitrary
machines.

I think the rcutorture self-tests use qemu/kvm to fire up test VMs, but
I dunno if the livepatch self-tests are ready for that level of
sophistication yet :)  Will need to think on that a bit.
>> These two testing scenarios might be interesting to add to our selftests
>> suite. Can you post or add the source(s) to livepatch-test<n>.ko to the
>> tarball?
>
> I made the livepatches using kpatch-build and this simple patch:
>
> diff --git a/fs/proc/version.c b/fs/proc/version.c
> index 94901e8e700d..6b8a3449f455 100644
> --- a/fs/proc/version.c
> +++ b/fs/proc/version.c
> @@ -12,6 +12,7 @@ static int version_proc_show(struct seq_file *m, void *v)
> utsname()->sysname,
> utsname()->release,
> utsname()->version);
> + seq_printf(m, "example livepatch\n");
> return 0;
> }
>
> I just created enough livepatches with the same source patch so that I
> could reproduce the issue somewhat reliably.
>
> I'll see if I can make something that uses klp directly.
Ah ok great, I was hoping it was a relatively simple livepatch. We
could probably reuse lib/livepatch/test_klp_livepatch.c to do this
(patching cmdline_proc_show instead).
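For reference, a minimal klp module along those lines would look roughly
like the in-tree samples/livepatch/livepatch-sample.c, assuming the
post-v5.1 API where klp_enable_patch() is called directly from module
init; the replacement function body here is purely illustrative:

```c
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/livepatch.h>
#include <linux/seq_file.h>

/* Replacement for fs/proc/cmdline.c:cmdline_proc_show() */
static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
{
	seq_printf(m, "%s\n", "this has been live patched");
	return 0;
}

static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",
		.new_func = livepatch_cmdline_proc_show,
	}, { }
};

static struct klp_object objs[] = {
	{
		/* .name = NULL patches vmlinux itself */
		.funcs = funcs,
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
};

static int livepatch_init(void)
{
	return klp_enable_patch(&patch);
}

static void livepatch_exit(void)
{
}

module_init(livepatch_init);
module_exit(livepatch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");
```

Building N copies of that under different module names should give a
kpatch-free equivalent of the livepatch-test<n>.ko modules in the tarball.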
> The rest of the userspace in the initramfs is really straightforward,
> with the only interesting parts being a couple of shell scripts.
Yup. I'll be on PTO later this week, but I'll see about extracting the
scripts and building a pile of livepatch .ko's to see how easily it
reproduces without qemu.
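The parallel-load part of the reproducer presumably boils down to
something like the following sketch (module names and count are
assumptions, not the actual script from the tarball), which needs root
on a test box and widens the race window by registering several patches
concurrently:

```
# load several identical livepatch modules in parallel
for i in $(seq 1 8); do
    insmod livepatch-test$i.ko &
done
wait
```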
Thanks,
-- Joe