linux-kernel.vger.kernel.org archive mirror
From: Mike Galbraith <efault@gmx.de>
To: Oleksandr Natalenko <oleksandr@natalenko.name>,
	linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Steven Rostedt <rostedt@goodmis.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	linux-rt-users@vger.kernel.org
Subject: Re: scheduling while atomic in z3fold
Date: Sun, 29 Nov 2020 10:21:33 +0100	[thread overview]
Message-ID: <f1c39a0504310a97e42b667fc4d458af4a86d97a.camel@gmx.de> (raw)
In-Reply-To: <79ee43026efe5aaa560953ea8fe29a826ac4e855.camel@gmx.de>

On Sun, 2020-11-29 at 08:48 +0100, Mike Galbraith wrote:
> On Sun, 2020-11-29 at 07:41 +0100, Mike Galbraith wrote:
> > On Sat, 2020-11-28 at 15:27 +0100, Oleksandr Natalenko wrote:
> > >
> > > > > Shouldn't the list manipulation be protected with
> > > > > local_lock+this_cpu_ptr instead of get_cpu_ptr+spin_lock?
> > >
> > > Totally untested:
> >
> > Hrm, the thing doesn't seem to care deeply about preemption being
> > disabled, so adding another lock may be overkill.  It looks like you
> > could get the job done via migrate_disable()+this_cpu_ptr().
>
> There is however an ever so tiny chance that I'm wrong about that :)

Or not, your local_lock+this_cpu_ptr version exploded too.

Perhaps there's a bit of non-rt related raciness going on in the zswap
thingy that makes swap an even less wonderful idea for RT than usual.
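
For reference, a minimal sketch of the locking patterns being compared
(illustrative only -- not the actual z3fold code nor Oleksandr's patch;
the pool->unbuddied, pool->lock and unbuddied_lock names are assumptions):

	/* Problematic on PREEMPT_RT: get_cpu_ptr() disables preemption,
	 * and spinlock_t is a sleeping lock on RT, hence the
	 * "scheduling while atomic" splat. */
	struct list_head *unbuddied = get_cpu_ptr(pool->unbuddied);
	spin_lock(&pool->lock);			/* may sleep on RT */
	/* ... manipulate the per-CPU unbuddied lists ... */
	spin_unlock(&pool->lock);
	put_cpu_ptr(pool->unbuddied);

	/* local_lock variant (the direction suggested above): a per-CPU
	 * local_lock_t serializes access without disabling preemption
	 * on RT.  Defined at file scope: */
	static DEFINE_PER_CPU(local_lock_t, unbuddied_lock) =
		INIT_LOCAL_LOCK(unbuddied_lock);

	local_lock(&unbuddied_lock);
	unbuddied = this_cpu_ptr(pool->unbuddied);
	/* ... manipulate the per-CPU unbuddied lists ... */
	local_unlock(&unbuddied_lock);

	/* migrate_disable() variant: pins the task to this CPU so
	 * this_cpu_ptr() stays stable while sleeping locks remain legal. */
	migrate_disable();
	unbuddied = this_cpu_ptr(pool->unbuddied);
	spin_lock(&pool->lock);
	/* ... manipulate the per-CPU unbuddied lists ... */
	spin_unlock(&pool->lock);
	migrate_enable();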

crash.rt> bt -s
PID: 32399  TASK: ffff8e4528cd8000  CPU: 4   COMMAND: "cc1"
 #0 [ffff9f0f1228f488] machine_kexec+366 at ffffffff8c05f87e
 #1 [ffff9f0f1228f4d0] __crash_kexec+210 at ffffffff8c14c052
 #2 [ffff9f0f1228f590] crash_kexec+48 at ffffffff8c14d240
 #3 [ffff9f0f1228f5a0] oops_end+202 at ffffffff8c02680a
 #4 [ffff9f0f1228f5c0] exc_general_protection+403 at ffffffff8c8be413
 #5 [ffff9f0f1228f650] asm_exc_general_protection+30 at ffffffff8ca00a0e
 #6 [ffff9f0f1228f6d8] __z3fold_alloc+118 at ffffffff8c2b4ea6
 #7 [ffff9f0f1228f760] z3fold_zpool_malloc+115 at ffffffff8c2b56c3
 #8 [ffff9f0f1228f7c8] zswap_frontswap_store+789 at ffffffff8c27d335
 #9 [ffff9f0f1228f828] __frontswap_store+110 at ffffffff8c27bafe
#10 [ffff9f0f1228f858] swap_writepage+55 at ffffffff8c273b17
#11 [ffff9f0f1228f870] shmem_writepage+612 at ffffffff8c232964
#12 [ffff9f0f1228f8a8] pageout+210 at ffffffff8c225f12
#13 [ffff9f0f1228f928] shrink_page_list+2428 at ffffffff8c22744c
#14 [ffff9f0f1228f9c0] shrink_inactive_list+534 at ffffffff8c229746
#15 [ffff9f0f1228fa68] shrink_lruvec+927 at ffffffff8c22a35f
#16 [ffff9f0f1228fb78] shrink_node+567 at ffffffff8c22a7d7
#17 [ffff9f0f1228fbf8] do_try_to_free_pages+185 at ffffffff8c22ad39
#18 [ffff9f0f1228fc40] try_to_free_pages+201 at ffffffff8c22c239
#19 [ffff9f0f1228fcd0] __alloc_pages_slowpath.constprop.111+1056 at ffffffff8c26eb70
#20 [ffff9f0f1228fda8] __alloc_pages_nodemask+786 at ffffffff8c26f7e2
#21 [ffff9f0f1228fe00] alloc_pages_vma+309 at ffffffff8c288f15
#22 [ffff9f0f1228fe40] handle_mm_fault+1687 at ffffffff8c24ee97
#23 [ffff9f0f1228fef8] exc_page_fault+821 at ffffffff8c8c1be5
#24 [ffff9f0f1228ff50] asm_exc_page_fault+30 at ffffffff8ca00ace
    RIP: 00000000010fea3b  RSP: 00007ffc88ad5a50  RFLAGS: 00010206
    RAX: 00007f4a548d1638  RBX: 00007f4a5c0c5000  RCX: 0000000000000000
    RDX: 0000000000020000  RSI: 000000000000000f  RDI: 0000000000018001
    RBP: 00007f4a5547a000   R8: 0000000000018000   R9: 0000000000240000
    R10: 00007f4a5523a000  R11: 0000000000098628  R12: 00007f4a548d1638
    R13: 00007f4a54ab1478  R14: 000000005002ac29  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: 0033  SS: 002b
crash.rt>



Thread overview: 38+ messages
2020-11-28 14:05 scheduling while atomic in z3fold Oleksandr Natalenko
2020-11-28 14:09 ` Oleksandr Natalenko
2020-11-28 14:27   ` Oleksandr Natalenko
2020-11-29  6:41     ` Mike Galbraith
2020-11-29  7:48       ` Mike Galbraith
2020-11-29  9:21         ` Mike Galbraith [this message]
2020-11-29 10:56           ` Mike Galbraith
2020-11-29 11:29             ` Oleksandr Natalenko
2020-11-29 11:41               ` Mike Galbraith
2020-11-30 13:20                 ` Sebastian Andrzej Siewior
2020-11-30 13:53                   ` Oleksandr Natalenko
2020-11-30 14:28                     ` Sebastian Andrzej Siewior
2020-11-30 14:42                   ` Mike Galbraith
2020-11-30 14:52                     ` Sebastian Andrzej Siewior
2020-11-30 15:01                       ` Mike Galbraith
2020-11-30 15:03                         ` Mike Galbraith
2020-11-30 16:03                         ` Sebastian Andrzej Siewior
2020-11-30 16:27                           ` Mike Galbraith
2020-11-30 16:32                             ` Sebastian Andrzej Siewior
2020-11-30 16:36                               ` Mike Galbraith
2020-11-30 19:09                               ` Mike Galbraith
2020-11-30 16:53                             ` Mike Galbraith
2020-12-02  2:30                           ` Mike Galbraith
2020-12-02 22:08                             ` Sebastian Andrzej Siewior
2020-12-03  2:16                               ` Mike Galbraith
2020-12-03  8:18                                 ` Mike Galbraith
2020-12-03 13:39                                   ` Sebastian Andrzej Siewior
2020-12-03 14:07                                     ` Vitaly Wool
2020-12-06  9:18                                     ` Mike Galbraith
     [not found]                                       ` <cad7848c-7fd3-b4a4-c079-5896bb47ee49@konsulko.com>
2020-12-07  2:18                                         ` Mike Galbraith
2020-12-07 11:52                                           ` Vitaly Wool
2020-12-07 12:34                                             ` Mike Galbraith
2020-12-07 15:21                                               ` Vitaly Wool
2020-12-07 15:41                                                 ` Sebastian Andrzej Siewior
2020-12-07 15:41                                                 ` Mike Galbraith
2020-12-08 23:26                                                   ` Vitaly Wool
2020-12-09  6:13                                                     ` Mike Galbraith
2020-12-09  6:31                                                       ` Mike Galbraith
