Linux-RISC-V Archive on lore.kernel.org
From: Atish Patra <Atish.Patra@wdc.com>
To: "hch@infradead.org" <hch@infradead.org>
Cc: "aou@eecs.berkeley.edu" <aou@eecs.berkeley.edu>,
	"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>,
	Anup Patel <Anup.Patel@wdc.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"alexios.zavras@intel.com" <alexios.zavras@intel.com>,
	"palmer@sifive.com" <palmer@sifive.com>,
	"paul.walmsley@sifive.com" <paul.walmsley@sifive.com>,
	"linux-riscv@lists.infradead.org"
	<linux-riscv@lists.infradead.org>,
	"allison@lohutok.net" <allison@lohutok.net>
Subject: Re: [PATCH] RISC-V: Issue a local tlb flush if possible.
Date: Thu, 15 Aug 2019 20:37:04 +0000
Message-ID: <3f55d5878044129a3cbb72b13b712e9a1c218dc7.camel@wdc.com>
In-Reply-To: <20190813143027.GA31668@infradead.org>

On Tue, 2019-08-13 at 07:30 -0700, hch@infradead.org wrote:
> On Tue, Aug 13, 2019 at 12:15:15AM +0000, Atish Patra wrote:
> > I thought that if it receives an empty cpumask, the expected
> > behavior is to at least do a local flush. Are you saying that we
> > should just skip it entirely and return?
> 
> How could we ever receive an empty cpu mask?  I think it could only
> be empty without the current cpu.  At least that is my reading of
> the callers and a few other implementations.
> 

We get a ton of them. Here is a stack dump:

[   16.735814] [<ffffffe000035498>] walk_stackframe+0x0/0xa0
[   16.819037] [<ffffffe0000355f8>] show_stack+0x2a/0x34
[   16.899648] [<ffffffe00067b54c>] dump_stack+0x62/0x7c
[   16.977402] [<ffffffe0000ef6f6>] tlb_flush_mmu+0x14a/0x150
[   17.054197] [<ffffffe0000ef7a4>] tlb_finish_mmu+0x42/0x72
[   17.129986] [<ffffffe0000ede7c>] exit_mmap+0x8e/0xfa
[   17.203669] [<ffffffe000037d54>] mmput.part.3+0x1a/0xc4
[   17.275985] [<ffffffe000037e1e>] mmput+0x20/0x28
[   17.345248] [<ffffffe0001143c2>] flush_old_exec+0x418/0x5f8
[   17.415370] [<ffffffe000158408>] load_elf_binary+0x262/0xc84
[   17.483641] [<ffffffe000114614>] search_binary_handler.part.7+0x72/0x172
[   17.552078] [<ffffffe000114bb2>] __do_execve_file+0x40c/0x56a
[   17.617932] [<ffffffe00011503e>] sys_execve+0x26/0x32
[   17.682164] [<ffffffe00003437e>] ret_from_syscall+0x0/0xe

It looks like this happens on the path that tears down the old
program's address space during exec (flush_old_exec()). I am not sure
whether the cpumask is supposed to be empty in this path.

Perhaps we should just issue a TLB flush on all CPUs instead of only
the local one.
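If we go that route, the fallback is a one-liner before the SBI call.
A rough sketch (untested; __riscv_flush_tlb() is just a name I made up
for illustration, not necessarily what the final patch would use):

	static void __riscv_flush_tlb(const struct cpumask *cmask,
				      unsigned long start, unsigned long size)
	{
		struct cpumask hmask;

		/* Treat a NULL or empty mask as "flush on all CPUs". */
		if (!cmask || cpumask_empty(cmask))
			cmask = cpu_online_mask;

		cpumask_clear(&hmask);
		riscv_cpuid_to_hartid_mask(cmask, &hmask);
		sbi_remote_sfence_vma(hmask.bits, start, size);
	}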

> > > 	if (!cpumask || cpumask_test_cpu(cpu, cpumask)) {
> > > 		if ((start == 0 && size == -1) || size > PAGE_SIZE)
> > > 			local_flush_tlb_all();
> > > 		else if (size == PAGE_SIZE)
> > > 			local_flush_tlb_page(start);
> > > 		cpumask_clear_cpu(cpu, cpumask);
> > 
> > This will modify the original cpumask, which is not correct. The
> > clearing has to be done on hmask to avoid a copy.
> 
> Indeed.  But looking at the x86 tlbflush implementation it seems like
> we could use cpumask_any_but() to avoid having to modify the passed-in
> cpumask.

Looking at the x86 code, it uses cpumask_any_but() just to test whether
there is any other CPU present apart from the current one.

If yes, it calls smp_call_function_many(), which skips the current CPU
and executes the TLB flush code on all the other CPUs.
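From memory, the relevant x86 logic boils down to something like this
(paraphrased and heavily simplified, not the literal
flush_tlb_mm_range() code):

	struct flush_tlb_info info = { .mm = mm, .start = start, .end = end };
	unsigned int cpu = get_cpu();

	/* Flush the local CPU directly; no IPI needed for ourselves. */
	if (cpumask_test_cpu(cpu, mm_cpumask(mm)))
		flush_tlb_func_local(&info, TLB_LOCAL_MM_SHOOTDOWN);

	/* IPI the rest only if the mask contains someone besides us. */
	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids)
		flush_tlb_others(mm_cpumask(mm), &info);

	put_cpu();

where flush_tlb_others() ends up in smp_call_function_many(), which
never targets the calling CPU.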

For RISC-V, we still have to hand the M-mode software a cpumask that
contains the local CPU, so it may flush the local TLB a second time
for no reason.
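To get the same effect here, we could use cpumask_any_but() purely as
the test and do the clearing on the hartid mask we construct anyway, so
the caller's cpumask is never touched. A rough sketch (untested):

	static void __riscv_flush_tlb_range(const struct cpumask *cmask,
					    unsigned long start,
					    unsigned long size)
	{
		struct cpumask hmask;
		unsigned int cpu = get_cpu();

		/* NULL means "all CPUs", as discussed above. */
		if (!cmask)
			cmask = cpu_online_mask;

		if (cpumask_test_cpu(cpu, cmask)) {
			/* Flush the local CPU without trapping into M-mode. */
			if ((start == 0 && size == -1) || size > PAGE_SIZE)
				local_flush_tlb_all();
			else if (size == PAGE_SIZE)
				local_flush_tlb_page(start);
		}

		/* Only call into firmware if another CPU needs a flush. */
		if (cpumask_any_but(cmask, cpu) < nr_cpu_ids) {
			cpumask_clear(&hmask);
			riscv_cpuid_to_hartid_mask(cmask, &hmask);
			/* Drop our own hart so M-mode won't flush it twice. */
			cpumask_clear_cpu(cpuid_to_hartid_map(cpu), &hmask);
			sbi_remote_sfence_vma(hmask.bits, start, size);
		}

		put_cpu();
	}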


Regards,
Atish

Thread overview: 18+ messages
2019-08-10  1:43 Atish Patra
2019-08-10  3:30 ` Anup Patel
2019-08-10  5:28   ` Atish Patra
2019-08-10  6:37 ` Andreas Schwab
2019-08-10  9:21 ` Atish Patra
2019-08-12 14:56 ` Christoph Hellwig
2019-08-13  0:15   ` Atish Patra
2019-08-13 14:30     ` hch
2019-08-15 20:37       ` Atish Patra [this message]
2019-08-19 14:46         ` hch
2019-08-19 15:09           ` Anup Patel
2019-08-19 15:10             ` hch
2019-08-20  0:02               ` Atish Patra
2019-08-12 15:36 ` Troy Benjegerdes
2019-08-12 17:13   ` Atish Patra
2019-08-12 17:55   ` Christoph Hellwig
2019-08-13 18:25 ` Paul Walmsley
2019-08-14  1:49   ` Atish Patra
