linux-fsdevel.vger.kernel.org archive mirror
* Re: Commit 'fs: Do not check if there is a fsnotify watcher on pseudo inodes' breaks chromium here
       [not found] <7b4aa1e985007c6d582fffe5e8435f8153e28e0f.camel@redhat.com>
@ 2020-06-28 12:53 ` Amir Goldstein
  2020-06-28 13:14   ` Maxim Levitsky
  2020-06-29 13:09   ` Jan Kara
  0 siblings, 2 replies; 10+ messages in thread
From: Amir Goldstein @ 2020-06-28 12:53 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: linux-kernel, Alexander Viro, linux-fsdevel, Mel Gorman,
	Jan Kara, Linus Torvalds

[-- Attachment #1: Type: text/plain, Size: 4514 bytes --]

On Sun, Jun 28, 2020 at 2:14 PM Maxim Levitsky <mlevitsk@redhat.com> wrote:
>
> Hi,
>
> I just did usual kernel update and now chromium crashes on startup.
> It happens both in a KVM's VM (with virtio-gpu if that matters) and natively with amdgpu driver.
> Most likely not GPU related although I initially suspected that it is.
>
> Chromium starts as a white rectangle, shows few white rectangles
> that resemble its notifications and then crashes.
>
> The stdout output from chromium:
>
[...]

> Received signal 6
> #0 0x55f6da0120d9 base::debug::CollectStackTrace()
> #1 0x55f6d9f75246 base::debug::StackTrace::StackTrace()
> #2 0x55f6da01170a base::debug::(anonymous namespace)::StackDumpSignalHandler()
> #3 0x55f6da011cfe base::debug::(anonymous namespace)::StackDumpSignalHandler()
> #4 0x7ff46643ab20 (/usr/lib64/libpthread-2.30.so+0x14b1f)
> #5 0x7ff462d87625 __GI_raise
> #6 0x7ff462d708d9 __GI_abort
> #7 0x55f6da0112d5 base::debug::BreakDebugger()
> #8 0x55f6d9f86405 logging::LogMessage::~LogMessage()
> #9 0x55f6d7ed5488 content::(anonymous namespace)::IntentionallyCrashBrowserForUnusableGpuProcess()
> #10 0x55f6d7ed8479 content::GpuDataManagerImplPrivate::FallBackToNextGpuMode()
> #11 0x55f6d7ed4eef content::GpuDataManagerImpl::FallBackToNextGpuMode()
> #12 0x55f6d7ee0f41 content::GpuProcessHost::RecordProcessCrash()
> #13 0x55f6d7ee105d content::GpuProcessHost::OnProcessCrashed()
> #14 0x55f6d7cbe308 content::BrowserChildProcessHostImpl::OnChildDisconnected()
> #15 0x55f6da8b511a IPC::ChannelMojo::OnPipeError()
> #16 0x55f6da13cd62 mojo::InterfaceEndpointClient::NotifyError()
> #17 0x55f6da8c1f9d IPC::(anonymous namespace)::ChannelAssociatedGroupController::OnPipeError()
> #18 0x55f6da138968 mojo::Connector::HandleError()
> #19 0x55f6da15bce7 mojo::SimpleWatcher::OnHandleReady()
> #20 0x55f6da15c0fb mojo::SimpleWatcher::Context::CallNotify()
> #21 0x55f6d78eaa73 mojo::core::WatcherDispatcher::InvokeWatchCallback()
> #22 0x55f6d78ea38f mojo::core::Watch::InvokeCallback()
> #23 0x55f6d78e6efa mojo::core::RequestContext::~RequestContext()
> #24 0x55f6d78db76a mojo::core::NodeChannel::OnChannelError()
> #25 0x55f6d78f232a mojo::core::(anonymous namespace)::ChannelPosix::OnFileCanReadWithoutBlocking()
> #26 0x55f6da03345e base::MessagePumpLibevent::OnLibeventNotification()
> #27 0x55f6da0f9b2d event_base_loop
> #28 0x55f6da03316d base::MessagePumpLibevent::Run()
> #29 0x55f6d9fd79c9 base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run()
> #30 0x55f6d9fada7a base::RunLoop::Run()
> #31 0x55f6d7ce6324 content::BrowserProcessSubThread::IOThreadRun()
> #32 0x55f6d9fe0cb8 base::Thread::ThreadMain()
> #33 0x55f6da024705 base::(anonymous namespace)::ThreadFunc()
> #34 0x7ff46642f4e2 start_thread
> #35 0x7ff462e4c6a3 __GI___clone
>   r8: 0000000000000000  r9: 00007ff44e6a58d0 r10: 0000000000000008 r11: 0000000000000246
>  r12: 00007ff44e6a6b40 r13: 00007ff44e6a6d00 r14: 000000000000006d r15: 00007ff44e6a6b30
>   di: 0000000000000002  si: 00007ff44e6a58d0  bp: 00007ff44e6a5b20  bx: 00007ff44e6a9700
>   dx: 0000000000000000  ax: 0000000000000000  cx: 00007ff462d87625  sp: 00007ff44e6a58d0
>   ip: 00007ff462d87625 efl: 0000000000000246 cgf: 002b000000000033 erf: 0000000000000000
>  trp: 0000000000000000 msk: 0000000000000000 cr2: 0000000000000000
> [end of stack trace]
> Calling _exit(1). Core file will not be generated.
>
>

I guess this answers our question whether we could disable fsnotify
watches on pseudo inodes....

From comments like these in chromium code:
https://chromium.googlesource.com/chromium/src/+/master/mojo/core/watcher_dispatcher.cc#77
https://chromium.googlesource.com/chromium/src/+/master/base/files/file_descriptor_watcher_posix.cc#176
https://chromium.googlesource.com/chromium/src/+/master/ipc/ipc_channel_mojo.cc#240

I am taking a wild guess that the missing FS_CLOSE event on anonymous pipes is
the cause for regression.
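
For illustration, a minimal sketch of the kind of usage that would break,
assuming the watch is placed on an anonymous pipe through its /proc magic
link and waits for a close event (a guess at the pattern, not the actual
Chromium code path):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void)
{
	int pipefd[2], in;
	char path[64], buf[4096];

	if (pipe(pipefd) || (in = inotify_init1(IN_CLOEXEC)) < 0)
		exit(1);

	/* Watch the pipe inode through the magic link of the write end. */
	snprintf(path, sizeof(path), "/proc/self/fd/%d", pipefd[1]);
	if (inotify_add_watch(in, path, IN_CLOSE_WRITE) < 0) {
		perror("inotify_add_watch");
		exit(1);
	}

	/*
	 * Should queue IN_CLOSE_WRITE; with FMODE_NONOTIFY set on the pipe
	 * file no event is generated and the read() below never returns.
	 */
	close(pipefd[1]);

	if (read(in, buf, sizeof(buf)) > 0)
		printf("got close event\n");
	return 0;
}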

The motivation for the patch "fs: Do not check if there is a fsnotify
watcher on pseudo inodes" was performance, but FS_CLOSE and FS_OPEN
events probably do not impact performance as much as FS_MODIFY and
FS_ACCESS do.

Mel,

Do your perf results support the claim above?

Jan/Linus,

Do you agree that dropping FS_MODIFY/FS_ACCESS events for FMODE_STREAM
files as a general rule should be safe?

Maxim, can you check whether the attached patch fixes the chromium regression?
It is expected to leave the FS_OPEN/FS_CLOSE events on anonymous pipes
but drop the FS_MODIFY/FS_ACCESS events.

Thanks,
Amir.

[-- Attachment #2: fsnotify-suppress-access-modify-events-on-stream-files.patch.txt --]
[-- Type: text/plain, Size: 1966 bytes --]

From a81c729a5c52ddb2d8d98220478e492b71956574 Mon Sep 17 00:00:00 2001
From: Amir Goldstein <amir73il@gmail.com>
Date: Sun, 28 Jun 2020 15:36:56 +0300
Subject: [PATCH] fsnotify: suppress access/modify events on stream files

We wanted to suppress all fsnotify events on anonymous pipes/sockets,
but chromium seems to be relying on some of those events.

Let's try to suppress only access/modify events on stream files.

Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/lkml/7b4aa1e985007c6d582fffe5e8435f8153e28e0f.camel@redhat.com
Fixes: e9c15badbb7b ("fs: Do not check if there is a fsnotify watcher on pseudo inodes")
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
---
 fs/file_table.c          | 2 +-
 include/linux/fsnotify.h | 6 ++++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/fs/file_table.c b/fs/file_table.c
index 65603502fed6..656647f9575a 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -230,7 +230,7 @@ struct file *alloc_file_pseudo(struct inode *inode, struct vfsmount *mnt,
 		d_set_d_op(path.dentry, &anon_ops);
 	path.mnt = mntget(mnt);
 	d_instantiate(path.dentry, inode);
-	file = alloc_file(&path, flags | FMODE_NONOTIFY, fops);
+	file = alloc_file(&path, flags, fops);
 	if (IS_ERR(file)) {
 		ihold(inode);
 		path_put(&path);
diff --git a/include/linux/fsnotify.h b/include/linux/fsnotify.h
index 5ab28f6c7d26..3a07824332f5 100644
--- a/include/linux/fsnotify.h
+++ b/include/linux/fsnotify.h
@@ -246,6 +246,9 @@ static inline void fsnotify_rmdir(struct inode *dir, struct dentry *dentry)
  */
 static inline void fsnotify_access(struct file *file)
 {
+	if (file->f_mode & FMODE_STREAM)
+		return 0;
+
 	fsnotify_file(file, FS_ACCESS);
 }
 
@@ -254,6 +257,9 @@ static inline void fsnotify_access(struct file *file)
  */
 static inline void fsnotify_modify(struct file *file)
 {
+	if (file->f_mode & FMODE_STREAM)
+		return 0;
+
 	fsnotify_file(file, FS_MODIFY);
 }
 
-- 
2.17.1



* Re: Commit 'fs: Do not check if there is a fsnotify watcher on pseudo inodes' breaks chromium here
  2020-06-28 12:53 ` Commit 'fs: Do not check if there is a fsnotify watcher on pseudo inodes' breaks chromium here Amir Goldstein
@ 2020-06-28 13:14   ` Maxim Levitsky
  2020-06-28 13:17     ` Maxim Levitsky
  2020-06-29 13:09   ` Jan Kara
  1 sibling, 1 reply; 10+ messages in thread
From: Maxim Levitsky @ 2020-06-28 13:14 UTC (permalink / raw)
  To: Amir Goldstein
  Cc: linux-kernel, Alexander Viro, linux-fsdevel, Mel Gorman,
	Jan Kara, Linus Torvalds

On Sun, 2020-06-28 at 15:53 +0300, Amir Goldstein wrote:
> On Sun, Jun 28, 2020 at 2:14 PM Maxim Levitsky <mlevitsk@redhat.com> wrote:
> > Hi,
> > 
> > I just did usual kernel update and now chromium crashes on startup.
> > It happens both in a KVM's VM (with virtio-gpu if that matters) and natively with amdgpu driver.
> > Most likely not GPU related although I initially suspected that it is.
> > 
> > Chromium starts as a white rectangle, shows few white rectangles
> > that resemble its notifications and then crashes.
> > 
> > The stdout output from chromium:
> > 
> [...]
> 
> > Received signal 6
> > #0 0x55f6da0120d9 base::debug::CollectStackTrace()
> > #1 0x55f6d9f75246 base::debug::StackTrace::StackTrace()
> > #2 0x55f6da01170a base::debug::(anonymous namespace)::StackDumpSignalHandler()
> > #3 0x55f6da011cfe base::debug::(anonymous namespace)::StackDumpSignalHandler()
> > #4 0x7ff46643ab20 (/usr/lib64/libpthread-2.30.so+0x14b1f)
> > #5 0x7ff462d87625 __GI_raise
> > #6 0x7ff462d708d9 __GI_abort
> > #7 0x55f6da0112d5 base::debug::BreakDebugger()
> > #8 0x55f6d9f86405 logging::LogMessage::~LogMessage()
> > #9 0x55f6d7ed5488 content::(anonymous namespace)::IntentionallyCrashBrowserForUnusableGpuProcess()
> > #10 0x55f6d7ed8479 content::GpuDataManagerImplPrivate::FallBackToNextGpuMode()
> > #11 0x55f6d7ed4eef content::GpuDataManagerImpl::FallBackToNextGpuMode()
> > #12 0x55f6d7ee0f41 content::GpuProcessHost::RecordProcessCrash()
> > #13 0x55f6d7ee105d content::GpuProcessHost::OnProcessCrashed()
> > #14 0x55f6d7cbe308 content::BrowserChildProcessHostImpl::OnChildDisconnected()
> > #15 0x55f6da8b511a IPC::ChannelMojo::OnPipeError()
> > #16 0x55f6da13cd62 mojo::InterfaceEndpointClient::NotifyError()
> > #17 0x55f6da8c1f9d IPC::(anonymous namespace)::ChannelAssociatedGroupController::OnPipeError()
> > #18 0x55f6da138968 mojo::Connector::HandleError()
> > #19 0x55f6da15bce7 mojo::SimpleWatcher::OnHandleReady()
> > #20 0x55f6da15c0fb mojo::SimpleWatcher::Context::CallNotify()
> > #21 0x55f6d78eaa73 mojo::core::WatcherDispatcher::InvokeWatchCallback()
> > #22 0x55f6d78ea38f mojo::core::Watch::InvokeCallback()
> > #23 0x55f6d78e6efa mojo::core::RequestContext::~RequestContext()
> > #24 0x55f6d78db76a mojo::core::NodeChannel::OnChannelError()
> > #25 0x55f6d78f232a mojo::core::(anonymous namespace)::ChannelPosix::OnFileCanReadWithoutBlocking()
> > #26 0x55f6da03345e base::MessagePumpLibevent::OnLibeventNotification()
> > #27 0x55f6da0f9b2d event_base_loop
> > #28 0x55f6da03316d base::MessagePumpLibevent::Run()
> > #29 0x55f6d9fd79c9 base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run()
> > #30 0x55f6d9fada7a base::RunLoop::Run()
> > #31 0x55f6d7ce6324 content::BrowserProcessSubThread::IOThreadRun()
> > #32 0x55f6d9fe0cb8 base::Thread::ThreadMain()
> > #33 0x55f6da024705 base::(anonymous namespace)::ThreadFunc()
> > #34 0x7ff46642f4e2 start_thread
> > #35 0x7ff462e4c6a3 __GI___clone
> >   r8: 0000000000000000  r9: 00007ff44e6a58d0 r10: 0000000000000008 r11: 0000000000000246
> >  r12: 00007ff44e6a6b40 r13: 00007ff44e6a6d00 r14: 000000000000006d r15: 00007ff44e6a6b30
> >   di: 0000000000000002  si: 00007ff44e6a58d0  bp: 00007ff44e6a5b20  bx: 00007ff44e6a9700
> >   dx: 0000000000000000  ax: 0000000000000000  cx: 00007ff462d87625  sp: 00007ff44e6a58d0
> >   ip: 00007ff462d87625 efl: 0000000000000246 cgf: 002b000000000033 erf: 0000000000000000
> >  trp: 0000000000000000 msk: 0000000000000000 cr2: 0000000000000000
> > [end of stack trace]
> > Calling _exit(1). Core file will not be generated.
> > 
> > 
> 
> I guess this answers our question whether we could disable fsnotify
> watches on pseudo inodes....
> 
> From comments like these in chromium code:
> https://chromium.googlesource.com/chromium/src/+/master/mojo/core/watcher_dispatcher.cc#77
> https://chromium.googlesource.com/chromium/src/+/master/base/files/file_descriptor_watcher_posix.cc#176
> https://chromium.googlesource.com/chromium/src/+/master/ipc/ipc_channel_mojo.cc#240
> 
> I am taking a wild guess that the missing FS_CLOSE event on anonymous pipes is
> the cause for regression.
> 
> The motivation for the patch "fs: Do not check if there is a fsnotify
> watcher on pseudo inodes" was performance, but FS_CLOSE and FS_OPEN
> events probably do not impact performance as much as FS_MODIFY and
> FS_ACCESS do.
> 
> Mel,
> 
> Do your perf results support the claim above?
> 
> Jan/Linus,
> 
> Do you agree that dropping FS_MODIFY/FS_ACCESS events for FMODE_STREAM
> files as a general rule should be safe?
> 
> Maxim, can you check whether the attached patch fixes the chromium regression?
> It is expected to leave the FS_OPEN/FS_CLOSE events on anonymous pipes
> but drop the FS_MODIFY/FS_ACCESS events.
Tested this (in the VM this time) and it works.

Best regards,
	Maxim Levitsky

> 
> Thanks,
> Amir.




* Re: Commit 'fs: Do not check if there is a fsnotify watcher on pseudo inodes' breaks chromium here
  2020-06-28 13:14   ` Maxim Levitsky
@ 2020-06-28 13:17     ` Maxim Levitsky
  2020-06-28 13:34       ` Amir Goldstein
  0 siblings, 1 reply; 10+ messages in thread
From: Maxim Levitsky @ 2020-06-28 13:17 UTC (permalink / raw)
  To: Amir Goldstein
  Cc: linux-kernel, Alexander Viro, linux-fsdevel, Mel Gorman,
	Jan Kara, Linus Torvalds

On Sun, 2020-06-28 at 16:14 +0300, Maxim Levitsky wrote:
> On Sun, 2020-06-28 at 15:53 +0300, Amir Goldstein wrote:
> > On Sun, Jun 28, 2020 at 2:14 PM Maxim Levitsky <mlevitsk@redhat.com> wrote:
> > > Hi,
> > > 
> > > I just did usual kernel update and now chromium crashes on startup.
> > > It happens both in a KVM's VM (with virtio-gpu if that matters) and natively with amdgpu driver.
> > > Most likely not GPU related although I initially suspected that it is.
> > > 
> > > Chromium starts as a white rectangle, shows few white rectangles
> > > that resemble its notifications and then crashes.
> > > 
> > > The stdout output from chromium:
> > > 
> > [...]
> > 
> > > Received signal 6
> > > #0 0x55f6da0120d9 base::debug::CollectStackTrace()
> > > #1 0x55f6d9f75246 base::debug::StackTrace::StackTrace()
> > > #2 0x55f6da01170a base::debug::(anonymous namespace)::StackDumpSignalHandler()
> > > #3 0x55f6da011cfe base::debug::(anonymous namespace)::StackDumpSignalHandler()
> > > #4 0x7ff46643ab20 (/usr/lib64/libpthread-2.30.so+0x14b1f)
> > > #5 0x7ff462d87625 __GI_raise
> > > #6 0x7ff462d708d9 __GI_abort
> > > #7 0x55f6da0112d5 base::debug::BreakDebugger()
> > > #8 0x55f6d9f86405 logging::LogMessage::~LogMessage()
> > > #9 0x55f6d7ed5488 content::(anonymous namespace)::IntentionallyCrashBrowserForUnusableGpuProcess()
> > > #10 0x55f6d7ed8479 content::GpuDataManagerImplPrivate::FallBackToNextGpuMode()
> > > #11 0x55f6d7ed4eef content::GpuDataManagerImpl::FallBackToNextGpuMode()
> > > #12 0x55f6d7ee0f41 content::GpuProcessHost::RecordProcessCrash()
> > > #13 0x55f6d7ee105d content::GpuProcessHost::OnProcessCrashed()
> > > #14 0x55f6d7cbe308 content::BrowserChildProcessHostImpl::OnChildDisconnected()
> > > #15 0x55f6da8b511a IPC::ChannelMojo::OnPipeError()
> > > #16 0x55f6da13cd62 mojo::InterfaceEndpointClient::NotifyError()
> > > #17 0x55f6da8c1f9d IPC::(anonymous namespace)::ChannelAssociatedGroupController::OnPipeError()
> > > #18 0x55f6da138968 mojo::Connector::HandleError()
> > > #19 0x55f6da15bce7 mojo::SimpleWatcher::OnHandleReady()
> > > #20 0x55f6da15c0fb mojo::SimpleWatcher::Context::CallNotify()
> > > #21 0x55f6d78eaa73 mojo::core::WatcherDispatcher::InvokeWatchCallback()
> > > #22 0x55f6d78ea38f mojo::core::Watch::InvokeCallback()
> > > #23 0x55f6d78e6efa mojo::core::RequestContext::~RequestContext()
> > > #24 0x55f6d78db76a mojo::core::NodeChannel::OnChannelError()
> > > #25 0x55f6d78f232a mojo::core::(anonymous namespace)::ChannelPosix::OnFileCanReadWithoutBlocking()
> > > #26 0x55f6da03345e base::MessagePumpLibevent::OnLibeventNotification()
> > > #27 0x55f6da0f9b2d event_base_loop
> > > #28 0x55f6da03316d base::MessagePumpLibevent::Run()
> > > #29 0x55f6d9fd79c9 base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run()
> > > #30 0x55f6d9fada7a base::RunLoop::Run()
> > > #31 0x55f6d7ce6324 content::BrowserProcessSubThread::IOThreadRun()
> > > #32 0x55f6d9fe0cb8 base::Thread::ThreadMain()
> > > #33 0x55f6da024705 base::(anonymous namespace)::ThreadFunc()
> > > #34 0x7ff46642f4e2 start_thread
> > > #35 0x7ff462e4c6a3 __GI___clone
> > >   r8: 0000000000000000  r9: 00007ff44e6a58d0 r10: 0000000000000008 r11: 0000000000000246
> > >  r12: 00007ff44e6a6b40 r13: 00007ff44e6a6d00 r14: 000000000000006d r15: 00007ff44e6a6b30
> > >   di: 0000000000000002  si: 00007ff44e6a58d0  bp: 00007ff44e6a5b20  bx: 00007ff44e6a9700
> > >   dx: 0000000000000000  ax: 0000000000000000  cx: 00007ff462d87625  sp: 00007ff44e6a58d0
> > >   ip: 00007ff462d87625 efl: 0000000000000246 cgf: 002b000000000033 erf: 0000000000000000
> > >  trp: 0000000000000000 msk: 0000000000000000 cr2: 0000000000000000
> > > [end of stack trace]
> > > Calling _exit(1). Core file will not be generated.
> > > 
> > > 
> > 
> > I guess this answers our question whether we could disable fsnotify
> > watches on pseudo inodes....
> > 
> > From comments like these in chromium code:
> > https://chromium.googlesource.com/chromium/src/+/master/mojo/core/watcher_dispatcher.cc#77
> > https://chromium.googlesource.com/chromium/src/+/master/base/files/file_descriptor_watcher_posix.cc#176
> > https://chromium.googlesource.com/chromium/src/+/master/ipc/ipc_channel_mojo.cc#240
> > 
> > I am taking a wild guess that the missing FS_CLOSE event on anonymous pipes is
> > the cause for regression.
> > 
> > The motivation for the patch "fs: Do not check if there is a fsnotify
> > watcher on pseudo inodes" was performance, but FS_CLOSE and FS_OPEN
> > events probably do not impact performance as much as FS_MODIFY and
> > FS_ACCESS do.
> > 
> > Mel,
> > 
> > Do your perf results support the claim above?
> > 
> > Jan/Linus,
> > 
> > Do you agree that dropping FS_MODIFY/FS_ACCESS events for FMODE_STREAM
> > files as a general rule should be safe?
> > 
> > Maxim, can you check whether the attached patch fixes the chromium regression?
> > It is expected to leave the FS_OPEN/FS_CLOSE events on anonymous pipes
> > but drop the FS_MODIFY/FS_ACCESS events.
> Tested this (in the VM this time) and it works.


Note that this should be changed to 'return' since the function returns void.

+       if (file->f_mode & FMODE_STREAM)
+               return 0;

Best regards,
	Maxim Levitsky

> 
> Best regards,
> 	Maxim Levitsky
> 
> > Thanks,
> > Amir.




* Re: Commit 'fs: Do not check if there is a fsnotify watcher on pseudo inodes' breaks chromium here
  2020-06-28 13:17     ` Maxim Levitsky
@ 2020-06-28 13:34       ` Amir Goldstein
  2020-06-29  9:31         ` Mel Gorman
  0 siblings, 1 reply; 10+ messages in thread
From: Amir Goldstein @ 2020-06-28 13:34 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: linux-kernel, Alexander Viro, linux-fsdevel, Mel Gorman,
	Jan Kara, Linus Torvalds

On Sun, Jun 28, 2020 at 4:17 PM Maxim Levitsky <mlevitsk@redhat.com> wrote:
>
> On Sun, 2020-06-28 at 16:14 +0300, Maxim Levitsky wrote:
> > On Sun, 2020-06-28 at 15:53 +0300, Amir Goldstein wrote:
> > > On Sun, Jun 28, 2020 at 2:14 PM Maxim Levitsky <mlevitsk@redhat.com> wrote:
> > > > Hi,
> > > >
> > > > I just did usual kernel update and now chromium crashes on startup.
> > > > It happens both in a KVM's VM (with virtio-gpu if that matters) and natively with amdgpu driver.
> > > > Most likely not GPU related although I initially suspected that it is.
> > > >
> > > > Chromium starts as a white rectangle, shows few white rectangles
> > > > that resemble its notifications and then crashes.
> > > >
> > > > The stdout output from chromium:
> > > >
> > > [...]
> > >
> > > > Received signal 6
> > > > #0 0x55f6da0120d9 base::debug::CollectStackTrace()
> > > > #1 0x55f6d9f75246 base::debug::StackTrace::StackTrace()
> > > > #2 0x55f6da01170a base::debug::(anonymous namespace)::StackDumpSignalHandler()
> > > > #3 0x55f6da011cfe base::debug::(anonymous namespace)::StackDumpSignalHandler()
> > > > #4 0x7ff46643ab20 (/usr/lib64/libpthread-2.30.so+0x14b1f)
> > > > #5 0x7ff462d87625 __GI_raise
> > > > #6 0x7ff462d708d9 __GI_abort
> > > > #7 0x55f6da0112d5 base::debug::BreakDebugger()
> > > > #8 0x55f6d9f86405 logging::LogMessage::~LogMessage()
> > > > #9 0x55f6d7ed5488 content::(anonymous namespace)::IntentionallyCrashBrowserForUnusableGpuProcess()
> > > > #10 0x55f6d7ed8479 content::GpuDataManagerImplPrivate::FallBackToNextGpuMode()
> > > > #11 0x55f6d7ed4eef content::GpuDataManagerImpl::FallBackToNextGpuMode()
> > > > #12 0x55f6d7ee0f41 content::GpuProcessHost::RecordProcessCrash()
> > > > #13 0x55f6d7ee105d content::GpuProcessHost::OnProcessCrashed()
> > > > #14 0x55f6d7cbe308 content::BrowserChildProcessHostImpl::OnChildDisconnected()
> > > > #15 0x55f6da8b511a IPC::ChannelMojo::OnPipeError()
> > > > #16 0x55f6da13cd62 mojo::InterfaceEndpointClient::NotifyError()
> > > > #17 0x55f6da8c1f9d IPC::(anonymous namespace)::ChannelAssociatedGroupController::OnPipeError()
> > > > #18 0x55f6da138968 mojo::Connector::HandleError()
> > > > #19 0x55f6da15bce7 mojo::SimpleWatcher::OnHandleReady()
> > > > #20 0x55f6da15c0fb mojo::SimpleWatcher::Context::CallNotify()
> > > > #21 0x55f6d78eaa73 mojo::core::WatcherDispatcher::InvokeWatchCallback()
> > > > #22 0x55f6d78ea38f mojo::core::Watch::InvokeCallback()
> > > > #23 0x55f6d78e6efa mojo::core::RequestContext::~RequestContext()
> > > > #24 0x55f6d78db76a mojo::core::NodeChannel::OnChannelError()
> > > > #25 0x55f6d78f232a mojo::core::(anonymous namespace)::ChannelPosix::OnFileCanReadWithoutBlocking()
> > > > #26 0x55f6da03345e base::MessagePumpLibevent::OnLibeventNotification()
> > > > #27 0x55f6da0f9b2d event_base_loop
> > > > #28 0x55f6da03316d base::MessagePumpLibevent::Run()
> > > > #29 0x55f6d9fd79c9 base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run()
> > > > #30 0x55f6d9fada7a base::RunLoop::Run()
> > > > #31 0x55f6d7ce6324 content::BrowserProcessSubThread::IOThreadRun()
> > > > #32 0x55f6d9fe0cb8 base::Thread::ThreadMain()
> > > > #33 0x55f6da024705 base::(anonymous namespace)::ThreadFunc()
> > > > #34 0x7ff46642f4e2 start_thread
> > > > #35 0x7ff462e4c6a3 __GI___clone
> > > >   r8: 0000000000000000  r9: 00007ff44e6a58d0 r10: 0000000000000008 r11: 0000000000000246
> > > >  r12: 00007ff44e6a6b40 r13: 00007ff44e6a6d00 r14: 000000000000006d r15: 00007ff44e6a6b30
> > > >   di: 0000000000000002  si: 00007ff44e6a58d0  bp: 00007ff44e6a5b20  bx: 00007ff44e6a9700
> > > >   dx: 0000000000000000  ax: 0000000000000000  cx: 00007ff462d87625  sp: 00007ff44e6a58d0
> > > >   ip: 00007ff462d87625 efl: 0000000000000246 cgf: 002b000000000033 erf: 0000000000000000
> > > >  trp: 0000000000000000 msk: 0000000000000000 cr2: 0000000000000000
> > > > [end of stack trace]
> > > > Calling _exit(1). Core file will not be generated.
> > > >
> > > >
> > >
> > > I guess this answers our question whether we could disable fsnotify
> > > watches on pseudo inodes....
> > >
> > > From comments like these in chromium code:
> > > https://chromium.googlesource.com/chromium/src/+/master/mojo/core/watcher_dispatcher.cc#77
> > > https://chromium.googlesource.com/chromium/src/+/master/base/files/file_descriptor_watcher_posix.cc#176
> > > https://chromium.googlesource.com/chromium/src/+/master/ipc/ipc_channel_mojo.cc#240
> > >
> > > I am taking a wild guess that the missing FS_CLOSE event on anonymous pipes is
> > > the cause for regression.
> > >
> > > The motivation for the patch "fs: Do not check if there is a fsnotify
> > > watcher on pseudo inodes" was performance, but FS_CLOSE and FS_OPEN
> > > events probably do not impact performance as much as FS_MODIFY and
> > > FS_ACCESS do.
> > >
> > > Mel,
> > >
> > > Do your perf results support the claim above?
> > >
> > > Jan/Linus,
> > >
> > > Do you agree that dropping FS_MODIFY/FS_ACCESS events for FMODE_STREAM
> > > files as a general rule should be safe?
> > >
> > > Maxim, can you check whether the attached patch fixes the chromium regression?
> > > It is expected to leave the FS_OPEN/FS_CLOSE events on anonymous pipes
> > > but drop the FS_MODIFY/FS_ACCESS events.
> > Tested this (in the VM this time) and it works.
>
>
> Note that this should be changed to 'return' since the function returns void.
>
> +       if (file->f_mode & FMODE_STREAM)
> +               return 0;
>

Right, sorry. I didn't pay attention to the build warnings.

It now remains to be seen whether this approach is acceptable and whether
it also fixes the performance issue reported by Mel.

Then we can start figuring out how to handle other pseudo inodes
on which events may race with unmount.

Thanks,
Amir.


* Re: Commit 'fs: Do not check if there is a fsnotify watcher on pseudo inodes' breaks chromium here
  2020-06-28 13:34       ` Amir Goldstein
@ 2020-06-29  9:31         ` Mel Gorman
  0 siblings, 0 replies; 10+ messages in thread
From: Mel Gorman @ 2020-06-29  9:31 UTC (permalink / raw)
  To: Amir Goldstein
  Cc: Maxim Levitsky, linux-kernel, Alexander Viro, linux-fsdevel,
	Jan Kara, Linus Torvalds

On Sun, Jun 28, 2020 at 04:34:51PM +0300, Amir Goldstein wrote:
> > > > > #0 0x55f6da0120d9 base::debug::CollectStackTrace()
> > > > > #1 0x55f6d9f75246 base::debug::StackTrace::StackTrace()
> > > > > #2 0x55f6da01170a base::debug::(anonymous namespace)::StackDumpSignalHandler()
> > > > > #3 0x55f6da011cfe base::debug::(anonymous namespace)::StackDumpSignalHandler()
> > > > > #4 0x7ff46643ab20 (/usr/lib64/libpthread-2.30.so+0x14b1f)
> > > > > #5 0x7ff462d87625 __GI_raise
> > > > > #6 0x7ff462d708d9 __GI_abort
> > > > > #7 0x55f6da0112d5 base::debug::BreakDebugger()
> > > > > #8 0x55f6d9f86405 logging::LogMessage::~LogMessage()
> > > > > #9 0x55f6d7ed5488 content::(anonymous namespace)::IntentionallyCrashBrowserForUnusableGpuProcess()
> > > > > #10 0x55f6d7ed8479 content::GpuDataManagerImplPrivate::FallBackToNextGpuMode()
> > > > > #11 0x55f6d7ed4eef content::GpuDataManagerImpl::FallBackToNextGpuMode()
> > > > > #12 0x55f6d7ee0f41 content::GpuProcessHost::RecordProcessCrash()
> > > > > #13 0x55f6d7ee105d content::GpuProcessHost::OnProcessCrashed()
> > > > > #14 0x55f6d7cbe308 content::BrowserChildProcessHostImpl::OnChildDisconnected()
> > > > > #15 0x55f6da8b511a IPC::ChannelMojo::OnPipeError()
> > > > > #16 0x55f6da13cd62 mojo::InterfaceEndpointClient::NotifyError()
> > > > > #17 0x55f6da8c1f9d IPC::(anonymous namespace)::ChannelAssociatedGroupController::OnPipeError()
> > > > > #18 0x55f6da138968 mojo::Connector::HandleError()
> > > > > #19 0x55f6da15bce7 mojo::SimpleWatcher::OnHandleReady()
> > > > > #20 0x55f6da15c0fb mojo::SimpleWatcher::Context::CallNotify()
> > > > > #21 0x55f6d78eaa73 mojo::core::WatcherDispatcher::InvokeWatchCallback()
> > > > > #22 0x55f6d78ea38f mojo::core::Watch::InvokeCallback()
> > > > > #23 0x55f6d78e6efa mojo::core::RequestContext::~RequestContext()
> > > > > #24 0x55f6d78db76a mojo::core::NodeChannel::OnChannelError()
> > > > > #25 0x55f6d78f232a mojo::core::(anonymous namespace)::ChannelPosix::OnFileCanReadWithoutBlocking()
> > > > > #26 0x55f6da03345e base::MessagePumpLibevent::OnLibeventNotification()
> > > > > #27 0x55f6da0f9b2d event_base_loop
> > > > > #28 0x55f6da03316d base::MessagePumpLibevent::Run()
> > > > > #29 0x55f6d9fd79c9 base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run()
> > > > > #30 0x55f6d9fada7a base::RunLoop::Run()
> > > > > #31 0x55f6d7ce6324 content::BrowserProcessSubThread::IOThreadRun()
> > > > > #32 0x55f6d9fe0cb8 base::Thread::ThreadMain()
> > > > > #33 0x55f6da024705 base::(anonymous namespace)::ThreadFunc()
> > > > > #34 0x7ff46642f4e2 start_thread
> > > > > #35 0x7ff462e4c6a3 __GI___clone
> > > > >   r8: 0000000000000000  r9: 00007ff44e6a58d0 r10: 0000000000000008 r11: 0000000000000246
> > > > >  r12: 00007ff44e6a6b40 r13: 00007ff44e6a6d00 r14: 000000000000006d r15: 00007ff44e6a6b30
> > > > >   di: 0000000000000002  si: 00007ff44e6a58d0  bp: 00007ff44e6a5b20  bx: 00007ff44e6a9700
> > > > >   dx: 0000000000000000  ax: 0000000000000000  cx: 00007ff462d87625  sp: 00007ff44e6a58d0
> > > > >   ip: 00007ff462d87625 efl: 0000000000000246 cgf: 002b000000000033 erf: 0000000000000000
> > > > >  trp: 0000000000000000 msk: 0000000000000000 cr2: 0000000000000000
> > > > > [end of stack trace]
> > > > > Calling _exit(1). Core file will not be generated.
> > > > >
> > > > >
> > > >
> > > > I guess this answers our question whether we could disable fsnotify
> > > > watches on pseudo inodes....
> > > >
> > > > From comments like these in chromium code:
> > > > https://chromium.googlesource.com/chromium/src/+/master/mojo/core/watcher_dispatcher.cc#77
> > > > https://chromium.googlesource.com/chromium/src/+/master/base/files/file_descriptor_watcher_posix.cc#176
> > > > https://chromium.googlesource.com/chromium/src/+/master/ipc/ipc_channel_mojo.cc#240
> > > >
> > > > I am taking a wild guess that the missing FS_CLOSE event on anonymous pipes is
> > > > the cause for regression.
> > > >
> > > > The motivation for the patch "fs: Do not check if there is a fsnotify
> > > > watcher on pseudo inodes" was performance, but FS_CLOSE and FS_OPEN
> > > > events probably do not impact performance as much as FS_MODIFY and
> > > > FS_ACCESS do.
> > > >
> > > > Mel,
> > > >
> > > > Do your perf results support the claim above?
> > > >
> > > > Jan/Linus,
> > > >
> > > > Do you agree that dropping FS_MODIFY/FS_ACCESS events for FMODE_STREAM
> > > > files as a general rule should be safe?
> > > >
> > > > Maxim, can you check whether the attached patch fixes the chromium regression?
> > > > It is expected to leave the FS_OPEN/FS_CLOSE events on anonymous pipes
> > > > but drop the FS_MODIFY/FS_ACCESS events.
> > > Tested this (in the VM this time) and it works.
> >
> >
> > Note that this should be changed to 'return' since the function returns void.
> >
> > +       if (file->f_mode & FMODE_STREAM)
> > +               return 0;
> >
> 
> Right, sorry. I didn't pay attention to the build warnings.
>
> It now remains to be seen whether this approach is acceptable and whether
> it also fixes the performance issue reported by Mel.
> 


Thanks Amir!

The performance seems fine and I think the patch is ok. If there is an
issue with special-casing FMODE_STREAM, then the alternative would be to
special-case pipes with FMODE_NONOTIFY specifically, as pipes appear to
be the pseudo inode most affected by fsnotify checks.

-- 
Mel Gorman
SUSE Labs


* Re: Commit 'fs: Do not check if there is a fsnotify watcher on pseudo inodes' breaks chromium here
  2020-06-28 12:53 ` Commit 'fs: Do not check if there is a fsnotify watcher on pseudo inodes' breaks chromium here Amir Goldstein
  2020-06-28 13:14   ` Maxim Levitsky
@ 2020-06-29 13:09   ` Jan Kara
  2020-06-29 14:05     ` Amir Goldstein
  1 sibling, 1 reply; 10+ messages in thread
From: Jan Kara @ 2020-06-29 13:09 UTC (permalink / raw)
  To: Amir Goldstein
  Cc: Maxim Levitsky, linux-kernel, Alexander Viro, linux-fsdevel,
	Mel Gorman, Jan Kara, Linus Torvalds

On Sun 28-06-20 15:53:51, Amir Goldstein wrote:
> On Sun, Jun 28, 2020 at 2:14 PM Maxim Levitsky <mlevitsk@redhat.com> wrote:
> >
> > Hi,
> >
> > I just did usual kernel update and now chromium crashes on startup.
> > It happens both in a KVM's VM (with virtio-gpu if that matters) and natively with amdgpu driver.
> > Most likely not GPU related although I initially suspected that it is.
> >
> > Chromium starts as a white rectangle, shows few white rectangles
> > that resemble its notifications and then crashes.
> >
> > The stdout output from chromium:
> 
> I guess this answers our question whether we could disable fsnotify
> watches on pseudo inodes....

Right :-|

> From comments like these in chromium code:
> https://chromium.googlesource.com/chromium/src/+/master/mojo/core/watcher_dispatcher.cc#77
> https://chromium.googlesource.com/chromium/src/+/master/base/files/file_descriptor_watcher_posix.cc#176
> https://chromium.googlesource.com/chromium/src/+/master/ipc/ipc_channel_mojo.cc#240
> 
> I am taking a wild guess that the missing FS_CLOSE event on anonymous pipes is
> the cause for regression.

I have been checking the Chromium code for some time. It uses inotify in
base/files/file_path_watcher_linux.cc and watches the IN_CLOSE_WRITE event
(among others), but I was unable to track down how that class gets
connected to the mojo class that crashes. I'd be somewhat curious how they
place inotify watches on pipe inodes - probably they have to use the /proc
magic links, but I'd like to be sure. Anyway, your guess appears to be
correct :)

> The motivation for the patch "fs: Do not check if there is a fsnotify
> watcher on pseudo inodes" was performance, but FS_CLOSE and FS_OPEN
> events probably do not impact performance as much as FS_MODIFY and
> FS_ACCESS do.

Correct.

> Do you agree that dropping FS_MODIFY/FS_ACCESS events for FMODE_STREAM
> files as a general rule should be safe?

Hum, so AFAIU your patch also drops FS_MODIFY/FS_ACCESS events for named
pipes compared to the original patch, and for those fsnotify has worked
fine so far. So I'm not sure we won't regress someone else with this.

I've also tested inotify on a sample pipe like 'cat /dev/stdin | tee',
watching /proc/<tee pid>/fd/0, and it actually generated IN_MODIFY |
IN_ACCESS when data arrived in the pipe and tee(1) read it, and then
IN_CLOSE_WRITE | IN_CLOSE_NOWRITE when the pipe got closed (I thought you
mentioned modify and access events didn't get properly generated?).
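
Roughly the same test as a single C program (a sketch only, with error
handling omitted; on a kernel that still generates pipe events it prints
the modify/access/close masks described above):

#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void)
{
	int pfd[2], in = inotify_init1(0);
	char path[64], data[] = "ping";
	/* inotify events must be read into a suitably aligned buffer */
	char buf[4096] __attribute__((aligned(8)));

	pipe(pfd);
	snprintf(path, sizeof(path), "/proc/self/fd/%d", pfd[0]);
	inotify_add_watch(in, path, IN_ACCESS | IN_MODIFY | IN_CLOSE);

	write(pfd[1], data, sizeof(data));	/* expect IN_MODIFY */
	read(pfd[0], buf, sizeof(buf));		/* expect IN_ACCESS */
	close(pfd[1]);				/* expect IN_CLOSE_WRITE */
	close(pfd[0]);				/* expect IN_CLOSE_NOWRITE */

	ssize_t len = read(in, buf, sizeof(buf));
	for (char *p = buf; p < buf + len; ) {
		struct inotify_event *ev = (struct inotify_event *)p;
		printf("mask 0x%x\n", (unsigned int)ev->mask);
		p += sizeof(*ev) + ev->len;
	}
	return 0;
}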

So as much as I agree that some fsnotify events on FMODE_STREAM files are
dubious, they could get used (possibly accidentally) and so after this
Chromium experience I think we just have to revert the change and live with
generating notification events for pipes to avoid userspace regressions.

Thoughts?

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: Commit 'fs: Do not check if there is a fsnotify watcher on pseudo inodes' breaks chromium here
  2020-06-29 13:09   ` Jan Kara
@ 2020-06-29 14:05     ` Amir Goldstein
  2020-06-29 14:32       ` Mel Gorman
  2020-06-29 14:41       ` [PATCH] Revert "fs: Do not check if there is a fsnotify watcher on pseudo inodes" Mel Gorman
  0 siblings, 2 replies; 10+ messages in thread
From: Amir Goldstein @ 2020-06-29 14:05 UTC (permalink / raw)
  To: Jan Kara
  Cc: Maxim Levitsky, linux-kernel, Alexander Viro, linux-fsdevel,
	Mel Gorman, Linus Torvalds

On Mon, Jun 29, 2020 at 4:09 PM Jan Kara <jack@suse.cz> wrote:
>
> On Sun 28-06-20 15:53:51, Amir Goldstein wrote:
> > On Sun, Jun 28, 2020 at 2:14 PM Maxim Levitsky <mlevitsk@redhat.com> wrote:
> > >
> > > Hi,
> > >
> > > I just did usual kernel update and now chromium crashes on startup.
> > > It happens both in a KVM's VM (with virtio-gpu if that matters) and natively with amdgpu driver.
> > > Most likely not GPU related although I initially suspected that it is.
> > >
> > > Chromium starts as a white rectangle, shows few white rectangles
> > > that resemble its notifications and then crashes.
> > >
> > > The stdout output from chromium:
> >
> > I guess this answers our question whether we could disable fsnotify
> > watches on pseudo inodes....
>
> Right :-|
>
> > From comments like these in chromium code:
> > https://chromium.googlesource.com/chromium/src/+/master/mojo/core/watcher_dispatcher.cc#77
> > https://chromium.googlesource.com/chromium/src/+/master/base/files/file_descriptor_watcher_posix.cc#176
> > https://chromium.googlesource.com/chromium/src/+/master/ipc/ipc_channel_mojo.cc#240
> >
> > I am taking a wild guess that the missing FS_CLOSE event on anonymous pipes is
> > the cause for regression.
>
> I have been checking the Chromium code for some time. It uses inotify in
> base/files/file_path_watcher_linux.cc and watches the IN_CLOSE_WRITE event
> (among others), but I was unable to track down how that class gets
> connected to the mojo class that crashes. I'd be somewhat curious how they
> place inotify watches on pipe inodes - probably they have to use the /proc
> magic links, but I'd like to be sure. Anyway, your guess appears to be
> correct :)

Well, I lost track of the code as well...

>
> > The motivation for the patch "fs: Do not check if there is a fsnotify
> > watcher on pseudo inodes" was performance, but FS_CLOSE and FS_OPEN
> > events probably do not impact performance as much as FS_MODIFY and
> > FS_ACCESS do.
>
> Correct.
>
> > Do you agree that dropping FS_MODIFY/FS_ACCESS events for FMODE_STREAM
> > files as a general rule should be safe?
>
> Hum, so AFAIU your patch also drops FS_MODIFY/FS_ACCESS events for named
> pipes compared to the original patch, and for those fsnotify has worked
> fine so far. So I'm not sure we won't regress someone else with this.
>
> I've also tested inotify on a sample pipe like 'cat /dev/stdin | tee',
> watching /proc/<tee pid>/fd/0, and it actually generated IN_MODIFY |
> IN_ACCESS when data arrived in the pipe and tee(1) read it, and then
> IN_CLOSE_WRITE | IN_CLOSE_NOWRITE when the pipe got closed (I thought you
> mentioned modify and access events didn't get properly generated?).

I don't think that I did (did I?)

>
> So as much as I agree that some fsnotify events on FMODE_STREAM files are
> dubious, they could get used (possibly accidentally) and so after this
> Chromium experience I think we just have to revert the change and live with
> generating notification events for pipes to avoid userspace regressions.
>
> Thoughts?

I am fine with that.

Before I thought of trying out FMODE_STREAM, I was considering proposing
a new flag, FMODE_NOIONOTIFY, to be set in alloc_file_pseudo(), to narrow
Mel's patch down to dropping FS_MODIFY|FS_ACCESS.
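
Just to illustrate that idea, a rough sketch of what it could look like;
note that FMODE_NOIONOTIFY is hypothetical and does not exist in mainline,
and picking a free fmode_t bit is left aside:

	/* in alloc_file_pseudo(): keep open/close events */
	file = alloc_file(&path, flags | FMODE_NOIONOTIFY, fops);

	/* in include/linux/fsnotify.h */
	static inline void fsnotify_access(struct file *file)
	{
		/* FMODE_NOIONOTIFY: hypothetical "no I/O events" flag */
		if (file->f_mode & FMODE_NOIONOTIFY)
			return;

		fsnotify_file(file, FS_ACCESS);
	}

fsnotify_modify() would get the same check, so only the per-I/O events are
dropped while FS_OPEN/FS_CLOSE keep flowing.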

But I guess the burden of proof is back on Mel.
And besides, quoting Mel's patch:
"A patch is pending that reduces, but does not eliminate, the overhead of
    fsnotify but for files that cannot be looked up via a path, even that
    small overhead is unnecessary"

So really, we are not even sacrificing much by reverting this patch.
We're down to "nano optimizations".

Thanks,
Amir.


* Re: Commit 'fs: Do not check if there is a fsnotify watcher on pseudo inodes' breaks chromium here
  2020-06-29 14:05     ` Amir Goldstein
@ 2020-06-29 14:32       ` Mel Gorman
  2020-06-29 14:41       ` [PATCH] Revert "fs: Do not check if there is a fsnotify watcher on pseudo inodes" Mel Gorman
  1 sibling, 0 replies; 10+ messages in thread
From: Mel Gorman @ 2020-06-29 14:32 UTC (permalink / raw)
  To: Amir Goldstein
  Cc: Jan Kara, Maxim Levitsky, linux-kernel, Alexander Viro,
	linux-fsdevel, Linus Torvalds

On Mon, Jun 29, 2020 at 05:05:38PM +0300, Amir Goldstein wrote:
> > > The motivation for the patch "fs: Do not check if there is a fsnotify
> > > watcher on pseudo inodes" was performance, but FS_CLOSE and FS_OPEN
> > > events probably do not impact performance as much as FS_MODIFY and
> > > FS_ACCESS do.
> >
> > Correct.
> >
> > > Do you agree that dropping FS_MODIFY/FS_ACCESS events for FMODE_STREAM
> > > files as a general rule should be safe?
> >
> > Hum, so AFAIU your patch also drops FS_MODIFY/FS_ACCESS events for named
> > pipes compared to the original patch, and for those fsnotify has worked
> > fine so far. So I'm not sure we won't regress someone else with this.
> >
> > I've also tested inotify on a sample pipe like 'cat /dev/stdin | tee',
> > watching /proc/<tee pid>/fd/0, and it actually generated IN_MODIFY |
> > IN_ACCESS when data arrived in the pipe and tee(1) read it, and then
> > IN_CLOSE_WRITE | IN_CLOSE_NOWRITE when the pipe got closed (I thought you
> > mentioned modify and access events didn't get properly generated?).
> 
> I don't think that I did (did I?)
> 

I didn't see them properly generated for fanotify_mark, but that could
have been a failure on my end. inotify-watch is able to see the events.

> >
> > So as much as I agree that some fsnotify events on FMODE_STREAM files are
> > dubious, they could get used (possibly accidentally) and so after this
> > Chromium experience I think we just have to revert the change and live with
> > generating notification events for pipes to avoid userspace regressions.
> >
> > Thoughts?
> 
> I am fine with that.
> 
> Before I thought of trying out FMODE_STREAM, I was considering proposing
> a new flag, FMODE_NOIONOTIFY, to be set in alloc_file_pseudo(), to narrow
> Mel's patch down to dropping FS_MODIFY|FS_ACCESS.
> 
> But I guess the burden of proof is back on Mel.
> And besides, quoting Mel's patch:
> "A patch is pending that reduces, but does not eliminate, the overhead of
>     fsnotify but for files that cannot be looked up via a path, even that
>     small overhead is unnecessary"
> 
> So really, we are not even sacrificing much by reverting this patch.
> We're down to "nano optimizations".
> 

It's too marginal to be worth the risk. A plain revert is safest when
multiple people are hitting this.

-- 
Mel Gorman
SUSE Labs


* [PATCH] Revert "fs: Do not check if there is a fsnotify watcher on pseudo inodes"
  2020-06-29 14:05     ` Amir Goldstein
  2020-06-29 14:32       ` Mel Gorman
@ 2020-06-29 14:41       ` Mel Gorman
  2020-06-29 18:12         ` Jan Kara
  1 sibling, 1 reply; 10+ messages in thread
From: Mel Gorman @ 2020-06-29 14:41 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Jan Kara, Maxim Levitsky, linux-kernel, Alexander Viro,
	Amir Goldstein, linux-fsdevel

This reverts commit e9c15badbb7b ("fs: Do not check if there is a
fsnotify watcher on pseudo inodes"). The commit intended to eliminate
fsnotify-related overhead for pseudo inodes but it is broken in
concept. inotify can receive events of pipe files under /proc/X/fd and
chromium relies on close and open events for sandboxing. Maxim Levitsky
reported the following

  Chromium starts as a white rectangle, shows few white rectangles that
  resemble its notifications and then crashes.

  The stdout output from chromium:

  [mlevitsk@starship ~]$chromium-freeworld
  mesa: for the   --simplifycfg-sink-common option: may only occur zero or one times!
  mesa: for the   --global-isel-abort option: may only occur zero or one times!
  [3379:3379:0628/135151.440930:ERROR:browser_switcher_service.cc(238)] XXX Init()
  ../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
  Received signal 11 SEGV_MAPERR 0000004a9048

Crashes are not universal but even if chromium does not crash, it certainly
does not work properly. While filtering just modify and access might be
safe, the benefit is not worth the risk hence the revert.

Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
Fixes: e9c15badbb7b ("fs: Do not check if there is a fsnotify watcher on pseudo inodes")
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 fs/file_table.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/file_table.c b/fs/file_table.c
index 65603502fed6..656647f9575a 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -230,7 +230,7 @@ struct file *alloc_file_pseudo(struct inode *inode, struct vfsmount *mnt,
 		d_set_d_op(path.dentry, &anon_ops);
 	path.mnt = mntget(mnt);
 	d_instantiate(path.dentry, inode);
-	file = alloc_file(&path, flags | FMODE_NONOTIFY, fops);
+	file = alloc_file(&path, flags, fops);
 	if (IS_ERR(file)) {
 		ihold(inode);
 		path_put(&path);


* Re: [PATCH] Revert "fs: Do not check if there is a fsnotify watcher on pseudo inodes"
  2020-06-29 14:41       ` [PATCH] Revert "fs: Do not check if there is a fsnotify watcher on pseudo inodes" Mel Gorman
@ 2020-06-29 18:12         ` Jan Kara
  0 siblings, 0 replies; 10+ messages in thread
From: Jan Kara @ 2020-06-29 18:12 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Linus Torvalds, Jan Kara, Maxim Levitsky, linux-kernel,
	Alexander Viro, Amir Goldstein, linux-fsdevel

On Mon 29-06-20 15:41:45, Mel Gorman wrote:
> This reverts commit e9c15badbb7b ("fs: Do not check if there is a
> fsnotify watcher on pseudo inodes"). The commit intended to eliminate
> fsnotify-related overhead for pseudo inodes but it is broken in
> concept. inotify can receive events of pipe files under /proc/X/fd and
> chromium relies on close and open events for sandboxing. Maxim Levitsky
> reported the following
> 
>   Chromium starts as a white rectangle, shows few white rectangles that
>   resemble its notifications and then crashes.
> 
>   The stdout output from chromium:
> 
>   [mlevitsk@starship ~]$chromium-freeworld
>   mesa: for the   --simplifycfg-sink-common option: may only occur zero or one times!
>   mesa: for the   --global-isel-abort option: may only occur zero or one times!
>   [3379:3379:0628/135151.440930:ERROR:browser_switcher_service.cc(238)] XXX Init()
>   ../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
>   Received signal 11 SEGV_MAPERR 0000004a9048
> 
> Crashes are not universal but even if chromium does not crash, it certainly
> does not work properly. While filtering just modify and access might be
> safe, the benefit is not worth the risk hence the revert.
> 
> Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
> Fixes: e9c15badbb7b ("fs: Do not check if there is a fsnotify watcher on pseudo inodes")
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Thanks for the revert, Mel. I can see Linus has already picked it up, so we
are done.

								Honza

> ---
>  fs/file_table.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/file_table.c b/fs/file_table.c
> index 65603502fed6..656647f9575a 100644
> --- a/fs/file_table.c
> +++ b/fs/file_table.c
> @@ -230,7 +230,7 @@ struct file *alloc_file_pseudo(struct inode *inode, struct vfsmount *mnt,
>  		d_set_d_op(path.dentry, &anon_ops);
>  	path.mnt = mntget(mnt);
>  	d_instantiate(path.dentry, inode);
> -	file = alloc_file(&path, flags | FMODE_NONOTIFY, fops);
> +	file = alloc_file(&path, flags, fops);
>  	if (IS_ERR(file)) {
>  		ihold(inode);
>  		path_put(&path);
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

