From: Jeff Layton <>
To: Alexander Viro <>
Cc: "Eric W. Biederman" <>,
	"Daniel P. Berrangé" <>
Subject: [PATCH v2 0/3] exec: fix passing of file locks across execve in multithreaded processes
Date: Thu, 30 Aug 2018 13:24:20 -0400
Message-ID: <>

v2: fix displaced_files cleanup in __do_execve_file

I've done a bit more testing (now with the error handling fixed), and it
seems to work OK. I have not looked at performance regressions here, as
I'm not sure how best to test for that.

My main question at this point is whether this is the correct way to
fix it. Cover letter from the RFC set follows:

A few months ago, Dan reported that when you call execve in a process
that is multithreaded, any traditional POSIX locks are silently dropped.

The problem is that we unshare the files_struct from the process very
early during exec, whenever it looks like it's shared between tasks.
Eventually, when the other, non-exec'ing tasks are killed, we tear down
the old files_struct. That teardown looks like a close() was issued on
each open fd, and that causes the locks to be dropped.

This patchset is a second stab at fixing this issue, this time following
the method suggested by Eric Biederman. The idea here is to move the
unshare_files() call after de_thread(), which helps ensure that we only
unshare the files_struct when it's truly shared between different
processes, and not just when the exec'ing process is multithreaded.
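In rough pseudocode, the reordering looks like this (names follow
fs/exec.c, but this is an illustrative sketch of the ordering, not the
literal diff):

```
/* before: */
do_execve()
    unshare_files()     /* too early: fires whenever ->count > 1,
                           even when only sibling threads share it */
    ...
    de_thread()         /* sibling threads killed afterwards */

/* after: */
do_execve()
    ...
    de_thread()         /* sibling threads are gone first... */
    unshare_files()     /* ...so ->count > 1 now implies sharing with
                           another process, and unsharing is needed */
```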

This seems to fix the originally reported problem (now known as xfstest
generic/484), and basic testing doesn't seem to show any issues.

During the original discussion though, Al had mentioned that this could
be problematic due to the fdtable being modifiable by other threads
(or even other processes) during the binfmt probe. That may make this
idea unworkable.

I'm also not terribly thrilled with the way this sprinkles the
files_struct->file_lock all over the place. It may be possible to do
some of this with atomic ops if the basic approach turns out to be
sound.

Comments and suggestions welcome.

Jeff Layton (3):
  exec: separate thread_count for files_struct
  exec: delay clone(CLONE_FILES) if task associated with current
    files_struct is exec'ing
  exec: do unshare_files after de_thread

 fs/exec.c               | 25 ++++++++++++++++++-------
 fs/file.c               | 18 ++++++++++++++++++
 include/linux/binfmts.h |  1 +
 include/linux/fdtable.h |  2 ++
 kernel/fork.c           | 26 ++++++++++++++++++++++----
 5 files changed, 61 insertions(+), 11 deletions(-)


Thread overview: 4+ messages
2018-08-30 17:24 Jeff Layton [this message]
2018-08-30 17:24 ` [PATCH v2 1/3] exec: separate thread_count for files_struct Jeff Layton
2018-08-30 17:24 ` [PATCH v2 2/3] exec: delay clone(CLONE_FILES) if task associated with current files_struct is exec'ing Jeff Layton
2018-08-30 17:24 ` [PATCH v2 3/3] exec: do unshare_files after de_thread Jeff Layton
