From: Oleg Nesterov <oleg@redhat.com>
To: Mike Christie <michael.christie@oracle.com>
Cc: linux@leemhuis.info, nicolas.dichtel@6wind.com, axboe@kernel.dk,
	ebiederm@xmission.com, torvalds@linux-foundation.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mst@redhat.com,
	sgarzare@redhat.com, jasowang@redhat.com, stefanha@redhat.com,
	brauner@kernel.org
Subject: Re: [PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
Date: Tue, 23 May 2023 14:15:06 +0200	[thread overview]
Message-ID: <20230523121506.GA6562@redhat.com> (raw)
In-Reply-To: <20230522174757.GC22159@redhat.com>

On 05/22, Oleg Nesterov wrote:
>
> Right now I think that "int dead" should die,

No, probably we shouldn't call get_signal() if we have already dequeued SIGKILL.

> but let me think tomorrow.

Maybe something like this... I don't like it, but I can't suggest anything better
right now.

	bool killed = false;

	for (;;) {
		...
	
		node = llist_del_all(&worker->work_list);
		if (!node) {
			schedule();
			/*
			 * When we get a SIGKILL our release function will
			 * be called. That will stop new IOs from being queued
			 * and check for outstanding cmd responses. It will then
			 * call vhost_task_stop to tell us to return and exit.
			 */
			if (signal_pending(current)) {
				struct ksignal ksig;

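				/*
				 * Dequeue the fatal signal only once; after
				 * that just clear TIF_SIGPENDING so that
				 * schedule() above can sleep again.
				 */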
				if (!killed)
					killed = get_signal(&ksig);

				clear_thread_flag(TIF_SIGPENDING);
			}

			continue;
		}

-------------------------------------------------------------------------------
But let me ask a couple of questions. Let's forget this patch, let's look at the
current code:

		node = llist_del_all(&worker->work_list);
		if (!node)
			schedule();

		node = llist_reverse_order(node);
		... process works ...

To me this looks a bit confusing. Shouldn't we do

		if (!node) {
			schedule();
			continue;
		}

just to make the code a bit clearer? If node == NULL, then llist_reverse_order()
and llist_for_each_entry_safe() will do nothing anyway. But this is minor.
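
IOW, a rough and untested sketch, just to show the restructuring I mean:

	for (;;) {
		...

		node = llist_del_all(&worker->work_list);
		if (!node) {
			schedule();
			/* nothing was queued, go back and wait again */
			continue;
		}

		node = llist_reverse_order(node);
		... process works ...
	}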



		/* make sure flag is seen after deletion */
		smp_wmb();
		llist_for_each_entry_safe(work, work_next, node, node) {
			clear_bit(VHOST_WORK_QUEUED, &work->flags);

I am not sure about smp_wmb + clear_bit. Once we clear VHOST_WORK_QUEUED,
vhost_work_queue() can add this work again and change work->node.next.

That is why we use _safe, but we need to ensure that llist_for_each_entry_safe()
completes LOAD(work->node.next) before VHOST_WORK_QUEUED is cleared.

So it seems that smp_wmb() can't help and should be removed; instead we need

		llist_for_each_entry_safe(...) {
			smp_mb__before_atomic();
			clear_bit(VHOST_WORK_QUEUED, &work->flags);

Also, if the work->fn pointer is not stable, we should read it before
smp_mb__before_atomic() as well.

No?
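
IOW, something like this -- again just an untested sketch, and I am assuming
that ->fn can change once vhost_work_queue() can re-queue this work:

		llist_for_each_entry_safe(work, work_next, node, node) {
			vhost_work_fn_t fn = work->fn;

			/*
			 * Ensure the loads of ->node.next (done by the _safe
			 * iteration above) and ->fn complete before the bit
			 * is cleared and vhost_work_queue() can re-use this
			 * work.
			 */
			smp_mb__before_atomic();
			clear_bit(VHOST_WORK_QUEUED, &work->flags);

			__set_current_state(TASK_RUNNING);
			fn(work);
		}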


			__set_current_state(TASK_RUNNING);

Why do we set TASK_RUNNING inside the loop? Does this mean that work->fn()
can return with current->state != TASK_RUNNING?


			work->fn(work);

Now the main question. Whatever we do, SIGKILL/SIGSTOP/etc. can come right
before we call work->fn(). Is it "safe" to run this callback while
signal_pending() or fatal_signal_pending() is true?


Finally. I have never looked into drivers/vhost/ before, so I don't understand
this code at all, but let me ask anyway... Can we change vhost_dev_flush()
to run the pending callbacks itself rather than wait for vhost_worker()?
I guess we can't, because ->mm won't be correct, but can you confirm?

Oleg.



Thread overview: 52+ messages
2023-05-22  2:51 [PATCH 0/3] vhost: Fix freezer/ps regressions Mike Christie
2023-05-22  2:51 ` [PATCH 1/3] signal: Don't always put SIGKILL in shared_pending Mike Christie
2023-05-23 15:30   ` Eric W. Biederman
2023-05-22  2:51 ` [PATCH 2/3] signal: Don't exit for PF_USER_WORKER tasks Mike Christie
2023-05-22  2:51 ` [PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression Mike Christie
2023-05-22 12:30   ` Oleg Nesterov
2023-05-22 17:00     ` Mike Christie
2023-05-22 17:47       ` Oleg Nesterov
2023-05-23 12:15         ` Oleg Nesterov [this message]
2023-05-23 15:57           ` Eric W. Biederman
2023-05-24 14:10             ` Oleg Nesterov
2023-05-24 14:44               ` Eric W. Biederman
2023-05-25 11:55                 ` Oleg Nesterov
2023-05-25 15:30                   ` Eric W. Biederman
2023-05-25 16:20                     ` Linus Torvalds
2023-05-27  9:49                       ` Eric W. Biederman
2023-05-27 16:12                         ` Linus Torvalds
2023-05-28  1:17                           ` Eric W. Biederman
2023-05-28  1:21                             ` Linus Torvalds
2023-05-29 11:19                             ` Oleg Nesterov
2023-05-29 16:09                               ` michael.christie
2023-05-29 17:46                                 ` Oleg Nesterov
2023-05-29 17:54                                   ` Oleg Nesterov
2023-05-29 19:03                                     ` Mike Christie
2023-05-29 19:35                                   ` Mike Christie
2023-05-29 19:46                                     ` michael.christie
2023-05-30  2:48                                       ` Eric W. Biederman
2023-05-30  2:38                                 ` Eric W. Biederman
2023-05-30 15:34                                   ` Mike Christie
2023-05-31  3:30                                   ` Mike Christie
2023-05-29 16:11                               ` michael.christie
2023-05-30 14:15                               ` Christian Brauner
2023-05-30 17:55                                 ` Oleg Nesterov
2023-05-30 15:01                         ` Eric W. Biederman
2023-05-31  5:22             ` Jason Wang
2023-05-24  0:02           ` Mike Christie
2023-05-25 16:15           ` Mike Christie
2023-05-28  1:41             ` Eric W. Biederman
2023-05-28 19:29               ` Mike Christie
2023-05-31  5:22           ` Jason Wang
2023-05-31  7:25             ` Oleg Nesterov
2023-05-31  8:17               ` Jason Wang
2023-05-31  9:14                 ` Oleg Nesterov
2023-06-01  2:44                   ` Jason Wang
2023-06-01  7:43                     ` Oleg Nesterov
2023-06-02  5:03                       ` Jason Wang
2023-06-02 17:58                         ` Oleg Nesterov
2023-06-02 20:07                           ` Linus Torvalds
2023-06-05 14:20                             ` Oleg Nesterov
2023-05-22 19:40   ` Michael S. Tsirkin
2023-05-23 15:39     ` Eric W. Biederman
2023-05-23 15:48     ` Mike Christie
