From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1756424AbcIGDWk (ORCPT );
	Tue, 6 Sep 2016 23:22:40 -0400
Received: from email.kedacom.com ([221.224.36.251]:8862 "EHLO test1.kedacom.com"
	rhost-flags-OK-FAIL-OK-FAIL) by vger.kernel.org with ESMTP
	id S1756347AbcIGDWd (ORCPT );
	Tue, 6 Sep 2016 23:22:33 -0400
From: chengchao
Subject: Re: [PATCH] sched/core: simpler function for sched_exec migration
To: Oleg Nesterov
References: <1473056403-7877-1-git-send-email-chengchao@kedacom.com>
 <20160905131147.GA8552@redhat.com> <20160906152253.GB17586@redhat.com>
Cc: mingo@kernel.org, peterz@infradead.org, tj@kernel.org,
 akpm@linux-foundation.org, chris@chris-wilson.co.uk,
 linux-kernel@vger.kernel.org
Message-ID: <89a992af-67cd-91b4-8890-a19ccb251fe6@kedacom.com>
Date: Wed, 7 Sep 2016 11:22:28 +0800
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:45.0) Gecko/20100101 Thunderbird/45.0
MIME-Version: 1.0
In-Reply-To: <20160906152253.GB17586@redhat.com>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset=windows-1252
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Oleg, thank you very much.

on 09/06/2016 11:22 PM, Oleg Nesterov wrote:
> On 09/06, chengchao wrote:
>>
>> the key point is for CONFIG_PREEMPT_NONE=y,
>> ...
>> it is too much overhead for one task (fork()+exec()), isn't it?
>
> Yes, yes, I see, this is suboptimal.
> Not sure we actually do care,
> but yes, perhaps another helper which migrates the current task makes
> sense, I dunno.

For CONFIG_PREEMPT_NONE=y, this patch wants the stopper thread to
actually migrate the current task instead of doing nothing.

>
> But,
>
>>> stop_one_cpu_sync() assumes that cpu == smp_processor_id/task_cpu(current),
>>> and thus the stopper thread should preempt us at least after schedule()
>>> (if CONFIG_PREEMPT_NONE), so we do not need to synchronize.
>>>
>> yes. the stop_one_cpu_sync is not a good name, stop_one_cpu_schedule is better?
>> there is nothing about synchronization.
>
> We need to synchronize with the stopper to ensure it can't touch
> cpu_stop_work on stack after stop_one_cpu_sync() returns,

and yes, you are right.

>
>>> But this is not necessarily true? This task can migrate to another CPU
>>> before cpu_stop_queue_work() ?
>>>
>> before sched_exec() calls stop_one_cpu()/cpu_stop_queue_work(), this
>> task(current) cannot migrate to another cpu, because this task is running
>> on the cpu.
>
> Why? The running task can migrate to another CPU at any moment. Unless it
> runs with preemption disabled or CONFIG_PREEMPT_NONE=y.

yes, this patch focused on CONFIG_PREEMPT_NONE=y from the beginning, so I
didn't pay enough attention to the CONFIG_PREEMPT=y and
CONFIG_PREEMPT_VOLUNTARY=y cases.

>
> And this means that cpu_stop_queue_work() can queue the work on another
> CPU != smp_processor_id(), and in this case the kernel can crash because
> the pending cpu_stop_work can be overwritten right after return.
>
> So you need something like
>
> 	void stop_one_cpu_sync(cpu_stop_fn_t fn, void *arg)
> 	{
> 		struct cpu_stop_work work = { .fn = fn, .arg = arg, .done = NULL };
>
> 		preempt_disable();
> 		cpu_stop_queue_work(raw_smp_processor_id(), &work);
> 		preempt_enable_no_resched();
> 		schedule();
> 	}
>
> or I am totally confused. Note that it doesn't (and shouldn't) have
> the "int cpu" argument.
>

If preemption happens after preempt_enable_no_resched(), there is still a
risk that stop_one_cpu_sync() returns before the stopper thread can use
cpu_stop_work safely, as you said previously.

Thus, I modified the patch:

int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
{
	struct cpu_stop_done done;
	struct cpu_stop_work work = { .fn = fn, .arg = arg, .done = &done };

	cpu_stop_init_done(&done, 1);
	if (!cpu_stop_queue_work(cpu, &work))
		return -ENOENT;

#if defined(CONFIG_PREEMPT_NONE)
	/*
	 * let the stopper thread run as soon as possible,
	 * and keep current TASK_RUNNING.
	 */
	schedule();
#endif

	wait_for_completion(&done.completion);
	return done.ret;
}

This removes the new function stop_one_cpu_sync(). When I posted this
patch, I didn't want to modify stop_one_cpu(), because it has many
callers. But now, I think it is the right place to modify.

Any suggestions? thanks again.

> Oleg.
>
>