* How to migrate vCPUs based on Credit Scheduler
@ 2017-04-03 16:21 甘清甜
  2017-04-03 17:18 ` Lars Kurth
  0 siblings, 1 reply; 3+ messages in thread
From: 甘清甜 @ 2017-04-03 16:21 UTC (permalink / raw)
  To: xen-devel



Hi,

 I'm now designing a new vCPU scheduler in Xen, and trying to implement
it based on the Credit scheduler in Xen 4.5.1. But I encountered
some problems when debugging the code.

Most of the code modifications are in the function csched_schedule() in
the file xen/common/csched_schedule.c. The core code is as follows:

if ( vcpu_runnable(current) )
{
    if ( /* match the migration condition */ )
    {
        cpu_affinity = pick_pcpu_runq();  /* this function is defined by myself */

        pcpulock = pcpu_schedule_lock_irqsave(cpu_affinity, &pcpulock_flag);

        TRACE_3D(TRC_CSCHED_STOLEN_VCPU, cpu_affinity, domain_id, vcpu_id);
        SCHED_VCPU_STAT_CRANK(scurr, migrate_q);
        SCHED_STAT_CRANK(migrate_queued);
        WARN_ON(scurr->vcpu->is_urgent);
        scurr->vcpu->processor = cpu_affinity;

        __runq_insert(cpu_affinity, scurr);
        pcpu_schedule_unlock_irqrestore(pcpulock, pcpulock_flag, cpu_affinity);
    }
    else
        __runq_insert(cpu, scurr);
}
else
    BUG_ON( is_idle_vcpu(current) || list_empty(runq) );


I ran the modified Xen, but from the log I found that, although I
insert the vCPU into the runqueue of another pCPU, the vCPU still
appears on the old pCPU in the following scheduling period.
Now I have a few questions:

1. Does the Xen scheduler framework support changing the pCPU of a
vCPU once its scheduling time slice is used up, rather than only
stealing a vCPU from another pCPU's runqueue during the load-balance
phase?

2. If yes, what state of the vCPU should be changed before inserting
it into the runqueue of the destination pCPU?

Thank you very much!


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: How to migrate vCPUs based on Credit Scheduler
  2017-04-03 16:21 How to migrate vCPUs based on Credit Scheduler 甘清甜
@ 2017-04-03 17:18 ` Lars Kurth
  2017-04-04 10:59   ` Dario Faggioli
  0 siblings, 1 reply; 3+ messages in thread
From: Lars Kurth @ 2017-04-03 17:18 UTC (permalink / raw)
  To: 甘清甜
  Cc: xen-devel, Dario Faggioli, George Dunlap, Anshul Makkar

Adding George, Dario & Anshul
Lars

> On 3 Apr 2017, at 17:21, 甘清甜 <qingtiangan@gmail.com> wrote:
> 
> Hi,
> 
>  I'm now designing a new vCPU scheduler in Xen, and trying to implement
> it based on the Credit scheduler in Xen 4.5.1. But I encountered
> some problems when debugging the code.
> 
> Most of the code modifications are in the function csched_schedule() in
> the file xen/common/csched_schedule.c. The core code is as follows:
> 
> if ( vcpu_runnable(current) )
> {
>     if ( /* match the migration condition */ )
>     {
>         cpu_affinity = pick_pcpu_runq();  /* this function is defined by myself */
> 
>         pcpulock = pcpu_schedule_lock_irqsave(cpu_affinity, &pcpulock_flag);
> 
>         TRACE_3D(TRC_CSCHED_STOLEN_VCPU, cpu_affinity, domain_id, vcpu_id);
>         SCHED_VCPU_STAT_CRANK(scurr, migrate_q);
>         SCHED_STAT_CRANK(migrate_queued);
>         WARN_ON(scurr->vcpu->is_urgent);
>         scurr->vcpu->processor = cpu_affinity;
> 
>         __runq_insert(cpu_affinity, scurr);
>         pcpu_schedule_unlock_irqrestore(pcpulock, pcpulock_flag, cpu_affinity);
>     }
>     else
>         __runq_insert(cpu, scurr);
> }
> else
>     BUG_ON( is_idle_vcpu(current) || list_empty(runq) );
> 
> 
> I ran the modified Xen, but from the log I found that, although I
> insert the vCPU into the runqueue of another pCPU, the vCPU still
> appears on the old pCPU in the following scheduling period.
> Now I have a few questions:
> 
> 1. Does the Xen scheduler framework support changing the pCPU of a
> vCPU once its scheduling time slice is used up, rather than only
> stealing a vCPU from another pCPU's runqueue during the load-balance
> phase?
> 
> 2. If yes, what state of the vCPU should be changed before inserting
> it into the runqueue of the destination pCPU?
> 
> Thank you very much!



^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: How to migrate vCPUs based on Credit Scheduler
  2017-04-03 17:18 ` Lars Kurth
@ 2017-04-04 10:59   ` Dario Faggioli
  0 siblings, 0 replies; 3+ messages in thread
From: Dario Faggioli @ 2017-04-04 10:59 UTC (permalink / raw)
  To: Lars Kurth, 甘清甜
  Cc: xen-devel, Anshul Makkar, George Dunlap



On Mon, 2017-04-03 at 18:18 +0100, Lars Kurth wrote:
> Adding George, Dario & Anshul
>
Hey, hello,

Thanks Lars for the heads up...

> > On 3 Apr 2017, at 17:21, 甘清甜 <qingtiangan@gmail.com> wrote:
> > 
> > Hi,
> > 
> >  I'm now designing a new vCPU scheduler in Xen, and trying to
> > implement it based on the Credit scheduler in Xen 4.5.1.
>
Can I ask what the purpose and end goal of this is? 99% of the time,
knowing that helps in giving good advice.

Also, are you forced to use 4.5.x for some specific reason? If that
is not the case, it's always better to base your work on the most
recent code base possible (ideally the upstream git repo or, if
that's not possible or convenient, the latest released version, i.e.,
4.8, and soon enough, 4.9-rc).

> >  But I encountered some problems when debugging the code.
> > 
> > Most of the code modifications are in the function csched_schedule()
> > in the file xen/common/csched_schedule.c. The core code is as follows:
> > 
> > if ( vcpu_runnable(current) )
> > {
> >     if ( /* match the migration condition */ )
> >     {
> >         cpu_affinity = pick_pcpu_runq();  /* this function is defined by myself */
> > 
> >         pcpulock = pcpu_schedule_lock_irqsave(cpu_affinity, &pcpulock_flag);
> > 
Err... it's rather hard to comment without seeing the code. All of it,
I mean. For instance, you're already in csched_schedule(), called, say,
on pCPU X, and hence you already hold the scheduler lock of pCPU X.

Calling pcpu_schedule_lock() as above is at high risk of deadlock,
depending on what "match the migration condition" actually means, and
on how pick_pcpu_runq() is defined.

In fact, if you look, for instance, at csched_load_balance(), you'll
see that it does a trylock, exactly for this very reason.


> >         TRACE_3D(TRC_CSCHED_STOLEN_VCPU, cpu_affinity, domain_id, vcpu_id);
> >         SCHED_VCPU_STAT_CRANK(scurr, migrate_q);
> >         SCHED_STAT_CRANK(migrate_queued);
> >         WARN_ON(scurr->vcpu->is_urgent);
> >         scurr->vcpu->processor = cpu_affinity;
> > 
> >         __runq_insert(cpu_affinity, scurr);
> >         pcpu_schedule_unlock_irqrestore(pcpulock, pcpulock_flag, cpu_affinity);
> >     }
> >     else
> >         __runq_insert(cpu, scurr);
> > }
> > else
> >     BUG_ON( is_idle_vcpu(current) || list_empty(runq) );
> > 
> > 
> > I ran the modified Xen, but from the log I found that, although I
> > insert the vCPU into the runqueue of another pCPU, the vCPU still
> > appears on the old pCPU in the following scheduling period.
>
Again: it's impossible to tell why this is happening looking only at
the code snippet above.

For instance, what does csched_schedule() return in the case where you
want the migration to occur? That influences what will run during the
next scheduling period.

> > Now I have a few questions:
> > 
> > 1. Does the Xen scheduler framework support changing the pCPU of a
> > vCPU once its scheduling time slice is used up, rather than only
> > stealing a vCPU from another pCPU's runqueue during the load-balance
> > phase?
> > 
It does such a thing already (at least, if I understood correctly what
you're asking). Look at csched_load_balance() and csched_runq_steal();
they do exactly this.

> > 2. If yes, what state of the vCPU should be changed before inserting
> > it into the runqueue of the destination pCPU?
> > 
The answer varies depending on whether the vCPU is currently running
on a pCPU, sitting in a pCPU's runqueue, or blocked/sleeping.

Looking closely at what csched_runq_steal() does is the best source of
information, but really, in order to say anything useful, we need to
see the code and to know what the end goal is. :-)

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2017-04-04 10:59 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-04-03 16:21 How to migrate vCPUs based on Credit Scheduler 甘清甜
2017-04-03 17:18 ` Lars Kurth
2017-04-04 10:59   ` Dario Faggioli
