* [PATCH 0/3] arinc: Implement cpu-pool support
From: Nathan Studer @ 2013-11-18 20:16 UTC
To: xen-devel
Cc: Simon Martin, Ian Campbell, Stefano Stabellini, George Dunlap,
Ian Jackson, Robert VanVossen, Nathan Studer
From: Nathan Studer <nate.studer@dornerworks.com>
Clean up trailing whitespace in the arinc653 scheduler and then update it
to support cpu pools.
These changes do not implement arinc653 multicore. Since the schedule only
supports 1 vcpu entry per slot, even if the vcpus of a domain are run on
multiple pcpus, the scheduler will essentially serialize their execution.
The tools-side changes maintain compatibility with the old scheduler set/get
functions, even though they are libxc functions, since they were and remain
the only available interface for setting and getting the schedule.
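For reference, the compatibility shims added in patch 3 are plain macros in
xenctrl.h that forward to the new poolid-taking variants with cpupool 0:

    #define xc_sched_arinc653_schedule_set(xch, schedule) \
        xc_sched_arinc653_schedule_set_v2(xch, 0, schedule)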
Nathan Studer (3):
arinc: whitespace and formatting fixes
arinc: Add cpu-pool support to scheduler.
arinc: Add poolid parameter to scheduler get/set functions.
tools/libxc/xc_arinc653.c | 10 ++-
tools/libxc/xenctrl.h | 12 ++-
xen/common/sched_arinc653.c | 172 ++++++++++++++++++++++++++-----------------
3 files changed, 121 insertions(+), 73 deletions(-)
--
1.7.9.5
* [PATCH 1/3] arinc: whitespace and formatting fixes
From: Nathan Studer @ 2013-11-18 20:16 UTC
To: xen-devel
Cc: Simon Martin, Ian Campbell, Stefano Stabellini, George Dunlap,
Ian Jackson, Robert VanVossen, Nathan Studer
From: Nathan Studer <nate.studer@dornerworks.com>
Remove the excessive amount of trailing whitespace in the
arinc653 scheduler file and add a local variables block.
Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
---
xen/common/sched_arinc653.c | 96 ++++++++++++++++++++++++-------------------
1 file changed, 53 insertions(+), 43 deletions(-)
diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index 7b7b387..f4eb943 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -40,9 +40,9 @@
* Private Macros *
**************************************************************************/
-/**
- * Retrieve the idle VCPU for a given physical CPU
- */
+/**
+ * Retrieve the idle VCPU for a given physical CPU
+ */
#define IDLETASK(cpu) (idle_vcpu[cpu])
/**
@@ -76,7 +76,7 @@ typedef struct arinc653_vcpu_s
struct list_head list;
} arinc653_vcpu_t;
-/**
+/**
* The sched_entry_t structure holds a single entry of the
* ARINC 653 schedule.
*/
@@ -101,8 +101,8 @@ typedef struct sched_entry_s
typedef struct a653sched_priv_s
{
/**
- * This array holds the active ARINC 653 schedule.
- *
+ * This array holds the active ARINC 653 schedule.
+ *
* When the system tries to start a new VCPU, this schedule is scanned
* to look for a matching (handle, VCPU #) pair. If both the handle (UUID)
* and VCPU number match, then the VCPU is allowed to run. Its run time
@@ -112,12 +112,12 @@ typedef struct a653sched_priv_s
/**
* This variable holds the number of entries that are valid in
- * the arinc653_schedule table.
- *
+ * the arinc653_schedule table.
+ *
* This is not necessarily the same as the number of domains in the
* schedule. A domain could be listed multiple times within the schedule,
* or a domain with multiple VCPUs could have a different
- * schedule entry for each VCPU.
+ * schedule entry for each VCPU.
*/
int num_schedule_entries;
@@ -131,9 +131,9 @@ typedef struct a653sched_priv_s
*/
s_time_t next_major_frame;
- /**
- * pointers to all Xen VCPU structures for iterating through
- */
+ /**
+ * pointers to all Xen VCPU structures for iterating through
+ */
struct list_head vcpu_list;
} a653sched_priv_t;
@@ -143,14 +143,14 @@ typedef struct a653sched_priv_s
/**
* This function compares two domain handles.
- *
+ *
* @param h1 Pointer to handle 1
* @param h2 Pointer to handle 2
- *
+ *
* @return <ul>
- * <li> <0: handle 1 is less than handle 2
- * <li> 0: handle 1 is equal to handle 2
- * <li> >0: handle 1 is greater than handle 2
+ * <li> <0: handle 1 is less than handle 2
+ * <li> 0: handle 1 is equal to handle 2
+ * <li> >0: handle 1 is greater than handle 2
* </ul>
*/
static int dom_handle_cmp(const xen_domain_handle_t h1,
@@ -162,11 +162,11 @@ static int dom_handle_cmp(const xen_domain_handle_t h1,
/**
* This function searches the vcpu list to find a VCPU that matches
* the domain handle and VCPU ID specified.
- *
+ *
* @param ops Pointer to this instance of the scheduler structure
* @param handle Pointer to handler
* @param vcpu_id VCPU ID
- *
+ *
* @return <ul>
* <li> Pointer to the matching VCPU if one is found
* <li> NULL otherwise
@@ -191,7 +191,7 @@ static struct vcpu *find_vcpu(
/**
* This function updates the pointer to the Xen VCPU structure for each entry
* in the ARINC 653 schedule.
- *
+ *
* @param ops Pointer to this instance of the scheduler structure
* @return <None>
*/
@@ -211,7 +211,7 @@ static void update_schedule_vcpus(const struct scheduler *ops)
* in place a new ARINC653 schedule.
*
* @param ops Pointer to this instance of the scheduler structure
- *
+ *
* @return <ul>
* <li> 0 = success
* <li> !0 = error
@@ -253,10 +253,10 @@ arinc653_sched_set(
if ( !found_dom0 )
goto fail;
- /*
+ /*
* Error if the major frame is not large enough to run all entries as
* indicated by comparing the total run time to the major frame length.
- */
+ */
if ( total_runtime > schedule->major_frame )
goto fail;
@@ -276,10 +276,10 @@ arinc653_sched_set(
update_schedule_vcpus(ops);
/*
- * The newly-installed schedule takes effect immediately. We do not even
+ * The newly-installed schedule takes effect immediately. We do not even
* wait for the current major frame to expire.
*
- * Signal a new major frame to begin. The next major frame is set up by
+ * Signal a new major frame to begin. The next major frame is set up by
* the do_schedule callback function when it is next invoked.
*/
sched_priv->next_major_frame = NOW();
@@ -392,8 +392,8 @@ a653sched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
/*
* Initialize our ARINC 653 scheduler-specific information for the VCPU.
- * The VCPU starts "asleep." When Xen is ready for the VCPU to run, it
- * will call the vcpu_wake scheduler callback function and our scheduler
+ * The VCPU starts "asleep." When Xen is ready for the VCPU to run, it
+ * will call the vcpu_wake scheduler callback function and our scheduler
* will mark the VCPU awake.
*/
svc->vc = vc;
@@ -483,7 +483,7 @@ a653sched_free_domdata(const struct scheduler *ops, void *data)
/**
* Xen scheduler callback function to sleep a VCPU
- *
+ *
* @param ops Pointer to this instance of the scheduler structure
* @param vc Pointer to the VCPU structure for the current domain
*/
@@ -503,7 +503,7 @@ a653sched_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
/**
* Xen scheduler callback function to wake up a VCPU
- *
+ *
* @param ops Pointer to this instance of the scheduler structure
* @param vc Pointer to the VCPU structure for the current domain
*/
@@ -519,10 +519,10 @@ a653sched_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
/**
* Xen scheduler callback function to select a VCPU to run.
* This is the main scheduler routine.
- *
+ *
* @param ops Pointer to this instance of the scheduler structure
* @param now Current time
- *
+ *
* @return Address of the VCPU structure scheduled to be run next
* Amount of time to execute the returned VCPU
* Flag for whether the VCPU was migrated
@@ -559,7 +559,7 @@ a653sched_do_schedule(
}
}
- /*
+ /*
* If we exhausted the domains in the schedule and still have time left
* in the major frame then switch next at the next major frame.
*/
@@ -567,10 +567,10 @@ a653sched_do_schedule(
next_switch_time = sched_priv->next_major_frame;
/*
- * If there are more domains to run in the current major frame, set
- * new_task equal to the address of next domain's VCPU structure.
- * Otherwise, set new_task equal to the address of the idle task's VCPU
- * structure.
+ * If there are more domains to run in the current major frame, set
+ * new_task equal to the address of next domain's VCPU structure.
+ * Otherwise, set new_task equal to the address of the idle task's VCPU
+ * structure.
*/
new_task = (sched_index < sched_priv->num_schedule_entries)
? sched_priv->schedule[sched_index].vc
@@ -584,10 +584,10 @@ a653sched_do_schedule(
new_task = IDLETASK(0);
BUG_ON(new_task == NULL);
- /*
+ /*
* Check to make sure we did not miss a major frame.
- * This is a good test for robust partitioning.
- */
+ * This is a good test for robust partitioning.
+ */
BUG_ON(now >= sched_priv->next_major_frame);
/* Tasklet work (which runs in idle VCPU context) overrides all else. */
@@ -595,8 +595,8 @@ a653sched_do_schedule(
new_task = IDLETASK(0);
/*
- * Return the amount of time the next domain has to run and the address
- * of the selected task's VCPU structure.
+ * Return the amount of time the next domain has to run and the address
+ * of the selected task's VCPU structure.
*/
ret.time = next_switch_time - now;
ret.task = new_task;
@@ -609,10 +609,10 @@ a653sched_do_schedule(
/**
* Xen scheduler callback function to select a CPU for the VCPU to run on
- *
+ *
* @param ops Pointer to this instance of the scheduler structure
* @param v Pointer to the VCPU structure for the current domain
- *
+ *
* @return Number of selected physical CPU
*/
static int
@@ -709,3 +709,13 @@ const struct scheduler sched_arinc653_def = {
.tick_suspend = NULL,
.tick_resume = NULL,
};
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
--
1.7.9.5
* [PATCH 2/3] arinc: Add cpu-pool support to scheduler.
From: Nathan Studer @ 2013-11-18 20:16 UTC
To: xen-devel
Cc: Simon Martin, Ian Campbell, Stefano Stabellini, George Dunlap,
Ian Jackson, Robert VanVossen, Nathan Studer
From: Nathan Studer <nate.studer@dornerworks.com>
1. Remove the restriction that dom0 must be in the schedule, since dom-0 may
not belong to the scheduler's pool.
2. Add a schedule entry for each of dom-0's vcpus as they are created.
3. Add code to deal with empty schedules in the do_schedule function.
4. Call the correct idle task for the pcpu on which the scheduling decision
is being made in do_schedule.
5. Add code to prevent migration of a vcpu.
6. Implement a proper cpu_pick function, which prefers the current processor.
These changes do not implement arinc653 multicore. Since the schedule only
supports 1 vcpu entry per slot, even if the vcpus of a domain are run on
multiple pcpus, the scheduler will essentially serialize their execution.
Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
---
xen/common/sched_arinc653.c | 76 +++++++++++++++++++++++++++++--------------
1 file changed, 52 insertions(+), 24 deletions(-)
diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index f4eb943..139177e 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -41,6 +41,11 @@
**************************************************************************/
/**
+ * Default timeslice for domain 0.
+ */
+#define DEFAULT_TIMESLICE MILLISECS(10)
+
+/**
* Retrieve the idle VCPU for a given physical CPU
*/
#define IDLETASK(cpu) (idle_vcpu[cpu])
@@ -224,8 +229,6 @@ arinc653_sched_set(
{
a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
s_time_t total_runtime = 0;
- bool_t found_dom0 = 0;
- const static xen_domain_handle_t dom0_handle = {0};
unsigned int i;
/* Check for valid major frame and number of schedule entries. */
@@ -236,10 +239,6 @@ arinc653_sched_set(
for ( i = 0; i < schedule->num_sched_entries; i++ )
{
- if ( dom_handle_cmp(schedule->sched_entries[i].dom_handle,
- dom0_handle) == 0 )
- found_dom0 = 1;
-
/* Check for a valid VCPU ID and run time. */
if ( (schedule->sched_entries[i].vcpu_id >= MAX_VIRT_CPUS)
|| (schedule->sched_entries[i].runtime <= 0) )
@@ -249,10 +248,6 @@ arinc653_sched_set(
total_runtime += schedule->sched_entries[i].runtime;
}
- /* Error if the schedule doesn't contain a slot for domain 0. */
- if ( !found_dom0 )
- goto fail;
-
/*
* Error if the major frame is not large enough to run all entries as
* indicated by comparing the total run time to the major frame length.
@@ -347,12 +342,6 @@ a653sched_init(struct scheduler *ops)
ops->sched_data = prv;
- prv->schedule[0].dom_handle[0] = '\0';
- prv->schedule[0].vcpu_id = 0;
- prv->schedule[0].runtime = MILLISECS(10);
- prv->schedule[0].vc = NULL;
- prv->num_schedule_entries = 1;
- prv->major_frame = MILLISECS(10);
prv->next_major_frame = 0;
INIT_LIST_HEAD(&prv->vcpu_list);
@@ -380,7 +369,9 @@ a653sched_deinit(const struct scheduler *ops)
static void *
a653sched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
{
+ a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
arinc653_vcpu_t *svc;
+ int entry;
/*
* Allocate memory for the ARINC 653-specific scheduler data information
@@ -390,6 +381,19 @@ a653sched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
if ( svc == NULL )
return NULL;
+ /* add every one of dom0's vcpus to the schedule */
+ if (vc->domain->domain_id == 0)
+ {
+ entry = sched_priv->num_schedule_entries;
+ sched_priv->schedule[entry].dom_handle[0] = '\0';
+ sched_priv->schedule[entry].vcpu_id = vc->vcpu_id;
+ sched_priv->schedule[entry].runtime = DEFAULT_TIMESLICE;
+ sched_priv->schedule[entry].vc = vc;
+
+ sched_priv->major_frame += DEFAULT_TIMESLICE;
+ ++sched_priv->num_schedule_entries;
+ }
+
/*
* Initialize our ARINC 653 scheduler-specific information for the VCPU.
* The VCPU starts "asleep." When Xen is ready for the VCPU to run, it
@@ -450,7 +454,7 @@ a653sched_alloc_pdata(const struct scheduler *ops, int cpu)
static void
a653sched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
{
- /* nop */
+ /* NOP */
}
/**
@@ -538,8 +542,13 @@ a653sched_do_schedule(
static int sched_index = 0;
static s_time_t next_switch_time;
a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
+ const int cpu = smp_processor_id();
- if ( now >= sched_priv->next_major_frame )
+ if ( sched_priv->num_schedule_entries < 1 )
+ {
+ sched_priv->next_major_frame = now + DEFAULT_TIMESLICE;
+ }
+ else if ( now >= sched_priv->next_major_frame )
{
/* time to enter a new major frame
* the first time this function is called, this will be true */
@@ -574,14 +583,14 @@ a653sched_do_schedule(
*/
new_task = (sched_index < sched_priv->num_schedule_entries)
? sched_priv->schedule[sched_index].vc
- : IDLETASK(0);
+ : IDLETASK(cpu);
/* Check to see if the new task can be run (awake & runnable). */
if ( !((new_task != NULL)
&& (AVCPU(new_task) != NULL)
&& AVCPU(new_task)->awake
&& vcpu_runnable(new_task)) )
- new_task = IDLETASK(0);
+ new_task = IDLETASK(cpu);
BUG_ON(new_task == NULL);
/*
@@ -592,7 +601,12 @@ a653sched_do_schedule(
/* Tasklet work (which runs in idle VCPU context) overrides all else. */
if ( tasklet_work_scheduled )
- new_task = IDLETASK(0);
+ new_task = IDLETASK(cpu);
+
+ /* Running this task would result in a migration */
+ if ( (!is_idle_vcpu(new_task))
+ && (new_task->processor != cpu) )
+ new_task = IDLETASK(cpu);
/*
* Return the amount of time the next domain has to run and the address
@@ -600,7 +614,7 @@ a653sched_do_schedule(
*/
ret.time = next_switch_time - now;
ret.task = new_task;
- ret.migrated = 0; /* we do not support migration */
+ ret.migrated = 0;
BUG_ON(ret.time <= 0);
@@ -618,8 +632,22 @@ a653sched_do_schedule(
static int
a653sched_pick_cpu(const struct scheduler *ops, struct vcpu *vc)
{
- /* this implementation only supports one physical CPU */
- return 0;
+ cpumask_t *online;
+ int cpu;
+
+ /* If present, prefer vc's current processor, else
+ just find the first valid vcpu */
+ online = cpupool_scheduler_cpumask(vc->domain->cpupool);
+
+ cpu = cpumask_first(online);
+
+ if ( (cpumask_test_cpu(vc->processor, online)) ||
+ (cpu >= nr_cpu_ids) )
+ {
+ cpu = vc->processor;
+ }
+
+ return cpu;
}
/**
--
1.7.9.5
* [PATCH 3/3] arinc: Add poolid parameter to scheduler get/set functions.
From: Nathan Studer @ 2013-11-18 20:16 UTC
To: xen-devel
Cc: Simon Martin, Ian Campbell, Stefano Stabellini, George Dunlap,
Ian Jackson, Robert VanVossen, Nathan Studer
From: Nathan Studer <nate.studer@dornerworks.com>
Also create a backwards-compatible interface to these functions.
Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
---
tools/libxc/xc_arinc653.c | 10 ++++++----
tools/libxc/xenctrl.h | 12 ++++++++++--
2 files changed, 16 insertions(+), 6 deletions(-)
diff --git a/tools/libxc/xc_arinc653.c b/tools/libxc/xc_arinc653.c
index fe2ddcb..3310f38 100644
--- a/tools/libxc/xc_arinc653.c
+++ b/tools/libxc/xc_arinc653.c
@@ -27,8 +27,9 @@
#include "xc_private.h"
int
-xc_sched_arinc653_schedule_set(
+xc_sched_arinc653_schedule_set_v2(
xc_interface *xch,
+ uint32_t cpupool_id,
struct xen_sysctl_arinc653_schedule *schedule)
{
int rc;
@@ -42,7 +43,7 @@ xc_sched_arinc653_schedule_set(
return -1;
sysctl.cmd = XEN_SYSCTL_scheduler_op;
- sysctl.u.scheduler_op.cpupool_id = 0;
+ sysctl.u.scheduler_op.cpupool_id = cpupool_id;
sysctl.u.scheduler_op.sched_id = XEN_SCHEDULER_ARINC653;
sysctl.u.scheduler_op.cmd = XEN_SYSCTL_SCHEDOP_putinfo;
set_xen_guest_handle(sysctl.u.scheduler_op.u.sched_arinc653.schedule,
@@ -56,8 +57,9 @@ xc_sched_arinc653_schedule_set(
}
int
-xc_sched_arinc653_schedule_get(
+xc_sched_arinc653_schedule_get_v2(
xc_interface *xch,
+ uint32_t cpupool_id,
struct xen_sysctl_arinc653_schedule *schedule)
{
int rc;
@@ -71,7 +73,7 @@ xc_sched_arinc653_schedule_get(
return -1;
sysctl.cmd = XEN_SYSCTL_scheduler_op;
- sysctl.u.scheduler_op.cpupool_id = 0;
+ sysctl.u.scheduler_op.cpupool_id = cpupool_id;
sysctl.u.scheduler_op.sched_id = XEN_SCHEDULER_ARINC653;
sysctl.u.scheduler_op.cmd = XEN_SYSCTL_SCHEDOP_getinfo;
set_xen_guest_handle(sysctl.u.scheduler_op.u.sched_arinc653.schedule,
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 4ac6b8a..190f442 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -793,14 +793,22 @@ int xc_sched_credit2_domain_get(xc_interface *xch,
uint32_t domid,
struct xen_domctl_sched_credit2 *sdom);
+#define xc_sched_arinc653_schedule_set(xch, schedule) \
+ xc_sched_arinc653_schedule_set_v2(xch, 0, schedule)
+
int
-xc_sched_arinc653_schedule_set(
+xc_sched_arinc653_schedule_set_v2(
xc_interface *xch,
+ uint32_t cpupool_id,
struct xen_sysctl_arinc653_schedule *schedule);
+#define xc_sched_arinc653_schedule_get(xch, schedule) \
+ xc_sched_arinc653_schedule_get_v2(xch, 0, schedule)
+
int
-xc_sched_arinc653_schedule_get(
+xc_sched_arinc653_schedule_get_v2(
xc_interface *xch,
+ uint32_t cpupool_id,
struct xen_sysctl_arinc653_schedule *schedule);
/**
--
1.7.9.5
* Re: [PATCH 1/3] arinc: whitespace and formatting fixes
From: Andrew Cooper @ 2013-11-19 9:54 UTC
To: Nathan Studer
Cc: Simon Martin, Ian Campbell, Stefano Stabellini, George Dunlap,
Ian Jackson, Robert VanVossen, xen-devel
On 18/11/2013 20:16, Nathan Studer wrote:
> From: Nathan Studer <nate.studer@dornerworks.com>
>
> Remove the excessive amount of trailing whitespace in the
> arinc653 scheduler file and add a local variables block.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
That's much better, thanks.
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
* Re: [PATCH 2/3] arinc: Add cpu-pool support to scheduler.
From: Andrew Cooper @ 2013-11-19 10:30 UTC
To: Nathan Studer
Cc: Simon Martin, Ian Campbell, Stefano Stabellini, George Dunlap,
Ian Jackson, Robert VanVossen, xen-devel
On 18/11/2013 20:16, Nathan Studer wrote:
> From: Nathan Studer <nate.studer@dornerworks.com>
>
> 1. Remove the restriction that dom0 must be in the schedule, since dom-0 may
> not belong to the scheduler's pool.
> 2. Add a schedule entry for each of dom-0's vcpus as they are created.
> 3. Add code to deal with empty schedules in the do_schedule function.
> 4. Call the correct idle task for the pcpu on which the scheduling decision
> is being made in do_schedule.
> 5. Add code to prevent migration of a vcpu.
> 6. Implement a proper cpu_pick function, which prefers the current processor.
>
> These changes do not implement arinc653 multicore. Since the schedule only
> supports 1 vcpu entry per slot, even if the vcpus of a domain are run on
> multiple pcpus, the scheduler will essentially serialize their execution.
>
> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
> ---
> xen/common/sched_arinc653.c | 76 +++++++++++++++++++++++++++++--------------
> 1 file changed, 52 insertions(+), 24 deletions(-)
>
> diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
> index f4eb943..139177e 100644
> --- a/xen/common/sched_arinc653.c
> +++ b/xen/common/sched_arinc653.c
> @@ -41,6 +41,11 @@
> **************************************************************************/
>
> /**
> + * Default timeslice for domain 0.
> + */
> +#define DEFAULT_TIMESLICE MILLISECS(10)
> +
> +/**
> * Retrieve the idle VCPU for a given physical CPU
> */
> #define IDLETASK(cpu) (idle_vcpu[cpu])
> @@ -224,8 +229,6 @@ arinc653_sched_set(
> {
> a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
> s_time_t total_runtime = 0;
> - bool_t found_dom0 = 0;
> - const static xen_domain_handle_t dom0_handle = {0};
> unsigned int i;
>
> /* Check for valid major frame and number of schedule entries. */
> @@ -236,10 +239,6 @@ arinc653_sched_set(
>
> for ( i = 0; i < schedule->num_sched_entries; i++ )
> {
> - if ( dom_handle_cmp(schedule->sched_entries[i].dom_handle,
> - dom0_handle) == 0 )
> - found_dom0 = 1;
> -
> /* Check for a valid VCPU ID and run time. */
> if ( (schedule->sched_entries[i].vcpu_id >= MAX_VIRT_CPUS)
> || (schedule->sched_entries[i].runtime <= 0) )
> @@ -249,10 +248,6 @@ arinc653_sched_set(
> total_runtime += schedule->sched_entries[i].runtime;
> }
>
> - /* Error if the schedule doesn't contain a slot for domain 0. */
> - if ( !found_dom0 )
> - goto fail;
> -
> /*
> * Error if the major frame is not large enough to run all entries as
> * indicated by comparing the total run time to the major frame length.
> @@ -347,12 +342,6 @@ a653sched_init(struct scheduler *ops)
>
> ops->sched_data = prv;
>
> - prv->schedule[0].dom_handle[0] = '\0';
> - prv->schedule[0].vcpu_id = 0;
> - prv->schedule[0].runtime = MILLISECS(10);
> - prv->schedule[0].vc = NULL;
> - prv->num_schedule_entries = 1;
> - prv->major_frame = MILLISECS(10);
> prv->next_major_frame = 0;
> INIT_LIST_HEAD(&prv->vcpu_list);
>
> @@ -380,7 +369,9 @@ a653sched_deinit(const struct scheduler *ops)
> static void *
> a653sched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
> {
> + a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
> arinc653_vcpu_t *svc;
> + int entry;
sched_priv->num_schedule_entries is inconsistently used as signed and
unsigned. It should be an unsigned value, and updated to be so
everywhere, including in the a653sched_priv_t structure.
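e.g. (untested sketch):

    /* Number of valid entries in the schedule[] array below. */
    unsigned int num_schedule_entries;

with matching unsigned types for the new 'entry' local and anywhere else the
count is compared or used as an index.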
>
> /*
> * Allocate memory for the ARINC 653-specific scheduler data information
> @@ -390,6 +381,19 @@ a653sched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
> if ( svc == NULL )
> return NULL;
>
> + /* add every one of dom0's vcpus to the schedule */
> + if (vc->domain->domain_id == 0)
Xen style would include spaces immediately inside the brackets.
Also, it looks like you could do with a bounds check against
ARINC653_MAX_DOMAINS_PER_SCHEDULE before trying to put another dom0 into
the mix.
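i.e. something like this (untested, reusing your current body and with Xen
spacing inside the brackets):

    /* Add every one of dom0's vcpus to the schedule, space permitting. */
    if ( vc->domain->domain_id == 0 &&
         sched_priv->num_schedule_entries < ARINC653_MAX_DOMAINS_PER_SCHEDULE )
    {
        entry = sched_priv->num_schedule_entries;
        ...
    }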
> + {
> + entry = sched_priv->num_schedule_entries;
> + sched_priv->schedule[entry].dom_handle[0] = '\0';
> + sched_priv->schedule[entry].vcpu_id = vc->vcpu_id;
> + sched_priv->schedule[entry].runtime = DEFAULT_TIMESLICE;
> + sched_priv->schedule[entry].vc = vc;
> +
> + sched_priv->major_frame += DEFAULT_TIMESLICE;
> + ++sched_priv->num_schedule_entries;
> + }
> +
> /*
> * Initialize our ARINC 653 scheduler-specific information for the VCPU.
> * The VCPU starts "asleep." When Xen is ready for the VCPU to run, it
> @@ -450,7 +454,7 @@ a653sched_alloc_pdata(const struct scheduler *ops, int cpu)
> static void
> a653sched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
> {
> - /* nop */
> + /* NOP */
This change seems a little pointless.
> }
>
> /**
> @@ -538,8 +542,13 @@ a653sched_do_schedule(
> static int sched_index = 0;
> static s_time_t next_switch_time;
> a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
> + const int cpu = smp_processor_id();
This should be an unsigned int.
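i.e.:

    const unsigned int cpu = smp_processor_id();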
>
> - if ( now >= sched_priv->next_major_frame )
> + if ( sched_priv->num_schedule_entries < 1 )
> + {
> + sched_priv->next_major_frame = now + DEFAULT_TIMESLICE;
> + }
Xen style would require these braces to be omitted.
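i.e.:

    if ( sched_priv->num_schedule_entries < 1 )
        sched_priv->next_major_frame = now + DEFAULT_TIMESLICE;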
> + else if ( now >= sched_priv->next_major_frame )
> {
> /* time to enter a new major frame
> * the first time this function is called, this will be true */
> @@ -574,14 +583,14 @@ a653sched_do_schedule(
> */
> new_task = (sched_index < sched_priv->num_schedule_entries)
> ? sched_priv->schedule[sched_index].vc
> - : IDLETASK(0);
> + : IDLETASK(cpu);
>
> /* Check to see if the new task can be run (awake & runnable). */
> if ( !((new_task != NULL)
> && (AVCPU(new_task) != NULL)
> && AVCPU(new_task)->awake
> && vcpu_runnable(new_task)) )
> - new_task = IDLETASK(0);
> + new_task = IDLETASK(cpu);
> BUG_ON(new_task == NULL);
>
> /*
> @@ -592,7 +601,12 @@ a653sched_do_schedule(
>
> /* Tasklet work (which runs in idle VCPU context) overrides all else. */
> if ( tasklet_work_scheduled )
> - new_task = IDLETASK(0);
> + new_task = IDLETASK(cpu);
> +
> + /* Running this task would result in a migration */
> + if ( (!is_idle_vcpu(new_task))
The brackets around !is_idle_vcpu() are useless.
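i.e. just:

    if ( !is_idle_vcpu(new_task) && new_task->processor != cpu )
        new_task = IDLETASK(cpu);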
> + && (new_task->processor != cpu) )
> + new_task = IDLETASK(cpu);
>
> /*
> * Return the amount of time the next domain has to run and the address
> @@ -600,7 +614,7 @@ a653sched_do_schedule(
> */
> ret.time = next_switch_time - now;
> ret.task = new_task;
> - ret.migrated = 0; /* we do not support migration */
> + ret.migrated = 0;
>
> BUG_ON(ret.time <= 0);
>
> @@ -618,8 +632,22 @@ a653sched_do_schedule(
> static int
> a653sched_pick_cpu(const struct scheduler *ops, struct vcpu *vc)
> {
> - /* this implementation only supports one physical CPU */
> - return 0;
> + cpumask_t *online;
> + int cpu;
unsigned int.
> +
> + /* If present, prefer vc's current processor, else
> + just find the first valid vcpu */
Xen style would be:
/*
* If present....
*/
> + online = cpupool_scheduler_cpumask(vc->domain->cpupool);
> +
> + cpu = cpumask_first(online);
> +
> + if ( (cpumask_test_cpu(vc->processor, online)) ||
> + (cpu >= nr_cpu_ids) )
> + {
> + cpu = vc->processor;
> + }
superfluous brackets and braces.
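i.e.:

    if ( cpumask_test_cpu(vc->processor, online) || cpu >= nr_cpu_ids )
        cpu = vc->processor;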
~Andrew
> +
> + return cpu;
> }
>
> /**
* Re: [PATCH 3/3] arinc: Add poolid parameter to scheduler get/set functions.
From: Andrew Cooper @ 2013-11-19 10:32 UTC
To: Nathan Studer
Cc: Simon Martin, Ian Campbell, Stefano Stabellini, George Dunlap,
Ian Jackson, Robert VanVossen, xen-devel
On 18/11/2013 20:16, Nathan Studer wrote:
> From: Nathan Studer <nate.studer@dornerworks.com>
>
> Also create a backwards-compatible interface to these functions.
>
> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
libxc is an unstable API. No need for _v2 suffixes.
~Andrew
* Re: [PATCH 2/3] arinc: Add cpu-pool support to scheduler.
From: George Dunlap @ 2013-11-19 11:18 UTC
To: Andrew Cooper
Cc: Simon Martin, Ian Campbell, Stefano Stabellini, Ian Jackson,
Robert VanVossen, xen-devel, Nathan Studer
On 11/19/2013 10:30 AM, Andrew Cooper wrote:
> On 18/11/2013 20:16, Nathan Studer wrote:
>> @@ -450,7 +454,7 @@ a653sched_alloc_pdata(const struct scheduler *ops, int cpu)
>> static void
>> a653sched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
>> {
>> - /* nop */
>> + /* NOP */
>
> This change seems a little pointless.
So is this criticism. If the maintainer thinks the comment would look
better upper-case, he can change it.
-George
* Re: [PATCH 2/3] arinc: Add cpu-pool support to scheduler.
From: George Dunlap @ 2013-11-19 11:30 UTC
To: Nathan Studer
Cc: Simon Martin, Ian Campbell, Stefano Stabellini, Ian Jackson,
Robert VanVossen, xen-devel
On 11/18/2013 08:16 PM, Nathan Studer wrote:
> From: Nathan Studer <nate.studer@dornerworks.com>
>
> 1. Remove the restriction that dom0 must be in the schedule, since dom-0 may
> not belong to the scheduler's pool.
> 2. Add a schedule entry for each of dom-0's vcpus as they are created.
> 3. Add code to deal with empty schedules in the do_schedule function.
> 4. Call the correct idle task for the pcpu on which the scheduling decision
> is being made in do_schedule.
> 5. Add code to prevent migration of a vcpu.
> 6. Implement a proper cpu_pick function, which prefers the current processor.
>
> These changes do not implement arinc653 multicore. Since the schedule only
> supports 1 vcpu entry per slot, even if the vcpus of a domain are run on
> multiple pcpus, the scheduler will essentially serialize their execution.
>
> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
If this were a change to one of the main schedulers I think I would say
that it was too late for such an intrusive change. But at the moment, I
don't think there are other users of this code, so I'm inclined to be
more permissive.
Unless someone wants to argue otherwise:
Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
* Re: [PATCH 1/3] arinc: whitespace and formatting fixes
From: George Dunlap @ 2013-11-19 11:30 UTC
To: Nathan Studer
Cc: Simon Martin, Ian Campbell, Stefano Stabellini, Ian Jackson,
Robert VanVossen, xen-devel
On 11/18/2013 08:16 PM, Nathan Studer wrote:
> From: Nathan Studer <nate.studer@dornerworks.com>
>
> Remove the excessive amount of trailing whitespace in the
> arinc653 scheduler file and add a local variables block.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
* Re: [PATCH 3/3] arinc: Add poolid parameter to scheduler get/set functions.
From: George Dunlap @ 2013-11-19 11:32 UTC
To: Andrew Cooper
Cc: Simon Martin, Ian Campbell, Stefano Stabellini, Ian Jackson,
Robert VanVossen, xen-devel, Nathan Studer
On 11/19/2013 10:32 AM, Andrew Cooper wrote:
> On 18/11/2013 20:16, Nathan Studer wrote:
>> From: Nathan Studer <nate.studer@dornerworks.com>
>>
>> Also create a backwards-compatible interface to these functions.
>>
>> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
>
> libxc is an unstable API. No need for _v2 suffixes.
Yes -- please just change the function signature and all of its callers
(seems to be libxl only).
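Something like this (sketch), keeping the original names:

    int
    xc_sched_arinc653_schedule_set(
        xc_interface *xch,
        uint32_t cpupool_id,
        struct xen_sysctl_arinc653_schedule *schedule);

and then pass the pool id through from the libxl callers.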
-George
* Re: [PATCH 2/3] arinc: Add cpu-pool support to scheduler.
From: Andrew Cooper @ 2013-11-19 11:33 UTC
To: George Dunlap
Cc: Simon Martin, Ian Campbell, Stefano Stabellini, Ian Jackson,
Robert VanVossen, xen-devel, Nathan Studer
On 19/11/2013 11:18, George Dunlap wrote:
> On 11/19/2013 10:30 AM, Andrew Cooper wrote:
>> On 18/11/2013 20:16, Nathan Studer wrote:
>>> @@ -450,7 +454,7 @@ a653sched_alloc_pdata(const struct scheduler
>>> *ops, int cpu)
>>> static void
>>> a653sched_free_pdata(const struct scheduler *ops, void *pcpu, int
>>> cpu)
>>> {
>>> - /* nop */
>>> + /* NOP */
>>
>> This change seems a little pointless.
>
> So is this criticism. If the maintainer thinks the comment would look
> better upper-case, he can change it.
>
> -George
Sorry - I did not intend for this to come across as a criticism. It is
of course the maintainer's prerogative as to which case he likes his
comments in.
It is also a seemingly random and unrelated change to the purpose of the
patch, and therefore open to query under our guidelines for patches.
http://wiki.xen.org/wiki/Submitting_Xen_Patches#Break_down_your_patches
~Andrew
* Re: [PATCH 2/3] arinc: Add cpu-pool support to scheduler.
From: Nate Studer @ 2013-11-19 13:01 UTC
To: Andrew Cooper, George Dunlap
Cc: Simon Martin, Ian Campbell, Stefano Stabellini, Ian Jackson,
Robert VanVossen, xen-devel
On 11/19/2013 6:33 AM, Andrew Cooper wrote:
> On 19/11/2013 11:18, George Dunlap wrote:
>> On 11/19/2013 10:30 AM, Andrew Cooper wrote:
>>> On 18/11/2013 20:16, Nathan Studer wrote:
>>>> @@ -450,7 +454,7 @@ a653sched_alloc_pdata(const struct scheduler
>>>> *ops, int cpu)
>>>> static void
>>>> a653sched_free_pdata(const struct scheduler *ops, void *pcpu, int
>>>> cpu)
>>>> {
>>>> - /* nop */
>>>> + /* NOP */
>>>
>>> This change seems a little pointless.
>>
>> So is this criticism. If the maintainer thinks the comment would look
>> better upper-case, he can change it.
>>
>> -George
>
> Sorry - I did not intend for this to come across as a criticism. It is
> of course the maintainer's prerogative as to which case he likes his
> comments in.
>
> It is also a seemingly random and unrelated change to the purpose of the
> patch, and therefore open to query under our guidelines for patches.
>
> http://wiki.xen.org/wiki/Submitting_Xen_Patches#Break_down_your_patches
>
> ~Andrew
>
While I do actually prefer lowercase, I did not intend to change the comment.
I'll remove this change in the next spin of the patches.
Thanks for keeping me honest, Andrew.
Nate
* Re: [PATCH 2/3] arinc: Add cpu-pool support to scheduler.
From: Nate Studer @ 2013-11-19 13:58 UTC
To: Andrew Cooper; +Cc: George Dunlap, Robert VanVossen, xen-devel
On 11/19/2013 5:30 AM, Andrew Cooper wrote:
>>
>> @@ -380,7 +369,9 @@ a653sched_deinit(const struct scheduler *ops)
>> static void *
>> a653sched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
>> {
>> + a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
>> arinc653_vcpu_t *svc;
>> + int entry;
>
> sched_priv->num_schedule_entries is inconsistently used as signed and
> unsigned. It should be an unsigned value, and updated to be so
> everywhere, including in the a653sched_priv_t structure.
>
Right, this inconsistency should be fixed.
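Something like the following should do it (only num_schedule_entries and
the entry index appear in the diff above; the surrounding layout is a
sketch):

/* In a653sched_priv_t: make the entry count unsigned. */
unsigned int num_schedule_entries;              /* was: int */

/* ... and use a matching unsigned index wherever the schedule is
 * scanned, e.g. in a653sched_alloc_vdata(): */
unsigned int entry;

for ( entry = 0; entry < sched_priv->num_schedule_entries; entry++ )
    ;   /* compare schedule[entry] against this vcpu's (handle, id) pair */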
>>
>> /*
>> * Allocate memory for the ARINC 653-specific scheduler data information
>> @@ -390,6 +381,19 @@ a653sched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
>> if ( svc == NULL )
>> return NULL;
>>
>> + /* add every one of dom0's vcpus to the schedule */
>> + if (vc->domain->domain_id == 0)
>
> Xen style would include spaces immediately inside the brackets.
>
> Also, it looks like you could do with a bounds check against
ARINC653_MAX_DOMAINS_PER_SCHEDULE before trying to put another dom0 vcpu into
> the mix.
Will add bounds checking.
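Probably something along these lines (a sketch only -- the error path
mirrors the existing svc == NULL handling, and the actual append is
elided), with the spacing fixed as you suggest:

/* Add every one of dom0's vcpus to the schedule, but only while
 * there is room for another entry. */
if ( vc->domain->domain_id == 0 )
{
    if ( sched_priv->num_schedule_entries >=
         ARINC653_MAX_DOMAINS_PER_SCHEDULE )
    {
        xfree(svc);
        return NULL;
    }

    /* ... fill in schedule[num_schedule_entries] and increment it ... */
}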
>> /**
>> @@ -538,8 +542,13 @@ a653sched_do_schedule(
>> static int sched_index = 0;
>> static s_time_t next_switch_time;
>> a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
>> + const int cpu = smp_processor_id();
>
> This should be an unsigned int.
Yes it should. This needs to be fixed in pick_cpu as well.
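I.e., the same one-line change in both places:

const unsigned int cpu = smp_processor_id();    /* was: const int */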
>
>>
>> - if ( now >= sched_priv->next_major_frame )
>> + if ( sched_priv->num_schedule_entries < 1 )
>> + {
>> + sched_priv->next_major_frame = now + DEFAULT_TIMESLICE;
>> + }
>
> Xen style would require these braces to be omitted.
Even when followed by a multiple-statement "else if"? I see braces in the same
construct in the credit scheduler.
if ( list_empty(&svc->active_vcpu_elem) )
{
__csched_vcpu_acct_start(prv, svc);
}
else if ( _csched_cpu_pick(ops, current, 0) != cpu )
{
I have no problem changing it, since I want to avoid spreading style
inconsistencies, but I just want to make sure.
>
>> + else if ( now >= sched_priv->next_major_frame )
>> {
>> /* time to enter a new major frame
>> * the first time this function is called, this will be true */
The remaining comments are style comments, which I will fix up in the next
version of the patch.
Nate
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 2/3] arinc: Add cpu-pool support to scheduler.
2013-11-19 13:58 ` Nate Studer
@ 2013-11-19 14:04 ` Nate Studer
2013-11-19 18:16 ` Andrew Cooper
1 sibling, 0 replies; 16+ messages in thread
From: Nate Studer @ 2013-11-19 14:04 UTC (permalink / raw)
To: Andrew Cooper; +Cc: George Dunlap, Jan Beulich, Robert VanVossen, xen-devel
On 11/19/2013 8:58 AM, Nate Studer wrote:
>
> The remaining comments are style comments, which I will fix up in the next
> version of the patch.
>
> Nate
I suppose, while I am at it, I should also address this style comment from Jan
in a separate patch as well.
http://lists.xenproject.org/archives/html/xen-devel/2013-03/msg01775.html
Nate
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 2/3] arinc: Add cpu-pool support to scheduler.
2013-11-19 13:58 ` Nate Studer
2013-11-19 14:04 ` Nate Studer
@ 2013-11-19 18:16 ` Andrew Cooper
1 sibling, 0 replies; 16+ messages in thread
From: Andrew Cooper @ 2013-11-19 18:16 UTC (permalink / raw)
To: Nate Studer; +Cc: George Dunlap, Robert VanVossen, xen-devel
On 19/11/2013 13:58, Nate Studer wrote:
> On 11/19/2013 5:30 AM, Andrew Cooper wrote:
>
>>>
>>> @@ -380,7 +369,9 @@ a653sched_deinit(const struct scheduler *ops)
>>> static void *
>>> a653sched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
>>> {
>>> + a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
>>> arinc653_vcpu_t *svc;
>>> + int entry;
>> sched_priv->num_schedule_entries is inconsistently used as signed and
>> unsigned. It should be an unsigned value, and updated to be so
>> everywhere, including in the a653sched_priv_t structure.
>>
> Right, this inconsistency should be fixed.
>
>>>
>>> /*
>>> * Allocate memory for the ARINC 653-specific scheduler data information
>>> @@ -390,6 +381,19 @@ a653sched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
>>> if ( svc == NULL )
>>> return NULL;
>>>
>>> + /* add every one of dom0's vcpus to the schedule */
>>> + if (vc->domain->domain_id == 0)
>> Xen style would include spaces immediately inside the brackets.
>>
>> Also, it looks like you could do with a bounds check against
>> ARINC653_MAX_DOMAINS_PER_SCHEDULE before trying to put another dom0 vcpu into
>> the mix.
> Will add bounds checking.
>
>>> /**
>>> @@ -538,8 +542,13 @@ a653sched_do_schedule(
>>> static int sched_index = 0;
>>> static s_time_t next_switch_time;
>>> a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
>>> + const int cpu = smp_processor_id();
>> This should be an unsigned int.
> Yes it should. This needs to be fixed in pick_cpu as well.
>
>>>
>>> - if ( now >= sched_priv->next_major_frame )
>>> + if ( sched_priv->num_schedule_entries < 1 )
>>> + {
>>> + sched_priv->next_major_frame = now + DEFAULT_TIMESLICE;
>>> + }
>> Xen style would require these braces to be omitted.
> Even when followed by a multiple-statement "else if"? I see braces in the same
> construct in the credit scheduler.
>
> if ( list_empty(&svc->active_vcpu_elem) )
> {
> __csched_vcpu_acct_start(prv, svc);
> }
> else if ( _csched_cpu_pick(ops, current, 0) != cpu )
> {
>
> I have no problem changing it, since I want to avoid spreading style
> inconsistencies, but I just want to make sure.
Yes, even with multiple "else if" statements. In this case, the credit
scheduler would be wrong.
Style fixes like this are typically introduced on a 'when working in the
area' basis. This avoids style fixes for the sake of style fixes, as
much as it prevents propagating bad style.
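Applied to the hunk quoted above, that convention gives:

if ( sched_priv->num_schedule_entries < 1 )
    sched_priv->next_major_frame = now + DEFAULT_TIMESLICE;
else if ( now >= sched_priv->next_major_frame )
{
    /* multiple statements in this branch, so its braces stay */
}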
~Andrew
>
>>> + else if ( now >= sched_priv->next_major_frame )
>>> {
>>> /* time to enter a new major frame
>>> * the first time this function is called, this will be true */
> The remaining comments are style comments, which I will fix up in the next
> version of the patch.
>
> Nate
>
^ permalink raw reply [flat|nested] 16+ messages in thread
Thread overview: 16+ messages
2013-11-18 20:16 [PATCH 0/3] arinc: Implement cpu-pool support Nathan Studer
2013-11-18 20:16 ` [PATCH 1/3] arinc: whitespace and formatting fixes Nathan Studer
2013-11-19 9:54 ` Andrew Cooper
2013-11-19 11:30 ` George Dunlap
2013-11-18 20:16 ` [PATCH 2/3] arinc: Add cpu-pool support to scheduler Nathan Studer
2013-11-19 10:30 ` Andrew Cooper
2013-11-19 11:18 ` George Dunlap
2013-11-19 11:33 ` Andrew Cooper
2013-11-19 13:01 ` Nate Studer
2013-11-19 13:58 ` Nate Studer
2013-11-19 14:04 ` Nate Studer
2013-11-19 18:16 ` Andrew Cooper
2013-11-19 11:30 ` George Dunlap
2013-11-18 20:16 ` [PATCH 3/3] arinc: Add poolid parameter to scheduler get/set functions Nathan Studer
2013-11-19 10:32 ` Andrew Cooper
2013-11-19 11:32 ` George Dunlap