linux-kernel.vger.kernel.org archive mirror
* [PATCH RFC v1 00/12] Cyclic Scheduler Against RTC
@ 2016-04-14  6:47 Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 01/12] Kconfig change Bill Huey (hui)
                   ` (11 more replies)
  0 siblings, 12 replies; 13+ messages in thread
From: Bill Huey (hui) @ 2016-04-14  6:47 UTC (permalink / raw)
  To: Peter Zijlstra, Steven Rostedt, Alessandro Zummo, linux-kernel
  Cc: luca abeni, Juri Lelli, Mike Galbraith, Dario Faggioli, Thomas Gleixner

Hi,

Simple compilation updates here, along with an admittance logic cleanup.
The test program wasn't working properly without it; I slipped up in the
rush to get the first posting out.

64-bit portability fixes are coming next. I made a bogus assumption about
needing an RB tree for admittance; that will go next as well.

I'd also like to credit Marco Ballesio from Palm for the multimedia
insights; I omitted that in my first message. Hope the build bots like
these changes :)

bill

---

Bill Huey (hui) (12):
  Kconfig change
  Reroute rtc update irqs to the cyclic scheduler handler
  Add cyclic support to rtc-dev.c
  Anonymous struct initialization
  Task tracking per file descriptor
  Add anonymous struct to sched_rt_entity
  kernel/userspace additions for additional ioctl() support for rtc
  Compilation support
  Add priority support for the cyclic scheduler
  Export SCHED_FIFO/RT requeuing functions
  Cyclic scheduler support
  Cyclic/rtc documentation

 Documentation/scheduler/sched-cyclic-rtc.txt | 468 ++++++++++++++++++++
 drivers/rtc/Kconfig                          |   5 +
 drivers/rtc/class.c                          |   3 +
 drivers/rtc/interface.c                      |  23 +
 drivers/rtc/rtc-dev.c                        | 167 ++++++++
 include/linux/init_task.h                    |  18 +
 include/linux/rtc.h                          |   3 +
 include/linux/sched.h                        |  15 +
 include/uapi/linux/rtc.h                     |   4 +
 kernel/sched/Makefile                        |   1 +
 kernel/sched/core.c                          |  13 +
 kernel/sched/cyclic.c                        | 620 +++++++++++++++++++++++++++
 kernel/sched/cyclic.h                        |  86 ++++
 kernel/sched/cyclic_rt.h                     |   7 +
 kernel/sched/rt.c                            |  41 ++
 15 files changed, 1474 insertions(+)
 create mode 100644 Documentation/scheduler/sched-cyclic-rtc.txt
 create mode 100644 kernel/sched/cyclic.c
 create mode 100644 kernel/sched/cyclic.h
 create mode 100644 kernel/sched/cyclic_rt.h

-- 
2.5.0

* [PATCH RFC v1 01/12] Kconfig change
  2016-04-14  6:47 [PATCH RFC v1 00/12] Cyclic Scheduler Against RTC Bill Huey (hui)
@ 2016-04-14  6:47 ` Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 02/12] Reroute rtc update irqs to the cyclic scheduler handler Bill Huey (hui)
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Bill Huey (hui) @ 2016-04-14  6:47 UTC (permalink / raw)
  To: Peter Zijlstra, Steven Rostedt, Alessandro Zummo, linux-kernel
  Cc: luca abeni, Juri Lelli, Mike Galbraith, Dario Faggioli, Thomas Gleixner

Add the Kconfig selection option for the cyclic scheduler.

Signed-off-by: Bill Huey (hui) <bill.huey@gmail.com>
---
 drivers/rtc/Kconfig | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig
index 544bd34..8a1b704 100644
--- a/drivers/rtc/Kconfig
+++ b/drivers/rtc/Kconfig
@@ -73,6 +73,11 @@ config RTC_DEBUG
 	  Say yes here to enable debugging support in the RTC framework
 	  and individual RTC drivers.
 
+config RTC_CYCLIC
+	bool "RTC cyclic executive scheduler support"
+	help
+	  Frame/Cyclic executive scheduler support through the RTC interface
+
 comment "RTC interfaces"
 
 config RTC_INTF_SYSFS
-- 
2.5.0

* [PATCH RFC v1 02/12] Reroute rtc update irqs to the cyclic scheduler handler
  2016-04-14  6:47 [PATCH RFC v1 00/12] Cyclic Scheduler Against RTC Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 01/12] Kconfig change Bill Huey (hui)
@ 2016-04-14  6:47 ` Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 03/12] Add cyclic support to rtc-dev.c Bill Huey (hui)
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Bill Huey (hui) @ 2016-04-14  6:47 UTC (permalink / raw)
  To: Peter Zijlstra, Steven Rostedt, Alessandro Zummo, linux-kernel
  Cc: luca abeni, Juri Lelli, Mike Galbraith, Dario Faggioli, Thomas Gleixner

Redirect rtc update irqs so that they drive the cyclic scheduler timer
handler instead. Let the handler determine which slot to activate next.
This is similar to scheduler tick handling, but just for the cyclic
scheduler.

Signed-off-by: Bill Huey (hui) <bill.huey@gmail.com>
---
 drivers/rtc/interface.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
index 9ef5f6f..6d39d40 100644
--- a/drivers/rtc/interface.c
+++ b/drivers/rtc/interface.c
@@ -17,6 +17,10 @@
 #include <linux/log2.h>
 #include <linux/workqueue.h>
 
+#ifdef CONFIG_RTC_CYCLIC
+#include "../kernel/sched/cyclic.h"
+#endif
+
 static int rtc_timer_enqueue(struct rtc_device *rtc, struct rtc_timer *timer);
 static void rtc_timer_remove(struct rtc_device *rtc, struct rtc_timer *timer);
 
@@ -488,6 +492,9 @@ EXPORT_SYMBOL_GPL(rtc_update_irq_enable);
 void rtc_handle_legacy_irq(struct rtc_device *rtc, int num, int mode)
 {
 	unsigned long flags;
+#ifdef CONFIG_RTC_CYCLIC
+	int handled = 0;
+#endif
 
 	/* mark one irq of the appropriate mode */
 	spin_lock_irqsave(&rtc->irq_lock, flags);
@@ -500,7 +507,23 @@ void rtc_handle_legacy_irq(struct rtc_device *rtc, int num, int mode)
 		rtc->irq_task->func(rtc->irq_task->private_data);
 	spin_unlock_irqrestore(&rtc->irq_task_lock, flags);
 
+#ifdef CONFIG_RTC_CYCLIC
+	/* wake up slot_curr if overrun task */
+	if (mode & RTC_PF) {
+		if (rt_overrun_rq_admitted()) {
+			/* advance the cursor, overrun report */
+			rt_overrun_timer_handler(rtc);
+			handled = 1;
+		}
+	}
+
+	if (!handled) {
+		wake_up_interruptible(&rtc->irq_queue);
+	}
+#else
 	wake_up_interruptible(&rtc->irq_queue);
+#endif
+
 	kill_fasync(&rtc->async_queue, SIGIO, POLL_IN);
 }
 
-- 
2.5.0

* [PATCH RFC v1 03/12] Add cyclic support to rtc-dev.c
  2016-04-14  6:47 [PATCH RFC v1 00/12] Cyclic Scheduler Against RTC Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 01/12] Kconfig change Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 02/12] Reroute rtc update irqs to the cyclic scheduler handler Bill Huey (hui)
@ 2016-04-14  6:47 ` Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 04/12] Anonymous struct initialization Bill Huey (hui)
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Bill Huey (hui) @ 2016-04-14  6:47 UTC (permalink / raw)
  To: Peter Zijlstra, Steven Rostedt, Alessandro Zummo, linux-kernel
  Cc: luca abeni, Juri Lelli, Mike Galbraith, Dario Faggioli, Thomas Gleixner

Wait-queue changes to rtc_dev_read() so that it can support overrun count
reporting when multiple threads are blocked against a single wait object.

ioctl() additions that allow callers to admit the calling thread to the
cyclic scheduler.
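
As a rough usage sketch (the RTC_OV_* ioctl values are added later in this
series; error handling omitted):

	int fd = open("/dev/rtc0", O_RDONLY);
	__aligned_u64 slots = 0x1111111111111111ULL;	/* every 4th slot */
	unsigned long overruns;

	ioctl(fd, RTC_OV_ADMIT, &slots);	/* admit the calling thread */
	read(fd, &overruns, sizeof(overruns));	/* block until our slot; returns
						   the overrun count */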

Signed-off-by: Bill Huey (hui) <bill.huey@gmail.com>
---
 drivers/rtc/rtc-dev.c | 167 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 167 insertions(+)

diff --git a/drivers/rtc/rtc-dev.c b/drivers/rtc/rtc-dev.c
index a6d9434..82c5cff 100644
--- a/drivers/rtc/rtc-dev.c
+++ b/drivers/rtc/rtc-dev.c
@@ -18,6 +18,15 @@
 #include <linux/sched.h>
 #include "rtc-core.h"
 
+#ifdef CONFIG_RTC_CYCLIC
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+
+#include <../kernel/sched/sched.h>
+#include <../kernel/sched/cyclic.h>
+//#include <../kernel/sched/cyclic_rt.h>
+#endif
+
 static dev_t rtc_devt;
 
 #define RTC_DEV_MAX 16 /* 16 RTCs should be enough for everyone... */
@@ -29,6 +38,10 @@ static int rtc_dev_open(struct inode *inode, struct file *file)
 					struct rtc_device, char_dev);
 	const struct rtc_class_ops *ops = rtc->ops;
 
+#ifdef CONFIG_RTC_CYCLIC
+	reset_rt_overrun();
+#endif
+
 	if (test_and_set_bit_lock(RTC_DEV_BUSY, &rtc->flags))
 		return -EBUSY;
 
@@ -153,13 +166,26 @@ rtc_dev_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 {
 	struct rtc_device *rtc = file->private_data;
 
+#ifdef CONFIG_RTC_CYCLIC
+	DEFINE_WAIT_FUNC(wait, single_default_wake_function);
+#else
 	DECLARE_WAITQUEUE(wait, current);
+#endif
 	unsigned long data;
+#ifdef CONFIG_RTC_CYCLIC
+	unsigned long flags;
+	int wake = 0, block = 0;
+#endif
 	ssize_t ret;
 
 	if (count != sizeof(unsigned int) && count < sizeof(unsigned long))
 		return -EINVAL;
 
+#ifdef CONFIG_RTC_CYCLIC
+	if (rt_overrun_task_yield(current))
+		goto yield;
+printk("%s: 0 color = %d \n", __func__, current->rt.rt_overrun.color);
+#endif
 	add_wait_queue(&rtc->irq_queue, &wait);
 	do {
 		__set_current_state(TASK_INTERRUPTIBLE);
@@ -169,23 +195,65 @@ rtc_dev_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 		rtc->irq_data = 0;
 		spin_unlock_irq(&rtc->irq_lock);
 
+#ifdef CONFIG_RTC_CYCLIC
+if (block) {
+	block = 0;
+	if (wake) {
+		printk("%s: wake \n", __func__);
+		wake = 0;
+	} else {
+		printk("%s: ~wake \n", __func__);
+	}
+}
+#endif
 		if (data != 0) {
+#ifdef CONFIG_RTC_CYCLIC
+			/* overrun reporting */
+			raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+			if (_on_rt_overrun_admitted(current)) {
+				/* pass back to userspace */
+				data = rt_task_count(current);
+				rt_task_count(current) = 0;
+			}
+			raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
 			ret = 0;
+printk("%s: 1 color = %d \n", __func__, current->rt.rt_overrun.color);
 			break;
 		}
+#else
+			ret = 0;
+			break;
+		}
+#endif
 		if (file->f_flags & O_NONBLOCK) {
 			ret = -EAGAIN;
+#ifdef CONFIG_RTC_CYCLIC
+printk("%s: 2 color = %d \n", __func__, current->rt.rt_overrun.color);
+#endif
 			break;
 		}
 		if (signal_pending(current)) {
+#ifdef CONFIG_RTC_CYCLIC
+printk("%s: 3 color = %d \n", __func__, current->rt.rt_overrun.color);
+#endif
 			ret = -ERESTARTSYS;
 			break;
 		}
+#ifdef CONFIG_RTC_CYCLIC
+		block = 1;
+#endif
 		schedule();
+#ifdef CONFIG_RTC_CYCLIC
+		/* debugging */
+		wake = 1;
+#endif
 	} while (1);
 	set_current_state(TASK_RUNNING);
 	remove_wait_queue(&rtc->irq_queue, &wait);
 
+#ifdef CONFIG_RTC_CYCLIC
+ret:
+#endif
 	if (ret == 0) {
 		/* Check for any data updates */
 		if (rtc->ops->read_callback)
@@ -201,6 +269,29 @@ rtc_dev_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 				sizeof(unsigned long);
 	}
 	return ret;
+
+#ifdef CONFIG_RTC_CYCLIC
+yield:
+
+	spin_lock_irq(&rtc->irq_lock);
+	data = rtc->irq_data;
+	rtc->irq_data = 0;
+	spin_unlock_irq(&rtc->irq_lock);
+
+	raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+	if (_on_rt_overrun_admitted(current)) {
+		/* pass back to userspace */
+		data = rt_task_count(current);
+		rt_task_count(current) = 0;
+	}
+	else {
+	}
+
+	raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+	ret = 0;
+
+	goto ret;
+#endif
 }
 
 static unsigned int rtc_dev_poll(struct file *file, poll_table *wait)
@@ -215,6 +306,56 @@ static unsigned int rtc_dev_poll(struct file *file, poll_table *wait)
 	return (data != 0) ? (POLLIN | POLLRDNORM) : 0;
 }
 
+#ifdef CONFIG_RTC_CYCLIC
+extern asmlinkage __visible void __sched notrace preempt_schedule(void);
+
+/* yield behavior, disabled for now */
+#if 0
+int rt_overrun_task_yield_block(struct task_struct *p)
+{
+	struct rq *rq = task_rq(p);
+	unsigned int block = 1;
+
+	if (!test_case)
+		return 1;
+
+	if (rt_overrun_task_is_best_effort(p)) {
+		/* assert that it should be on the rq; move to the end and
+		 * let pick_next_task_rt() deal with the next runnable task */
+		requeue_task_rt2(rq, p, false);
+
+		/* clear_overrun_log(); */
+
+		if (_cond_resched()) {
+			/* we rescheduled here */
+		}
+
+		block = 0;
+	}
+
+	return block;
+}
+#endif
+
+int test_admit(u64 slots)
+{
+	/* Only allow the current task to be admitted for now
+	 * and allow for /proc to show the slot pattern
+	 * in a global fashion */
+	return rt_overrun_task_admit(current, slots);
+}
+
+int test_yield(u64 slots)
+{
+	rt_task_yield(current) = slots;
+	return 0;
+}
+
+void test_replenish(void)
+{
+	rt_overrun_task_replenish(current);
+}
+#endif
+
 static long rtc_dev_ioctl(struct file *file,
 		unsigned int cmd, unsigned long arg)
 {
@@ -223,6 +364,9 @@ static long rtc_dev_ioctl(struct file *file,
 	const struct rtc_class_ops *ops = rtc->ops;
 	struct rtc_time tm;
 	struct rtc_wkalrm alarm;
+#ifdef CONFIG_RTC_CYCLIC
+	u64 slots;
+#endif
 	void __user *uarg = (void __user *) arg;
 
 	err = mutex_lock_interruptible(&rtc->ops_lock);
@@ -250,6 +394,12 @@ static long rtc_dev_ioctl(struct file *file,
 				!capable(CAP_SYS_RESOURCE))
 			err = -EACCES;
 		break;
+#ifdef CONFIG_RTC_CYCLIC
+	case RTC_OV_REPLEN:
+		test_replenish();
+		err = -EACCES;
+		break;
+#endif
 	}
 
 	if (err)
@@ -380,7 +530,21 @@ static long rtc_dev_ioctl(struct file *file,
 	case RTC_IRQP_READ:
 		err = put_user(rtc->irq_freq, (unsigned long __user *)uarg);
 		break;
+#ifdef CONFIG_RTC_CYCLIC
+	case RTC_OV_YIELD:
+		mutex_unlock(&rtc->ops_lock);
+		if (copy_from_user(&slots, uarg, sizeof(u64)))
+			return -EFAULT;
+
+		return test_yield(slots);
+
+	case RTC_OV_ADMIT:
+		mutex_unlock(&rtc->ops_lock);
+		if (copy_from_user(&slots, uarg, sizeof(u64)))
+			return -EFAULT;
 
+		return test_admit(slots);
+#endif
 	case RTC_WKALM_SET:
 		mutex_unlock(&rtc->ops_lock);
 		if (copy_from_user(&alarm, uarg, sizeof(alarm)))
@@ -424,6 +588,9 @@ static int rtc_dev_release(struct inode *inode, struct file *file)
 {
 	struct rtc_device *rtc = file->private_data;
 
+#ifdef CONFIG_RTC_CYCLIC
+	rt_overrun_entries_delete_all(rtc);
+#endif
 	/* We shut down the repeating IRQs that userspace enabled,
 	 * since nothing is listening to them.
 	 *  - Update (UIE) ... currently only managed through ioctls
-- 
2.5.0

* [PATCH RFC v1 04/12] Anonymous struct initialization
  2016-04-14  6:47 [PATCH RFC v1 00/12] Cyclic Scheduler Against RTC Bill Huey (hui)
                   ` (2 preceding siblings ...)
  2016-04-14  6:47 ` [PATCH RFC v1 03/12] Add cyclic support to rtc-dev.c Bill Huey (hui)
@ 2016-04-14  6:47 ` Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 05/12] Task tracking per file descriptor Bill Huey (hui)
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Bill Huey (hui) @ 2016-04-14  6:47 UTC (permalink / raw)
  To: Peter Zijlstra, Steven Rostedt, Alessandro Zummo, linux-kernel
  Cc: luca abeni, Juri Lelli, Mike Galbraith, Dario Faggioli, Thomas Gleixner

Initialize the rt_overrun anonymous struct members in the INIT_TASK() macro.

Signed-off-by: Bill Huey (hui) <bill.huey@gmail.com>
---
 include/linux/init_task.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index f2cb8d4..ac9b0d9 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -183,6 +183,23 @@ extern struct task_group root_task_group;
 # define INIT_KASAN(tsk)
 #endif
 
+#ifdef CONFIG_RTC_CYCLIC
+# define INIT_RT_OVERRUN(tsk)						\
+		.rt_overrun	= {					\
+			.count = 0,					\
+			.task_list = LIST_HEAD_INIT(tsk.rt.rt_overrun.task_list), \
+			.type = 0,					\
+			.color = 0,					\
+			.slots = 0,					\
+			.yield = 0,					\
+			.machine_state = 0,				\
+			.last_machine_state = 0,			\
+			.last_task_state = 0,				\
+		}
+#else
+# define INIT_RT_OVERRUN(tsk)
+#endif
+
 /*
  *  INIT_TASK is used to set up the first task table, touch at
  * your own risk!. Base=0, limit=0x1fffff (=2MB)
@@ -210,6 +227,7 @@ extern struct task_group root_task_group;
 	.rt		= {						\
 		.run_list	= LIST_HEAD_INIT(tsk.rt.run_list),	\
 		.time_slice	= RR_TIMESLICE,				\
+		INIT_RT_OVERRUN(tsk)					\
 	},								\
 	.tasks		= LIST_HEAD_INIT(tsk.tasks),			\
 	INIT_PUSHABLE_TASKS(tsk)					\
-- 
2.5.0

* [PATCH RFC v1 05/12] Task tracking per file descriptor
  2016-04-14  6:47 [PATCH RFC v1 00/12] Cyclic Scheduler Against RTC Bill Huey (hui)
                   ` (3 preceding siblings ...)
  2016-04-14  6:47 ` [PATCH RFC v1 04/12] Anonymous struct initialization Bill Huey (hui)
@ 2016-04-14  6:47 ` Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 06/12] Add anonymous struct to sched_rt_entity Bill Huey (hui)
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Bill Huey (hui) @ 2016-04-14  6:47 UTC (permalink / raw)
  To: Peter Zijlstra, Steven Rostedt, Alessandro Zummo, linux-kernel
  Cc: luca abeni, Juri Lelli, Mike Galbraith, Dario Faggioli, Thomas Gleixner

Task tracking per file descriptor for thread death clean up.

Signed-off-by: Bill Huey (hui) <bill.huey@gmail.com>
---
 drivers/rtc/class.c | 3 +++
 include/linux/rtc.h | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/drivers/rtc/class.c b/drivers/rtc/class.c
index 74fd974..ad570b9 100644
--- a/drivers/rtc/class.c
+++ b/drivers/rtc/class.c
@@ -201,6 +201,9 @@ struct rtc_device *rtc_device_register(const char *name, struct device *dev,
 	rtc->irq_freq = 1;
 	rtc->max_user_freq = 64;
 	rtc->dev.parent = dev;
+#ifdef CONFIG_RTC_CYCLIC
+	INIT_LIST_HEAD(&rtc->rt_overrun_tasks); //struct list_head
+#endif
 	rtc->dev.class = rtc_class;
 	rtc->dev.groups = rtc_get_dev_attribute_groups();
 	rtc->dev.release = rtc_device_release;
diff --git a/include/linux/rtc.h b/include/linux/rtc.h
index b693ada..1424550 100644
--- a/include/linux/rtc.h
+++ b/include/linux/rtc.h
@@ -114,6 +114,9 @@ struct rtc_timer {
 struct rtc_device {
 	struct device dev;
 	struct module *owner;
+#ifdef CONFIG_RTC_CYCLIC
+	struct list_head rt_overrun_tasks;
+#endif
 
 	int id;
 	char name[RTC_DEVICE_NAME_SIZE];
-- 
2.5.0

* [PATCH RFC v1 06/12] Add anonymous struct to sched_rt_entity
  2016-04-14  6:47 [PATCH RFC v1 00/12] Cyclic Scheduler Against RTC Bill Huey (hui)
                   ` (4 preceding siblings ...)
  2016-04-14  6:47 ` [PATCH RFC v1 05/12] Task tracking per file descriptor Bill Huey (hui)
@ 2016-04-14  6:47 ` Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 07/12] kernel/userspace additions for additional ioctl() support for rtc Bill Huey (hui)
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Bill Huey (hui) @ 2016-04-14  6:47 UTC (permalink / raw)
  To: Peter Zijlstra, Steven Rostedt, Alessandro Zummo, linux-kernel
  Cc: luca abeni, Juri Lelli, Mike Galbraith, Dario Faggioli, Thomas Gleixner

Add an anonymous struct to sched_rt_entity to support admittance via a
red-black tree, overrun tracking, yield-versus-block state, debugging
support, and the execution slot pattern for the scheduler.

Signed-off-by: Bill Huey (hui) <bill.huey@gmail.com>
---
 include/linux/sched.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 084ed9f..cff56c6 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1305,6 +1305,21 @@ struct sched_rt_entity {
 	/* rq "owned" by this entity/group: */
 	struct rt_rq		*my_q;
 #endif
+#ifdef CONFIG_RTC_CYCLIC
+	struct {
+		struct rb_node node; /* admittance structure */
+		struct list_head task_list;
+		unsigned long count; /* overrun count per slot */
+		int type, color, yield;
+		u64 slots;
+
+		/* debug */
+		unsigned long last_task_state;
+
+		/* instrumentation  */
+		unsigned int machine_state, last_machine_state;
+	} rt_overrun;
+#endif
 };
 
 struct sched_dl_entity {
-- 
2.5.0

* [PATCH RFC v1 07/12] kernel/userspace additions for additional ioctl() support for rtc
  2016-04-14  6:47 [PATCH RFC v1 00/12] Cyclic Scheduler Against RTC Bill Huey (hui)
                   ` (5 preceding siblings ...)
  2016-04-14  6:47 ` [PATCH RFC v1 06/12] Add anonymous struct to sched_rt_entity Bill Huey (hui)
@ 2016-04-14  6:47 ` Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 08/12] Compilation support Bill Huey (hui)
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Bill Huey (hui) @ 2016-04-14  6:47 UTC (permalink / raw)
  To: Peter Zijlstra, Steven Rostedt, Alessandro Zummo, linux-kernel
  Cc: luca abeni, Juri Lelli, Mike Galbraith, Dario Faggioli, Thomas Gleixner

Add additional ioctl() values to rtc so that a caller can 'admit' the
calling thread into a red-black tree for tracking, set the execution slot
pattern, and control whether read() will yield or block.
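
A rough sketch of the intended calling sequence (mirroring the test program
in the documentation patch; error handling omitted):

	__aligned_u64 slots = 0x3333333333333333ULL;	/* execution slot pattern */
	__aligned_u64 yield = 1;	/* nonzero: read() yields instead of blocking */

	ioctl(fd, RTC_OV_ADMIT, &slots);	/* admit the calling thread */
	ioctl(fd, RTC_OV_YIELD, &yield);	/* select read() yield behavior */
	ioctl(fd, RTC_OV_REPLEN, 0);		/* replenish (note: currently
						   returns -EACCES even on success) */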

Signed-off-by: Bill Huey (hui) <bill.huey@gmail.com>
---
 include/uapi/linux/rtc.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/uapi/linux/rtc.h b/include/uapi/linux/rtc.h
index f8c82e6..76c9254 100644
--- a/include/uapi/linux/rtc.h
+++ b/include/uapi/linux/rtc.h
@@ -94,6 +94,10 @@ struct rtc_pll_info {
 #define RTC_VL_READ	_IOR('p', 0x13, int)	/* Voltage low detector */
 #define RTC_VL_CLR	_IO('p', 0x14)		/* Clear voltage low information */
 
+#define RTC_OV_ADMIT	_IOW('p', 0x15, unsigned long)	 /* Admit task, set slots */
+#define RTC_OV_REPLEN	_IOW('p', 0x16, unsigned long)	 /* Replenish task      */
+#define RTC_OV_YIELD	_IOW('p', 0x17, unsigned long)	 /* Set read() yield mode */
+
 /* interrupt flags */
 #define RTC_IRQF 0x80	/* Any of the following is active */
 #define RTC_PF 0x40	/* Periodic interrupt */
-- 
2.5.0

* [PATCH RFC v1 08/12] Compilation support
  2016-04-14  6:47 [PATCH RFC v1 00/12] Cyclic Scheduler Against RTC Bill Huey (hui)
                   ` (6 preceding siblings ...)
  2016-04-14  6:47 ` [PATCH RFC v1 07/12] kernel/userspace additions for additional ioctl() support for rtc Bill Huey (hui)
@ 2016-04-14  6:47 ` Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 09/12] Add priority support for the cyclic scheduler Bill Huey (hui)
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Bill Huey (hui) @ 2016-04-14  6:47 UTC (permalink / raw)
  To: Peter Zijlstra, Steven Rostedt, Alessandro Zummo, linux-kernel
  Cc: luca abeni, Juri Lelli, Mike Galbraith, Dario Faggioli, Thomas Gleixner

Makefile changes to support the menuconfig option

Signed-off-by: Bill Huey (hui) <bill.huey@gmail.com>
---
 kernel/sched/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
index 302d6eb..df8e131 100644
--- a/kernel/sched/Makefile
+++ b/kernel/sched/Makefile
@@ -19,4 +19,5 @@ obj-$(CONFIG_SCHED_AUTOGROUP) += auto_group.o
 obj-$(CONFIG_SCHEDSTATS) += stats.o
 obj-$(CONFIG_SCHED_DEBUG) += debug.o
 obj-$(CONFIG_CGROUP_CPUACCT) += cpuacct.o
+obj-$(CONFIG_RTC_CYCLIC) += cyclic.o
 obj-$(CONFIG_CPU_FREQ) += cpufreq.o
-- 
2.5.0

* [PATCH RFC v1 09/12] Add priority support for the cyclic scheduler
  2016-04-14  6:47 [PATCH RFC v1 00/12] Cyclic Scheduler Against RTC Bill Huey (hui)
                   ` (7 preceding siblings ...)
  2016-04-14  6:47 ` [PATCH RFC v1 08/12] Compilation support Bill Huey (hui)
@ 2016-04-14  6:47 ` Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 10/12] Export SCHED_FIFO/RT requeuing functions Bill Huey (hui)
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Bill Huey (hui) @ 2016-04-14  6:47 UTC (permalink / raw)
  To: Peter Zijlstra, Steven Rostedt, Alessandro Zummo, linux-kernel
  Cc: luca abeni, Juri Lelli, Mike Galbraith, Dario Faggioli, Thomas Gleixner

Initial bits to prevent priority changes of cyclic scheduler tasks by only
allowing them to be SCHED_FIFO. Fairly hacky at this time and will need
revisiting because of the security concerns.

This also affects task death handling, since the cyclic scheduler uses an
additional scheduler class hook for clean up at death. Admitted tasks must
be SCHED_FIFO.

Signed-off-by: Bill Huey (hui) <bill.huey@gmail.com>
---
 kernel/sched/core.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 44db0ff..cf6cf57 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -87,6 +87,10 @@
 #include "../workqueue_internal.h"
 #include "../smpboot.h"
 
+#ifdef CONFIG_RTC_CYCLIC
+#include "cyclic.h"
+#endif
+
 #define CREATE_TRACE_POINTS
 #include <trace/events/sched.h>
 
@@ -2074,6 +2078,10 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	memset(&p->se.statistics, 0, sizeof(p->se.statistics));
 #endif
 
+#ifdef CONFIG_RTC_CYCLIC
+	RB_CLEAR_NODE(&p->rt.rt_overrun.node);
+#endif
+
 	RB_CLEAR_NODE(&p->dl.rb_node);
 	init_dl_task_timer(&p->dl);
 	__dl_clear_params(p);
@@ -3881,6 +3889,11 @@ recheck:
 		if (dl_policy(policy))
 			return -EPERM;
 
+#ifdef CONFIG_RTC_CYCLIC
+		if (rt_overrun_policy(p, policy))
+			return -EPERM;
+#endif
+
 		/*
 		 * Treat SCHED_IDLE as nice 20. Only allow a switch to
 		 * SCHED_NORMAL if the RLIMIT_NICE would normally permit it.
-- 
2.5.0

* [PATCH RFC v1 10/12] Export SCHED_FIFO/RT requeuing functions
  2016-04-14  6:47 [PATCH RFC v1 00/12] Cyclic Scheduler Against RTC Bill Huey (hui)
                   ` (8 preceding siblings ...)
  2016-04-14  6:47 ` [PATCH RFC v1 09/12] Add priority support for the cyclic scheduler Bill Huey (hui)
@ 2016-04-14  6:47 ` Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 11/12] Cyclic scheduler support Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 12/12] Cyclic/rtc documentation Bill Huey (hui)
  11 siblings, 0 replies; 13+ messages in thread
From: Bill Huey (hui) @ 2016-04-14  6:47 UTC (permalink / raw)
  To: Peter Zijlstra, Steven Rostedt, Alessandro Zummo, linux-kernel
  Cc: luca abeni, Juri Lelli, Mike Galbraith, Dario Faggioli, Thomas Gleixner

Export SCHED_FIFO/RT tail/head runqueue insertion functions, and add
initial thread death support via a hook in the scheduler class. Thread
death must include additional semantics to remove/discharge an admitted
task properly.

Signed-off-by: Bill Huey (hui) <bill.huey@gmail.com>
---
 kernel/sched/rt.c | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index c41ea7a..1d77adc 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -8,6 +8,11 @@
 #include <linux/slab.h>
 #include <linux/irq_work.h>
 
+#ifdef CONFIG_RTC_CYCLIC
+#include "cyclic.h"
+extern int rt_overrun_task_admitted1(struct rq *rq, struct task_struct *p);
+#endif
+
 int sched_rr_timeslice = RR_TIMESLICE;
 
 static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun);
@@ -1321,8 +1326,18 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)
 
 	if (flags & ENQUEUE_WAKEUP)
 		rt_se->timeout = 0;
+#ifdef CONFIG_RTC_CYCLIC
+	/* if admitted and the current slot then head, otherwise tail */
+	if (rt_overrun_task_admitted1(rq, p)) {
+		if (rt_overrun_task_active(p)) {
+			flags |= ENQUEUE_HEAD;
+		}
+	}
 
+	enqueue_rt_entity(rt_se, flags);
+#else
 	enqueue_rt_entity(rt_se, flags & ENQUEUE_HEAD);
+#endif
 
 	if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
 		enqueue_pushable_task(rq, p);
@@ -1367,6 +1382,18 @@ static void requeue_task_rt(struct rq *rq, struct task_struct *p, int head)
 	}
 }
 
+#ifdef CONFIG_RTC_CYCLIC
+void dequeue_task_rt2(struct rq *rq, struct task_struct *p, int flags)
+{
+	dequeue_task_rt(rq, p, flags);
+}
+
+void requeue_task_rt2(struct rq *rq, struct task_struct *p, int head)
+{
+	requeue_task_rt(rq, p, head);
+}
+#endif
+
 static void yield_task_rt(struct rq *rq)
 {
 	requeue_task_rt(rq, rq->curr, 0);
@@ -2177,6 +2204,10 @@ void __init init_sched_rt_class(void)
 		zalloc_cpumask_var_node(&per_cpu(local_cpu_mask, i),
 					GFP_KERNEL, cpu_to_node(i));
 	}
+
+#ifdef CONFIG_RTC_CYCLIC
+	init_rt_overrun();
+#endif
 }
 #endif /* CONFIG_SMP */
 
@@ -2322,6 +2353,13 @@ static unsigned int get_rr_interval_rt(struct rq *rq, struct task_struct *task)
 		return 0;
 }
 
+#ifdef CONFIG_RTC_CYCLIC
+static void task_dead_rt(struct task_struct *p)
+{
+	rt_overrun_entry_delete(p);
+}
+#endif
+
 const struct sched_class rt_sched_class = {
 	.next			= &fair_sched_class,
 	.enqueue_task		= enqueue_task_rt,
@@ -2344,6 +2382,9 @@ const struct sched_class rt_sched_class = {
 #endif
 
 	.set_curr_task          = set_curr_task_rt,
+#ifdef CONFIG_RTC_CYCLIC
+	.task_dead              = task_dead_rt,
+#endif
 	.task_tick		= task_tick_rt,
 
 	.get_rr_interval	= get_rr_interval_rt,
-- 
2.5.0

* [PATCH RFC v1 11/12] Cyclic scheduler support
  2016-04-14  6:47 [PATCH RFC v1 00/12] Cyclic Scheduler Against RTC Bill Huey (hui)
                   ` (9 preceding siblings ...)
  2016-04-14  6:47 ` [PATCH RFC v1 10/12] Export SCHED_FIFO/RT requeuing functions Bill Huey (hui)
@ 2016-04-14  6:47 ` Bill Huey (hui)
  2016-04-14  6:47 ` [PATCH RFC v1 12/12] Cyclic/rtc documentation Bill Huey (hui)
  11 siblings, 0 replies; 13+ messages in thread
From: Bill Huey (hui) @ 2016-04-14  6:47 UTC (permalink / raw)
  To: Peter Zijlstra, Steven Rostedt, Alessandro Zummo, linux-kernel
  Cc: luca abeni, Juri Lelli, Mike Galbraith, Dario Faggioli, Thomas Gleixner

Core implementation of the cyclic scheduler, including admittance handling,
thread death support, the cyclic timer tick handler, a primitive proc
debugging interface, and wait-queue modifications.

Signed-off-by: Bill Huey (hui) <bill.huey@gmail.com>
---
 kernel/sched/cyclic.c    | 620 +++++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/cyclic.h    |  86 +++++++
 kernel/sched/cyclic_rt.h |   7 +
 3 files changed, 713 insertions(+)
 create mode 100644 kernel/sched/cyclic.c
 create mode 100644 kernel/sched/cyclic.h
 create mode 100644 kernel/sched/cyclic_rt.h

diff --git a/kernel/sched/cyclic.c b/kernel/sched/cyclic.c
new file mode 100644
index 0000000..bf4c982
--- /dev/null
+++ b/kernel/sched/cyclic.c
@@ -0,0 +1,620 @@
+/*
+ * cyclic scheduler for rtc support
+ *
+ * Copyright (C) Bill Huey
+ * Author: Bill Huey <bill.huey@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#include <linux/module.h>
+#include <linux/rtc.h>
+#include <linux/sched.h>
+#include "sched.h"
+#include "cyclic.h"
+#include "cyclic_rt.h"
+
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+
+DEFINE_RAW_SPINLOCK(rt_overrun_lock);
+struct rb_root rt_overrun_tree = RB_ROOT;
+
+#define MASK2 0xFFFFffffFFFFfff0
+
+/* must revisit again when I get more time to fix the possibility of
+ * overflow here and 32 bit portability */
+static int cmp_ptr_unsigned_long(long *p, long *q)
+{
+	int result = ((unsigned long)p & MASK2) - ((unsigned long)q & MASK2);
+
+	WARN_ON(sizeof(long *) != 8);
+
+	if (!result)
+		return 0;
+	else if (result > 0)
+		return 1;
+	else
+		return -1;
+}
+
+static int eq_ptr_unsigned_long(long *p, long *q)
+{
+	return (((long)p & MASK2) == ((long)q & MASK2));
+}
+
+#define CMP_PTR_LONG(p,q) cmp_ptr_unsigned_long((long *)p, (long *)q)
+
+static
+struct task_struct *_rt_overrun_entry_find(struct rb_root *root,
+						struct task_struct *p)
+{
+	struct task_struct *ret = NULL;
+	struct rb_node *node = root->rb_node;
+
+	while (node) { // double_rq_lock(struct rq *, struct rq *) cpu_rq
+ 		struct task_struct *task = container_of(node,
+					struct task_struct, rt.rt_overrun.node);
+
+		int result = CMP_PTR_LONG(p, task);
+
+		if (result < 0)
+			node = node->rb_left;
+		else if (result > 0)
+			node = node->rb_right;
+		else {
+			ret = task;
+			goto exit;
+		}
+	}
+exit:
+	return ret;
+}
+
+static int rt_overrun_task_runnable(struct task_struct *p)
+{
+	return task_on_rq_queued(p);
+}
+
+/* avoiding excessive debug printing, splitting the entry point */
+static
+struct task_struct *rt_overrun_entry_find(struct rb_root *root,
+							struct task_struct *p)
+{
+printk("%s: \n", __func__);
+	return _rt_overrun_entry_find(root, p);
+}
+
+static int _rt_overrun_entry_insert(struct rb_root *root, struct task_struct *p)
+{
+	struct rb_node **new = &(root->rb_node), *parent = NULL;
+
+printk("%s: \n", __func__);
+	while (*new) {
+ 		struct task_struct *task = container_of(*new,
+					struct task_struct, rt.rt_overrun.node);
+
+		int result = CMP_PTR_LONG(p, task);
+
+		parent = *new;
+		if (result < 0)
+			new = &((*new)->rb_left);
+		else if (result > 0)
+			new = &((*new)->rb_right);
+		else
+			return 0;
+	}
+
+	/* Add new node and rebalance tree. */
+	rb_link_node(&p->rt.rt_overrun.node, parent, new);
+	rb_insert_color(&p->rt.rt_overrun.node, root);
+
+	return 1;
+}
+
+static void _rt_overrun_entry_delete(struct task_struct *p)
+{
+	struct task_struct *task;
+	int i;
+
+	task = rt_overrun_entry_find(&rt_overrun_tree, p);
+
+	if (task) {
+		printk("%s: p color %d - comm %s - slots 0x%016llx\n",
+			__func__, task->rt.rt_overrun.color, task->comm,
+			task->rt.rt_overrun.slots);
+
+		rb_erase(&task->rt.rt_overrun.node, &rt_overrun_tree);
+		list_del(&task->rt.rt_overrun.task_list);
+		for (i = 0; i < SLOTS; ++i) {
+			if (rt_admit_rq.curr[i] == p)
+				rt_admit_rq.curr[i] = NULL;
+		}
+
+		if (rt_admit_curr == p)
+			rt_admit_curr = NULL;
+	}
+}
+
+void rt_overrun_entry_delete(struct task_struct *p)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+	_rt_overrun_entry_delete(p);
+	raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+}
+
+/* forward */
+int rt_overrun_task_active(struct task_struct *p);
+
+#define PROCFS_MAX_SIZE  2048
+static char chunk[PROCFS_MAX_SIZE]; // lock this
+
+static
+ssize_t rt_overrun_proc_write(struct file *file, const char *buffer, size_t len,
+				loff_t * off)
+{
+	unsigned long end;
+
+	if (len > PROCFS_MAX_SIZE)
+		end = PROCFS_MAX_SIZE;
+	else
+		end = len;
+
+	if (copy_from_user(chunk, buffer, end))
+		return -EFAULT;
+
+	printk(KERN_INFO "%s: write %lu bytes, s = %s \n", __func__, end,
+							(char *) &chunk[0]);
+	return end;
+}
+
+static int rt_overrun_proc_show(struct seq_file *m, void *v);
+
+static int rt_overrun_proc_open(struct inode *inode, struct  file *file)
+{
+	return single_open(file, rt_overrun_proc_show, NULL);
+}
+
+static const struct file_operations rt_overrun_proc_fops = {
+	.owner = THIS_MODULE,
+	.open = rt_overrun_proc_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = single_release,
+	.write = rt_overrun_proc_write,
+};
+
+static int __init rt_overrun_proc_init(void) {
+	proc_create("rt_overrun_proc", 0, NULL, &rt_overrun_proc_fops);
+	return 0;
+}
+
+static void __exit rt_overrun_proc_exit(void) {
+	remove_proc_entry("rt_overrun_proc", NULL);
+}
+
+struct rt_overrun_admit_rq rt_admit_rq;
+
+/*static*/
+void init_rt_overrun(void)
+{
+	rt_overrun_proc_init();
+	reset_rt_overrun();
+}
+
+void reset_rt_overrun(void)
+{
+	int i;
+
+	for (i = 0; i < SLOTS; i++)
+		rt_admit_rq.curr[i] = NULL;
+
+	rt_admit_rq.slot = 0;
+	rt_admit_rq.end = SLOTS;
+}
+
+static int rt_overrun_proc_show(struct seq_file *m, void *v) {
+	int i;
+	unsigned long flags;
+	u64 slots = 0;
+	struct task_struct *task;
+
+	seq_printf(m, "%s: \n", __func__);
+	seq_printf(m, "\n\t");
+
+	raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+
+	if (rt_admit_curr)
+		slots = rt_admit_curr->rt.rt_overrun.slots;
+
+	raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+
+	for (i = 0; i < SLOTS; i++) {
+		if ((i % 4) == 0 )
+			seq_printf(m, "\n\t");
+
+		task = rt_admit_rq.curr[i];
+		if (task)
+			seq_printf(m, " %d", task->rt.rt_overrun.color);
+		else
+			seq_printf(m, " 0");
+	
+		if (task)
+			seq_printf(m, " (%d)",
+				task->rt.rt_overrun.color);
+		else
+			seq_printf(m, " (0)");
+	}
+	seq_printf(m, "\ncurr\n");
+
+	seq_printf(m, "\n\t");
+	for (i = 0; i < SLOTS; ++i) {
+		if (test_bit(i, (unsigned long *) &slots))
+			seq_printf(m, "1");
+		else
+			seq_printf(m, "0");
+
+		if (((i+1) % 16) == 0)
+			seq_printf(m, "\n\t");
+		else if (((i +1) % 4) == 0)
+			seq_printf(m, " ");
+	}
+	seq_printf(m, "\n");
+
+	return 0;
+}
+
+
+static void _rt_overrun_task_replenish(struct task_struct *p)
+{
+	WARN_ONCE(!rt_overrun_entry_find(&rt_overrun_tree, p), "\n");
+
+	rt_admit_curr = p;
+	rt_admit_rq.debug = p;
+	WARN_ONCE(CMP_PTR_LONG(rt_admit_curr, p),
+			"not equal\n");
+}
+
+void rt_overrun_task_replenish(struct task_struct *p)
+{
+	unsigned long flags;
+	raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+	_rt_overrun_task_replenish(p);
+	raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+}
+
+static void _rt_overrun_task_expire(struct task_struct *p)
+{
+printk("%s: \n", __func__);
+	WARN_ONCE(!rt_overrun_entry_find(&rt_overrun_tree, p), "\n");
+
+	rt_admit_curr = NULL;
+}
+
+static void rt_overrun_task_expire(struct task_struct *p)
+{
+	unsigned long flags;
+	raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+	_rt_overrun_task_expire(p);
+	raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+}
+
+static int rt_overrun_slot_color_next(void)
+{
+	int color = rt_admit_rq.color;
+
+	++rt_admit_rq.color;
+
+	return color;
+}
+
+/* potential security problems */
+int rt_overrun_task_admit(struct task_struct *p, u64 slots)
+{
+	int i, ret = 0;
+	unsigned long flags;
+
+	printk("%s: slot = 0x%016llx\n", __func__, slots);
+
+	get_task_struct(p);
+	if (!rt_policy(p->policy)) {
+		printk("%s: policy, admittance failed \n", __func__);
+		put_task_struct(p);
+		return 1;
+	}
+
+	if (p->sched_class != &rt_sched_class) {
+		printk("%s: sched_class, admittance failed \n", __func__);
+		put_task_struct(p);
+		return 1;
+	}
+
+	/* grabs the rq lock here, CPU 0 only */
+	set_cpus_allowed_ptr(p, get_cpu_mask(0));
+
+	raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+	if (!p->rt.rt_overrun.color) {
+		p->rt.rt_overrun.color = rt_overrun_slot_color_next();
+		printk("%s: color = %d \n", __func__, p->rt.rt_overrun.color);
+	}
+
+
+	p->rt.rt_overrun.slots = slots;
+
+	WARN_ONCE(rt_admit_rq.active < 0, "\n");
+	WARN_ONCE(_rt_overrun_entry_find(&rt_overrun_tree, p), "\n");
+
+	p->rt.rt_overrun.count = 0;
+
+	_rt_overrun_entry_insert(&rt_overrun_tree, p);
+	_rt_overrun_task_replenish(p);
+	++rt_admit_rq.active;
+
+	if ((cpumask_weight(&p->cpus_allowed) != 1) ||
+	    !cpumask_test_cpu(0, &p->cpus_allowed)) {
+		printk("%s: failed \n", __func__);
+		ret = 1;
+	} else 
+		printk("%s: success \n", __func__);
+
+	for (i = 0; i < SLOTS; ++i) {
+		/* slots is a u64, ignore the pointer type */
+		if (test_bit(i, (unsigned long *) &slots))
+			rt_admit_rq.curr[i] = p;
+	}
+
+	raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+	put_task_struct(p);
+
+	return ret;
+}
+
+static void rt_overrun_task_discharge(struct task_struct *p)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+printk("%s: \n", __func__);
+	WARN_ONCE(rt_admit_rq.active <= 0, "\n");
+	WARN_ONCE(!_rt_overrun_entry_find(&rt_overrun_tree, p), "\n");
+	--rt_admit_rq.active;
+
+	/* assert */
+	_rt_overrun_task_expire(p);
+	_rt_overrun_entry_delete(p);
+
+	raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+}
+
+void rt_overrun_entries_delete_all(struct rtc_device *rtc)
+{
+	unsigned long flags;
+	struct task_struct *task;
+
+	struct list_head *pos;
+
+printk("%s: \n", __func__);
+	raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+	list_for_each (pos, &rtc->rt_overrun_tasks) {
+		task = list_entry(pos, struct task_struct,
+							rt.rt_overrun.task_list);
+printk("%s: rt_overrun_tasks p 0x%016llx - comm %s\n", __func__, (u64) task->rt.rt_overrun.color,
+			task->comm);
+		_rt_overrun_entry_delete(task);
+	}
+
+	rt_admit_rq.active = 0;
+	rt_admit_rq.color = 0;
+	raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+}
+
+/* must think about locking here, nothing for now BUG */
+int rt_overrun_task_admitted1(struct rq *rq, struct task_struct *p)
+{
+	int ret = 0;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+	if (rt_admit_rq.active) { // --billh
+		if ((rt_admit_curr == p)
+		 || _on_rt_overrun_admitted(p)
+		 || _rt_overrun_entry_find(&rt_overrun_tree, p))
+			ret = 1;
+	}
+	raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+
+	return ret;
+}
+
+void rt_overrun_check(struct rq *rq, struct task_struct *p)
+{
+	get_task_struct(p);
+	WARN_ONCE(rt_overrun_task_admitted1(rq, p) &&
+		!cpumask_equal(get_cpu_mask(0), &p->cpus_allowed),
+		"not bound to CPU 0\n");
+	put_task_struct(p);
+}
+
+
+/* must think about locking here, nothing for now BUG */
+int rt_overrun_rq_admitted(void)
+{
+	return rt_admit_rq.active;
+}
+
+/* must think about locking here, nothing for now BUG */
+int rt_overrun_task_active(struct task_struct *p)
+{
+	/* CMP_PTR_LONG() returns 0 on a match */
+	if (!CMP_PTR_LONG(rt_admit_curr, p))
+		return 1;
+	else
+		return 0;
+}
+
+static struct task_struct *rt_overrun_get_next_task(void)
+{
+	/* return the next slot, advance the cursor */
+	WARN_ONCE(!rt_admit_rq.active, "\n");
+
+	if (rt_admit_rq.slot < (SLOTS -1)) {
+		++rt_admit_rq.slot;
+	} else {
+//		printk("%s: slot wrap = 0 \n", __func__);
+		rt_admit_rq.slot = 0;
+	}
+
+	return rt_admit_curr;
+}
+
+#define PRT_RUNNABLE()				\
+	if (tail == 1)				\
+		printk("on rq \n");		\
+	else if (tail == 0)			\
+		printk("not on rq \n");	\
+	else					\
+		printk("\n");
+
+/* rq lock already grabbed, interrupts off */
+void rt_overrun_timer_handler(struct rtc_device *rtc)
+{
+	int cpu = smp_processor_id();
+	struct rq *rq = cpu_rq(cpu);
+	unsigned long irq_data;
+
+	struct task_struct *curr_slot, *next_slot;
+	int tail = 2;
+	int wake_next = 0, curr_runnable = 0;
+	int same;
+
+	WARN_ON(!irqs_disabled());
+
+	printk("%s: ---\n", __func__);
+
+	/* this is incorrect, but is working for now */
+	WARN_ON((rq->cpu != 0));
+	raw_spin_lock(&rq->lock);
+	raw_spin_lock(&rt_overrun_lock);
+
+	curr_slot = rt_admit_curr;
+	irq_data = rtc->irq_data;
+
+	/* suppress rtc_read_dev wake if curr_slot ==  NULL */
+	if (curr_slot) {
+		if (rt_overrun_task_runnable(curr_slot))
+			curr_runnable = tail = 1;
+		else
+			curr_runnable = tail = 0;
+	}
+
+	if (curr_slot)
+		printk("%s: curr_slot %d ", __func__, curr_slot->rt.rt_overrun.color);
+	PRT_RUNNABLE();
+
+	next_slot = rt_overrun_get_next_task();
+	tail = 2;
+
+	/* */
+	if (curr_slot == next_slot) {
+		same = 1;
+	} else {
+		same = 0;
+		/* deactivate edge, runnable case */
+		if (curr_slot && curr_runnable) {
+			requeue_task_rt2(rq, curr_slot, 0); // tail
+			resched_curr(rq);
+		}
+	}
+
+	/* transition edge, record per task overrun */
+	if (curr_slot && !same) {
+		++curr_slot->rt.rt_overrun.count;
+		printk("%s: overrun inc %ld\n", __func__,
+						curr_slot->rt.rt_overrun.count);
+	}
+
+	/* activate edge, wake/top next_slot */
+	if (next_slot) {
+		if (!same) {
+			printk("%s: ~same\n", __func__);
+			if (rt_overrun_task_runnable(next_slot)) {
+				printk("%s: next runnable requeue top\n", __func__);
+				requeue_task_rt2(rq, next_slot, 1); // head
+				resched_curr(rq);
+				tail = 1;
+			} else {
+				printk("%s: ~next runnable\n", __func__);
+				tail = 0;
+				wake_next = 1;
+			}
+		} /* same, then chain the activations */
+	}
+
+	if (next_slot)
+		printk("%s: next_slot %d ", __func__, next_slot->rt.rt_overrun.color);
+	PRT_RUNNABLE();
+
+	rt_admit_curr = next_slot;
+
+	raw_spin_unlock(&rt_overrun_lock);
+	raw_spin_unlock(&rq->lock);
+
+	/* set to reschedule at interrupt return, wake attempt should
+	 * do this for us already */
+	if (wake_next) {
+		wake_up_interruptible_sync_poll(&rtc->irq_queue, next_slot);
+		if (same) {
+			printk("%s: same\n", __func__);
+		}
+	}
+	else
+		printk("%s: pass\n", __func__);
+}
+
+int rt_overrun_task_yield(struct task_struct *p)
+{
+	return rt_task_yield(p);
+}
+
+// default_wake_function wake_up_state list_head rtc_device
+int single_default_wake_function(wait_queue_t *curr, unsigned mode,
+				int wake_flags, void *key)
+{
+	unsigned long flags;
+	struct task_struct *p = key, *task = curr->private;
+	int on = 0;
+
+	/* If not admitted to rt_overrun, then wake it normally, as the
+	 * normal timer interrupt handler would */
+
+	raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+	if (p) on = _on_rt_overrun_admitted(p);
+	raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+
+	/* wake only one thread for this case */
+	if (key == NULL) {
+		printk("%s: wake 0 p 0x%08llx, task 0x%08llx, admit %d,"
+			" flags %ld\n",
+				__func__, (u64) p, (u64) task, on, flags);
+		return wake_up_state(task, mode);
+	} else if (key == task) {
+		if (on) {
+			printk("%s: wake 1 p 0x%08llx, task 0x%08llx, "
+				"admit %d, flags %ld\n",
+				__func__, (u64) p, (u64) task, on, flags);
+			return wake_up_state(task, mode);
+		} else {
+			printk("%s: ignore 0 p 0x%08llx, task 0x%08llx, "
+				"flags %ld\n",
+				__func__, (u64) p, (u64) task, flags);
+			return 0;
+		}
+	} else {
+		printk("%s: ignore 1 p 0x%08llx, task 0x%08llx, flags %ld\n",
+			__func__, (u64) p, (u64) task, flags);
+		return 0;
+	}
+}
diff --git a/kernel/sched/cyclic.h b/kernel/sched/cyclic.h
new file mode 100644
index 0000000..829ad6d
--- /dev/null
+++ b/kernel/sched/cyclic.h
@@ -0,0 +1,86 @@
+/* cyclic.h rt_overrun */
+
+#ifndef _CYCLIC_H
+#define _CYCLIC_H
+
+#include <linux/spinlock.h>
+#include <linux/rbtree.h>
+#include <linux/rtc.h>
+
+extern int rt_overrun_task_active(struct task_struct *p);
+extern void rt_overrun_stat_tick_blocked(void);
+
+extern void rt_overrun_stat_dequeue(void);
+extern void rt_overrun_stat_enqueue(struct task_struct *p);
+
+extern void rt_overrun_timer_handler(struct rtc_device *rtc);
+extern int rt_overrun_rq_admitted(void);
+extern void init_rt_overrun(void);
+extern void rt_overrun_entry_delete(struct task_struct *p);
+
+extern void rt_overrun_task_replenish(struct task_struct *p);
+extern int rt_overrun_task_admit(struct task_struct *p, u64 slots);
+
+extern int rt_overrun_task_yield(struct task_struct *p);
+extern void rt_overrun_entries_delete_all(struct rtc_device *);
+extern void reset_rt_overrun(void);
+
+extern struct raw_spinlock rt_overrun_lock;
+extern int single_default_wake_function(wait_queue_t *curr, unsigned mode,
+					int wake_flags, void *key);
+#define SLOTS 64
+
+#define rt_admit_curr		(rt_admit_rq.curr[rt_admit_rq.slot])
+#define rt_task_count(a)	(a->rt.rt_overrun.count)
+#define rt_task_yield(a)	(a->rt.rt_overrun.yield)
+
+/* slot admittance queue */
+struct rt_overrun_admit_rq {
+	int active;
+	int slot, end;
+	struct task_struct *curr[SLOTS];
+	struct task_struct *debug;
+	int color;
+};
+
+extern struct rt_overrun_admit_rq rt_admit_rq;
+
+static inline int rt_overrun_policy(struct task_struct *p, int policy)
+{
+	int ret;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&rt_overrun_lock, flags);
+	ret = RB_EMPTY_NODE(&p->rt.rt_overrun.node);
+	raw_spin_unlock_irqrestore(&rt_overrun_lock, flags);
+
+	return ret;
+}
+
+static inline int _on_rt_overrun_admitted(struct task_struct *p)
+{
+	struct sched_rt_entity *rt_se = &p->rt;
+	struct rb_node *node = &rt_se->rt_overrun.node;
+
+	if (node)
+		return !RB_EMPTY_NODE(node);
+	else
+		return 0;
+}
+
+static inline int on_rt_overrun_admitted(struct task_struct *p)
+{
+	int ret;
+
+	raw_spin_lock(&rt_overrun_lock);
+	ret = _on_rt_overrun_admitted(p);
+	raw_spin_unlock(&rt_overrun_lock);
+
+	return ret;
+}
+
+#endif /* _CYCLIC_H */
diff --git a/kernel/sched/cyclic_rt.h b/kernel/sched/cyclic_rt.h
new file mode 100644
index 0000000..c6c89bf
--- /dev/null
+++ b/kernel/sched/cyclic_rt.h
@@ -0,0 +1,7 @@
+/*
+ */
+
+#ifdef CONFIG_RTC_CYCLIC
+extern void dequeue_task_rt2(struct rq *rq, struct task_struct *p, int flags);
+extern void requeue_task_rt2(struct rq *rq, struct task_struct *p, int head);
+#endif
-- 
2.5.0

* [PATCH RFC v1 12/12] Cyclic/rtc documentation
  2016-04-14  6:47 [PATCH RFC v1 00/12] Cyclic Scheduler Against RTC Bill Huey (hui)
                   ` (10 preceding siblings ...)
  2016-04-14  6:47 ` [PATCH RFC v1 11/12] Cyclic scheduler support Bill Huey (hui)
@ 2016-04-14  6:47 ` Bill Huey (hui)
  11 siblings, 0 replies; 13+ messages in thread
From: Bill Huey (hui) @ 2016-04-14  6:47 UTC (permalink / raw)
  To: Peter Zijlstra, Steven Rostedt, Alessandro Zummo, linux-kernel
  Cc: luca abeni, Juri Lelli, Mike Galbraith, Dario Faggioli, Thomas Gleixner

Initial attempt at documentation with a test program

Signed-off-by: Bill Huey (hui) <bill.huey@gmail.com>
---
 Documentation/scheduler/sched-cyclic-rtc.txt | 468 +++++++++++++++++++++++++++
 1 file changed, 468 insertions(+)
 create mode 100644 Documentation/scheduler/sched-cyclic-rtc.txt

diff --git a/Documentation/scheduler/sched-cyclic-rtc.txt b/Documentation/scheduler/sched-cyclic-rtc.txt
new file mode 100644
index 0000000..4d22381
--- /dev/null
+++ b/Documentation/scheduler/sched-cyclic-rtc.txt
@@ -0,0 +1,468 @@
+[in progress]
+
+"Work Conserving"
+
+When a task is active and calls read(), it will block or yield depending on
+what was requested from the cyclic scheduler. An RTC_OV_YIELD call to
+ioctl() specifies the behavior for the calling thread.
+
+In the case where read() is called before the time slice is over, it will
+allow other tasks to run with the leftover time.
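+
+For example (a sketch; fd is an open descriptor on /dev/rtc0):
+
+	__aligned_u64 yield = 1;	/* 1 = yield leftover slot time, 0 = block */
+
+	ioctl(fd, RTC_OV_YIELD, &yield);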
+
+"Overrun Reporting/Apps"
+
+Calls to read() will return the overrun count and zero the counter. This
+can be used to adjust the execution time of the thread so that it runs
+within its slot and can meet some deadline constraint.
+
+[no decision has been made on returning a more meaningful set of numbers,
+since you can just get time stamps and do the math in userspace, but it
+could be changed to do so]
+
+The behavior of read() depends on whether the calling thread has been
+admitted via an ioctl() using RTC_OV_ADMIT. If it has been, read() returns
+the overrun count. If it has not been admitted, read() returns a value
+corresponding to the default read() behavior for rtc.
+
+See the sample test sources for details.
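+
+For example, a periodic thread might adapt its per-slot workload from the
+reported count (a sketch; do_frame_work() and work_scale are hypothetical
+application pieces):
+
+	unsigned long overruns;
+
+	for (;;) {
+		do_frame_work(work_scale);		/* this slot's work */
+		read(fd, &overruns, sizeof(overruns));	/* wait for the next slot */
+		if (overruns)		/* we ran past our slot, do less work */
+			work_scale--;
+	}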
+
+Using a video game as an example, a rendering engine overrunning its slot,
+driven by a vertical retrace interrupt, can cause visual skipping and hurt
+interactivity. Adapting the computation based on the read() result can
+allow the frame buffer swap to happen at the frame interrupt: when read()
+reports overruns, the program can simplify its calculations to fit within
+that slot. It can then respond to events (touches, buttons) while
+minimizing the possibility of perceived pauses.
+
+The slot allocation scheme for the video game must have some inherent
+definition of interactivity. That determines the appropriate slot
+allocation amongst a mixture of soft/hard real-time tasks. A general policy
+must be created for the system, and all programs, to meet a real-time
+criterion.
+
+"Admittance"
+
+Admittance of a task is done through an ioctl() call using RTC_OV_ADMIT.
+This passes a 64-bit-wide bitmap that maps onto entries in the slot map.
+
+(slot map of two threads)
+execution direction ->
+
+1000 1000 1000 1000...
+0100 0100 0100 0100...
+
+(bit pattern of two threads)
+0001 0001 0001 0001...
+0010 0010 0010 0010...
+
+(hex)
+0x1111
+0x2222
+
+The slot map is an array of 64 thread entries. An index is incremented
+through it to determine what the next active thread-slot will be. The end
+of the index is set in /proc/rt_overrun_proc.
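+
+The first pattern above can be built and admitted like this (a sketch):
+
+	__aligned_u64 slots = 0;
+	int i;
+
+	for (i = 0; i < 64; i += 4)
+		slots |= (__aligned_u64)1 << i;	/* 0x1111... */
+
+	ioctl(fd, RTC_OV_ADMIT, &slots);
+
+The second thread would use the same loop shifted by one bit (0x2222...).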
+
+"Slot/slice activation"
+
+The task is moved to the front of the SCHED_FIFO list when its slot is
+active, and to the tail when it is inactive.
+
+"RTC Infrastructure and Interrupt Routing"
+
+The cyclic scheduler is driven by the update interrupt in the RTC
+infrastructure but can be rerouted to any periodic interrupt source.
+
+One such application could be driving it from display refresh interrupts,
+or from some interval generated by an external controller such as a drum
+pad or a touch panel.
+
+"Embedded Environments"
+
+This is single run queue only, targeting embedded scenarios where not all
+cores are guaranteed to be available. Older Qualcomm MSM kernels use very
+aggressive cpu hotplug as a means of fully powering off cores; the only
+CPU guaranteed to be running is CPU 0.
+
+"Project History"
+
+This was originally created when I was at HP/Palm to solve issues related
+to touch event handling and lag when working with the real-time media
+subsystem. The typical workaround to prevent skipping is to use large
+buffers to prevent data underruns, but programs running at SCHED_FIFO can
+then starve the system from handling external events, like buttons or
+touch events, in a timely manner. The lack of a globally defined policy on
+how to use real-time resources can cause long pauses between handling
+touch events and other kinds of implicit deadline misses.
+
+By choosing some kind of slot execution pattern, the hope was that
+real-time execution could be controlled globally across the system so that
+some basic interactive guarantees can be met. Whether the tasks are some
+combination of soft or hard real-time, a mechanism like this can help
+guide how SCHED_FIFO tasks are run, versus letting SCHED_FIFO tasks run
+wildly.
+
+"Future work"
+
+Possible integration with the deadline scheduler. Power management
+awareness, CPU clock governor. Turning off the scheduler tick when there
+are no runnable tasks, other things...
+
+"Power management"
+
+Governor awareness...
+
+[more]
+
+----------------------------
+
+/*
+ *	Based on the:
+ *
+ *      Real Time Clock Driver Test/Example Program
+ *      by Copyright (C) 1996, Paul Gortmaker.
+ * 
+ *	Simplification and multi-threading support for interrupt event testing
+ *	by Bill Huey at <bill.huey@gmail.com>
+ *
+ *      Released under the GNU General Public License, version 2,
+ *      included herein by reference.
+ */
+
+#include <stdio.h>
+#include <linux/rtc.h>
+#include <sys/ioctl.h>
+#include <sys/time.h>
+#include <sys/types.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <pthread.h>
+#include <linux/types.h>
+#include <sys/mman.h>
+#include <sched.h>
+#include <errno.h>
+
+/*
+ * This expects the new RTC class driver framework, working with
+ * clocks that will often not be clones of what the PC-AT had.
+ * Use the command line to specify another RTC if you need one.
+ */
+static const char default_rtc[] = "/dev/rtc0";
+
+#define RTC_OV_ADMIT	_IOW('p', 0x15, unsigned long)
+#define RTC_OV_REPLEN	_IOW('p', 0x16, unsigned long)
+#define RTC_OV_YIELD	_IOW('p', 0x17, unsigned long)
+
+//#if 0
+#define THREADS	(3)
+#define handle_error_en(en, msg) \
+               do { errno = en; perror(msg); exit(EXIT_FAILURE); } while (0)
+
+#define OBJ (THREADS + 1)
+
+//#define LARGE (1000 * 1000)
+
+pthread_mutex_t mutex[OBJ];
+pthread_cond_t condvar[OBJ];
+
+pthread_mutex_t start_mutex = PTHREAD_MUTEX_INITIALIZER;
+pthread_cond_t start_condvar = PTHREAD_COND_INITIALIZER;
+int start[OBJ];
+
+pthread_t threads[OBJ];
+volatile int die = 0;
+int fd;
+
+#define ioctl_enotty_err(a, b, c, label, ret)			\
+	retval = ioctl(fd, a, ret);				\
+	if (retval == -1) {					\
+		if (errno == ENOTTY) {				\
+			fprintf(stderr, b);			\
+			goto label;				\
+		}						\
+		perror(c);					\
+		exit(errno);					\
+	}
+
+#define ioctl_err(a, b, c)					\
+	retval = ioctl(fd, a, c);				\
+	if (retval == -1) {					\
+		perror(b);					\
+		exit(errno);					\
+	}
+
+#define read_err(a) 						\
+	retval = read(fd, &data, sizeof(unsigned long));	\
+	if (retval == -1) {					\
+		perror("read");					\
+		exit(errno);					\
+	}
+
+void init_pthreads(void)
+{
+	int i;
+
+	for (i = 0; i < OBJ; ++i) {
+		start[i] = 1;
+		pthread_mutex_init(&mutex[i], NULL);
+		pthread_cond_init(&condvar[i], NULL);
+	}
+}
+
+/* Just loop and exit */
+void *thread(void *threadid)
+{
+	long tid = (unsigned long)threadid;
+	pthread_t self = pthread_self();
+	unsigned int i,j;
+	pid_t pid;
+	struct sched_param param;
+	int retval;
+	__aligned_u64 slots = 0;
+	unsigned long data;
+
+	fprintf(stderr, "\tthread id = %ld\n", tid);
+	param.sched_priority = sched_get_priority_min(SCHED_RR);
+	if (sched_setscheduler( 0, SCHED_RR, &param) == -1)
+		perror("sched_setscheduler failed\n");
+
+	pid = getpid();
+
+	if (start[tid]) {
+		start[tid] = 0;
+		pthread_mutex_lock(&start_mutex);
+		pthread_cond_signal(&start_condvar);
+		pthread_mutex_unlock(&start_mutex);
+	}
+
+	/* admit the task before doing yields */
+//	fprintf(stderr, "\n");
+	for (i = 0; i < 64; ++i) {
+		if (((tid + i) % THREADS) == 0) {
+//			fprintf(stderr, "%d\n", i);
+			slots |= ((long long unsigned) 1 << i);
+		}
+	}
+
+	fprintf(stderr, "slots = 0x%016llx\n", slots);
+	ioctl_err(RTC_OV_ADMIT, "RTC_OV_ADMIT ioctl", &slots);
+
+//	slots = 1; /* set yield instead of block */
+//	ioctl_err(RTC_OV_YIELD, "RTC_OV_YIELD ioctl", &slots);
+
+	while (!die)
+		;
+
+	read_err();
+	fprintf(stderr, "tid %ld, 0x%04lx\n", tid, data);
+
+#if 0
+	ioctl_enotty_err(RTC_IRQP_SET,
+		"\n...Periodic IRQ rate is fixed\n",
+		"RTC_IRQP_SET ioctl",
+		done, (unsigned long ) slots);
+
+	ioctl_err(RTC_PIE_ON, "RTC_PIE_ON ioctl", 0);
+
+	while (!die)
+		;
+
+	ioctl_err(RTC_PIE_OFF, "RTC_PIE_OFF ioctl", 0);
+#endif
+	/* body */
+
+	fprintf(stderr, "\tthread exited running SCHED_RR = %ld\n", tid);
+	pthread_exit(NULL);
+}
+
+void thread_spawn(int val)
+{
+	int result, retval;
+	long tid;
+	int i;
+	pthread_attr_t threads_attr[OBJ];
+	cpu_set_t cpuset;
+
+	struct sched_param schedparam;
+
+	schedparam.sched_priority = 3;
+
+	init_pthreads();
+
+	tid = 0;
+	while(tid < THREADS) {
+		fprintf(stderr, "\ncreated thread %ld\n", tid);
+		pthread_attr_init(&threads_attr[tid]);
+		pthread_attr_setinheritsched(&threads_attr[tid], PTHREAD_EXPLICIT_SCHED);
+		pthread_attr_setschedpolicy(&threads_attr[tid], SCHED_RR);
+		pthread_attr_setschedparam(&threads_attr[tid], &schedparam);
+		pthread_attr_destroy(&threads_attr[tid]);
+
+		pthread_mutex_lock(&start_mutex);
+		result = pthread_create(&threads[tid], &threads_attr[tid], thread, (void *)tid);
+		pthread_cond_wait(&start_condvar, &start_mutex);
+		pthread_mutex_unlock(&start_mutex);
+
+		if (result != 0)
+			handle_error_en(result, "pthread_create");
+		++tid;
+	}
+
+	ioctl_err(RTC_PIE_ON, "RTC_PIE_ON ioctl", 0);
+
+	sleep(3);
+	
+//	10/val; // deliberate divide by zero
+
+	sleep(1);
+	die = 1;
+
+	for (i = 0; i < THREADS; i++)
+		pthread_join(threads[i], NULL);
+
+	ioctl_err(RTC_PIE_OFF, "RTC_PIE_OFF ioctl", 0);
+
+	fprintf(stderr, "pthread done\n");
+}
+//#endif
+
+int main(int argc, char **argv)
+{
+	int	i,j,k,
+		blocking,
+		delay,
+		retval, irqcount = 0;
+	unsigned long data;
+	__aligned_u64 slots;
+	struct rtc_time rtc_tm;
+	const char *rtc = default_rtc;
+	struct timeval start, end, diff;
+
+	struct sched_param param;
+	pid_t pid = getpid();
+
+	/* testing thread should be SCHED_FIFO or RR */
+	param.sched_priority = sched_get_priority_min(SCHED_RR);
+
+	if (sched_setscheduler(pid, SCHED_RR, &param) == -1)
+		perror("sched_setscheduler failed\n");
+
+	switch (argc) {
+	case 2:
+		rtc = argv[1];
+		/* FALLTHROUGH */
+	case 1:
+		break;
+	default:
+		fprintf(stderr, "usage:  rtctest [rtcdev]\n");
+		return 1;
+	}
+
+	fd = open(rtc, O_RDONLY);
+	if (fd ==  -1) {
+		perror(rtc);
+		exit(errno);
+	}
+
+	fprintf(stderr, "\n\t\t\tRTC Driver Test Example.\n\n");
+
+	/* Admit this task, enable tick tracking and set the slots */
+//	slots = 0xFFFFffffFFFFffff;
+	slots = 0x3333333333333333;
+	ioctl_err(RTC_OV_ADMIT, "RTC_OV_ADMIT ioctl", &slots);
+
+test_PIE:
+	/* Read periodic IRQ rate */
+	ioctl_enotty_err(RTC_IRQP_READ,
+		"\nNo periodic IRQ support\n",
+		"RTC_IRQP_READ ioctl",
+		done, &slots);
+
+	fprintf(stderr, "\nPeriodic IRQ rate is %ldHz.\n", (unsigned long) slots);
+
+	fprintf(stderr, "Counting 20 interrupts at:");
+	fflush(stderr);
+
+	/* The frequencies 128Hz, 256Hz, ... 8192Hz are only allowed for root. */
+	for (slots=2; slots<=64; slots*=2) {
+		/* not all RTCs can change their periodic IRQ rate */
+		ioctl_enotty_err(RTC_IRQP_SET,
+			"\n...Periodic IRQ rate is fixed\n",
+			"RTC_IRQP_SET ioctl",
+			done, (unsigned long ) slots);
+
+		fprintf(stderr, "\n%ldHz:\t", (unsigned long ) slots);
+		fflush(stderr);
+
+		/* Enable periodic interrupts */
+		ioctl_err(RTC_PIE_ON, "RTC_PIE_ON ioctl", 0);
+
+//		blocking = 0; delay = 0;
+		blocking = 0; delay = 1;
+//		blocking = 1; delay = 0;
+//		blocking = 1; delay = 1;
+		for (i=1; i<6; i++) {
+
+#define LARGE 5000
+#define LARGE2 50000
+#define work() 									\
+										\
+			/* This blocks */					\
+			if (blocking) {						\
+				if (delay)					\
+					fprintf(stderr, " ignoring delay ");	\
+										\
+				gettimeofday(&start, NULL);			\
+				fprintf(stderr, " ");				\
+				read_err()					\
+			} else {						\
+				/* delay for testing yield only	*/		\
+				if (delay) {					\
+					fprintf(stderr, ".");			\
+					for(j = LARGE; j > 0; --j)		\
+						for(k = LARGE2; k > 0; --k)	\
+							;			\
+				} else						\
+					fprintf(stderr, "`");			\
+										\
+				/* really a yield */				\
+				read_err()					\
+				/* fake diff values on a yield  */		\
+				gettimeofday(&start, NULL);			\
+			}							\
+										\
+			gettimeofday(&end, NULL);				\
+										\
+			timersub(&end, &start, &diff);				\
+			if (!blocking && (diff.tv_sec > 0 ||			\
+			    diff.tv_usec > ((1000000L / slots) * 1.10))) {	\
+				fprintf(stderr,					\
+					"\nPIE delta error: %ld.%06ld should be close to 0.%06ld\n", \
+				       diff.tv_sec, diff.tv_usec,		\
+				       (1000000L / (unsigned long) slots));	\
+				fflush(stdout);					\
+				exit(-1);					\
+			}							\
+										\
+			fprintf(stderr, "%d 0x%04lx,", i, data);		\
+			fflush(stderr);						\
+			irqcount++;					
+
+			work()
+		}
+		/* Disable periodic interrupts */
+		ioctl_err(RTC_PIE_OFF, "RTC_PIE_OFF ioctl", 0);
+	}
+
+done:
+	fprintf(stderr, "\n\n\t\t\t *** Test complete ***\n");
+
+	thread_spawn(0);
+
+	close(fd);
+
+	return 0;
+}
-- 
2.5.0
