* [Xenomai-core] [PATCH v3 6/9] rtai: Try to fix _shm_free
  2009-10-20 11:37 [Xenomai-core] [PATCH v3 0/9] heap setup/cleanup fixes, refactorings & more Jan Kiszka
                   ` (4 preceding siblings ...)
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped Jan Kiszka
@ 2009-10-20 11:37 ` Jan Kiszka
  2009-10-24 17:25   ` Philippe Gerum
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 7/9] native: Do not requeue on auto-cleanup errors Jan Kiszka
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 33+ messages in thread
From: Jan Kiszka @ 2009-10-20 11:37 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

This is totally untested but should not make things worse than they
already are.

Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
---

 ksrc/skins/rtai/shm.c |   31 +++++++++++++++++++------------
 1 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/ksrc/skins/rtai/shm.c b/ksrc/skins/rtai/shm.c
index 4c56495..21c3b07 100644
--- a/ksrc/skins/rtai/shm.c
+++ b/ksrc/skins/rtai/shm.c
@@ -260,19 +260,24 @@ void *rt_heap_open(unsigned long name, int size, int suprt)
 	return _shm_alloc(name, size, suprt, 0, &opaque);
 }
 
-#ifndef CONFIG_XENO_OPT_PERVASIVE
+#ifdef CONFIG_XENO_OPT_PERVASIVE
+static void __heap_flush_shared(xnheap_t *heap)
+{
+	xnheap_free(&kheap, heap);
+}
+#else /* !CONFIG_XENO_OPT_PERVASIVE */
 static void __heap_flush_private(xnheap_t *heap,
 				 void *heapmem, u_long heapsize, void *cookie)
 {
 	xnarch_free_host_mem(heapmem, heapsize);
 }
-#endif /* CONFIG_XENO_OPT_PERVASIVE */
+#endif /* !CONFIG_XENO_OPT_PERVASIVE */
 
 static int _shm_free(unsigned long name)
 {
-	int ret = 0;
 	xnholder_t *holder;
 	xnshm_a_t *p;
+	int ret;
 	spl_t s;
 
 	xnlock_get_irqsave(&nklock, s);
@@ -283,27 +288,29 @@ static int _shm_free(unsigned long name)
 		p = link2shma(holder);
 
 		if (p->name == name && --p->ref == 0) {
+			removeq(&xnshm_allocq, &p->link);
 			if (p->handle)
 				xnregistry_remove(p->handle);
+
+			xnlock_put_irqrestore(&nklock, s);
+
 			if (p->heap == &kheap)
 				xnheap_free(&kheap, p->chunk);
 			else {
-				/* Should release lock here? 
-				 * Can destroy_mapped suspend ?
-				 * [YES!]
-				 */
 #ifdef CONFIG_XENO_OPT_PERVASIVE
-				xnheap_destroy_mapped(p->heap, NULL, NULL);
+				xnheap_destroy_mapped(p->heap,
+						      __heap_flush_shared,
+						      NULL);
 #else /* !CONFIG_XENO_OPT_PERVASIVE */
 				xnheap_destroy(p->heap,
 					       &__heap_flush_private, NULL);
-#endif /* !CONFIG_XENO_OPT_PERVASIVE */
 				xnheap_free(&kheap, p->heap);
+#endif /* !CONFIG_XENO_OPT_PERVASIVE */
 			}
-			removeq(&xnshm_allocq, &p->link);
 			ret = p->size;
 			xnheap_free(&kheap, p);
-			break;
+
+			return ret;
 		}
 
 		holder = nextq(&xnshm_allocq, holder);
@@ -311,7 +318,7 @@ static int _shm_free(unsigned long name)
 
 	xnlock_put_irqrestore(&nklock, s);
 
-	return ret;
+	return 0;
 }
 
 int rt_shm_free(unsigned long name)




* [Xenomai-core] [PATCH v3 0/9] heap setup/cleanup fixes, refactorings & more
@ 2009-10-20 11:37 Jan Kiszka
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 3/9] nucleus: Fix race window in heap mapping procedure Jan Kiszka
                   ` (8 more replies)
  0 siblings, 9 replies; 33+ messages in thread
From: Jan Kiszka @ 2009-10-20 11:37 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

Third round of this series. This time I dug deeper into the heap
management, trying to simplify its use, which should also remove a few
race conditions from the existing code.

The central change is patch 5: xnheap_destroy_mapped will no longer
return an error; instead, it always starts deferred deletion in case the
heap's memory is still in use. This simplifies its use a lot, allowing,
among other things, -EBUSY to be dropped from the list of possible
return codes of rt_queue/heap_delete.
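
In short, a deleter like rt_heap_delete_inner can then skip the whole
-EBUSY recovery path and simply do (see patch 5 for the actual hunks):

#ifdef CONFIG_XENO_OPT_PERVASIVE
	if (heap->mode & H_MAPPABLE)
		xnheap_destroy_mapped(&heap->heap_base,
				      __heap_post_release, mapaddr);
	else
#endif /* CONFIG_XENO_OPT_PERVASIVE */
	{
		xnheap_destroy(&heap->heap_base, &__heap_flush_private, NULL);
		__heap_post_release(&heap->heap_base);
	}

	return 0;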

The series furthermore contains an attempt to fix RTAI's shm code, fixes
the known leaks of rt_mutex/queue/heap auto-deletion, and introduces
complete heap statistics (this time without using a Linux spin lock).

Please pull the series (or cherry-pick individual patches) from

    git://xenomai.org/xenomai-jki.git for-upstream

if there are no concerns.

Jan Kiszka (9):
      native: Release fastlock to the proper heap
      nucleus: Use Linux spin lock for heap list management
      nucleus: Fix race window in heap mapping procedure
      nucleus: xnheap_destroy does not fail
      nucleus: Avoid returning errors from xnheap_destroy_mapped
      rtai: Try to fix _shm_free
      native: Do not requeue on auto-cleanup errors
      native: Fix memory leak on heap/queue auto-deletion
      nucleus: Include all heaps in statistics

 include/asm-generic/bits/heap.h |    2 +-
 include/asm-generic/system.h    |    2 +-
 include/native/ppd.h            |   16 +--
 include/nucleus/heap.h          |   32 +++--
 ksrc/drivers/ipc/iddp.c         |    3 +-
 ksrc/drivers/ipc/xddp.c         |    6 +-
 ksrc/nucleus/heap.c             |  258 +++++++++++++++++++++++++++------------
 ksrc/nucleus/module.c           |    2 +-
 ksrc/nucleus/pod.c              |    5 +-
 ksrc/nucleus/shadow.c           |    5 +-
 ksrc/skins/native/heap.c        |   41 +++---
 ksrc/skins/native/mutex.c       |   14 ++-
 ksrc/skins/native/pipe.c        |    4 +-
 ksrc/skins/native/queue.c       |   34 +++---
 ksrc/skins/native/syscall.c     |   25 +---
 ksrc/skins/posix/shm.c          |    4 +-
 ksrc/skins/psos+/rn.c           |    6 +-
 ksrc/skins/rtai/shm.c           |   47 ++++---
 ksrc/skins/vrtx/heap.c          |    6 +-
 ksrc/skins/vrtx/syscall.c       |    3 +-
 20 files changed, 317 insertions(+), 198 deletions(-)

[1] http://thread.gmane.org/gmane.linux.real-time.xenomai.devel/6559



* [Xenomai-core] [PATCH v3 4/9] nucleus: xnheap_destroy does not fail
  2009-10-20 11:37 [Xenomai-core] [PATCH v3 0/9] heap setup/cleanup fixes, refactorings & more Jan Kiszka
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 3/9] nucleus: Fix race window in heap mapping procedure Jan Kiszka
@ 2009-10-20 11:37 ` Jan Kiszka
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 2/9] nucleus: Use Linux spin lock for heap list management Jan Kiszka
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 33+ messages in thread
From: Jan Kiszka @ 2009-10-20 11:37 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

xnheap_destroy never returns an error, so drop its return value and
update all callers.

Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
---

 include/nucleus/heap.h    |   12 ++++++------
 ksrc/nucleus/heap.c       |   17 +++++++----------
 ksrc/skins/native/heap.c  |    2 +-
 ksrc/skins/native/queue.c |    2 +-
 ksrc/skins/rtai/shm.c     |    5 ++---
 5 files changed, 17 insertions(+), 21 deletions(-)

diff --git a/include/nucleus/heap.h b/include/nucleus/heap.h
index f39ef64..ca691bf 100644
--- a/include/nucleus/heap.h
+++ b/include/nucleus/heap.h
@@ -226,12 +226,12 @@ int xnheap_init(xnheap_t *heap,
 		u_long heapsize,
 		u_long pagesize);
 
-int xnheap_destroy(xnheap_t *heap,
-		   void (*flushfn)(xnheap_t *heap,
-				   void *extaddr,
-				   u_long extsize,
-				   void *cookie),
-		   void *cookie);
+void xnheap_destroy(xnheap_t *heap,
+		    void (*flushfn)(xnheap_t *heap,
+				    void *extaddr,
+				    u_long extsize,
+				    void *cookie),
+		    void *cookie);
 
 int xnheap_extend(xnheap_t *heap,
 		  void *extaddr,
diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
index 0a9bfdf..4958daa 100644
--- a/ksrc/nucleus/heap.c
+++ b/ksrc/nucleus/heap.c
@@ -253,9 +253,6 @@ EXPORT_SYMBOL_GPL(xnheap_init);
  * @param cookie If @a flushfn is non-NULL, @a cookie is an opaque
  * pointer which will be passed unmodified to @a flushfn.
  *
- * @return 0 is returned on success, or -EBUSY if external mappings
- * are still pending on the heap memory.
- *
  * Environments:
  *
  * This service can be called from:
@@ -267,16 +264,17 @@ EXPORT_SYMBOL_GPL(xnheap_init);
  * Rescheduling: never.
  */
 
-int xnheap_destroy(xnheap_t *heap,
-		   void (*flushfn) (xnheap_t *heap,
+void xnheap_destroy(xnheap_t *heap,
+		    void (*flushfn)(xnheap_t *heap,
 				    void *extaddr,
-				    u_long extsize, void *cookie), void *cookie)
+				    u_long extsize, void *cookie),
+		    void *cookie)
 {
 	xnholder_t *holder;
 	spl_t s;
 
 	if (!flushfn)
-		return 0;
+		return;
 
 	xnlock_get_irqsave(&heap->lock, s);
 
@@ -287,8 +285,6 @@ int xnheap_destroy(xnheap_t *heap,
 	}
 
 	xnlock_put_irqrestore(&heap->lock, s);
-
-	return 0;
 }
 EXPORT_SYMBOL_GPL(xnheap_destroy);
 
@@ -1267,7 +1263,8 @@ int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
 int xnheap_destroy_mapped(xnheap_t *heap, void (*release)(struct xnheap *heap),
 			  void __user *mapaddr)
 {
-	return xnheap_destroy(heap, &xnheap_free_extent, NULL);
+	xnheap_destroy(heap, &xnheap_free_extent, NULL);
+	return 0;
 }
 #endif /* !CONFIG_XENO_OPT_PERVASIVE */
 
diff --git a/ksrc/skins/native/heap.c b/ksrc/skins/native/heap.c
index 9c3810c..0d7ad83 100644
--- a/ksrc/skins/native/heap.c
+++ b/ksrc/skins/native/heap.c
@@ -416,7 +416,7 @@ int rt_heap_delete_inner(RT_HEAP *heap, void __user *mapaddr)
 					    __heap_post_release, mapaddr);
 	else
 #endif /* CONFIG_XENO_OPT_PERVASIVE */
-		err = xnheap_destroy(&heap->heap_base, &__heap_flush_private, NULL);
+		xnheap_destroy(&heap->heap_base, &__heap_flush_private, NULL);
 
 	xnlock_get_irqsave(&nklock, s);
 
diff --git a/ksrc/skins/native/queue.c b/ksrc/skins/native/queue.c
index 527bde8..f913675 100644
--- a/ksrc/skins/native/queue.c
+++ b/ksrc/skins/native/queue.c
@@ -377,7 +377,7 @@ int rt_queue_delete_inner(RT_QUEUE *q, void __user *mapaddr)
 					    __queue_post_release, mapaddr);
 	else
 #endif /* CONFIG_XENO_OPT_PERVASIVE */
-		err = xnheap_destroy(&q->bufpool, &__queue_flush_private, NULL);
+		xnheap_destroy(&q->bufpool, &__queue_flush_private, NULL);
 
 	xnlock_get_irqsave(&nklock, s);
 
diff --git a/ksrc/skins/rtai/shm.c b/ksrc/skins/rtai/shm.c
index 81de434..fddf455 100644
--- a/ksrc/skins/rtai/shm.c
+++ b/ksrc/skins/rtai/shm.c
@@ -295,9 +295,8 @@ static int _shm_free(unsigned long name)
 #ifdef CONFIG_XENO_OPT_PERVASIVE
 				ret = xnheap_destroy_mapped(p->heap, NULL, NULL);
 #else /* !CONFIG_XENO_OPT_PERVASIVE */
-				ret =
-				    xnheap_destroy(p->heap,
-						   &__heap_flush_private, NULL);
+				xnheap_destroy(p->heap,
+					       &__heap_flush_private, NULL);
 #endif /* !CONFIG_XENO_OPT_PERVASIVE */
 				if (ret)
 					goto unlock_and_exit;




* [Xenomai-core] [PATCH v3 1/9] native: Release fastlock to the proper heap
  2009-10-20 11:37 [Xenomai-core] [PATCH v3 0/9] heap setup/cleanup fixes, refactorings & more Jan Kiszka
                   ` (2 preceding siblings ...)
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 2/9] nucleus: Use Linux spin lock for heap list management Jan Kiszka
@ 2009-10-20 11:37 ` Jan Kiszka
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped Jan Kiszka
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 33+ messages in thread
From: Jan Kiszka @ 2009-10-20 11:37 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

Don't assume rt_mutex_delete is only called for in-kernel users; it may
also be triggered via auto-cleanup on user space objects. So check for
the creator and release the fastlock to the correct heap.

Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
---

 ksrc/skins/native/mutex.c |   14 ++++++++++++--
 1 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/ksrc/skins/native/mutex.c b/ksrc/skins/native/mutex.c
index 20eb484..6cf7eb1 100644
--- a/ksrc/skins/native/mutex.c
+++ b/ksrc/skins/native/mutex.c
@@ -47,6 +47,7 @@
 #include <nucleus/pod.h>
 #include <nucleus/registry.h>
 #include <nucleus/heap.h>
+#include <nucleus/sys_ppd.h>
 #include <native/task.h>
 #include <native/mutex.h>
 
@@ -316,8 +317,17 @@ int rt_mutex_delete(RT_MUTEX *mutex)
 	err = rt_mutex_delete_inner(mutex);
 
 #ifdef CONFIG_XENO_FASTSYNCH
-	if (!err)
-		xnfree(mutex->synch_base.fastlock);
+	if (!err) {
+#ifdef CONFIG_XENO_OPT_PERVASIVE
+		if (mutex->cpid) {
+			int global = xnsynch_test_flags(&mutex->synch_base,
+							RT_MUTEX_EXPORTED);
+			xnheap_free(&xnsys_ppd_get(global)->sem_heap,
+				    mutex->synch_base.fastlock);
+		} else
+#endif /* CONFIG_XENO_OPT_PERVASIVE */
+			xnfree(mutex->synch_base.fastlock);
+	}
 #endif /* CONFIG_XENO_FASTSYNCH */
 
 	return err;




* [Xenomai-core] [PATCH v3 2/9] nucleus: Use Linux spin lock for heap list management
  2009-10-20 11:37 [Xenomai-core] [PATCH v3 0/9] heap setup/cleanup fixes, refactorings & more Jan Kiszka
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 3/9] nucleus: Fix race window in heap mapping procedure Jan Kiszka
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 4/9] nucleus: xnheap_destroy does not fail Jan Kiszka
@ 2009-10-20 11:37 ` Jan Kiszka
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 1/9] native: Release fastlock to the proper heap Jan Kiszka
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 33+ messages in thread
From: Jan Kiszka @ 2009-10-20 11:37 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

No need for hard nklock protection of kheapq and the map counter; a
normal spin lock suffices, as all users must run over the root thread
anyway.

Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
---

 ksrc/nucleus/heap.c |   26 ++++++++++++--------------
 1 files changed, 12 insertions(+), 14 deletions(-)

diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
index 9ca2591..9ba5227 100644
--- a/ksrc/nucleus/heap.c
+++ b/ksrc/nucleus/heap.c
@@ -951,8 +951,10 @@ EXPORT_SYMBOL_GPL(xnheap_check_block);
 #include <linux/device.h>
 #include <linux/vmalloc.h>
 #include <linux/mm.h>
+#include <linux/spinlock.h>
 
 static DEFINE_XNQUEUE(kheapq);	/* Shared heap queue. */
+static DEFINE_SPINLOCK(kheapq_lock);
 
 static inline void *__alloc_and_reserve_heap(size_t size, int kmflags)
 {
@@ -1022,14 +1024,13 @@ static void __unreserve_and_free_heap(void *ptr, size_t size, int kmflags)
 static void xnheap_vmclose(struct vm_area_struct *vma)
 {
 	xnheap_t *heap = vma->vm_private_data;
-	spl_t s;
 
-	xnlock_get_irqsave(&nklock, s);
+	spin_lock(&kheapq_lock);
 
 	if (atomic_dec_and_test(&heap->archdep.numaps)) {
 		if (heap->archdep.release) {
 			removeq(&kheapq, &heap->link);
-			xnlock_put_irqrestore(&nklock, s);
+			spin_unlock(&kheapq_lock);
 			__unreserve_and_free_heap(heap->archdep.heapbase,
 						  xnheap_extentsize(heap),
 						  heap->archdep.kmflags);
@@ -1038,7 +1039,7 @@ static void xnheap_vmclose(struct vm_area_struct *vma)
 		}
 	}
 
-	xnlock_put_irqrestore(&nklock, s);
+	spin_unlock(&kheapq_lock);
 }
 
 static struct vm_operations_struct xnheap_vmops = {
@@ -1068,9 +1069,8 @@ static int xnheap_ioctl(struct inode *inode,
 {
 	xnheap_t *heap;
 	int err = 0;
-	spl_t s;
 
-	xnlock_get_irqsave(&nklock, s);
+	spin_lock(&kheapq_lock);
 
 	heap = __validate_heap_addr((void *)arg);
 
@@ -1083,7 +1083,7 @@ static int xnheap_ioctl(struct inode *inode,
 
       unlock_and_exit:
 
-	xnlock_put_irqrestore(&nklock, s);
+	spin_unlock(&kheapq_lock);
 
 	return err;
 }
@@ -1148,7 +1148,6 @@ static int xnheap_mmap(struct file *file, struct vm_area_struct *vma)
 int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
 {
 	void *heapbase;
-	spl_t s;
 	int err;
 
 	/* Caller must have accounted for internal overhead. */
@@ -1172,9 +1171,9 @@ int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
 	heap->archdep.heapbase = heapbase;
 	heap->archdep.release = NULL;
 
-	xnlock_get_irqsave(&nklock, s);
+	spin_lock(&kheapq_lock);
 	appendq(&kheapq, &heap->link);
-	xnlock_put_irqrestore(&nklock, s);
+	spin_unlock(&kheapq_lock);
 
 	return 0;
 }
@@ -1184,20 +1183,19 @@ int xnheap_destroy_mapped(xnheap_t *heap, void (*release)(struct xnheap *heap),
 {
 	int ret = 0, ccheck;
 	unsigned long len;
-	spl_t s;
 
 	ccheck = mapaddr ? 1 : 0;
 
-	xnlock_get_irqsave(&nklock, s);
+	spin_lock(&kheapq_lock);
 
 	if (atomic_read(&heap->archdep.numaps) > ccheck) {
 		heap->archdep.release = release;
-		xnlock_put_irqrestore(&nklock, s);
+		spin_unlock(&kheapq_lock);
 		return -EBUSY;
 	}
 
 	removeq(&kheapq, &heap->link); /* Prevent further mapping. */
-	xnlock_put_irqrestore(&nklock, s);
+	spin_unlock(&kheapq_lock);
 
 	len = xnheap_extentsize(heap);
 




* [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped
  2009-10-20 11:37 [Xenomai-core] [PATCH v3 0/9] heap setup/cleanup fixes, refactorings & more Jan Kiszka
                   ` (3 preceding siblings ...)
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 1/9] native: Release fastlock to the proper heap Jan Kiszka
@ 2009-10-20 11:37 ` Jan Kiszka
  2009-10-24 17:22   ` Philippe Gerum
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 6/9] rtai: Try to fix _shm_free Jan Kiszka
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 33+ messages in thread
From: Jan Kiszka @ 2009-10-20 11:37 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

Allowing xnheap_destroy_mapped to return an error and then attempting to
recover from it does not work out very well: corner cases are racy and
opaque to the user, and proper error handling imposes a lot of
complexity on the caller - if it actually bothers to check the return
value...

Fortunately, there is no reason for this function to fail: If the heap
is still mapped, just install the provided cleanup handler and switch to
deferred removal. If the unmapping fails, we either raced with some
other caller of unmap, or user space provided a bogus address, or
something else is wrong. In any case, leaving the cleanup callback
behind is the best we can do anyway.
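
The resulting logic, roughly (condensed from the heap.c hunk below):

void xnheap_destroy_mapped(xnheap_t *heap,
			   void (*release)(struct xnheap *heap),
			   void __user *mapaddr)
{
	spin_lock(&kheapq_lock);

	removeq(&kheapq, &heap->link);	/* prevent further mapping */
	heap->archdep.release = release;

	if (heap->archdep.numaps == 0)
		mapaddr = NULL;		/* nothing left to unmap */
	else
		release = NULL;		/* xnheap_vmclose runs it on last unmap */

	spin_unlock(&kheapq_lock);

	/*
	 * Unmap the caller's own view, if any. If mappings remain after
	 * that, the release handler takes over from xnheap_vmclose;
	 * otherwise we free the extent and call release() right away.
	 */
}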

Removing the return value immediately allows us to simplify the callers,
namely rt_queue_delete and rt_heap_delete.

Note: This is still not 100% waterproof. If we issue
xnheap_destroy_mapped from module cleanup, passing a release handler
that belongs to the module text, deferred release will cause a crash.
But this corner case is not a new regression, so let's keep our heads in
the sand for now.

Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
---

 include/nucleus/heap.h    |    6 +++---
 ksrc/nucleus/heap.c       |   45 +++++++++++++++++++++++++++------------------
 ksrc/skins/native/heap.c  |   21 ++++++---------------
 ksrc/skins/native/queue.c |   21 ++++++---------------
 ksrc/skins/rtai/shm.c     |    6 +-----
 5 files changed, 43 insertions(+), 56 deletions(-)

diff --git a/include/nucleus/heap.h b/include/nucleus/heap.h
index ca691bf..44db738 100644
--- a/include/nucleus/heap.h
+++ b/include/nucleus/heap.h
@@ -204,9 +204,9 @@ int xnheap_init_mapped(xnheap_t *heap,
 		       u_long heapsize,
 		       int memflags);
 
-int xnheap_destroy_mapped(xnheap_t *heap,
-			  void (*release)(struct xnheap *heap),
-			  void __user *mapaddr);
+void xnheap_destroy_mapped(xnheap_t *heap,
+			   void (*release)(struct xnheap *heap),
+			   void __user *mapaddr);
 
 #define xnheap_mapped_offset(heap,ptr) \
 (((caddr_t)(ptr)) - ((caddr_t)(heap)->archdep.heapbase))
diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
index 4958daa..96c46f8 100644
--- a/ksrc/nucleus/heap.c
+++ b/ksrc/nucleus/heap.c
@@ -1173,42 +1173,51 @@ int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
 	return 0;
 }
 
-int xnheap_destroy_mapped(xnheap_t *heap, void (*release)(struct xnheap *heap),
-			  void __user *mapaddr)
+void xnheap_destroy_mapped(xnheap_t *heap,
+			   void (*release)(struct xnheap *heap),
+			   void __user *mapaddr)
 {
-	int ret = 0, ccheck;
 	unsigned long len;
 
-	ccheck = mapaddr ? 1 : 0;
+	/*
+	 * Trying to unmap user memory without providing a release handler for
+	 * deferred cleanup is a bug.
+	 */
+	XENO_ASSERT(NUCLEUS, !mapaddr || release, /* nop */);
 
 	spin_lock(&kheapq_lock);
 
-	if (heap->archdep.numaps > ccheck) {
-		heap->archdep.release = release;
-		spin_unlock(&kheapq_lock);
-		return -EBUSY;
-	}
-
 	removeq(&kheapq, &heap->link); /* Prevent further mapping. */
+
+	heap->archdep.release = release;
+
+	if (heap->archdep.numaps == 0)
+		mapaddr = NULL; /* nothing left to unmap */
+	else
+		release = NULL; /* will be called by Linux on unmap */
+
 	spin_unlock(&kheapq_lock);
 
 	len = xnheap_extentsize(heap);
 
 	if (mapaddr) {
 		down_write(&current->mm->mmap_sem);
-		heap->archdep.release = NULL;
-		ret = do_munmap(current->mm, (unsigned long)mapaddr, len);
+		do_munmap(current->mm, (unsigned long)mapaddr, len);
 		up_write(&current->mm->mmap_sem);
 	}
 
-	if (ret == 0) {
+	if (heap->archdep.numaps > 0) {
+		/* The release handler is supposed to clean up the rest. */
+		XENO_ASSERT(NUCLEUS, heap->archdep.release, /* nop */);
+		return;
+	}
+
+	if (!mapaddr) {
 		__unreserve_and_free_heap(heap->archdep.heapbase, len,
 					  heap->archdep.kmflags);
 		if (release)
 			release(heap);
 	}
-
-	return ret;
 }
 
 static struct file_operations xnheap_fops = {
@@ -1260,11 +1269,11 @@ int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
 	return -ENOMEM;
 }
 
-int xnheap_destroy_mapped(xnheap_t *heap, void (*release)(struct xnheap *heap),
-			  void __user *mapaddr)
+void xnheap_destroy_mapped(xnheap_t *heap,
+			   void (*release)(struct xnheap *heap),
+			   void __user *mapaddr)
 {
 	xnheap_destroy(heap, &xnheap_free_extent, NULL);
-	return 0;
 }
 #endif /* !CONFIG_XENO_OPT_PERVASIVE */
 
diff --git a/ksrc/skins/native/heap.c b/ksrc/skins/native/heap.c
index 0d7ad83..f7411e8 100644
--- a/ksrc/skins/native/heap.c
+++ b/ksrc/skins/native/heap.c
@@ -357,9 +357,6 @@ static void __heap_post_release(struct xnheap *h)
  *
  * @return 0 is returned upon success. Otherwise:
  *
- * - -EBUSY is returned if @a heap is in use by another process and the
- * descriptor is not destroyed.
- *
  * - -EINVAL is returned if @a heap is not a heap descriptor.
  *
  * - -EIDRM is returned if @a heap is a deleted heap descriptor.
@@ -379,7 +376,7 @@ static void __heap_post_release(struct xnheap *h)
 
 int rt_heap_delete_inner(RT_HEAP *heap, void __user *mapaddr)
 {
-	int err = 0;
+	int err;
 	spl_t s;
 
 	if (!xnpod_root_p())
@@ -412,22 +409,16 @@ int rt_heap_delete_inner(RT_HEAP *heap, void __user *mapaddr)
 
 #ifdef CONFIG_XENO_OPT_PERVASIVE
 	if (heap->mode & H_MAPPABLE)
-		err = xnheap_destroy_mapped(&heap->heap_base,
-					    __heap_post_release, mapaddr);
+		xnheap_destroy_mapped(&heap->heap_base,
+				      __heap_post_release, mapaddr);
 	else
 #endif /* CONFIG_XENO_OPT_PERVASIVE */
+	{
 		xnheap_destroy(&heap->heap_base, &__heap_flush_private, NULL);
-
-	xnlock_get_irqsave(&nklock, s);
-
-	if (err)
-		heap->magic = XENO_HEAP_MAGIC;
-	else if (!(heap->mode & H_MAPPABLE))
 		__heap_post_release(&heap->heap_base);
+	}
 
-	xnlock_put_irqrestore(&nklock, s);
-
-	return err;
+	return 0;
 }
 
 int rt_heap_delete(RT_HEAP *heap)
diff --git a/ksrc/skins/native/queue.c b/ksrc/skins/native/queue.c
index f913675..3592a4a 100644
--- a/ksrc/skins/native/queue.c
+++ b/ksrc/skins/native/queue.c
@@ -326,9 +326,6 @@ static void __queue_post_release(struct xnheap *heap)
  * - -EPERM is returned if this service was called from an
  * asynchronous context.
  *
- * - -EBUSY is returned if an attempt is made to delete a shared queue
- * which is still bound to a process.
- *
  * Environments:
  *
  * This service can be called from:
@@ -341,7 +338,7 @@ static void __queue_post_release(struct xnheap *heap)
 
 int rt_queue_delete_inner(RT_QUEUE *q, void __user *mapaddr)
 {
-	int err = 0;
+	int err;
 	spl_t s;
 
 	if (xnpod_asynch_p())
@@ -373,22 +370,16 @@ int rt_queue_delete_inner(RT_QUEUE *q, void __user *mapaddr)
 
 #ifdef CONFIG_XENO_OPT_PERVASIVE
 	if (q->mode & Q_SHARED)
-		err = xnheap_destroy_mapped(&q->bufpool,
-					    __queue_post_release, mapaddr);
+		xnheap_destroy_mapped(&q->bufpool,
+				      __queue_post_release, mapaddr);
 	else
 #endif /* CONFIG_XENO_OPT_PERVASIVE */
+	{
 		xnheap_destroy(&q->bufpool, &__queue_flush_private, NULL);
-
-	xnlock_get_irqsave(&nklock, s);
-
-	if (err)
-		q->magic = XENO_QUEUE_MAGIC;
-	else if (!(q->mode & Q_SHARED))
 		__queue_post_release(&q->bufpool);
+	}
 
-	xnlock_put_irqrestore(&nklock, s);
-
-	return err;
+	return 0;
 }
 
 int rt_queue_delete(RT_QUEUE *q)
diff --git a/ksrc/skins/rtai/shm.c b/ksrc/skins/rtai/shm.c
index fddf455..4c56495 100644
--- a/ksrc/skins/rtai/shm.c
+++ b/ksrc/skins/rtai/shm.c
@@ -293,13 +293,11 @@ static int _shm_free(unsigned long name)
 				 * [YES!]
 				 */
 #ifdef CONFIG_XENO_OPT_PERVASIVE
-				ret = xnheap_destroy_mapped(p->heap, NULL, NULL);
+				xnheap_destroy_mapped(p->heap, NULL, NULL);
 #else /* !CONFIG_XENO_OPT_PERVASIVE */
 				xnheap_destroy(p->heap,
 					       &__heap_flush_private, NULL);
 #endif /* !CONFIG_XENO_OPT_PERVASIVE */
-				if (ret)
-					goto unlock_and_exit;
 				xnheap_free(&kheap, p->heap);
 			}
 			removeq(&xnshm_allocq, &p->link);
@@ -311,8 +309,6 @@ static int _shm_free(unsigned long name)
 		holder = nextq(&xnshm_allocq, holder);
 	}
 
-      unlock_and_exit:
-
 	xnlock_put_irqrestore(&nklock, s);
 
 	return ret;




* [Xenomai-core] [PATCH v3 3/9] nucleus: Fix race window in heap mapping procedure
  2009-10-20 11:37 [Xenomai-core] [PATCH v3 0/9] heap setup/cleanup fixes, refactorings & more Jan Kiszka
@ 2009-10-20 11:37 ` Jan Kiszka
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 4/9] nucleus: xnheap_destroy does not fail Jan Kiszka
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 33+ messages in thread
From: Jan Kiszka @ 2009-10-20 11:37 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

There is a race window between setting the heap address via the IOCTL
and actually mapping it. A heap that is invalidated after the IOCTL can
still be mapped by user space. Fix this by pushing the heap validation
into xnheap_mmap.

Moreover, make sure that we update archdep.numaps along with declaring
the heap valid. Otherwise xnheap_destroy_mapped may draw wrong
conclusions about the heap mapping state.

As archdep.numaps is now exclusively modified under kheapq_lock, we can
switch it to a non-atomic counter.
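
In outline, the mmap path then becomes (condensed from the hunk below):

	spin_lock(&kheapq_lock);

	heap = __validate_heap_addr(file->private_data);
	if (heap == NULL) {
		spin_unlock(&kheapq_lock);
		return -EINVAL;
	}

	heap->archdep.numaps++;	/* counted before the heap is handed out */

	spin_unlock(&kheapq_lock);

	/*
	 * ... set up the actual mapping; any failure from here on jumps
	 * to deref_out, which calls xnheap_vmclose(vma) to drop the
	 * reference again.
	 */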

Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
---

 include/asm-generic/bits/heap.h |    2 +
 include/asm-generic/system.h    |    2 +
 ksrc/nucleus/heap.c             |   55 +++++++++++++++++++--------------------
 ksrc/skins/native/heap.c        |    7 +++--
 4 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/include/asm-generic/bits/heap.h b/include/asm-generic/bits/heap.h
index 40d31c8..4d0c41a 100644
--- a/include/asm-generic/bits/heap.h
+++ b/include/asm-generic/bits/heap.h
@@ -27,7 +27,7 @@
 static inline void xnarch_init_heapcb (xnarch_heapcb_t *hcb)
 
 {
-    atomic_set(&hcb->numaps,0);
+    hcb->numaps = 0;
     hcb->kmflags = 0;
     hcb->heapbase = NULL;
 }
diff --git a/include/asm-generic/system.h b/include/asm-generic/system.h
index fcec613..c0f1698 100644
--- a/include/asm-generic/system.h
+++ b/include/asm-generic/system.h
@@ -270,7 +270,7 @@ struct xnheap;
 
 typedef struct xnarch_heapcb {
 
-	atomic_t numaps;	/* # of active user-space mappings. */
+	unsigned long numaps;	/* # of active user-space mappings. */
 	int kmflags;		/* Kernel memory flags (0 if vmalloc()). */
 	void *heapbase;		/* Shared heap memory base. */
 	void (*release)(struct xnheap *heap); /* callback upon last unmap */
diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
index 9ba5227..0a9bfdf 100644
--- a/ksrc/nucleus/heap.c
+++ b/ksrc/nucleus/heap.c
@@ -1027,7 +1027,7 @@ static void xnheap_vmclose(struct vm_area_struct *vma)
 
 	spin_lock(&kheapq_lock);
 
-	if (atomic_dec_and_test(&heap->archdep.numaps)) {
+	if (--heap->archdep.numaps == 0) {
 		if (heap->archdep.release) {
 			removeq(&kheapq, &heap->link);
 			spin_unlock(&kheapq_lock);
@@ -1067,31 +1067,15 @@ static inline xnheap_t *__validate_heap_addr(void *addr)
 static int xnheap_ioctl(struct inode *inode,
 			struct file *file, unsigned int cmd, unsigned long arg)
 {
-	xnheap_t *heap;
-	int err = 0;
-
-	spin_lock(&kheapq_lock);
-
-	heap = __validate_heap_addr((void *)arg);
-
-	if (!heap) {
-		err = -EINVAL;
-		goto unlock_and_exit;
-	}
-
-	file->private_data = heap;
-
-      unlock_and_exit:
-
-	spin_unlock(&kheapq_lock);
-
-	return err;
+	file->private_data = (void *)arg;
+	return 0;
 }
 
 static int xnheap_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	unsigned long offset, size, vaddr;
 	xnheap_t *heap;
+	int err;
 
 	if (vma->vm_ops != NULL || file->private_data == NULL)
 		/* Caller should mmap() once for a given file instance, after
@@ -1103,21 +1087,34 @@ static int xnheap_mmap(struct file *file, struct vm_area_struct *vma)
 
 	offset = vma->vm_pgoff << PAGE_SHIFT;
 	size = vma->vm_end - vma->vm_start;
-	heap = (xnheap_t *)file->private_data;
 
+	spin_lock(&kheapq_lock);
+
+	heap = __validate_heap_addr(file->private_data);
+	if (!heap) {
+		spin_unlock(&kheapq_lock);
+		return -EINVAL;
+	}
+
+	heap->archdep.numaps++;
+
+	spin_unlock(&kheapq_lock);
+
+	err = -ENXIO;
 	if (offset + size > xnheap_extentsize(heap))
-		return -ENXIO;
+		goto deref_out;
 
 	if (countq(&heap->extents) > 1)
 		/* Cannot map multi-extent heaps, we need the memory
 		   area we map from to be contiguous. */
-		return -ENXIO;
+		goto deref_out;
 
 	vma->vm_ops = &xnheap_vmops;
 	vma->vm_private_data = file->private_data;
 
 	vaddr = (unsigned long)heap->archdep.heapbase + offset;
 
+	err = -EAGAIN;
 	if ((heap->archdep.kmflags & ~XNHEAP_GFP_NONCACHED) == 0) {
 		unsigned long maddr = vma->vm_start;
 
@@ -1126,7 +1123,7 @@ static int xnheap_mmap(struct file *file, struct vm_area_struct *vma)
 
 		while (size > 0) {
 			if (xnarch_remap_vm_page(vma, maddr, vaddr))
-				return -EAGAIN;
+				goto deref_out;
 
 			maddr += PAGE_SIZE;
 			vaddr += PAGE_SIZE;
@@ -1136,13 +1133,15 @@ static int xnheap_mmap(struct file *file, struct vm_area_struct *vma)
 					      vma->vm_start,
 					      virt_to_phys((void *)vaddr),
 					      size, vma->vm_page_prot))
-		return -EAGAIN;
+		goto deref_out;
 
 	xnarch_fault_range(vma);
 
-	atomic_inc(&heap->archdep.numaps);
-
 	return 0;
+
+deref_out:
+	xnheap_vmclose(vma);
+	return err;
 }
 
 int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
@@ -1188,7 +1187,7 @@ int xnheap_destroy_mapped(xnheap_t *heap, void (*release)(struct xnheap *heap),
 
 	spin_lock(&kheapq_lock);
 
-	if (atomic_read(&heap->archdep.numaps) > ccheck) {
+	if (heap->archdep.numaps > ccheck) {
 		heap->archdep.release = release;
 		spin_unlock(&kheapq_lock);
 		return -EBUSY;
diff --git a/ksrc/skins/native/heap.c b/ksrc/skins/native/heap.c
index 0a24735..9c3810c 100644
--- a/ksrc/skins/native/heap.c
+++ b/ksrc/skins/native/heap.c
@@ -60,11 +60,12 @@ static int __heap_read_proc(char *page,
 	int len;
 	spl_t s;
 
-	p += sprintf(p, "type=%s:size=%lu:used=%lu:numaps=%d\n",
+	p += sprintf(p, "type=%s:size=%lu:used=%lu:numaps=%lu\n",
 		     (heap->mode & H_SHARED) == H_SHARED ? "shared" :
 		     (heap->mode & H_MAPPABLE) ? "mappable" : "kernel",
-		     xnheap_usable_mem(&heap->heap_base), xnheap_used_mem(&heap->heap_base),
-		     atomic_read(&heap->heap_base.archdep.numaps));
+		     xnheap_usable_mem(&heap->heap_base),
+		     xnheap_used_mem(&heap->heap_base),
+		     heap->heap_base.archdep.numaps);
 
 	xnlock_get_irqsave(&nklock, s);
 




* [Xenomai-core] [PATCH v3 8/9] native: Fix memory leak on heap/queue auto-deletion
  2009-10-20 11:37 [Xenomai-core] [PATCH v3 0/9] heap setup/cleanup fixes, refactorings & more Jan Kiszka
                   ` (7 preceding siblings ...)
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 9/9] nucleus: Include all heaps in statistics Jan Kiszka
@ 2009-10-20 11:37 ` Jan Kiszka
  2009-10-22 10:30   ` [Xenomai-core] [PATCH] native: Avoid double release on queue/heap auto-cleanup Jan Kiszka
  8 siblings, 1 reply; 33+ messages in thread
From: Jan Kiszka @ 2009-10-20 11:37 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

We are currently leaking user space heap/queue objects when the owner
terminates without deleting them first. Fix this by releasing the
objects in the corresponding cleanup callbacks, which are also called on
owner termination.

Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
---

 ksrc/skins/native/heap.c    |    5 +++++
 ksrc/skins/native/queue.c   |    5 +++++
 ksrc/skins/native/syscall.c |   25 ++++++-------------------
 3 files changed, 16 insertions(+), 19 deletions(-)

diff --git a/ksrc/skins/native/heap.c b/ksrc/skins/native/heap.c
index f7411e8..886758c 100644
--- a/ksrc/skins/native/heap.c
+++ b/ksrc/skins/native/heap.c
@@ -341,6 +341,11 @@ static void __heap_post_release(struct xnheap *h)
 		xnpod_schedule();
 
 	xnlock_put_irqrestore(&nklock, s);
+
+#ifdef CONFIG_XENO_OPT_PERVASIVE
+	if (heap->cpid)
+		xnfree(heap);
+#endif
 }
 
 /**
diff --git a/ksrc/skins/native/queue.c b/ksrc/skins/native/queue.c
index 3592a4a..249947a 100644
--- a/ksrc/skins/native/queue.c
+++ b/ksrc/skins/native/queue.c
@@ -303,6 +303,11 @@ static void __queue_post_release(struct xnheap *heap)
 		xnpod_schedule();
 
 	xnlock_put_irqrestore(&nklock, s);
+
+#ifdef CONFIG_XENO_OPT_PERVASIVE
+	if (q->cpid)
+		xnfree(q);
+#endif
 }
 
 /**
diff --git a/ksrc/skins/native/syscall.c b/ksrc/skins/native/syscall.c
index 28c720e..cb9f075 100644
--- a/ksrc/skins/native/syscall.c
+++ b/ksrc/skins/native/syscall.c
@@ -2073,24 +2073,17 @@ static int __rt_queue_delete(struct pt_regs *regs)
 {
 	RT_QUEUE_PLACEHOLDER ph;
 	RT_QUEUE *q;
-	int err;
 
 	if (__xn_safe_copy_from_user(&ph, (void __user *)__xn_reg_arg1(regs),
 				     sizeof(ph)))
 		return -EFAULT;
 
 	q = (RT_QUEUE *)xnregistry_fetch(ph.opaque);
-
 	if (!q)
-		err = -ESRCH;
-	else {
-		/* Callee will check the queue descriptor for validity again. */
-		err = rt_queue_delete_inner(q, (void __user *)ph.mapbase);
-		if (!err && q->cpid)
-			xnfree(q);
-	}
+		return -ESRCH;
 
-	return err;
+	/* Callee will check the queue descriptor for validity again. */
+	return rt_queue_delete_inner(q, (void __user *)ph.mapbase);
 }
 
 /*
@@ -2604,7 +2597,6 @@ static int __rt_heap_delete(struct pt_regs *regs)
 {
 	RT_HEAP_PLACEHOLDER ph;
 	RT_HEAP *heap;
-	int err;
 
 	if (__xn_safe_copy_from_user(&ph, (void __user *)__xn_reg_arg1(regs),
 				     sizeof(ph)))
@@ -2613,15 +2605,10 @@ static int __rt_heap_delete(struct pt_regs *regs)
 	heap = (RT_HEAP *)xnregistry_fetch(ph.opaque);
 
 	if (!heap)
-		err = -ESRCH;
-	else {
-		/* Callee will check the heap descriptor for validity again. */
-		err = rt_heap_delete_inner(heap, (void __user *)ph.mapbase);
-		if (!err && heap->cpid)
-			xnfree(heap);
-	}
+		return -ESRCH;
 
-	return err;
+	/* Callee will check the heap descriptor for validity again. */
+	return rt_heap_delete_inner(heap, (void __user *)ph.mapbase);
 }
 
 /*




* [Xenomai-core] [PATCH v3 7/9] native: Do not requeue on auto-cleanup errors
  2009-10-20 11:37 [Xenomai-core] [PATCH v3 0/9] heap setup/cleanup fixes, refactorings & more Jan Kiszka
                   ` (5 preceding siblings ...)
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 6/9] rtai: Try to fix _shm_free Jan Kiszka
@ 2009-10-20 11:37 ` Jan Kiszka
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 9/9] nucleus: Include all heaps in statistics Jan Kiszka
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 8/9] native: Fix memory leak on heap/queue auto-deletion Jan Kiszka
  8 siblings, 0 replies; 33+ messages in thread
From: Jan Kiszka @ 2009-10-20 11:37 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

Migrating an object to the global queue in case of an error during
deletion is racy and may paper over potential bugs. Now that the main
reason for this approach no longer exists (rt_heap/queue_delete will not
return -EBUSY anymore), replace the requeueing with basic consistency
checks.

Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
---

 include/native/ppd.h |   16 ++++------------
 1 files changed, 4 insertions(+), 12 deletions(-)

diff --git a/include/native/ppd.h b/include/native/ppd.h
index 3dbda6a..c6e7479 100644
--- a/include/native/ppd.h
+++ b/include/native/ppd.h
@@ -101,19 +101,11 @@ static inline xeno_rholder_t *xeno_get_rholder(void)
 			xnlock_put_irqrestore(&nklock, s);		\
 			obj = rlink2##__name(holder);			\
 			err = rt_##__name##_delete(obj);		\
+			XENO_ASSERT(NATIVE, !err || err == -EIDRM, );	\
 			__xeno_trace_release(#__name, obj, err);	\
-			if (unlikely(err)) {				\
-				if ((__rq) != &__native_global_rholder.__name##q) { \
-					xnlock_get_irqsave(&nklock, s);	\
-					nholder = popq((rq), holder);	\
-					appendq(&__native_global_rholder.__name##q, holder); \
-					obj->rqueue = &__native_global_rholder.__name##q; \
-				}					\
-			} else {					\
-				if (__release)				\
-					__xeno_release_obj(obj);	\
-				xnlock_get_irqsave(&nklock, s);		\
-			}						\
+			if (!err && __release)				\
+				__xeno_release_obj(obj);		\
+			xnlock_get_irqsave(&nklock, s);			\
 		}							\
 		xnlock_put_irqrestore(&nklock, s);			\
 	} while(0)




* [Xenomai-core] [PATCH v3 9/9] nucleus: Include all heaps in statistics
  2009-10-20 11:37 [Xenomai-core] [PATCH v3 0/9] heap setup/cleanup fixes, refactorings & more Jan Kiszka
                   ` (6 preceding siblings ...)
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 7/9] native: Do not requeue on auto-cleanup errors Jan Kiszka
@ 2009-10-20 11:37 ` Jan Kiszka
  2009-10-20 23:41   ` Philippe Gerum
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 8/9] native: Fix memory leak on heap/queue auto-deletion Jan Kiszka
  8 siblings, 1 reply; 33+ messages in thread
From: Jan Kiszka @ 2009-10-20 11:37 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

This extends /proc/xenomai/heap with statistics about all currently used
heaps. It takes care to flush nklock while iterating over this
potentially long list.
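
The /proc dump below briefly releases nklock after emitting each line and
restarts whenever the heap list changed in the meantime; in outline:

	xnlock_get_irqsave(&nklock, s);
restart:
	len = 0;			/* discard partial output on restart */
	entry = getheadq(&heapq);
	while (entry) {
		/* format one output line for this heap */

		rev = heapq_rev;
		xnlock_sync_irq(&nklock, s);	/* flush nklock briefly */
		if (heapq_rev != rev)
			goto restart;		/* the list changed, start over */

		entry = nextq(&heapq, entry);
	}
	xnlock_put_irqrestore(&nklock, s);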

Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
---

 include/nucleus/heap.h    |   12 +++-
 ksrc/drivers/ipc/iddp.c   |    3 +
 ksrc/drivers/ipc/xddp.c   |    6 ++
 ksrc/nucleus/heap.c       |  131 ++++++++++++++++++++++++++++++++++++++++-----
 ksrc/nucleus/module.c     |    2 -
 ksrc/nucleus/pod.c        |    5 +-
 ksrc/nucleus/shadow.c     |    5 +-
 ksrc/skins/native/heap.c  |    6 +-
 ksrc/skins/native/pipe.c  |    4 +
 ksrc/skins/native/queue.c |    6 +-
 ksrc/skins/posix/shm.c    |    4 +
 ksrc/skins/psos+/rn.c     |    6 +-
 ksrc/skins/rtai/shm.c     |    7 ++
 ksrc/skins/vrtx/heap.c    |    6 +-
 ksrc/skins/vrtx/syscall.c |    3 +
 15 files changed, 169 insertions(+), 37 deletions(-)

diff --git a/include/nucleus/heap.h b/include/nucleus/heap.h
index 44db738..f653cd7 100644
--- a/include/nucleus/heap.h
+++ b/include/nucleus/heap.h
@@ -115,6 +115,10 @@ typedef struct xnheap {
 
 	XNARCH_DECL_DISPLAY_CONTEXT();
 
+	xnholder_t stat_link;	/* Link in heapq */
+
+	char name[48];
+
 } xnheap_t;
 
 extern xnheap_t kheap;
@@ -202,7 +206,8 @@ void xnheap_cleanup_proc(void);
 
 int xnheap_init_mapped(xnheap_t *heap,
 		       u_long heapsize,
-		       int memflags);
+		       int memflags,
+		       const char *name, ...);
 
 void xnheap_destroy_mapped(xnheap_t *heap,
 			   void (*release)(struct xnheap *heap),
@@ -224,7 +229,10 @@ void xnheap_destroy_mapped(xnheap_t *heap,
 int xnheap_init(xnheap_t *heap,
 		void *heapaddr,
 		u_long heapsize,
-		u_long pagesize);
+		u_long pagesize,
+		const char *name, ...);
+
+void xnheap_set_name(xnheap_t *heap, const char *name, ...);
 
 void xnheap_destroy(xnheap_t *heap,
 		    void (*flushfn)(xnheap_t *heap,
diff --git a/ksrc/drivers/ipc/iddp.c b/ksrc/drivers/ipc/iddp.c
index a407946..b6382f1 100644
--- a/ksrc/drivers/ipc/iddp.c
+++ b/ksrc/drivers/ipc/iddp.c
@@ -559,7 +559,8 @@ static int __iddp_bind_socket(struct rtipc_private *priv,
 		}
 
 		ret = xnheap_init(&sk->privpool,
-				  poolmem, poolsz, XNHEAP_PAGE_SIZE);
+				  poolmem, poolsz, XNHEAP_PAGE_SIZE,
+				  "ippd: %d", port);
 		if (ret) {
 			xnarch_free_host_mem(poolmem, poolsz);
 			goto fail;
diff --git a/ksrc/drivers/ipc/xddp.c b/ksrc/drivers/ipc/xddp.c
index f62147a..a5dafef 100644
--- a/ksrc/drivers/ipc/xddp.c
+++ b/ksrc/drivers/ipc/xddp.c
@@ -703,7 +703,7 @@ static int __xddp_bind_socket(struct rtipc_private *priv,
 		}
 
 		ret = xnheap_init(&sk->privpool,
-				  poolmem, poolsz, XNHEAP_PAGE_SIZE);
+				  poolmem, poolsz, XNHEAP_PAGE_SIZE, "");
 		if (ret) {
 			xnarch_free_host_mem(poolmem, poolsz);
 			goto fail;
@@ -746,6 +746,10 @@ static int __xddp_bind_socket(struct rtipc_private *priv,
 	sk->minor = ret;
 	sa->sipc_port = ret;
 	sk->name = *sa;
+
+	if (poolsz > 0)
+		xnheap_set_name(sk->bufpool, "xddp: %d", sa->sipc_port);
+
 	/* Set default destination if unset at binding time. */
 	if (sk->peer.sipc_port < 0)
 		sk->peer = *sa;
diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
index 96c46f8..793d1c5 100644
--- a/ksrc/nucleus/heap.c
+++ b/ksrc/nucleus/heap.c
@@ -76,6 +76,9 @@ EXPORT_SYMBOL_GPL(kheap);
 xnheap_t kstacks;	/* Private stack pool */
 #endif
 
+static DEFINE_XNQUEUE(heapq);	/* Heap list for /proc reporting */
+static unsigned long heapq_rev;
+
 static void init_extent(xnheap_t *heap, xnextent_t *extent)
 {
 	caddr_t freepage;
@@ -108,7 +111,7 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
  */
 
 /*!
- * \fn xnheap_init(xnheap_t *heap,void *heapaddr,u_long heapsize,u_long pagesize)
+ * \fn xnheap_init(xnheap_t *heap,void *heapaddr,u_long heapsize,u_long pagesize,const char *name,...)
  * \brief Initialize a memory heap.
  *
  * Initializes a memory heap suitable for time-bounded allocation
@@ -145,6 +148,10 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
  * best one for your needs. In the current implementation, pagesize
  * must be a power of two in the range [ 8 .. 32768 ] inclusive.
  *
+ * @param name Name displayed in statistic outputs. This parameter can
+ * be a format string, in which case succeeding parameters will be used
+ * to resolve the final name.
+ *
  * @return 0 is returned upon success, or one of the following error
  * codes:
  *
@@ -161,12 +168,13 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
  * Rescheduling: never.
  */
 
-int xnheap_init(xnheap_t *heap,
-		void *heapaddr, u_long heapsize, u_long pagesize)
+static int xnheap_init_va(xnheap_t *heap, void *heapaddr, u_long heapsize,
+			  u_long pagesize, const char *name, va_list args)
 {
 	unsigned cpu, nr_cpus = xnarch_num_online_cpus();
 	u_long hdrsize, shiftsize, pageshift;
 	xnextent_t *extent;
+	spl_t s;
 
 	/*
 	 * Perform some parametrical checks first.
@@ -232,12 +240,71 @@ int xnheap_init(xnheap_t *heap,
 
 	appendq(&heap->extents, &extent->link);
 
+	vsnprintf(heap->name, sizeof(heap->name), name, args);
+
+	xnlock_get_irqsave(&nklock, s);
+	appendq(&heapq, &heap->stat_link);
+	heapq_rev++;
+	xnlock_put_irqrestore(&nklock, s);
+
 	xnarch_init_display_context(heap);
 
 	return 0;
 }
+
+int xnheap_init(xnheap_t *heap, void *heapaddr, u_long heapsize,
+		u_long pagesize, const char *name, ...)
+{
+	va_list args;
+	int ret;
+
+	va_start(args, name);
+	ret = xnheap_init_va(heap, heapaddr, heapsize, pagesize, name, args);
+	va_end(args);
+
+	return ret;
+}
 EXPORT_SYMBOL_GPL(xnheap_init);
 
+/*!
+ * \fn xnheap_set_name(xnheap_t *heap,const char *name,...)
+ * \brief Overwrite the heap's name.
+ *
+ * Set the heap name that will be used in statistic outputs. This service
+ * is useful if the final name is not yet defined on xnheap_init().
+ *
+ * @param heap The address of a heap descriptor.
+ *
+ * @param name Name displayed in statistic outputs. This parameter can
+ * be a format string, in which case succeeding parameters will be used
+ * to resolve the final name.
+ *
+ * Environments:
+ *
+ * This service can be called from:
+ *
+ * - Kernel module initialization/cleanup code
+ * - Kernel-based task
+ * - User-space task
+ *
+ * Rescheduling: never.
+ */
+
+void xnheap_set_name(xnheap_t *heap, const char *name, ...)
+{
+	va_list args;
+	spl_t s;
+
+	va_start(args, name);
+
+	xnlock_get_irqsave(&nklock, s);
+	vsnprintf(heap->name, sizeof(heap->name), name, args);
+	xnlock_put_irqrestore(&nklock, s);
+
+	va_end(args);
+}
+EXPORT_SYMBOL_GPL(xnheap_set_name);
+
 /*! 
  * \fn void xnheap_destroy(xnheap_t *heap, void (*flushfn)(xnheap_t *heap, void *extaddr, u_long extsize, void *cookie), void *cookie)
  * \brief Destroys a memory heap.
@@ -273,6 +340,11 @@ void xnheap_destroy(xnheap_t *heap,
 	xnholder_t *holder;
 	spl_t s;
 
+	xnlock_get_irqsave(&nklock, s);
+	removeq(&heapq, &heap->stat_link);
+	heapq_rev++;
+	xnlock_put_irqrestore(&nklock, s);
+
 	if (!flushfn)
 		return;
 
@@ -1140,9 +1212,11 @@ deref_out:
 	return err;
 }
 
-int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
+int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags,
+		       const char *name, ...)
 {
 	void *heapbase;
+	va_list args;
 	int err;
 
 	/* Caller must have accounted for internal overhead. */
@@ -1156,7 +1230,9 @@ int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
 	if (!heapbase)
 		return -ENOMEM;
 
-	err = xnheap_init(heap, heapbase, heapsize, PAGE_SIZE);
+	va_start(args, name);
+	err = xnheap_init_va(heap, heapbase, heapsize, PAGE_SIZE, name, args);
+	va_end(args);
 	if (err) {
 		__unreserve_and_free_heap(heapbase, heapsize, memflags);
 		return err;
@@ -1178,6 +1254,7 @@ void xnheap_destroy_mapped(xnheap_t *heap,
 			   void __user *mapaddr)
 {
 	unsigned long len;
+	spl_t s;
 
 	/*
 	 * Trying to unmap user memory without providing a release handler for
@@ -1185,6 +1262,11 @@ void xnheap_destroy_mapped(xnheap_t *heap,
 	 */
 	XENO_ASSERT(NUCLEUS, !mapaddr || release, /* nop */);
 
+	xnlock_get_irqsave(&nklock, s);
+	removeq(&heapq, &heap->stat_link);
+	heapq_rev++;
+	xnlock_put_irqrestore(&nklock, s);
+
 	spin_lock(&kheapq_lock);
 
 	removeq(&kheapq, &heap->link); /* Prevent further mapping. */
@@ -1285,22 +1367,41 @@ static int heap_read_proc(char *page,
 			  char **start,
 			  off_t off, int count, int *eof, void *data)
 {
+	unsigned long rev;
+	xnholder_t *entry;
+	xnheap_t *heap;
 	int len;
+	spl_t s;
 
 	if (!xnpod_active_p())
 		return -ESRCH;
 
-	len = sprintf(page, "size=%lu:used=%lu:pagesz=%lu  (main heap)\n",
-		      xnheap_usable_mem(&kheap),
-		      xnheap_used_mem(&kheap),
-		      xnheap_page_size(&kheap));
+	xnlock_get_irqsave(&nklock, s);
 
-#if CONFIG_XENO_OPT_SYS_STACKPOOLSZ > 0
-	len += sprintf(page + len, "size=%lu:used=%lu:pagesz=%lu  (stack pool)\n",
-		       xnheap_usable_mem(&kstacks),
-		       xnheap_used_mem(&kstacks),
-		       xnheap_page_size(&kstacks));
-#endif
+restart:
+	len = 0;
+
+	entry = getheadq(&heapq);
+	while (entry) {
+		heap = container_of(entry, xnheap_t, stat_link);
+		len += sprintf(page + len,
+			       "size=%lu:used=%lu:pagesz=%lu  (%s)\n",
+			       xnheap_usable_mem(heap),
+			       xnheap_used_mem(heap),
+			       xnheap_page_size(heap),
+			       heap->name);
+
+		rev = heapq_rev;
+
+		xnlock_sync_irq(&nklock, s);
+
+		if (heapq_rev != rev)
+			goto restart;
+
+		entry = nextq(&heapq, entry);
+	}
+
+	xnlock_put_irqrestore(&nklock, s);
 
 	len -= off;
 	if (len <= off + count)
diff --git a/ksrc/nucleus/module.c b/ksrc/nucleus/module.c
index 141276a..a0efc14 100644
--- a/ksrc/nucleus/module.c
+++ b/ksrc/nucleus/module.c
@@ -100,7 +100,7 @@ int __init __xeno_sys_init(void)
 #ifndef __XENO_SIM__
 	ret = xnheap_init_mapped(&__xnsys_global_ppd.sem_heap,
 				 CONFIG_XENO_OPT_GLOBAL_SEM_HEAPSZ * 1024,
-				 XNARCH_SHARED_HEAP_FLAGS);
+				 XNARCH_SHARED_HEAP_FLAGS, "global sem heap");
 	if (ret)
 		goto cleanup_arch;
 #endif
diff --git a/ksrc/nucleus/pod.c b/ksrc/nucleus/pod.c
index 370ad28..07fbcdb 100644
--- a/ksrc/nucleus/pod.c
+++ b/ksrc/nucleus/pod.c
@@ -384,9 +384,8 @@ int xnpod_init(void)
 	heapaddr = xnarch_alloc_host_mem(xnmod_sysheap_size);
 	if (heapaddr == NULL ||
 	    xnheap_init(&kheap, heapaddr, xnmod_sysheap_size,
-			XNHEAP_PAGE_SIZE) != 0) {
+			XNHEAP_PAGE_SIZE, "main heap") != 0)
 		return -ENOMEM;
-	}
 
 #if CONFIG_XENO_OPT_SYS_STACKPOOLSZ > 0
 	/*
@@ -404,7 +403,7 @@ int xnpod_init(void)
 	heapaddr = xnarch_alloc_stack_mem(CONFIG_XENO_OPT_SYS_STACKPOOLSZ * 1024);
 	if (heapaddr == NULL ||
 	    xnheap_init(&kstacks, heapaddr, CONFIG_XENO_OPT_SYS_STACKPOOLSZ * 1024,
-			XNHEAP_PAGE_SIZE) != 0) {
+			XNHEAP_PAGE_SIZE, "stack pool") != 0) {
 		xnheap_destroy(&kheap, &xnpod_flush_heap, NULL);
 		return -ENOMEM;
 	}
diff --git a/ksrc/nucleus/shadow.c b/ksrc/nucleus/shadow.c
index d6d1203..e8876d8 100644
--- a/ksrc/nucleus/shadow.c
+++ b/ksrc/nucleus/shadow.c
@@ -1886,7 +1886,9 @@ static void *xnshadow_sys_event(int event, void *data)
 
 		err = xnheap_init_mapped(&p->sem_heap,
 					 CONFIG_XENO_OPT_SEM_HEAPSZ * 1024,
-					 XNARCH_SHARED_HEAP_FLAGS);
+					 XNARCH_SHARED_HEAP_FLAGS,
+					 "private sem heap [%d]",
+					 current->pid);
 		if (err) {
 			xnarch_free_host_mem(p, sizeof(*p));
 			return ERR_PTR(err);
@@ -1896,6 +1898,7 @@ static void *xnshadow_sys_event(int event, void *data)
 
 	case XNSHADOW_CLIENT_DETACH:
 		p = ppd2sys((xnshadow_ppd_t *) data);
+
 		xnheap_destroy_mapped(&p->sem_heap, post_ppd_release, NULL);
 
 		return NULL;
diff --git a/ksrc/skins/native/heap.c b/ksrc/skins/native/heap.c
index 886758c..3072f99 100644
--- a/ksrc/skins/native/heap.c
+++ b/ksrc/skins/native/heap.c
@@ -266,7 +266,8 @@ int rt_heap_create(RT_HEAP *heap, const char *name, size_t heapsize, int mode)
 					 heapsize,
 					 ((mode & H_DMA) ? GFP_DMA : 0)
 					 | ((mode & H_NONCACHED) ?
-					    XNHEAP_GFP_NONCACHED : 0));
+					    XNHEAP_GFP_NONCACHED : 0),
+					 "rt_heap: %s", name);
 		if (err)
 			return err;
 
@@ -286,7 +287,8 @@ int rt_heap_create(RT_HEAP *heap, const char *name, size_t heapsize, int mode)
 		if (!heapmem)
 			return -ENOMEM;
 
-		err = xnheap_init(&heap->heap_base, heapmem, heapsize, XNHEAP_PAGE_SIZE);
+		err = xnheap_init(&heap->heap_base, heapmem, heapsize,
+				  XNHEAP_PAGE_SIZE, "rt_heap: %s", name);
 		if (err) {
 			xnarch_free_host_mem(heapmem, heapsize);
 			return err;
diff --git a/ksrc/skins/native/pipe.c b/ksrc/skins/native/pipe.c
index 672fa7a..18d9aa0 100644
--- a/ksrc/skins/native/pipe.c
+++ b/ksrc/skins/native/pipe.c
@@ -295,7 +295,9 @@ int rt_pipe_create(RT_PIPE *pipe, const char *name, int minor, size_t poolsize)
 			return -ENOMEM;
 
 		/* Use natural page size */
-		err = xnheap_init(&pipe->privpool, poolmem, poolsize, XNHEAP_PAGE_SIZE);
+		err = xnheap_init(&pipe->privpool, poolmem, poolsize,
+				  XNHEAP_PAGE_SIZE,
+				  "rt_pipe: %d / %s", minor, name);
 		if (err) {
 			xnarch_free_host_mem(poolmem, poolsize);
 			return err;
diff --git a/ksrc/skins/native/queue.c b/ksrc/skins/native/queue.c
index 249947a..becc0f0 100644
--- a/ksrc/skins/native/queue.c
+++ b/ksrc/skins/native/queue.c
@@ -229,7 +229,8 @@ int rt_queue_create(RT_QUEUE *q,
 		err = xnheap_init_mapped(&q->bufpool,
 					 poolsize,
 					 ((mode & Q_DMA) ? GFP_DMA 
-					  : XNARCH_SHARED_HEAP_FLAGS));
+					  : XNARCH_SHARED_HEAP_FLAGS),
+					 "rt_queue: %s", name);
 		if (err)
 			return err;
 
@@ -249,7 +250,8 @@ int rt_queue_create(RT_QUEUE *q,
 		if (!poolmem)
 			return -ENOMEM;
 
-		err = xnheap_init(&q->bufpool, poolmem, poolsize, XNHEAP_PAGE_SIZE);
+		err = xnheap_init(&q->bufpool, poolmem, poolsize,
+				  XNHEAP_PAGE_SIZE, "rt_queue: %s", name);
 		if (err) {
 			xnarch_free_host_mem(poolmem, poolsize);
 			return err;
diff --git a/ksrc/skins/posix/shm.c b/ksrc/skins/posix/shm.c
index c92096a..0f0bd1a 100644
--- a/ksrc/skins/posix/shm.c
+++ b/ksrc/skins/posix/shm.c
@@ -539,7 +539,9 @@ int ftruncate(int fd, off_t len)
 			int flags = XNARCH_SHARED_HEAP_FLAGS |
 				((desc_flags & O_DIRECT) ? GFP_DMA : 0);
 
-			err = -xnheap_init_mapped(&shm->heapbase, len, flags);
+			err = -xnheap_init_mapped(&shm->heapbase, len, flags,
+						  "posix shm: %s",
+						  shm->nodebase.name);
 			if (err)
 				goto err_up;
 
diff --git a/ksrc/skins/psos+/rn.c b/ksrc/skins/psos+/rn.c
index 98f4500..3eb3ab9 100644
--- a/ksrc/skins/psos+/rn.c
+++ b/ksrc/skins/psos+/rn.c
@@ -191,7 +191,8 @@ u_long rn_create(const char *name,
 
 		rnsize = xnheap_rounded_size(rnsize, PAGE_SIZE),
 		err = xnheap_init_mapped(&rn->heapbase, rnsize, 
-					 XNARCH_SHARED_HEAP_FLAGS);
+					 XNARCH_SHARED_HEAP_FLAGS,
+					 "psosrn: %s", name);
 
 		if (err)
 			return err;
@@ -206,7 +207,8 @@ u_long rn_create(const char *name,
 		 * Caller must have accounted for overhead and
 		 * alignment since it supplies the memory space.
 		 */
-		if (xnheap_init(&rn->heapbase, rnaddr, rnsize, XNHEAP_PAGE_SIZE) != 0)
+		if (xnheap_init(&rn->heapbase, rnaddr, rnsize,
+				XNHEAP_PAGE_SIZE, "psosrn: %s", name) != 0)
 			return ERR_TINYRN;
 
 	inith(&rn->link);
diff --git a/ksrc/skins/rtai/shm.c b/ksrc/skins/rtai/shm.c
index 21c3b07..0b33a57 100644
--- a/ksrc/skins/rtai/shm.c
+++ b/ksrc/skins/rtai/shm.c
@@ -151,7 +151,8 @@ static xnshm_a_t *create_new_heap(unsigned long name, int heapsize, int suprt)
 	err = xnheap_init_mapped(p->heap,
 				 heapsize,
 				 (suprt == USE_GFP_KERNEL ? GFP_KERNEL : 0)
-				 | XNARCH_SHARED_HEAP_FLAGS);
+				 | XNARCH_SHARED_HEAP_FLAGS,
+				 "rtai heap: 0x%lx", name);
 #else /* !CONFIG_XENO_OPT_PERVASIVE */
 	{
 		void *heapmem;
@@ -164,7 +165,9 @@ static xnshm_a_t *create_new_heap(unsigned long name, int heapsize, int suprt)
 			err = -ENOMEM;
 		} else {
 
-			err = xnheap_init(p->heap, heapmem, heapsize, XNHEAP_PAGE_SIZE);
+			err = xnheap_init(p->heap, heapmem, heapsize,
+					  XNHEAP_PAGE_SIZE,
+					  "rtai heap: 0x%lx", name);
 			if (err) {
 				xnarch_free_host_mem(heapmem, heapsize);
 			}
diff --git a/ksrc/skins/vrtx/heap.c b/ksrc/skins/vrtx/heap.c
index de60792..8f8a1d5 100644
--- a/ksrc/skins/vrtx/heap.c
+++ b/ksrc/skins/vrtx/heap.c
@@ -165,7 +165,8 @@ int sc_hcreate(char *heapaddr, u_long heapsize, unsigned log2psize, int *errp)
 #ifdef CONFIG_XENO_OPT_PERVASIVE
 		heapsize = xnheap_rounded_size(heapsize, PAGE_SIZE);
 		err = xnheap_init_mapped(&heap->sysheap, heapsize, 
-					 XNARCH_SHARED_HEAP_FLAGS);
+					 XNARCH_SHARED_HEAP_FLAGS,
+					 "vrtx sysheap");
 
 		if (err) {
 			*errp = ER_MEM;
@@ -184,7 +185,8 @@ int sc_hcreate(char *heapaddr, u_long heapsize, unsigned log2psize, int *errp)
 		 * Caller must have accounted for overhead and
 		 * alignment since it supplies the memory space.
 		 */
-		err = xnheap_init(&heap->sysheap, heapaddr, heapsize, pagesize);
+		err = xnheap_init(&heap->sysheap, heapaddr, heapsize, pagesize,
+				  "vrtx sysheap");
 
 		if (err) {
 			if (err == -EINVAL)
diff --git a/ksrc/skins/vrtx/syscall.c b/ksrc/skins/vrtx/syscall.c
index 1623066..e3d2e31 100644
--- a/ksrc/skins/vrtx/syscall.c
+++ b/ksrc/skins/vrtx/syscall.c
@@ -1052,7 +1052,8 @@ static int __sc_pcreate(struct pt_regs *regs)
 	/* Block size. */
 	bsize = __xn_reg_arg3(regs);
 
-	err = xnheap_init_mapped(ptheap, ptsize, XNARCH_SHARED_HEAP_FLAGS);
+	err = xnheap_init_mapped(ptheap, ptsize, XNARCH_SHARED_HEAP_FLAGS,
+				 "vrtx ptheap");
 
 	if (err)
 		goto free_heap;




* Re: [Xenomai-core] [PATCH v3 9/9] nucleus: Include all heaps in statistics
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 9/9] nucleus: Include all heaps in statistics Jan Kiszka
@ 2009-10-20 23:41   ` Philippe Gerum
  2009-10-22 10:52     ` Jan Kiszka
  0 siblings, 1 reply; 33+ messages in thread
From: Philippe Gerum @ 2009-10-20 23:41 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: xenomai

On Tue, 2009-10-20 at 13:37 +0200, Jan Kiszka wrote:
> This extends /proc/xenomai/heap with statistics about all currently used
> heaps. It takes care to flush nklock while iterating of this potentially
> long list.
> 
> Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
> ---
> 
>  include/nucleus/heap.h    |   12 +++-
>  ksrc/drivers/ipc/iddp.c   |    3 +
>  ksrc/drivers/ipc/xddp.c   |    6 ++
>  ksrc/nucleus/heap.c       |  131 ++++++++++++++++++++++++++++++++++++++++-----
>  ksrc/nucleus/module.c     |    2 -
>  ksrc/nucleus/pod.c        |    5 +-
>  ksrc/nucleus/shadow.c     |    5 +-
>  ksrc/skins/native/heap.c  |    6 +-
>  ksrc/skins/native/pipe.c  |    4 +
>  ksrc/skins/native/queue.c |    6 +-
>  ksrc/skins/posix/shm.c    |    4 +
>  ksrc/skins/psos+/rn.c     |    6 +-
>  ksrc/skins/rtai/shm.c     |    7 ++
>  ksrc/skins/vrtx/heap.c    |    6 +-
>  ksrc/skins/vrtx/syscall.c |    3 +
>  15 files changed, 169 insertions(+), 37 deletions(-)
> 
> diff --git a/include/nucleus/heap.h b/include/nucleus/heap.h
> index 44db738..f653cd7 100644
> --- a/include/nucleus/heap.h
> +++ b/include/nucleus/heap.h
> @@ -115,6 +115,10 @@ typedef struct xnheap {
>  
>  	XNARCH_DECL_DISPLAY_CONTEXT();
>  
> +	xnholder_t stat_link;	/* Link in heapq */
> +
> +	char name[48];

s,48,XNOBJECT_NAME_LEN

> +
>  } xnheap_t;
>  
>  extern xnheap_t kheap;
> @@ -202,7 +206,8 @@ void xnheap_cleanup_proc(void);
>  
>  int xnheap_init_mapped(xnheap_t *heap,
>  		       u_long heapsize,
> -		       int memflags);
> +		       int memflags,
> +		       const char *name, ...);
>  

The va_list is handy, but this breaks the common pattern used throughout
the rest of the nucleus, based on passing pre-formatted labels. So
either we make all creation calls use va_lists (but xnthread would need
more work), or we make xnheap_init_mapped use the not-so-handy current
form.

Actually, providing xnheap_set_name() and a name parameter/va_list to
xnheap_init* is one too many, this clutters an inner interface
uselessly. The latter should go away, assuming that anon heaps may still
exist.

>  void xnheap_destroy_mapped(xnheap_t *heap,
>  			   void (*release)(struct xnheap *heap),
> @@ -224,7 +229,10 @@ void xnheap_destroy_mapped(xnheap_t *heap,
>  int xnheap_init(xnheap_t *heap,
>  		void *heapaddr,
>  		u_long heapsize,
> -		u_long pagesize);
> +		u_long pagesize,
> +		const char *name, ...);
> +
> +void xnheap_set_name(xnheap_t *heap, const char *name, ...);
>  
>  void xnheap_destroy(xnheap_t *heap,
>  		    void (*flushfn)(xnheap_t *heap,
> diff --git a/ksrc/drivers/ipc/iddp.c b/ksrc/drivers/ipc/iddp.c
> index a407946..b6382f1 100644
> --- a/ksrc/drivers/ipc/iddp.c
> +++ b/ksrc/drivers/ipc/iddp.c
> @@ -559,7 +559,8 @@ static int __iddp_bind_socket(struct rtipc_private *priv,
>  		}
>  
>  		ret = xnheap_init(&sk->privpool,
> -				  poolmem, poolsz, XNHEAP_PAGE_SIZE);
> +				  poolmem, poolsz, XNHEAP_PAGE_SIZE,
> +				  "ippd: %d", port);
>  		if (ret) {
>  			xnarch_free_host_mem(poolmem, poolsz);
>  			goto fail;
> diff --git a/ksrc/drivers/ipc/xddp.c b/ksrc/drivers/ipc/xddp.c
> index f62147a..a5dafef 100644
> --- a/ksrc/drivers/ipc/xddp.c
> +++ b/ksrc/drivers/ipc/xddp.c
> @@ -703,7 +703,7 @@ static int __xddp_bind_socket(struct rtipc_private *priv,
>  		}
>  
>  		ret = xnheap_init(&sk->privpool,
> -				  poolmem, poolsz, XNHEAP_PAGE_SIZE);
> +				  poolmem, poolsz, XNHEAP_PAGE_SIZE, "");
>  		if (ret) {
>  			xnarch_free_host_mem(poolmem, poolsz);
>  			goto fail;
> @@ -746,6 +746,10 @@ static int __xddp_bind_socket(struct rtipc_private *priv,
>  	sk->minor = ret;
>  	sa->sipc_port = ret;
>  	sk->name = *sa;
> +
> +	if (poolsz > 0)
> +		xnheap_set_name(sk->bufpool, "xddp: %d", sa->sipc_port);
> +
>  	/* Set default destination if unset at binding time. */
>  	if (sk->peer.sipc_port < 0)
>  		sk->peer = *sa;
> diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
> index 96c46f8..793d1c5 100644
> --- a/ksrc/nucleus/heap.c
> +++ b/ksrc/nucleus/heap.c
> @@ -76,6 +76,9 @@ EXPORT_SYMBOL_GPL(kheap);
>  xnheap_t kstacks;	/* Private stack pool */
>  #endif
>  
> +static DEFINE_XNQUEUE(heapq);	/* Heap list for /proc reporting */
> +static unsigned long heapq_rev;
> +
>  static void init_extent(xnheap_t *heap, xnextent_t *extent)
>  {
>  	caddr_t freepage;
> @@ -108,7 +111,7 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
>   */
>  
>  /*!
> - * \fn xnheap_init(xnheap_t *heap,void *heapaddr,u_long heapsize,u_long pagesize)
> + * \fn xnheap_init(xnheap_t *heap,void *heapaddr,u_long heapsize,u_long pagesize,const char *name,...)
>   * \brief Initialize a memory heap.
>   *
>   * Initializes a memory heap suitable for time-bounded allocation
> @@ -145,6 +148,10 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
>   * best one for your needs. In the current implementation, pagesize
>   * must be a power of two in the range [ 8 .. 32768 ] inclusive.
>   *
> + * @param name Name displayed in statistic outputs. This parameter can
> + * be a format string, in which case succeeding parameters will be used
> + * to resolve the final name.
> + *
>   * @return 0 is returned upon success, or one of the following error
>   * codes:
>   *
> @@ -161,12 +168,13 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
>   * Rescheduling: never.
>   */
>  
> -int xnheap_init(xnheap_t *heap,
> -		void *heapaddr, u_long heapsize, u_long pagesize)
> +static int xnheap_init_va(xnheap_t *heap, void *heapaddr, u_long heapsize,
> +			  u_long pagesize, const char *name, va_list args)
>  {
>  	unsigned cpu, nr_cpus = xnarch_num_online_cpus();
>  	u_long hdrsize, shiftsize, pageshift;
>  	xnextent_t *extent;
> +	spl_t s;
>  
>  	/*
>  	 * Perform some parametrical checks first.
> @@ -232,12 +240,71 @@ int xnheap_init(xnheap_t *heap,
>  
>  	appendq(&heap->extents, &extent->link);
>  
> +	vsnprintf(heap->name, sizeof(heap->name), name, args);
> +
> +	xnlock_get_irqsave(&nklock, s);
> +	appendq(&heapq, &heap->stat_link);
> +	heapq_rev++;
> +	xnlock_put_irqrestore(&nklock, s);
> +
>  	xnarch_init_display_context(heap);
>  
>  	return 0;
>  }
> +
> +int xnheap_init(xnheap_t *heap, void *heapaddr, u_long heapsize,
> +		u_long pagesize, const char *name, ...)
> +{
> +	va_list args;
> +	int ret;
> +
> +	va_start(args, name);
> +	ret = xnheap_init_va(heap, heapaddr, heapsize, pagesize, name, args);
> +	va_end(args);
> +
> +	return ret;
> +}
>  EXPORT_SYMBOL_GPL(xnheap_init);
>  
> +/*!
> + * \fn xnheap_set_name(xnheap_t *heap,const char *name,...)
> + * \brief Overwrite the heap's name.

At some point, there should be a common base struct containing
everything needed to manipulate named objects, which we would include in
higher level containers. It seems we are over-specializing quite generic
work.

> + *
> + * Set the heap name that will be used in statistic outputs. This service
> + * is useful if the final name is not yet defined on xnheap_init().
> + *
> + * @param heap The address of a heap descriptor.
> + *
> + * @param name Name displayed in statistic outputs. This parameter can
> + * be a format string, in which case succeeding parameters will be used
> + * to resolve the final name.
> + *
> + * Environments:
> + *
> + * This service can be called from:
> + *
> + * - Kernel module initialization/cleanup code
> + * - Kernel-based task
> + * - User-space task
> + *
> + * Rescheduling: never.
> + */
> +
> +void xnheap_set_name(xnheap_t *heap, const char *name, ...)
> +{
> +	va_list args;
> +	spl_t s;
> +
> +	va_start(args, name);
> +
> +	xnlock_get_irqsave(&nklock, s);
> +	vsnprintf(heap->name, sizeof(heap->name), name, args);
> +	xnlock_put_irqrestore(&nklock, s);
> +
> +	va_end(args);
> +}
> +EXPORT_SYMBOL_GPL(xnheap_set_name);
> +
>  /*! 
>   * \fn void xnheap_destroy(xnheap_t *heap, void (*flushfn)(xnheap_t *heap, void *extaddr, u_long extsize, void *cookie), void *cookie)
>   * \brief Destroys a memory heap.
> @@ -273,6 +340,11 @@ void xnheap_destroy(xnheap_t *heap,
>  	xnholder_t *holder;
>  	spl_t s;
>  
> +	xnlock_get_irqsave(&nklock, s);
> +	removeq(&heapq, &heap->stat_link);
> +	heapq_rev++;
> +	xnlock_put_irqrestore(&nklock, s);
> +
>  	if (!flushfn)
>  		return;
>  
> @@ -1140,9 +1212,11 @@ deref_out:
>  	return err;
>  }
>  
> -int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
> +int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags,
> +		       const char *name, ...)
>  {
>  	void *heapbase;
> +	va_list args;
>  	int err;
>  
>  	/* Caller must have accounted for internal overhead. */
> @@ -1156,7 +1230,9 @@ int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
>  	if (!heapbase)
>  		return -ENOMEM;
>  
> -	err = xnheap_init(heap, heapbase, heapsize, PAGE_SIZE);
> +	va_start(args, name);
> +	err = xnheap_init_va(heap, heapbase, heapsize, PAGE_SIZE, name, args);
> +	va_end(args);
>  	if (err) {
>  		__unreserve_and_free_heap(heapbase, heapsize, memflags);
>  		return err;
> @@ -1178,6 +1254,7 @@ void xnheap_destroy_mapped(xnheap_t *heap,
>  			   void __user *mapaddr)
>  {
>  	unsigned long len;
> +	spl_t s;
>  
>  	/*
>  	 * Trying to unmap user memory without providing a release handler for
> @@ -1185,6 +1262,11 @@ void xnheap_destroy_mapped(xnheap_t *heap,
>  	 */
>  	XENO_ASSERT(NUCLEUS, !mapaddr || release, /* nop */);
>  
> +	xnlock_get_irqsave(&nklock, s);
> +	removeq(&heapq, &heap->stat_link);
> +	heapq_rev++;
> +	xnlock_put_irqrestore(&nklock, s);
> +
>  	spin_lock(&kheapq_lock);
>  
>  	removeq(&kheapq, &heap->link); /* Prevent further mapping. */
> @@ -1285,22 +1367,41 @@ static int heap_read_proc(char *page,
>  			  char **start,
>  			  off_t off, int count, int *eof, void *data)
>  {
> +	unsigned long rev;
> +	xnholder_t *entry;
> +	xnheap_t *heap;
>  	int len;
> +	spl_t s;
>  
>  	if (!xnpod_active_p())
>  		return -ESRCH;
>  
> -	len = sprintf(page, "size=%lu:used=%lu:pagesz=%lu  (main heap)\n",
> -		      xnheap_usable_mem(&kheap),
> -		      xnheap_used_mem(&kheap),
> -		      xnheap_page_size(&kheap));
> +	xnlock_get_irqsave(&nklock, s);
>  
> -#if CONFIG_XENO_OPT_SYS_STACKPOOLSZ > 0
> -	len += sprintf(page + len, "size=%lu:used=%lu:pagesz=%lu  (stack pool)\n",
> -		       xnheap_usable_mem(&kstacks),
> -		       xnheap_used_mem(&kstacks),
> -		       xnheap_page_size(&kstacks));
> -#endif
> +restart:
> +	len = 0;
> +
> +	entry = getheadq(&heapq);
> +	while (entry) {
> +		heap = container_of(entry, xnheap_t, stat_link);
> +		len += sprintf(page + len,
> +			       "size=%lu:used=%lu:pagesz=%lu  (%s)\n",
> +			       xnheap_usable_mem(heap),
> +			       xnheap_used_mem(heap),
> +			       xnheap_page_size(heap),
> +			       heap->name);
> +
> +		rev = heapq_rev;
> +
> +		xnlock_sync_irq(&nklock, s);
> +
> +		if (heapq_rev != rev)
> +			goto restart;
> +
> +		entry = nextq(&heapq, entry);
> +	}
> +
> +	xnlock_put_irqrestore(&nklock, s);
>  
>  	len -= off;
>  	if (len <= off + count)
> diff --git a/ksrc/nucleus/module.c b/ksrc/nucleus/module.c
> index 141276a..a0efc14 100644
> --- a/ksrc/nucleus/module.c
> +++ b/ksrc/nucleus/module.c
> @@ -100,7 +100,7 @@ int __init __xeno_sys_init(void)
>  #ifndef __XENO_SIM__
>  	ret = xnheap_init_mapped(&__xnsys_global_ppd.sem_heap,
>  				 CONFIG_XENO_OPT_GLOBAL_SEM_HEAPSZ * 1024,
> -				 XNARCH_SHARED_HEAP_FLAGS);
> +				 XNARCH_SHARED_HEAP_FLAGS, "global sem heap");
>  	if (ret)
>  		goto cleanup_arch;
>  #endif
> diff --git a/ksrc/nucleus/pod.c b/ksrc/nucleus/pod.c
> index 370ad28..07fbcdb 100644
> --- a/ksrc/nucleus/pod.c
> +++ b/ksrc/nucleus/pod.c
> @@ -384,9 +384,8 @@ int xnpod_init(void)
>  	heapaddr = xnarch_alloc_host_mem(xnmod_sysheap_size);
>  	if (heapaddr == NULL ||
>  	    xnheap_init(&kheap, heapaddr, xnmod_sysheap_size,
> -			XNHEAP_PAGE_SIZE) != 0) {
> +			XNHEAP_PAGE_SIZE, "main heap") != 0)
>  		return -ENOMEM;
> -	}
>  
>  #if CONFIG_XENO_OPT_SYS_STACKPOOLSZ > 0
>  	/*
> @@ -404,7 +403,7 @@ int xnpod_init(void)
>  	heapaddr = xnarch_alloc_stack_mem(CONFIG_XENO_OPT_SYS_STACKPOOLSZ * 1024);
>  	if (heapaddr == NULL ||
>  	    xnheap_init(&kstacks, heapaddr, CONFIG_XENO_OPT_SYS_STACKPOOLSZ * 1024,
> -			XNHEAP_PAGE_SIZE) != 0) {
> +			XNHEAP_PAGE_SIZE, "stack pool") != 0) {
>  		xnheap_destroy(&kheap, &xnpod_flush_heap, NULL);
>  		return -ENOMEM;
>  	}
> diff --git a/ksrc/nucleus/shadow.c b/ksrc/nucleus/shadow.c
> index d6d1203..e8876d8 100644
> --- a/ksrc/nucleus/shadow.c
> +++ b/ksrc/nucleus/shadow.c
> @@ -1886,7 +1886,9 @@ static void *xnshadow_sys_event(int event, void *data)
>  
>  		err = xnheap_init_mapped(&p->sem_heap,
>  					 CONFIG_XENO_OPT_SEM_HEAPSZ * 1024,
> -					 XNARCH_SHARED_HEAP_FLAGS);
> +					 XNARCH_SHARED_HEAP_FLAGS,
> +					 "private sem heap [%d]",
> +					 current->pid);
>  		if (err) {
>  			xnarch_free_host_mem(p, sizeof(*p));
>  			return ERR_PTR(err);
> @@ -1896,6 +1898,7 @@ static void *xnshadow_sys_event(int event, void *data)
>  
>  	case XNSHADOW_CLIENT_DETACH:
>  		p = ppd2sys((xnshadow_ppd_t *) data);
> +
>  		xnheap_destroy_mapped(&p->sem_heap, post_ppd_release, NULL);
>  
>  		return NULL;
> diff --git a/ksrc/skins/native/heap.c b/ksrc/skins/native/heap.c
> index 886758c..3072f99 100644
> --- a/ksrc/skins/native/heap.c
> +++ b/ksrc/skins/native/heap.c
> @@ -266,7 +266,8 @@ int rt_heap_create(RT_HEAP *heap, const char *name, size_t heapsize, int mode)
>  					 heapsize,
>  					 ((mode & H_DMA) ? GFP_DMA : 0)
>  					 | ((mode & H_NONCACHED) ?
> -					    XNHEAP_GFP_NONCACHED : 0));
> +					    XNHEAP_GFP_NONCACHED : 0),
> +					 "rt_heap: %s", name);
>  		if (err)
>  			return err;
>  
> @@ -286,7 +287,8 @@ int rt_heap_create(RT_HEAP *heap, const char *name, size_t heapsize, int mode)
>  		if (!heapmem)
>  			return -ENOMEM;
>  
> -		err = xnheap_init(&heap->heap_base, heapmem, heapsize, XNHEAP_PAGE_SIZE);
> +		err = xnheap_init(&heap->heap_base, heapmem, heapsize,
> +				  XNHEAP_PAGE_SIZE, "rt_heap: %s", name);
>  		if (err) {
>  			xnarch_free_host_mem(heapmem, heapsize);
>  			return err;
> diff --git a/ksrc/skins/native/pipe.c b/ksrc/skins/native/pipe.c
> index 672fa7a..18d9aa0 100644
> --- a/ksrc/skins/native/pipe.c
> +++ b/ksrc/skins/native/pipe.c
> @@ -295,7 +295,9 @@ int rt_pipe_create(RT_PIPE *pipe, const char *name, int minor, size_t poolsize)
>  			return -ENOMEM;
>  
>  		/* Use natural page size */
> -		err = xnheap_init(&pipe->privpool, poolmem, poolsize, XNHEAP_PAGE_SIZE);
> +		err = xnheap_init(&pipe->privpool, poolmem, poolsize,
> +				  XNHEAP_PAGE_SIZE,
> +				  "rt_pipe: %d / %s", minor, name);
>  		if (err) {
>  			xnarch_free_host_mem(poolmem, poolsize);
>  			return err;
> diff --git a/ksrc/skins/native/queue.c b/ksrc/skins/native/queue.c
> index 249947a..becc0f0 100644
> --- a/ksrc/skins/native/queue.c
> +++ b/ksrc/skins/native/queue.c
> @@ -229,7 +229,8 @@ int rt_queue_create(RT_QUEUE *q,
>  		err = xnheap_init_mapped(&q->bufpool,
>  					 poolsize,
>  					 ((mode & Q_DMA) ? GFP_DMA 
> -					  : XNARCH_SHARED_HEAP_FLAGS));
> +					  : XNARCH_SHARED_HEAP_FLAGS),
> +					 "rt_queue: %s", name);
>  		if (err)
>  			return err;
>  
> @@ -249,7 +250,8 @@ int rt_queue_create(RT_QUEUE *q,
>  		if (!poolmem)
>  			return -ENOMEM;
>  
> -		err = xnheap_init(&q->bufpool, poolmem, poolsize, XNHEAP_PAGE_SIZE);
> +		err = xnheap_init(&q->bufpool, poolmem, poolsize,
> +				  XNHEAP_PAGE_SIZE, "rt_queue: %s", name);
>  		if (err) {
>  			xnarch_free_host_mem(poolmem, poolsize);
>  			return err;
> diff --git a/ksrc/skins/posix/shm.c b/ksrc/skins/posix/shm.c
> index c92096a..0f0bd1a 100644
> --- a/ksrc/skins/posix/shm.c
> +++ b/ksrc/skins/posix/shm.c
> @@ -539,7 +539,9 @@ int ftruncate(int fd, off_t len)
>  			int flags = XNARCH_SHARED_HEAP_FLAGS |
>  				((desc_flags & O_DIRECT) ? GFP_DMA : 0);
>  
> -			err = -xnheap_init_mapped(&shm->heapbase, len, flags);
> +			err = -xnheap_init_mapped(&shm->heapbase, len, flags,
> +						  "posix shm: %s",
> +						  shm->nodebase.name);
>  			if (err)
>  				goto err_up;
>  
> diff --git a/ksrc/skins/psos+/rn.c b/ksrc/skins/psos+/rn.c
> index 98f4500..3eb3ab9 100644
> --- a/ksrc/skins/psos+/rn.c
> +++ b/ksrc/skins/psos+/rn.c
> @@ -191,7 +191,8 @@ u_long rn_create(const char *name,
>  
>  		rnsize = xnheap_rounded_size(rnsize, PAGE_SIZE),
>  		err = xnheap_init_mapped(&rn->heapbase, rnsize, 
> -					 XNARCH_SHARED_HEAP_FLAGS);
> +					 XNARCH_SHARED_HEAP_FLAGS,
> +					 "psosrn: %s", name);
>  
>  		if (err)
>  			return err;
> @@ -206,7 +207,8 @@ u_long rn_create(const char *name,
>  		 * Caller must have accounted for overhead and
>  		 * alignment since it supplies the memory space.
>  		 */
> -		if (xnheap_init(&rn->heapbase, rnaddr, rnsize, XNHEAP_PAGE_SIZE) != 0)
> +		if (xnheap_init(&rn->heapbase, rnaddr, rnsize,
> +				XNHEAP_PAGE_SIZE, "psosrn: %s", name) != 0)
>  			return ERR_TINYRN;
>  
>  	inith(&rn->link);
> diff --git a/ksrc/skins/rtai/shm.c b/ksrc/skins/rtai/shm.c
> index 21c3b07..0b33a57 100644
> --- a/ksrc/skins/rtai/shm.c
> +++ b/ksrc/skins/rtai/shm.c
> @@ -151,7 +151,8 @@ static xnshm_a_t *create_new_heap(unsigned long name, int heapsize, int suprt)
>  	err = xnheap_init_mapped(p->heap,
>  				 heapsize,
>  				 (suprt == USE_GFP_KERNEL ? GFP_KERNEL : 0)
> -				 | XNARCH_SHARED_HEAP_FLAGS);
> +				 | XNARCH_SHARED_HEAP_FLAGS,
> +				 "rtai heap: 0x%lx", name);
>  #else /* !CONFIG_XENO_OPT_PERVASIVE */
>  	{
>  		void *heapmem;
> @@ -164,7 +165,9 @@ static xnshm_a_t *create_new_heap(unsigned long name, int heapsize, int suprt)
>  			err = -ENOMEM;
>  		} else {
>  
> -			err = xnheap_init(p->heap, heapmem, heapsize, XNHEAP_PAGE_SIZE);
> +			err = xnheap_init(p->heap, heapmem, heapsize,
> +					  XNHEAP_PAGE_SIZE,
> +					  "rtai heap: 0x%lx", name);
>  			if (err) {
>  				xnarch_free_host_mem(heapmem, heapsize);
>  			}
> diff --git a/ksrc/skins/vrtx/heap.c b/ksrc/skins/vrtx/heap.c
> index de60792..8f8a1d5 100644
> --- a/ksrc/skins/vrtx/heap.c
> +++ b/ksrc/skins/vrtx/heap.c
> @@ -165,7 +165,8 @@ int sc_hcreate(char *heapaddr, u_long heapsize, unsigned log2psize, int *errp)
>  #ifdef CONFIG_XENO_OPT_PERVASIVE
>  		heapsize = xnheap_rounded_size(heapsize, PAGE_SIZE);
>  		err = xnheap_init_mapped(&heap->sysheap, heapsize, 
> -					 XNARCH_SHARED_HEAP_FLAGS);
> +					 XNARCH_SHARED_HEAP_FLAGS,
> +					 "vrtx sysheap");
>  
>  		if (err) {
>  			*errp = ER_MEM;
> @@ -184,7 +185,8 @@ int sc_hcreate(char *heapaddr, u_long heapsize, unsigned log2psize, int *errp)
>  		 * Caller must have accounted for overhead and
>  		 * alignment since it supplies the memory space.
>  		 */
> -		err = xnheap_init(&heap->sysheap, heapaddr, heapsize, pagesize);
> +		err = xnheap_init(&heap->sysheap, heapaddr, heapsize, pagesize,
> +				  "vrtx sysheap");
>  
>  		if (err) {
>  			if (err == -EINVAL)
> diff --git a/ksrc/skins/vrtx/syscall.c b/ksrc/skins/vrtx/syscall.c
> index 1623066..e3d2e31 100644
> --- a/ksrc/skins/vrtx/syscall.c
> +++ b/ksrc/skins/vrtx/syscall.c
> @@ -1052,7 +1052,8 @@ static int __sc_pcreate(struct pt_regs *regs)
>  	/* Block size. */
>  	bsize = __xn_reg_arg3(regs);
>  
> -	err = xnheap_init_mapped(ptheap, ptsize, XNARCH_SHARED_HEAP_FLAGS);
> +	err = xnheap_init_mapped(ptheap, ptsize, XNARCH_SHARED_HEAP_FLAGS,
> +				 "vrtx ptheap");
>  
>  	if (err)
>  		goto free_heap;
> 
-- 
Philippe.




^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Xenomai-core] [PATCH] native: Avoid double release on queue/heap auto-cleanup
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 8/9] native: Fix memory leak on heap/queue auto-deletion Jan Kiszka
@ 2009-10-22 10:30   ` Jan Kiszka
  0 siblings, 0 replies; 33+ messages in thread
From: Jan Kiszka @ 2009-10-22 10:30 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai-core

Jan Kiszka wrote:
> We are currently leaking user space heap/queue objects when the owner
> terminates without deleting them before. Fix it by releasing the objects
> in the corresponding cleanup callbacks which are also called on owner
> termination.

Just realized that we need this patch in addition:

------------>

Commit 3a7330b164 also requires this patch to avoid that the queue and
heap objects are released twice on automatic cleanup (via the xnheap
release handler and via the ppd object queue flush).

Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
---

 include/native/heap.h  |    2 +-
 include/native/queue.h |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/native/heap.h b/include/native/heap.h
index 5444a43..0c2a7a2 100644
--- a/include/native/heap.h
+++ b/include/native/heap.h
@@ -117,7 +117,7 @@ void __native_heap_pkg_cleanup(void);
 
 static inline void __native_heap_flush_rq(xnqueue_t *rq)
 {
-	xeno_flush_rq(RT_HEAP, rq, heap);
+	xeno_flush_rq_norelease(RT_HEAP, rq, heap);
 }
 
 int rt_heap_delete_inner(RT_HEAP *heap,
diff --git a/include/native/queue.h b/include/native/queue.h
index 77925c2..2951c42 100644
--- a/include/native/queue.h
+++ b/include/native/queue.h
@@ -129,7 +129,7 @@ void __native_queue_pkg_cleanup(void);
 
 static inline void __native_queue_flush_rq(xnqueue_t *rq)
 {
-	xeno_flush_rq(RT_QUEUE, rq, queue);
+	xeno_flush_rq_norelease(RT_QUEUE, rq, queue);
 }
 
 ssize_t rt_queue_receive_inner(RT_QUEUE *q, void **bufp,


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [Xenomai-core] [PATCH v3 9/9] nucleus: Include all heaps in statistics
  2009-10-20 23:41   ` Philippe Gerum
@ 2009-10-22 10:52     ` Jan Kiszka
  2009-11-11 12:59       ` Jan Kiszka
  0 siblings, 1 reply; 33+ messages in thread
From: Jan Kiszka @ 2009-10-22 10:52 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

Philippe Gerum wrote:
> On Tue, 2009-10-20 at 13:37 +0200, Jan Kiszka wrote:
>> This extends /proc/xenomai/heap with statistics about all currently used
>> heaps. It takes care to flush nklock while iterating of this potentially
>> long list.
>>
>> Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
>> ---
>>
>>  include/nucleus/heap.h    |   12 +++-
>>  ksrc/drivers/ipc/iddp.c   |    3 +
>>  ksrc/drivers/ipc/xddp.c   |    6 ++
>>  ksrc/nucleus/heap.c       |  131 ++++++++++++++++++++++++++++++++++++++++-----
>>  ksrc/nucleus/module.c     |    2 -
>>  ksrc/nucleus/pod.c        |    5 +-
>>  ksrc/nucleus/shadow.c     |    5 +-
>>  ksrc/skins/native/heap.c  |    6 +-
>>  ksrc/skins/native/pipe.c  |    4 +
>>  ksrc/skins/native/queue.c |    6 +-
>>  ksrc/skins/posix/shm.c    |    4 +
>>  ksrc/skins/psos+/rn.c     |    6 +-
>>  ksrc/skins/rtai/shm.c     |    7 ++
>>  ksrc/skins/vrtx/heap.c    |    6 +-
>>  ksrc/skins/vrtx/syscall.c |    3 +
>>  15 files changed, 169 insertions(+), 37 deletions(-)
>>
>> diff --git a/include/nucleus/heap.h b/include/nucleus/heap.h
>> index 44db738..f653cd7 100644
>> --- a/include/nucleus/heap.h
>> +++ b/include/nucleus/heap.h
>> @@ -115,6 +115,10 @@ typedef struct xnheap {
>>
>>       XNARCH_DECL_DISPLAY_CONTEXT();
>>
>> +     xnholder_t stat_link;   /* Link in heapq */
>> +
>> +     char name[48];
> 
> s,48,XNOBJECT_NAME_LEN

OK, but XNOBJECT_NAME_LEN+16 (due to class prefixes and additional
information like the minor ID).

> 
>> +
>>  } xnheap_t;
>>
>>  extern xnheap_t kheap;
>> @@ -202,7 +206,8 @@ void xnheap_cleanup_proc(void);
>>
>>  int xnheap_init_mapped(xnheap_t *heap,
>>                      u_long heapsize,
>> -                    int memflags);
>> +                    int memflags,
>> +                    const char *name, ...);
>>
> 
> The va_list is handy, but this breaks the common pattern used throughout
> the rest of the nucleus, based on passing pre-formatted labels. So
> either we make all creation calls use va_lists (but xnthread would need
> more work), or we make xnheap_init_mapped use the not-so-handy current
> form.
> 
> Actually, providing xnheap_set_name() and a name parameter/va_list to
> xnheap_init* is one too many, this clutters an inner interface
> uselessly. The latter should go away, assuming that anon heaps may still
> exist.

If we want to print all heaps, we should at least set a name indicating
their class. And therefore we need to pass a descriptive name along with
/every/ heap initialization. Forcing the majority of xnheap_init users
to additionally issue xnheap_set_name is the clutter I wanted to
avoid. Only a minority needs this split-up, and therefore you see both
interfaces in my patch.
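
For illustration, a minimal usage sketch of how the two interfaces are
meant to combine; "mydrv", the port numbers and the error handling are
made up here, this is not code from the patch:

	/* Hypothetical example only. */
	static int mydrv_create_pool(xnheap_t *pool, u_long poolsz, int port)
	{
		void *poolmem = xnarch_alloc_host_mem(poolsz);
		int ret;

		if (poolmem == NULL)
			return -ENOMEM;

		/* Common case: the final label is known at init time. */
		ret = xnheap_init(pool, poolmem, poolsz, XNHEAP_PAGE_SIZE,
				  "mydrv pool: %d", port);
		if (ret)
			xnarch_free_host_mem(poolmem, poolsz);

		return ret;
	}

	/* Minority case: the ID is only assigned later, e.g. at bind time. */
	static void mydrv_bind_pool(xnheap_t *pool, int assigned_port)
	{
		xnheap_set_name(pool, "mydrv pool: %d", assigned_port);
	}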

> 
>>  void xnheap_destroy_mapped(xnheap_t *heap,
>>                          void (*release)(struct xnheap *heap),
>> @@ -224,7 +229,10 @@ void xnheap_destroy_mapped(xnheap_t *heap,
>>  int xnheap_init(xnheap_t *heap,
>>               void *heapaddr,
>>               u_long heapsize,
>> -             u_long pagesize);
>> +             u_long pagesize,
>> +             const char *name, ...);
>> +
>> +void xnheap_set_name(xnheap_t *heap, const char *name, ...);
>>
>>  void xnheap_destroy(xnheap_t *heap,
>>                   void (*flushfn)(xnheap_t *heap,
>> diff --git a/ksrc/drivers/ipc/iddp.c b/ksrc/drivers/ipc/iddp.c
>> index a407946..b6382f1 100644
>> --- a/ksrc/drivers/ipc/iddp.c
>> +++ b/ksrc/drivers/ipc/iddp.c
>> @@ -559,7 +559,8 @@ static int __iddp_bind_socket(struct rtipc_private *priv,
>>               }
>>
>>               ret = xnheap_init(&sk->privpool,
>> -                               poolmem, poolsz, XNHEAP_PAGE_SIZE);
>> +                               poolmem, poolsz, XNHEAP_PAGE_SIZE,
>> +                               "ippd: %d", port);
>>               if (ret) {
>>                       xnarch_free_host_mem(poolmem, poolsz);
>>                       goto fail;
>> diff --git a/ksrc/drivers/ipc/xddp.c b/ksrc/drivers/ipc/xddp.c
>> index f62147a..a5dafef 100644
>> --- a/ksrc/drivers/ipc/xddp.c
>> +++ b/ksrc/drivers/ipc/xddp.c
>> @@ -703,7 +703,7 @@ static int __xddp_bind_socket(struct rtipc_private *priv,
>>               }
>>
>>               ret = xnheap_init(&sk->privpool,
>> -                               poolmem, poolsz, XNHEAP_PAGE_SIZE);
>> +                               poolmem, poolsz, XNHEAP_PAGE_SIZE, "");
>>               if (ret) {
>>                       xnarch_free_host_mem(poolmem, poolsz);
>>                       goto fail;
>> @@ -746,6 +746,10 @@ static int __xddp_bind_socket(struct rtipc_private *priv,
>>       sk->minor = ret;
>>       sa->sipc_port = ret;
>>       sk->name = *sa;
>> +
>> +     if (poolsz > 0)
>> +             xnheap_set_name(sk->bufpool, "xddp: %d", sa->sipc_port);
>> +
>>       /* Set default destination if unset at binding time. */
>>       if (sk->peer.sipc_port < 0)
>>               sk->peer = *sa;
>> diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
>> index 96c46f8..793d1c5 100644
>> --- a/ksrc/nucleus/heap.c
>> +++ b/ksrc/nucleus/heap.c
>> @@ -76,6 +76,9 @@ EXPORT_SYMBOL_GPL(kheap);
>>  xnheap_t kstacks;    /* Private stack pool */
>>  #endif
>>
>> +static DEFINE_XNQUEUE(heapq);        /* Heap list for /proc reporting */
>> +static unsigned long heapq_rev;
>> +
>>  static void init_extent(xnheap_t *heap, xnextent_t *extent)
>>  {
>>       caddr_t freepage;
>> @@ -108,7 +111,7 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
>>   */
>>
>>  /*!
>> - * \fn xnheap_init(xnheap_t *heap,void *heapaddr,u_long heapsize,u_long pagesize)
>> + * \fn xnheap_init(xnheap_t *heap,void *heapaddr,u_long heapsize,u_long pagesize,const char *name,...)
>>   * \brief Initialize a memory heap.
>>   *
>>   * Initializes a memory heap suitable for time-bounded allocation
>> @@ -145,6 +148,10 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
>>   * best one for your needs. In the current implementation, pagesize
>>   * must be a power of two in the range [ 8 .. 32768 ] inclusive.
>>   *
>> + * @param name Name displayed in statistic outputs. This parameter can
>> + * be a format string, in which case succeeding parameters will be used
>> + * to resolve the final name.
>> + *
>>   * @return 0 is returned upon success, or one of the following error
>>   * codes:
>>   *
>> @@ -161,12 +168,13 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
>>   * Rescheduling: never.
>>   */
>>
>> -int xnheap_init(xnheap_t *heap,
>> -             void *heapaddr, u_long heapsize, u_long pagesize)
>> +static int xnheap_init_va(xnheap_t *heap, void *heapaddr, u_long heapsize,
>> +                       u_long pagesize, const char *name, va_list args)
>>  {
>>       unsigned cpu, nr_cpus = xnarch_num_online_cpus();
>>       u_long hdrsize, shiftsize, pageshift;
>>       xnextent_t *extent;
>> +     spl_t s;
>>
>>       /*
>>        * Perform some parametrical checks first.
>> @@ -232,12 +240,71 @@ int xnheap_init(xnheap_t *heap,
>>
>>       appendq(&heap->extents, &extent->link);
>>
>> +     vsnprintf(heap->name, sizeof(heap->name), name, args);
>> +
>> +     xnlock_get_irqsave(&nklock, s);
>> +     appendq(&heapq, &heap->stat_link);
>> +     heapq_rev++;
>> +     xnlock_put_irqrestore(&nklock, s);
>> +
>>       xnarch_init_display_context(heap);
>>
>>       return 0;
>>  }
>> +
>> +int xnheap_init(xnheap_t *heap, void *heapaddr, u_long heapsize,
>> +             u_long pagesize, const char *name, ...)
>> +{
>> +     va_list args;
>> +     int ret;
>> +
>> +     va_start(args, name);
>> +     ret = xnheap_init_va(heap, heapaddr, heapsize, pagesize, name, args);
>> +     va_end(args);
>> +
>> +     return ret;
>> +}
>>  EXPORT_SYMBOL_GPL(xnheap_init);
>>
>> +/*!
>> + * \fn xnheap_set_name(xnheap_t *heap,const char *name,...)
>> + * \brief Overwrite the heap's name.
> 
> At some point, there should be a common base struct containing
> everything needed to manipulate named objects, we would include in
> higher level containers. It seems we are over-specializing quite generic
> work.

Probably right (though this case remains special to some degree because
names may consist of numeric IDs that are formatted on visualization). I
think I should rename 'name' to 'display_name' or so to clarify that we
are not dealing with a generic object name here.
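
To make that concrete, a rough sketch of the kind of shared base such a
rework could introduce; the struct and helper names below are purely
hypothetical, nothing like this exists in the tree yet:

	/* Hypothetical sketch only. */
	struct xnobject_label {
		xnholder_t stat_link;	/* link in a per-class stat queue */
		char display_name[XNOBJECT_NAME_LEN + 16];
	};

	static inline void
	xnobject_label_vformat(struct xnobject_label *label,
			       const char *fmt, va_list args)
	{
		vsnprintf(label->display_name,
			  sizeof(label->display_name), fmt, args);
	}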

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped Jan Kiszka
@ 2009-10-24 17:22   ` Philippe Gerum
  2009-11-02 16:04     ` Philippe Gerum
  0 siblings, 1 reply; 33+ messages in thread
From: Philippe Gerum @ 2009-10-24 17:22 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: xenomai

On Tue, 2009-10-20 at 13:37 +0200, Jan Kiszka wrote:
> Allowing xnheap_delete_mapped to return an error and then attempting to
> recover from it does not work out very well: Corner cases are racy,
> intransparent to the user, and proper error handling imposes a lot of
> complexity on the caller - if it actually bothers to check the return
> value...
> 
> Fortunately, there is no reason for this function to fail: If the heap
> is still mapped, just install the provided cleanup handler and switch to
> deferred removal. If the unmapping fails, we either raced with some
> other caller of unmap or user space provided a bogus address, or
> something else is wrong. In any case, leaving the cleanup callback
> behind is the best we can do anyway.
> 
> Removing the return value immediately allows to simplify the callers,
> namely rt_queue_delete and rt_heap_delete.
> 
> Note: This is still not 100% waterproof. If we issue
> xnheap_destroy_mapped from module cleanup passing a release handler
> that belongs to the module text, deferred release will cause a crash.
> But this corner case is no new regression, so let's keep the head in the
> sand.

I agree with this one, eventually. This does make things clearer, and
removes some opportunities for the upper interfaces to shoot themselves
in the foot. Merged, thanks.

> 
> Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
> ---
> 
>  include/nucleus/heap.h    |    6 +++---
>  ksrc/nucleus/heap.c       |   45 +++++++++++++++++++++++++++------------------
>  ksrc/skins/native/heap.c  |   21 ++++++---------------
>  ksrc/skins/native/queue.c |   21 ++++++---------------
>  ksrc/skins/rtai/shm.c     |    6 +-----
>  5 files changed, 43 insertions(+), 56 deletions(-)
> 
> diff --git a/include/nucleus/heap.h b/include/nucleus/heap.h
> index ca691bf..44db738 100644
> --- a/include/nucleus/heap.h
> +++ b/include/nucleus/heap.h
> @@ -204,9 +204,9 @@ int xnheap_init_mapped(xnheap_t *heap,
>  		       u_long heapsize,
>  		       int memflags);
>  
> -int xnheap_destroy_mapped(xnheap_t *heap,
> -			  void (*release)(struct xnheap *heap),
> -			  void __user *mapaddr);
> +void xnheap_destroy_mapped(xnheap_t *heap,
> +			   void (*release)(struct xnheap *heap),
> +			   void __user *mapaddr);
>  
>  #define xnheap_mapped_offset(heap,ptr) \
>  (((caddr_t)(ptr)) - ((caddr_t)(heap)->archdep.heapbase))
> diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
> index 4958daa..96c46f8 100644
> --- a/ksrc/nucleus/heap.c
> +++ b/ksrc/nucleus/heap.c
> @@ -1173,42 +1173,51 @@ int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
>  	return 0;
>  }
>  
> -int xnheap_destroy_mapped(xnheap_t *heap, void (*release)(struct xnheap *heap),
> -			  void __user *mapaddr)
> +void xnheap_destroy_mapped(xnheap_t *heap,
> +			   void (*release)(struct xnheap *heap),
> +			   void __user *mapaddr)
>  {
> -	int ret = 0, ccheck;
>  	unsigned long len;
>  
> -	ccheck = mapaddr ? 1 : 0;
> +	/*
> +	 * Trying to unmap user memory without providing a release handler for
> +	 * deferred cleanup is a bug.
> +	 */
> +	XENO_ASSERT(NUCLEUS, !mapaddr || release, /* nop */);
>  
>  	spin_lock(&kheapq_lock);
>  
> -	if (heap->archdep.numaps > ccheck) {
> -		heap->archdep.release = release;
> -		spin_unlock(&kheapq_lock);
> -		return -EBUSY;
> -	}
> -
>  	removeq(&kheapq, &heap->link); /* Prevent further mapping. */
> +
> +	heap->archdep.release = release;
> +
> +	if (heap->archdep.numaps == 0)
> +		mapaddr = NULL; /* nothing left to unmap */
> +	else
> +		release = NULL; /* will be called by Linux on unmap */
> +
>  	spin_unlock(&kheapq_lock);
>  
>  	len = xnheap_extentsize(heap);
>  
>  	if (mapaddr) {
>  		down_write(&current->mm->mmap_sem);
> -		heap->archdep.release = NULL;
> -		ret = do_munmap(current->mm, (unsigned long)mapaddr, len);
> +		do_munmap(current->mm, (unsigned long)mapaddr, len);
>  		up_write(&current->mm->mmap_sem);
>  	}
>  
> -	if (ret == 0) {
> +	if (heap->archdep.numaps > 0) {
> +		/* The release handler is supposed to clean up the rest. */
> +		XENO_ASSERT(NUCLEUS, heap->archdep.release, /* nop */);
> +		return;
> +	}
> +
> +	if (!mapaddr) {
>  		__unreserve_and_free_heap(heap->archdep.heapbase, len,
>  					  heap->archdep.kmflags);
>  		if (release)
>  			release(heap);
>  	}
> -
> -	return ret;
>  }
>  
>  static struct file_operations xnheap_fops = {
> @@ -1260,11 +1269,11 @@ int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
>  	return -ENOMEM;
>  }
>  
> -int xnheap_destroy_mapped(xnheap_t *heap, void (*release)(struct xnheap *heap),
> -			  void __user *mapaddr)
> +void xnheap_destroy_mapped(xnheap_t *heap,
> +			   void (*release)(struct xnheap *heap),
> +			   void __user *mapaddr)
>  {
>  	xnheap_destroy(heap, &xnheap_free_extent, NULL);
> -	return 0;
>  }
>  #endif /* !CONFIG_XENO_OPT_PERVASIVE */
>  
> diff --git a/ksrc/skins/native/heap.c b/ksrc/skins/native/heap.c
> index 0d7ad83..f7411e8 100644
> --- a/ksrc/skins/native/heap.c
> +++ b/ksrc/skins/native/heap.c
> @@ -357,9 +357,6 @@ static void __heap_post_release(struct xnheap *h)
>   *
>   * @return 0 is returned upon success. Otherwise:
>   *
> - * - -EBUSY is returned if @a heap is in use by another process and the
> - * descriptor is not destroyed.
> - *
>   * - -EINVAL is returned if @a heap is not a heap descriptor.
>   *
>   * - -EIDRM is returned if @a heap is a deleted heap descriptor.
> @@ -379,7 +376,7 @@ static void __heap_post_release(struct xnheap *h)
>  
>  int rt_heap_delete_inner(RT_HEAP *heap, void __user *mapaddr)
>  {
> -	int err = 0;
> +	int err;
>  	spl_t s;
>  
>  	if (!xnpod_root_p())
> @@ -412,22 +409,16 @@ int rt_heap_delete_inner(RT_HEAP *heap, void __user *mapaddr)
>  
>  #ifdef CONFIG_XENO_OPT_PERVASIVE
>  	if (heap->mode & H_MAPPABLE)
> -		err = xnheap_destroy_mapped(&heap->heap_base,
> -					    __heap_post_release, mapaddr);
> +		xnheap_destroy_mapped(&heap->heap_base,
> +				      __heap_post_release, mapaddr);
>  	else
>  #endif /* CONFIG_XENO_OPT_PERVASIVE */
> +	{
>  		xnheap_destroy(&heap->heap_base, &__heap_flush_private, NULL);
> -
> -	xnlock_get_irqsave(&nklock, s);
> -
> -	if (err)
> -		heap->magic = XENO_HEAP_MAGIC;
> -	else if (!(heap->mode & H_MAPPABLE))
>  		__heap_post_release(&heap->heap_base);
> +	}
>  
> -	xnlock_put_irqrestore(&nklock, s);
> -
> -	return err;
> +	return 0;
>  }
>  
>  int rt_heap_delete(RT_HEAP *heap)
> diff --git a/ksrc/skins/native/queue.c b/ksrc/skins/native/queue.c
> index f913675..3592a4a 100644
> --- a/ksrc/skins/native/queue.c
> +++ b/ksrc/skins/native/queue.c
> @@ -326,9 +326,6 @@ static void __queue_post_release(struct xnheap *heap)
>   * - -EPERM is returned if this service was called from an
>   * asynchronous context.
>   *
> - * - -EBUSY is returned if an attempt is made to delete a shared queue
> - * which is still bound to a process.
> - *
>   * Environments:
>   *
>   * This service can be called from:
> @@ -341,7 +338,7 @@ static void __queue_post_release(struct xnheap *heap)
>  
>  int rt_queue_delete_inner(RT_QUEUE *q, void __user *mapaddr)
>  {
> -	int err = 0;
> +	int err;
>  	spl_t s;
>  
>  	if (xnpod_asynch_p())
> @@ -373,22 +370,16 @@ int rt_queue_delete_inner(RT_QUEUE *q, void __user *mapaddr)
>  
>  #ifdef CONFIG_XENO_OPT_PERVASIVE
>  	if (q->mode & Q_SHARED)
> -		err = xnheap_destroy_mapped(&q->bufpool,
> -					    __queue_post_release, mapaddr);
> +		xnheap_destroy_mapped(&q->bufpool,
> +				      __queue_post_release, mapaddr);
>  	else
>  #endif /* CONFIG_XENO_OPT_PERVASIVE */
> +	{
>  		xnheap_destroy(&q->bufpool, &__queue_flush_private, NULL);
> -
> -	xnlock_get_irqsave(&nklock, s);
> -
> -	if (err)
> -		q->magic = XENO_QUEUE_MAGIC;
> -	else if (!(q->mode & Q_SHARED))
>  		__queue_post_release(&q->bufpool);
> +	}
>  
> -	xnlock_put_irqrestore(&nklock, s);
> -
> -	return err;
> +	return 0;
>  }
>  
>  int rt_queue_delete(RT_QUEUE *q)
> diff --git a/ksrc/skins/rtai/shm.c b/ksrc/skins/rtai/shm.c
> index fddf455..4c56495 100644
> --- a/ksrc/skins/rtai/shm.c
> +++ b/ksrc/skins/rtai/shm.c
> @@ -293,13 +293,11 @@ static int _shm_free(unsigned long name)
>  				 * [YES!]
>  				 */
>  #ifdef CONFIG_XENO_OPT_PERVASIVE
> -				ret = xnheap_destroy_mapped(p->heap, NULL, NULL);
> +				xnheap_destroy_mapped(p->heap, NULL, NULL);
>  #else /* !CONFIG_XENO_OPT_PERVASIVE */
>  				xnheap_destroy(p->heap,
>  					       &__heap_flush_private, NULL);
>  #endif /* !CONFIG_XENO_OPT_PERVASIVE */
> -				if (ret)
> -					goto unlock_and_exit;
>  				xnheap_free(&kheap, p->heap);
>  			}
>  			removeq(&xnshm_allocq, &p->link);
> @@ -311,8 +309,6 @@ static int _shm_free(unsigned long name)
>  		holder = nextq(&xnshm_allocq, holder);
>  	}
>  
> -      unlock_and_exit:
> -
>  	xnlock_put_irqrestore(&nklock, s);
>  
>  	return ret;
> 
-- 
Philippe.




^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-core] [PATCH v3 6/9] rtai: Try to fix _shm_free
  2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 6/9] rtai: Try to fix _shm_free Jan Kiszka
@ 2009-10-24 17:25   ` Philippe Gerum
  0 siblings, 0 replies; 33+ messages in thread
From: Philippe Gerum @ 2009-10-24 17:25 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: xenomai

On Tue, 2009-10-20 at 13:37 +0200, Jan Kiszka wrote:
> This is totally untested but should not make things worse than they
> already are.

I cowardly agree with this accurate analysis.

Looks ok anyway, and if it eventually turns out not to be, well, there
are one or two -rc's left for the one or two users of this skin to check
and validate those changes.

> 
> Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
> ---
> 
>  ksrc/skins/rtai/shm.c |   31 +++++++++++++++++++------------
>  1 files changed, 19 insertions(+), 12 deletions(-)
> 
> diff --git a/ksrc/skins/rtai/shm.c b/ksrc/skins/rtai/shm.c
> index 4c56495..21c3b07 100644
> --- a/ksrc/skins/rtai/shm.c
> +++ b/ksrc/skins/rtai/shm.c
> @@ -260,19 +260,24 @@ void *rt_heap_open(unsigned long name, int size, int suprt)
>  	return _shm_alloc(name, size, suprt, 0, &opaque);
>  }
>  
> -#ifndef CONFIG_XENO_OPT_PERVASIVE
> +#ifdef CONFIG_XENO_OPT_PERVASIVE
> +static void __heap_flush_shared(xnheap_t *heap)
> +{
> +	xnheap_free(&kheap, heap);
> +}
> +#else /* !CONFIG_XENO_OPT_PERVASIVE */
>  static void __heap_flush_private(xnheap_t *heap,
>  				 void *heapmem, u_long heapsize, void *cookie)
>  {
>  	xnarch_free_host_mem(heapmem, heapsize);
>  }
> -#endif /* CONFIG_XENO_OPT_PERVASIVE */
> +#endif /* !CONFIG_XENO_OPT_PERVASIVE */
>  
>  static int _shm_free(unsigned long name)
>  {
> -	int ret = 0;
>  	xnholder_t *holder;
>  	xnshm_a_t *p;
> +	int ret;
>  	spl_t s;
>  
>  	xnlock_get_irqsave(&nklock, s);
> @@ -283,27 +288,29 @@ static int _shm_free(unsigned long name)
>  		p = link2shma(holder);
>  
>  		if (p->name == name && --p->ref == 0) {
> +			removeq(&xnshm_allocq, &p->link);
>  			if (p->handle)
>  				xnregistry_remove(p->handle);
> +
> +			xnlock_put_irqrestore(&nklock, s);
> +
>  			if (p->heap == &kheap)
>  				xnheap_free(&kheap, p->chunk);
>  			else {
> -				/* Should release lock here? 
> -				 * Can destroy_mapped suspend ?
> -				 * [YES!]
> -				 */
>  #ifdef CONFIG_XENO_OPT_PERVASIVE
> -				xnheap_destroy_mapped(p->heap, NULL, NULL);
> +				xnheap_destroy_mapped(p->heap,
> +						      __heap_flush_shared,
> +						      NULL);
>  #else /* !CONFIG_XENO_OPT_PERVASIVE */
>  				xnheap_destroy(p->heap,
>  					       &__heap_flush_private, NULL);
> -#endif /* !CONFIG_XENO_OPT_PERVASIVE */
>  				xnheap_free(&kheap, p->heap);
> +#endif /* !CONFIG_XENO_OPT_PERVASIVE */
>  			}
> -			removeq(&xnshm_allocq, &p->link);
>  			ret = p->size;
>  			xnheap_free(&kheap, p);
> -			break;
> +
> +			return ret;
>  		}
>  
>  		holder = nextq(&xnshm_allocq, holder);
> @@ -311,7 +318,7 @@ static int _shm_free(unsigned long name)
>  
>  	xnlock_put_irqrestore(&nklock, s);
>  
> -	return ret;
> +	return 0;
>  }
>  
>  int rt_shm_free(unsigned long name)
> 
-- 
Philippe.




^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped
  2009-10-24 17:22   ` Philippe Gerum
@ 2009-11-02 16:04     ` Philippe Gerum
  2009-11-02 16:41       ` Jan Kiszka
  0 siblings, 1 reply; 33+ messages in thread
From: Philippe Gerum @ 2009-11-02 16:04 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: xenomai

On Sat, 2009-10-24 at 19:22 +0200, Philippe Gerum wrote:
> On Tue, 2009-10-20 at 13:37 +0200, Jan Kiszka wrote:
> > Allowing xnheap_delete_mapped to return an error and then attempting to
> > recover from it does not work out very well: Corner cases are racy,
> > intransparent to the user, and proper error handling imposes a lot of
> > complexity on the caller - if it actually bothers to check the return
> > value...
> > 
> > Fortunately, there is no reason for this function to fail: If the heap
> > is still mapped, just install the provide cleanup handler and switch to
> > deferred removal. If the unmapping fails, we either raced with some
> > other caller of unmap or user space provided a bogus address, or
> > something else is wrong. In any case, leaving the cleanup callback
> > behind is the best we can do anyway.
> > 
> > Removing the return value immediately allows to simplify the callers,
> > namemly rt_queue_delete and rt_heap_delete.
> > 
> > Note: This is still not 100% waterproof. If we issue
> > xnheap_destroy_mapped from module cleanup passing a release handler
> > that belongs to the module text, deferred release will cause a crash.
> > But this corner case is no new regression, so let's keep the head in the
> > sand.
> 
> I agree with this one, eventually. This does make things clearer, and
> removes some opportunities for the upper interfaces to shot themselves
> in the foot. Merged, thanks.

Well, actually, it does make things clearer, but it is broken. Enabling
list debugging makes the nucleus pull the brake after a double unlink in
vmclose().

Basically, the issue is that calling rt_queue/heap_delete() explicitly
from userland will break, due to the vmclose() handler being indirectly
called by do_munmap() for the last mapping. The nasty thing is that
without debugging on, kheapq is just silently trashed.

Fix is on its way, along with nommu support for shared heaps as well.
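
For readers following the thread, roughly the sequence that bites, as I
read the debug splat (simplified, not actual code):

	/*
	 *   rt_heap_delete()
	 *     xnheap_destroy_mapped()
	 *       removeq(&kheapq, &heap->link);     <- first unlink
	 *       do_munmap()                         ... drops the last mapping
	 *         vmclose()
	 *           removeq(&kheapq, &heap->link); <- second unlink
	 *
	 * With list debugging on the nucleus stops right there; without it,
	 * kheapq is silently corrupted.
	 */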

> > 
> > Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
> > ---
> > 
> >  include/nucleus/heap.h    |    6 +++---
> >  ksrc/nucleus/heap.c       |   45 +++++++++++++++++++++++++++------------------
> >  ksrc/skins/native/heap.c  |   21 ++++++---------------
> >  ksrc/skins/native/queue.c |   21 ++++++---------------
> >  ksrc/skins/rtai/shm.c     |    6 +-----
> >  5 files changed, 43 insertions(+), 56 deletions(-)
> > 
> > diff --git a/include/nucleus/heap.h b/include/nucleus/heap.h
> > index ca691bf..44db738 100644
> > --- a/include/nucleus/heap.h
> > +++ b/include/nucleus/heap.h
> > @@ -204,9 +204,9 @@ int xnheap_init_mapped(xnheap_t *heap,
> >  		       u_long heapsize,
> >  		       int memflags);
> >  
> > -int xnheap_destroy_mapped(xnheap_t *heap,
> > -			  void (*release)(struct xnheap *heap),
> > -			  void __user *mapaddr);
> > +void xnheap_destroy_mapped(xnheap_t *heap,
> > +			   void (*release)(struct xnheap *heap),
> > +			   void __user *mapaddr);
> >  
> >  #define xnheap_mapped_offset(heap,ptr) \
> >  (((caddr_t)(ptr)) - ((caddr_t)(heap)->archdep.heapbase))
> > diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
> > index 4958daa..96c46f8 100644
> > --- a/ksrc/nucleus/heap.c
> > +++ b/ksrc/nucleus/heap.c
> > @@ -1173,42 +1173,51 @@ int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
> >  	return 0;
> >  }
> >  
> > -int xnheap_destroy_mapped(xnheap_t *heap, void (*release)(struct xnheap *heap),
> > -			  void __user *mapaddr)
> > +void xnheap_destroy_mapped(xnheap_t *heap,
> > +			   void (*release)(struct xnheap *heap),
> > +			   void __user *mapaddr)
> >  {
> > -	int ret = 0, ccheck;
> >  	unsigned long len;
> >  
> > -	ccheck = mapaddr ? 1 : 0;
> > +	/*
> > +	 * Trying to unmap user memory without providing a release handler for
> > +	 * deferred cleanup is a bug.
> > +	 */
> > +	XENO_ASSERT(NUCLEUS, !mapaddr || release, /* nop */);
> >  
> >  	spin_lock(&kheapq_lock);
> >  
> > -	if (heap->archdep.numaps > ccheck) {
> > -		heap->archdep.release = release;
> > -		spin_unlock(&kheapq_lock);
> > -		return -EBUSY;
> > -	}
> > -
> >  	removeq(&kheapq, &heap->link); /* Prevent further mapping. */
> > +
> > +	heap->archdep.release = release;
> > +
> > +	if (heap->archdep.numaps == 0)
> > +		mapaddr = NULL; /* nothing left to unmap */
> > +	else
> > +		release = NULL; /* will be called by Linux on unmap */
> > +
> >  	spin_unlock(&kheapq_lock);
> >  
> >  	len = xnheap_extentsize(heap);
> >  
> >  	if (mapaddr) {
> >  		down_write(&current->mm->mmap_sem);
> > -		heap->archdep.release = NULL;
> > -		ret = do_munmap(current->mm, (unsigned long)mapaddr, len);
> > +		do_munmap(current->mm, (unsigned long)mapaddr, len);
> >  		up_write(&current->mm->mmap_sem);
> >  	}
> >  
> > -	if (ret == 0) {
> > +	if (heap->archdep.numaps > 0) {
> > +		/* The release handler is supposed to clean up the rest. */
> > +		XENO_ASSERT(NUCLEUS, heap->archdep.release, /* nop */);
> > +		return;
> > +	}
> > +
> > +	if (!mapaddr) {
> >  		__unreserve_and_free_heap(heap->archdep.heapbase, len,
> >  					  heap->archdep.kmflags);
> >  		if (release)
> >  			release(heap);
> >  	}
> > -
> > -	return ret;
> >  }
> >  
> >  static struct file_operations xnheap_fops = {
> > @@ -1260,11 +1269,11 @@ int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
> >  	return -ENOMEM;
> >  }
> >  
> > -int xnheap_destroy_mapped(xnheap_t *heap, void (*release)(struct xnheap *heap),
> > -			  void __user *mapaddr)
> > +void xnheap_destroy_mapped(xnheap_t *heap,
> > +			   void (*release)(struct xnheap *heap),
> > +			   void __user *mapaddr)
> >  {
> >  	xnheap_destroy(heap, &xnheap_free_extent, NULL);
> > -	return 0;
> >  }
> >  #endif /* !CONFIG_XENO_OPT_PERVASIVE */
> >  
> > diff --git a/ksrc/skins/native/heap.c b/ksrc/skins/native/heap.c
> > index 0d7ad83..f7411e8 100644
> > --- a/ksrc/skins/native/heap.c
> > +++ b/ksrc/skins/native/heap.c
> > @@ -357,9 +357,6 @@ static void __heap_post_release(struct xnheap *h)
> >   *
> >   * @return 0 is returned upon success. Otherwise:
> >   *
> > - * - -EBUSY is returned if @a heap is in use by another process and the
> > - * descriptor is not destroyed.
> > - *
> >   * - -EINVAL is returned if @a heap is not a heap descriptor.
> >   *
> >   * - -EIDRM is returned if @a heap is a deleted heap descriptor.
> > @@ -379,7 +376,7 @@ static void __heap_post_release(struct xnheap *h)
> >  
> >  int rt_heap_delete_inner(RT_HEAP *heap, void __user *mapaddr)
> >  {
> > -	int err = 0;
> > +	int err;
> >  	spl_t s;
> >  
> >  	if (!xnpod_root_p())
> > @@ -412,22 +409,16 @@ int rt_heap_delete_inner(RT_HEAP *heap, void __user *mapaddr)
> >  
> >  #ifdef CONFIG_XENO_OPT_PERVASIVE
> >  	if (heap->mode & H_MAPPABLE)
> > -		err = xnheap_destroy_mapped(&heap->heap_base,
> > -					    __heap_post_release, mapaddr);
> > +		xnheap_destroy_mapped(&heap->heap_base,
> > +				      __heap_post_release, mapaddr);
> >  	else
> >  #endif /* CONFIG_XENO_OPT_PERVASIVE */
> > +	{
> >  		xnheap_destroy(&heap->heap_base, &__heap_flush_private, NULL);
> > -
> > -	xnlock_get_irqsave(&nklock, s);
> > -
> > -	if (err)
> > -		heap->magic = XENO_HEAP_MAGIC;
> > -	else if (!(heap->mode & H_MAPPABLE))
> >  		__heap_post_release(&heap->heap_base);
> > +	}
> >  
> > -	xnlock_put_irqrestore(&nklock, s);
> > -
> > -	return err;
> > +	return 0;
> >  }
> >  
> >  int rt_heap_delete(RT_HEAP *heap)
> > diff --git a/ksrc/skins/native/queue.c b/ksrc/skins/native/queue.c
> > index f913675..3592a4a 100644
> > --- a/ksrc/skins/native/queue.c
> > +++ b/ksrc/skins/native/queue.c
> > @@ -326,9 +326,6 @@ static void __queue_post_release(struct xnheap *heap)
> >   * - -EPERM is returned if this service was called from an
> >   * asynchronous context.
> >   *
> > - * - -EBUSY is returned if an attempt is made to delete a shared queue
> > - * which is still bound to a process.
> > - *
> >   * Environments:
> >   *
> >   * This service can be called from:
> > @@ -341,7 +338,7 @@ static void __queue_post_release(struct xnheap *heap)
> >  
> >  int rt_queue_delete_inner(RT_QUEUE *q, void __user *mapaddr)
> >  {
> > -	int err = 0;
> > +	int err;
> >  	spl_t s;
> >  
> >  	if (xnpod_asynch_p())
> > @@ -373,22 +370,16 @@ int rt_queue_delete_inner(RT_QUEUE *q, void __user *mapaddr)
> >  
> >  #ifdef CONFIG_XENO_OPT_PERVASIVE
> >  	if (q->mode & Q_SHARED)
> > -		err = xnheap_destroy_mapped(&q->bufpool,
> > -					    __queue_post_release, mapaddr);
> > +		xnheap_destroy_mapped(&q->bufpool,
> > +				      __queue_post_release, mapaddr);
> >  	else
> >  #endif /* CONFIG_XENO_OPT_PERVASIVE */
> > +	{
> >  		xnheap_destroy(&q->bufpool, &__queue_flush_private, NULL);
> > -
> > -	xnlock_get_irqsave(&nklock, s);
> > -
> > -	if (err)
> > -		q->magic = XENO_QUEUE_MAGIC;
> > -	else if (!(q->mode & Q_SHARED))
> >  		__queue_post_release(&q->bufpool);
> > +	}
> >  
> > -	xnlock_put_irqrestore(&nklock, s);
> > -
> > -	return err;
> > +	return 0;
> >  }
> >  
> >  int rt_queue_delete(RT_QUEUE *q)
> > diff --git a/ksrc/skins/rtai/shm.c b/ksrc/skins/rtai/shm.c
> > index fddf455..4c56495 100644
> > --- a/ksrc/skins/rtai/shm.c
> > +++ b/ksrc/skins/rtai/shm.c
> > @@ -293,13 +293,11 @@ static int _shm_free(unsigned long name)
> >  				 * [YES!]
> >  				 */
> >  #ifdef CONFIG_XENO_OPT_PERVASIVE
> > -				ret = xnheap_destroy_mapped(p->heap, NULL, NULL);
> > +				xnheap_destroy_mapped(p->heap, NULL, NULL);
> >  #else /* !CONFIG_XENO_OPT_PERVASIVE */
> >  				xnheap_destroy(p->heap,
> >  					       &__heap_flush_private, NULL);
> >  #endif /* !CONFIG_XENO_OPT_PERVASIVE */
> > -				if (ret)
> > -					goto unlock_and_exit;
> >  				xnheap_free(&kheap, p->heap);
> >  			}
> >  			removeq(&xnshm_allocq, &p->link);
> > @@ -311,8 +309,6 @@ static int _shm_free(unsigned long name)
> >  		holder = nextq(&xnshm_allocq, holder);
> >  	}
> >  
> > -      unlock_and_exit:
> > -
> >  	xnlock_put_irqrestore(&nklock, s);
> >  
> >  	return ret;
> > 


-- 
Philippe.




^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped
  2009-11-02 16:04     ` Philippe Gerum
@ 2009-11-02 16:41       ` Jan Kiszka
  2009-11-02 16:51         ` Philippe Gerum
  0 siblings, 1 reply; 33+ messages in thread
From: Jan Kiszka @ 2009-11-02 16:41 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

Philippe Gerum wrote:
> On Sat, 2009-10-24 at 19:22 +0200, Philippe Gerum wrote:
>> On Tue, 2009-10-20 at 13:37 +0200, Jan Kiszka wrote:
>>> Allowing xnheap_delete_mapped to return an error and then attempting to
>>> recover from it does not work out very well: Corner cases are racy,
>>> intransparent to the user, and proper error handling imposes a lot of
>>> complexity on the caller - if it actually bothers to check the return
>>> value...
>>>
>>> Fortunately, there is no reason for this function to fail: If the heap
>>> is still mapped, just install the provide cleanup handler and switch to
>>> deferred removal. If the unmapping fails, we either raced with some
>>> other caller of unmap or user space provided a bogus address, or
>>> something else is wrong. In any case, leaving the cleanup callback
>>> behind is the best we can do anyway.
>>>
>>> Removing the return value immediately allows to simplify the callers,
>>> namemly rt_queue_delete and rt_heap_delete.
>>>
>>> Note: This is still not 100% waterproof. If we issue
>>> xnheap_destroy_mapped from module cleanup passing a release handler
>>> that belongs to the module text, deferred release will cause a crash.
>>> But this corner case is no new regression, so let's keep the head in the
>>> sand.
>> I agree with this one, eventually. This does make things clearer, and
>> removes some opportunities for the upper interfaces to shot themselves
>> in the foot. Merged, thanks.
> 
> Well, actually, it does make things clearer, but it is broken. Enabling
> list debugging makes the nucleus pull the break after a double unlink in
> vmclose().
> 
> Basically, the issue is that calling rt_queue/heap_delete() explicitly
> from userland will break, due to the vmclose() handler being indirectly
> called by do_munmap() for the last mapping. The nasty thing is that
> without debugs on, kheapq is just silently trashed.
> 
> Fix is on its way, along with nommu support for shared heaps as well.

OK, I see. Just one minor add-on to your fix:

diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
index ec14f73..1ae6af6 100644
--- a/ksrc/nucleus/heap.c
+++ b/ksrc/nucleus/heap.c
@@ -1241,6 +1241,7 @@ void xnheap_destroy_mapped(xnheap_t *heap,
 		down_write(&current->mm->mmap_sem);
 		heap->archdep.release = NULL;
 		do_munmap(current->mm, (unsigned long)mapaddr, len);
+		heap->archdep.release = release;
 		up_write(&current->mm->mmap_sem);
 	}
 
@@ -1252,7 +1253,6 @@ void xnheap_destroy_mapped(xnheap_t *heap,
 	if (heap->archdep.numaps > 0) {
 		/* The release handler is supposed to clean up the rest. */
 		XENO_ASSERT(NUCLEUS, release != NULL, /* nop */);
-		heap->archdep.release = release;
 		return;
 	}
 

This is safer than leaving a potential race window open between dropping
mmap_sem and fixing up archdep.release again.
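
Schematically, the window this closes, as far as I can see (simplified,
only the relevant steps shown):

	/*
	 *   deleter                          concurrent unmapper
	 *   -------                          -------------------
	 *   archdep.release = NULL
	 *   do_munmap(...)
	 *   up_write(&mm->mmap_sem)
	 *                                     vmclose(): sees release == NULL,
	 *                                                skips deferred cleanup
	 *   archdep.release = release    <- restored too late
	 */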

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped
  2009-11-02 16:41       ` Jan Kiszka
@ 2009-11-02 16:51         ` Philippe Gerum
  2009-11-02 16:57           ` Jan Kiszka
  0 siblings, 1 reply; 33+ messages in thread
From: Philippe Gerum @ 2009-11-02 16:51 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: xenomai

On Mon, 2009-11-02 at 17:41 +0100, Jan Kiszka wrote:
> Philippe Gerum wrote:
> > On Sat, 2009-10-24 at 19:22 +0200, Philippe Gerum wrote:
> >> On Tue, 2009-10-20 at 13:37 +0200, Jan Kiszka wrote:
> >>> Allowing xnheap_delete_mapped to return an error and then attempting to
> >>> recover from it does not work out very well: Corner cases are racy,
> >>> intransparent to the user, and proper error handling imposes a lot of
> >>> complexity on the caller - if it actually bothers to check the return
> >>> value...
> >>>
> >>> Fortunately, there is no reason for this function to fail: If the heap
> >>> is still mapped, just install the provide cleanup handler and switch to
> >>> deferred removal. If the unmapping fails, we either raced with some
> >>> other caller of unmap or user space provided a bogus address, or
> >>> something else is wrong. In any case, leaving the cleanup callback
> >>> behind is the best we can do anyway.
> >>>
> >>> Removing the return value immediately allows to simplify the callers,
> >>> namemly rt_queue_delete and rt_heap_delete.
> >>>
> >>> Note: This is still not 100% waterproof. If we issue
> >>> xnheap_destroy_mapped from module cleanup passing a release handler
> >>> that belongs to the module text, deferred release will cause a crash.
> >>> But this corner case is no new regression, so let's keep the head in the
> >>> sand.
> >> I agree with this one, eventually. This does make things clearer, and
> >> removes some opportunities for the upper interfaces to shot themselves
> >> in the foot. Merged, thanks.
> > 
> > Well, actually, it does make things clearer, but it is broken. Enabling
> > list debugging makes the nucleus pull the break after a double unlink in
> > vmclose().
> > 
> > Basically, the issue is that calling rt_queue/heap_delete() explicitly
> > from userland will break, due to the vmclose() handler being indirectly
> > called by do_munmap() for the last mapping. The nasty thing is that
> > without debugs on, kheapq is just silently trashed.
> > 
> > Fix is on its way, along with nommu support for shared heaps as well.
> 
> OK, I see. Just on minor add-on to your fix:
> 
> diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
> index ec14f73..1ae6af6 100644
> --- a/ksrc/nucleus/heap.c
> +++ b/ksrc/nucleus/heap.c
> @@ -1241,6 +1241,7 @@ void xnheap_destroy_mapped(xnheap_t *heap,
>  		down_write(&current->mm->mmap_sem);
>  		heap->archdep.release = NULL;
>  		do_munmap(current->mm, (unsigned long)mapaddr, len);
> +		heap->archdep.release = release;
>  		up_write(&current->mm->mmap_sem);
>  	}
>  
> @@ -1252,7 +1253,6 @@ void xnheap_destroy_mapped(xnheap_t *heap,
>  	if (heap->archdep.numaps > 0) {
>  		/* The release handler is supposed to clean up the rest. */
>  		XENO_ASSERT(NUCLEUS, release != NULL, /* nop */);
> -		heap->archdep.release = release;
>  		return;
>  	}
>  
> 
> This is safer than leaving a potential race window open between dropping
> mmap_sem and fixing up archdep.release again.
> 

Actually, we have to hold the kheap lock, in case weird code starts
mapping randomly from userland without getting a valid descriptor
through a skin call.
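
i.e. roughly (sketch only; "kheapq_lock" stands for whatever the heap-list
spinlock from patch 2/9 ends up being called, and it obviously cannot be
held across do_munmap() if vmclose() takes it as well):

	/* still under mmap_sem, as before */
	spin_lock(&kheapq_lock);
	heap->archdep.release = NULL;
	spin_unlock(&kheapq_lock);

	do_munmap(current->mm, (unsigned long)mapaddr, len);

	spin_lock(&kheapq_lock);
	heap->archdep.release = release;
	spin_unlock(&kheapq_lock);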

> Jan
> 


-- 
Philippe.




^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped
  2009-11-02 16:51         ` Philippe Gerum
@ 2009-11-02 16:57           ` Jan Kiszka
  2009-11-02 18:01             ` Jan Kiszka
  0 siblings, 1 reply; 33+ messages in thread
From: Jan Kiszka @ 2009-11-02 16:57 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

Philippe Gerum wrote:
> On Mon, 2009-11-02 at 17:41 +0100, Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>> On Sat, 2009-10-24 at 19:22 +0200, Philippe Gerum wrote:
>>>> On Tue, 2009-10-20 at 13:37 +0200, Jan Kiszka wrote:
>>>>> Allowing xnheap_delete_mapped to return an error and then attempting to
>>>>> recover from it does not work out very well: Corner cases are racy,
>>>>> intransparent to the user, and proper error handling imposes a lot of
>>>>> complexity on the caller - if it actually bothers to check the return
>>>>> value...
>>>>>
>>>>> Fortunately, there is no reason for this function to fail: If the heap
>>>>> is still mapped, just install the provide cleanup handler and switch to
>>>>> deferred removal. If the unmapping fails, we either raced with some
>>>>> other caller of unmap or user space provided a bogus address, or
>>>>> something else is wrong. In any case, leaving the cleanup callback
>>>>> behind is the best we can do anyway.
>>>>>
>>>>> Removing the return value immediately allows to simplify the callers,
>>>>> namemly rt_queue_delete and rt_heap_delete.
>>>>>
>>>>> Note: This is still not 100% waterproof. If we issue
>>>>> xnheap_destroy_mapped from module cleanup passing a release handler
>>>>> that belongs to the module text, deferred release will cause a crash.
>>>>> But this corner case is no new regression, so let's keep the head in the
>>>>> sand.
>>>> I agree with this one, eventually. This does make things clearer, and
>>>> removes some opportunities for the upper interfaces to shot themselves
>>>> in the foot. Merged, thanks.
>>> Well, actually, it does make things clearer, but it is broken. Enabling
>>> list debugging makes the nucleus pull the break after a double unlink in
>>> vmclose().
>>>
>>> Basically, the issue is that calling rt_queue/heap_delete() explicitly
>>> from userland will break, due to the vmclose() handler being indirectly
>>> called by do_munmap() for the last mapping. The nasty thing is that
>>> without debugs on, kheapq is just silently trashed.
>>>
>>> Fix is on its way, along with nommu support for shared heaps as well.
>> OK, I see. Just on minor add-on to your fix:
>>
>> diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
>> index ec14f73..1ae6af6 100644
>> --- a/ksrc/nucleus/heap.c
>> +++ b/ksrc/nucleus/heap.c
>> @@ -1241,6 +1241,7 @@ void xnheap_destroy_mapped(xnheap_t *heap,
>>  		down_write(&current->mm->mmap_sem);
>>  		heap->archdep.release = NULL;
>>  		do_munmap(current->mm, (unsigned long)mapaddr, len);
>> +		heap->archdep.release = release;
>>  		up_write(&current->mm->mmap_sem);
>>  	}
>>  
>> @@ -1252,7 +1253,6 @@ void xnheap_destroy_mapped(xnheap_t *heap,
>>  	if (heap->archdep.numaps > 0) {
>>  		/* The release handler is supposed to clean up the rest. */
>>  		XENO_ASSERT(NUCLEUS, release != NULL, /* nop */);
>> -		heap->archdep.release = release;
>>  		return;
>>  	}
>>  
>>
>> This is safer than leaving a potential race window open between dropping
>> mmap_sem and fixing up archdep.release again.
>>
> 
> Actually, we have to hold the kheap lock, in case weird code starts
> mapping randomly from userland without getting a valid descriptor
> through a skin call.

Yep, that as well.

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped
  2009-11-02 16:57           ` Jan Kiszka
@ 2009-11-02 18:01             ` Jan Kiszka
  2009-11-02 18:19               ` [Xenomai-help] Xenomai on ARMadeus Pierre Ficheux
  2009-11-02 18:26               ` [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped Philippe Gerum
  0 siblings, 2 replies; 33+ messages in thread
From: Jan Kiszka @ 2009-11-02 18:01 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

Jan Kiszka wrote:
> Philippe Gerum wrote:
>> On Mon, 2009-11-02 at 17:41 +0100, Jan Kiszka wrote:
>>> Philippe Gerum wrote:
>>>> On Sat, 2009-10-24 at 19:22 +0200, Philippe Gerum wrote:
>>>>> On Tue, 2009-10-20 at 13:37 +0200, Jan Kiszka wrote:
>>>>>> Allowing xnheap_delete_mapped to return an error and then attempting to
>>>>>> recover from it does not work out very well: Corner cases are racy,
>>>>>> intransparent to the user, and proper error handling imposes a lot of
>>>>>> complexity on the caller - if it actually bothers to check the return
>>>>>> value...
>>>>>>
>>>>>> Fortunately, there is no reason for this function to fail: If the heap
>>>>>> is still mapped, just install the provide cleanup handler and switch to
>>>>>> deferred removal. If the unmapping fails, we either raced with some
>>>>>> other caller of unmap or user space provided a bogus address, or
>>>>>> something else is wrong. In any case, leaving the cleanup callback
>>>>>> behind is the best we can do anyway.
>>>>>>
>>>>>> Removing the return value immediately allows to simplify the callers,
>>>>>> namemly rt_queue_delete and rt_heap_delete.
>>>>>>
>>>>>> Note: This is still not 100% waterproof. If we issue
>>>>>> xnheap_destroy_mapped from module cleanup passing a release handler
>>>>>> that belongs to the module text, deferred release will cause a crash.
>>>>>> But this corner case is no new regression, so let's keep the head in the
>>>>>> sand.
>>>>> I agree with this one, eventually. This does make things clearer, and
>>>>> removes some opportunities for the upper interfaces to shot themselves
>>>>> in the foot. Merged, thanks.
>>>> Well, actually, it does make things clearer, but it is broken. Enabling
>>>> list debugging makes the nucleus pull the break after a double unlink in
>>>> vmclose().
>>>>
>>>> Basically, the issue is that calling rt_queue/heap_delete() explicitly
>>>> from userland will break, due to the vmclose() handler being indirectly
>>>> called by do_munmap() for the last mapping. The nasty thing is that
>>>> without debugs on, kheapq is just silently trashed.
>>>>
>>>> Fix is on its way, along with nommu support for shared heaps as well.
>>> OK, I see. Just on minor add-on to your fix:
>>>
>>> diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
>>> index ec14f73..1ae6af6 100644
>>> --- a/ksrc/nucleus/heap.c
>>> +++ b/ksrc/nucleus/heap.c
>>> @@ -1241,6 +1241,7 @@ void xnheap_destroy_mapped(xnheap_t *heap,
>>>  		down_write(&current->mm->mmap_sem);
>>>  		heap->archdep.release = NULL;
>>>  		do_munmap(current->mm, (unsigned long)mapaddr, len);
>>> +		heap->archdep.release = release;
>>>  		up_write(&current->mm->mmap_sem);
>>>  	}
>>>  
>>> @@ -1252,7 +1253,6 @@ void xnheap_destroy_mapped(xnheap_t *heap,
>>>  	if (heap->archdep.numaps > 0) {
>>>  		/* The release handler is supposed to clean up the rest. */
>>>  		XENO_ASSERT(NUCLEUS, release != NULL, /* nop */);
>>> -		heap->archdep.release = release;
>>>  		return;
>>>  	}
>>>  
>>>
>>> This is safer than leaving a potential race window open between dropping
>>> mmap_sem and fixing up archdep.release again.
>>>
>> Actually, we have to hold the kheap lock, in case weird code starts
>> mapping randomly from userland without getting a valid descriptor
>> through a skin call.
> 
> Yep, that as well.
> 

Note that 6b1a185b46 doesn't obsolete my patch (pull it from my tree if
you like).

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux


^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Xenomai-help] Xenomai on ARMadeus
  2009-11-02 18:01             ` Jan Kiszka
@ 2009-11-02 18:19               ` Pierre Ficheux
  2009-11-02 18:22                 ` Gilles Chanteperdrix
  2009-11-02 18:38                 ` Gilles Chanteperdrix
  2009-11-02 18:26               ` [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped Philippe Gerum
  1 sibling, 2 replies; 33+ messages in thread
From: Pierre Ficheux @ 2009-11-02 18:19 UTC (permalink / raw)
  To: Xenomai-help

Hi,

I've just tried to use Xenomai on an ARMadeus board (APF9328, kernel
2.6.29). I got the following message when starting "latency":

Any idea?

# /usr/xenomai/bin/latency
== Sampling period: 100 us
== Test mode: periodic user-mode task
== All results in microseconds
warming up...
Unhandled fault: external abort on non-linefetch (0x01a) at 0x40006010


# uname -a
Linux armadeus 2.6.29.6 #1 PREEMPT Fri Oct 30 13:11:26 CET 2009 armv4tl
unknown

# cat /proc/ipipe/version
1.13-00


Looks like there was a discussion about a similar problem some days ago
with Gilles (Freescale CPU too).

http://www.mail-archive.com/xenomai@xenomai.org

Thx,

regards


-- 
Pierre FICHEUX -/- CTO OW/OS4I, France -\- pierre.ficheux@domain.hid
                                         http://www.os4i.com
                                         http://www.ficheux.org
I would love to change the world, but they won't give me the source code




^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-help] Xenomai on ARMadeus
  2009-11-02 18:19               ` [Xenomai-help] Xenomai on ARMadeus Pierre Ficheux
@ 2009-11-02 18:22                 ` Gilles Chanteperdrix
  2009-11-02 18:38                 ` Gilles Chanteperdrix
  1 sibling, 0 replies; 33+ messages in thread
From: Gilles Chanteperdrix @ 2009-11-02 18:22 UTC (permalink / raw)
  To: Pierre Ficheux; +Cc: Xenomai-help

Pierre Ficheux wrote:
> Hi,
> 
> I've just tried to use Xenomai on ARMadeus board (APF9328, kernel
> 2.6.29). I got the following message when starting "latency" :
> 
> Any idea?
> 
> # /usr/xenomai/bin/latency
> == Sampling period: 100 us
> == Test mode: periodic user-mode task
> == All results in microseconds
> warming up...
> Unhandled fault: external abort on non-linefetch (0x01a) at 0x40006010
> 
> 
> # uname -a
> Linux armadeus 2.6.29.6 #1 PREEMPT Fri Oct 30 13:11:26 CET 2009 armv4tl
> unknown
> 
> # cat /proc/ipipe/version
> 1.13-00
> 
> 
> Looks like there was a discussion  about a similar problem some days ago
>  with Gilles  (Freescale CPU too).
> 
> http://www.mail-archive.com/xenomai@xenomai.org

Yes, the result of which is that you need some code to allow code not
running in supervisor mode to access I/O addresses.

You can also configure xenomai with --enable-arm-mach=generic, which
will avoid access to I/Os from user-space.

-- 
					    Gilles.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped
  2009-11-02 18:01             ` Jan Kiszka
  2009-11-02 18:19               ` [Xenomai-help] Xenomai on ARMadeus Pierre Ficheux
@ 2009-11-02 18:26               ` Philippe Gerum
  2009-11-03  8:26                 ` Jan Kiszka
  1 sibling, 1 reply; 33+ messages in thread
From: Philippe Gerum @ 2009-11-02 18:26 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: xenomai

On Mon, 2009-11-02 at 19:01 +0100, Jan Kiszka wrote:
> Jan Kiszka wrote:
> > Philippe Gerum wrote:
> >> On Mon, 2009-11-02 at 17:41 +0100, Jan Kiszka wrote:
> >>> Philippe Gerum wrote:
> >>>> On Sat, 2009-10-24 at 19:22 +0200, Philippe Gerum wrote:
> >>>>> On Tue, 2009-10-20 at 13:37 +0200, Jan Kiszka wrote:
> >>>>>> Allowing xnheap_delete_mapped to return an error and then attempting to
> >>>>>> recover from it does not work out very well: Corner cases are racy,
> >>>>>> intransparent to the user, and proper error handling imposes a lot of
> >>>>>> complexity on the caller - if it actually bothers to check the return
> >>>>>> value...
> >>>>>>
> >>>>>> Fortunately, there is no reason for this function to fail: If the heap
> >>>>>> is still mapped, just install the provide cleanup handler and switch to
> >>>>>> deferred removal. If the unmapping fails, we either raced with some
> >>>>>> other caller of unmap or user space provided a bogus address, or
> >>>>>> something else is wrong. In any case, leaving the cleanup callback
> >>>>>> behind is the best we can do anyway.
> >>>>>>
> >>>>>> Removing the return value immediately allows to simplify the callers,
> >>>>>> namemly rt_queue_delete and rt_heap_delete.
> >>>>>>
> >>>>>> Note: This is still not 100% waterproof. If we issue
> >>>>>> xnheap_destroy_mapped from module cleanup passing a release handler
> >>>>>> that belongs to the module text, deferred release will cause a crash.
> >>>>>> But this corner case is no new regression, so let's keep the head in the
> >>>>>> sand.
> >>>>> I agree with this one, eventually. This does make things clearer, and
> >>>>> removes some opportunities for the upper interfaces to shot themselves
> >>>>> in the foot. Merged, thanks.
> >>>> Well, actually, it does make things clearer, but it is broken. Enabling
> >>>> list debugging makes the nucleus pull the break after a double unlink in
> >>>> vmclose().
> >>>>
> >>>> Basically, the issue is that calling rt_queue/heap_delete() explicitly
> >>>> from userland will break, due to the vmclose() handler being indirectly
> >>>> called by do_munmap() for the last mapping. The nasty thing is that
> >>>> without debugs on, kheapq is just silently trashed.
> >>>>
> >>>> Fix is on its way, along with nommu support for shared heaps as well.
> >>> OK, I see. Just on minor add-on to your fix:
> >>>
> >>> diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
> >>> index ec14f73..1ae6af6 100644
> >>> --- a/ksrc/nucleus/heap.c
> >>> +++ b/ksrc/nucleus/heap.c
> >>> @@ -1241,6 +1241,7 @@ void xnheap_destroy_mapped(xnheap_t *heap,
> >>>  		down_write(&current->mm->mmap_sem);
> >>>  		heap->archdep.release = NULL;
> >>>  		do_munmap(current->mm, (unsigned long)mapaddr, len);
> >>> +		heap->archdep.release = release;
> >>>  		up_write(&current->mm->mmap_sem);
> >>>  	}
> >>>  
> >>> @@ -1252,7 +1253,6 @@ void xnheap_destroy_mapped(xnheap_t *heap,
> >>>  	if (heap->archdep.numaps > 0) {
> >>>  		/* The release handler is supposed to clean up the rest. */
> >>>  		XENO_ASSERT(NUCLEUS, release != NULL, /* nop */);
> >>> -		heap->archdep.release = release;
> >>>  		return;
> >>>  	}
> >>>  
> >>>
> >>> This is safer than leaving a potential race window open between dropping
> >>> mmap_sem and fixing up archdep.release again.
> >>>
> >> Actually, we have to hold the kheap lock, in case weird code starts
> >> mapping randomly from userland without getting a valid descriptor
> >> through a skin call.
> > 
> > Yep, that as well.
> > 
> 
> Note that 6b1a185b46 doesn't obsolete my patch (pull it from my tree if
> you like).

Are you still referring to a race with the vmclose() handler?

> 
> Jan
> 


-- 
Philippe.




^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-help] Xenomai on ARMadeus
  2009-11-02 18:19               ` [Xenomai-help] Xenomai on ARMadeus Pierre Ficheux
  2009-11-02 18:22                 ` Gilles Chanteperdrix
@ 2009-11-02 18:38                 ` Gilles Chanteperdrix
  2009-11-02 19:19                   ` gwenhael.goavec
  1 sibling, 1 reply; 33+ messages in thread
From: Gilles Chanteperdrix @ 2009-11-02 18:38 UTC (permalink / raw)
  To: Pierre Ficheux; +Cc: Xenomai-help

Pierre Ficheux wrote:
> Hi,
> 
> I've just tried to use Xenomai on ARMadeus board (APF9328, kernel
> 2.6.29).

By the way, did you make any change to the I-pipe patch to get it
running on this board?


-- 
					    Gilles.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-help] Xenomai on ARMadeus
  2009-11-02 18:38                 ` Gilles Chanteperdrix
@ 2009-11-02 19:19                   ` gwenhael.goavec
  2009-11-02 22:29                     ` Gilles Chanteperdrix
  0 siblings, 1 reply; 33+ messages in thread
From: gwenhael.goavec @ 2009-11-02 19:19 UTC (permalink / raw)
  To: xenomai

On Mon, 02 Nov 2009 19:38:51 +0100
Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org>
wrote:

> By the way, did you make any change to the I-pipe patch to get it
> running on this board?

The only change I made was needed because Armadeus and
the I-pipe patch make the same change to
mach-imx/include/mach/imxfb.h:

--- a/arch/arm/mach-imx/include/mach/imxfb.h
+++ b/arch/arm/mach-imx/include/mach/imxfb.h
@@ -14,7 +14,6 @@
 #define PCR_BPIX_8 (3 << 25)
 #define PCR_BPIX_12    (4 << 25)
 #define PCR_BPIX_16    (4 << 25)
+#define PCR_BPIX_MASK   (7 << 25)
 #define PCR_PIXPOL (1 << 24)
 #define PCR_FLMPOL (1 << 23)
 #define PCR_LPPOL  (1 << 22)

I take this opportunity to ask a question:
on the APF27 with an i.MX27, I found that the system freezes
when an interrupt handler is installed and an
interrupt happens.
If the interrupt handler is installed but no interrupt happens,
there is no problem, and if an interrupt happens without the
interrupt handler installed, it's OK too.

If someone could help or advise me,
thank you very much.

Gwenhael

Signed-off-by: gwenhael.goavec-merou@domain.hid


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-help] Xenomai on ARMadeus
  2009-11-02 19:19                   ` gwenhael.goavec
@ 2009-11-02 22:29                     ` Gilles Chanteperdrix
  2009-11-03  7:36                       ` gwenhael.goavec
       [not found]                       ` <20091103082204.248eed59@domain.hid>
  0 siblings, 2 replies; 33+ messages in thread
From: Gilles Chanteperdrix @ 2009-11-02 22:29 UTC (permalink / raw)
  To: gwenhael.goavec; +Cc: xenomai

gwenhael.goavec wrote:
> On Mon, 02 Nov 2009 19:38:51 +0100
> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org>
> wrote:
> 
>> By the way, did you make any change to the I-pipe patch to get it
>> running on this board?
> 
> The only change realized, is made because Armadeus and
> I-pipe does the same change on
> mach-imx/include/mach/imxfb.h :
> 
> --- a/arch/arm/mach-imx/include/mach/imxfb.h
> +++ b/arch/arm/mach-imx/include/mach/imxfb.h
> @@ -14,7 +14,6 @@
>  #define PCR_BPIX_8 (3 << 25)
>  #define PCR_BPIX_12    (4 << 25)
>  #define PCR_BPIX_16    (4 << 25)
> +#define PCR_BPIX_MASK   (7 << 25)
>  #define PCR_PIXPOL (1 << 24)
>  #define PCR_FLMPOL (1 << 23)
>  #define PCR_LPPOL  (1 << 22)
> 
> I take this opportunity to ask a question:
> on APF27 with imx27, I found that the system freeze
> when an interrupt handler is in place and an
> interruption happens.
> if interrupt handler is in place without interruption no
> problems and if interruption happens without interrupt
> handler it's ok too.
> 
> If someone could help or advise me. 
> thank you very much
> 
> Gwenhael
> 
> Signed-off-by: gwenhael.goavec-merou@domain.hid

What version of the I-pipe patch?

-- 
					    Gilles.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-help] Xenomai on ARMadeus
  2009-11-02 22:29                     ` Gilles Chanteperdrix
@ 2009-11-03  7:36                       ` gwenhael.goavec
       [not found]                       ` <20091103082204.248eed59@domain.hid>
  1 sibling, 0 replies; 33+ messages in thread
From: gwenhael.goavec @ 2009-11-03  7:36 UTC (permalink / raw)
  To: xenomai

On Mon, 02 Nov 2009 23:29:31 +0100
Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org>
wrote:

> gwenhael.goavec wrote:
> > On Mon, 02 Nov 2009 19:38:51 +0100
> > Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org>
> > wrote:
> > 
> >> By the way, did you make any change to the I-pipe patch to get it
> >> running on this board?
> > 
> > The only change realized, is made because Armadeus and
> > I-pipe does the same change on
> > mach-imx/include/mach/imxfb.h :
> > 
> > --- a/arch/arm/mach-imx/include/mach/imxfb.h
> > +++ b/arch/arm/mach-imx/include/mach/imxfb.h
> > @@ -14,7 +14,6 @@
> >  #define PCR_BPIX_8 (3 << 25)
> >  #define PCR_BPIX_12    (4 << 25)
> >  #define PCR_BPIX_16    (4 << 25)
> > +#define PCR_BPIX_MASK   (7 << 25)
> >  #define PCR_PIXPOL (1 << 24)
> >  #define PCR_FLMPOL (1 << 23)
> >  #define PCR_LPPOL  (1 << 22)
> > 
> > I take this opportunity to ask a question:
> > on APF27 with imx27, I found that the system freeze
> > when an interrupt handler is in place and an
> > interruption happens.
> > if interrupt handler is in place without interruption no
> > problems and if interruption happens without interrupt
> > handler it's ok too.
> > 
> > If someone could help or advise me. 
> > thank you very much
> > 
> > Gwenhael
> > 
> > Signed-off-by: gwenhael.goavec-merou@domain.hid
> 
> What version of the I-pipe patch?
> 
> -- 
> 					    Gilles.

Version 1.13-03 for 2.6.29 

Gwenhael

Signed-off-by: gwenhael.goavec-merou@domain.hid


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped
  2009-11-02 18:26               ` [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped Philippe Gerum
@ 2009-11-03  8:26                 ` Jan Kiszka
  0 siblings, 0 replies; 33+ messages in thread
From: Jan Kiszka @ 2009-11-03  8:26 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

[-- Attachment #1: Type: text/plain, Size: 3874 bytes --]

Philippe Gerum wrote:
> On Mon, 2009-11-02 at 19:01 +0100, Jan Kiszka wrote:
>> Jan Kiszka wrote:
>>> Philippe Gerum wrote:
>>>> On Mon, 2009-11-02 at 17:41 +0100, Jan Kiszka wrote:
>>>>> Philippe Gerum wrote:
>>>>>> On Sat, 2009-10-24 at 19:22 +0200, Philippe Gerum wrote:
>>>>>>> On Tue, 2009-10-20 at 13:37 +0200, Jan Kiszka wrote:
>>>>>>>> Allowing xnheap_delete_mapped to return an error and then attempting to
>>>>>>>> recover from it does not work out very well: Corner cases are racy,
>>>>>>>> intransparent to the user, and proper error handling imposes a lot of
>>>>>>>> complexity on the caller - if it actually bothers to check the return
>>>>>>>> value...
>>>>>>>>
>>>>>>>> Fortunately, there is no reason for this function to fail: If the heap
>>>>>>>> is still mapped, just install the provide cleanup handler and switch to
>>>>>>>> deferred removal. If the unmapping fails, we either raced with some
>>>>>>>> other caller of unmap or user space provided a bogus address, or
>>>>>>>> something else is wrong. In any case, leaving the cleanup callback
>>>>>>>> behind is the best we can do anyway.
>>>>>>>>
>>>>>>>> Removing the return value immediately allows to simplify the callers,
>>>>>>>> namemly rt_queue_delete and rt_heap_delete.
>>>>>>>>
>>>>>>>> Note: This is still not 100% waterproof. If we issue
>>>>>>>> xnheap_destroy_mapped from module cleanup passing a release handler
>>>>>>>> that belongs to the module text, deferred release will cause a crash.
>>>>>>>> But this corner case is no new regression, so let's keep the head in the
>>>>>>>> sand.
>>>>>>> I agree with this one, eventually. This does make things clearer, and
>>>>>>> removes some opportunities for the upper interfaces to shot themselves
>>>>>>> in the foot. Merged, thanks.
>>>>>> Well, actually, it does make things clearer, but it is broken. Enabling
>>>>>> list debugging makes the nucleus pull the break after a double unlink in
>>>>>> vmclose().
>>>>>>
>>>>>> Basically, the issue is that calling rt_queue/heap_delete() explicitly
>>>>>> from userland will break, due to the vmclose() handler being indirectly
>>>>>> called by do_munmap() for the last mapping. The nasty thing is that
>>>>>> without debugs on, kheapq is just silently trashed.
>>>>>>
>>>>>> Fix is on its way, along with nommu support for shared heaps as well.
>>>>> OK, I see. Just on minor add-on to your fix:
>>>>>
>>>>> diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
>>>>> index ec14f73..1ae6af6 100644
>>>>> --- a/ksrc/nucleus/heap.c
>>>>> +++ b/ksrc/nucleus/heap.c
>>>>> @@ -1241,6 +1241,7 @@ void xnheap_destroy_mapped(xnheap_t *heap,
>>>>>  		down_write(&current->mm->mmap_sem);
>>>>>  		heap->archdep.release = NULL;
>>>>>  		do_munmap(current->mm, (unsigned long)mapaddr, len);
>>>>> +		heap->archdep.release = release;
>>>>>  		up_write(&current->mm->mmap_sem);
>>>>>  	}
>>>>>  
>>>>> @@ -1252,7 +1253,6 @@ void xnheap_destroy_mapped(xnheap_t *heap,
>>>>>  	if (heap->archdep.numaps > 0) {
>>>>>  		/* The release handler is supposed to clean up the rest. */
>>>>>  		XENO_ASSERT(NUCLEUS, release != NULL, /* nop */);
>>>>> -		heap->archdep.release = release;
>>>>>  		return;
>>>>>  	}
>>>>>  
>>>>>
>>>>> This is safer than leaving a potential race window open between dropping
>>>>> mmap_sem and fixing up archdep.release again.
>>>>>
>>>> Actually, we have to hold the kheap lock, in case weird code starts
>>>> mapping randomly from userland without getting a valid descriptor
>>>> through a skin call.
>>> Yep, that as well.
>>>
>> Note that 6b1a185b46 doesn't obsolete my patch (pull it from my tree if
>> you like).
> 
> Are you still referring to a race with the vmclose() handler?
> 

Went through it again, and it's safe as it is (my patch would actually
open a new hole) - dropped.

Jan


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 257 bytes --]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-help] Xenomai on ARMadeus
       [not found]                       ` <20091103082204.248eed59@domain.hid>
@ 2009-11-04 13:14                         ` Gilles Chanteperdrix
       [not found]                           ` <fbc4f538a6f4d84cfe514aba0985a525.squirrel@domain.hid>
  0 siblings, 1 reply; 33+ messages in thread
From: Gilles Chanteperdrix @ 2009-11-04 13:14 UTC (permalink / raw)
  To: Xenomai help

gwenhael.goavec wrote:
> On Mon, 02 Nov 2009 23:29:31 +0100
> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org>
> wrote:
> 
>> gwenhael.goavec wrote:
>>> On Mon, 02 Nov 2009 19:38:51 +0100
>>> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org>
>>> wrote:
>>>
>>>> By the way, did you make any change to the I-pipe patch to get it
>>>> running on this board?
>>> The only change realized, is made because Armadeus and
>>> I-pipe does the same change on
>>> mach-imx/include/mach/imxfb.h :
>>>
>>> --- a/arch/arm/mach-imx/include/mach/imxfb.h
>>> +++ b/arch/arm/mach-imx/include/mach/imxfb.h
>>> @@ -14,7 +14,6 @@
>>>  #define PCR_BPIX_8 (3 << 25)
>>>  #define PCR_BPIX_12    (4 << 25)
>>>  #define PCR_BPIX_16    (4 << 25)
>>> +#define PCR_BPIX_MASK   (7 << 25)
>>>  #define PCR_PIXPOL (1 << 24)
>>>  #define PCR_FLMPOL (1 << 23)
>>>  #define PCR_LPPOL  (1 << 22)
>>>
>>> I take this opportunity to ask a question:
>>> on APF27 with imx27, I found that the system freeze
>>> when an interrupt handler is in place and an
>>> interruption happens.
>>> if interrupt handler is in place without interruption no
>>> problems and if interruption happens without interrupt
>>> handler it's ok too.
>>>
>>> If someone could help or advise me. 
>>> thank you very much

The only reason I knew of for this problem was handle_edge_irq being used
for some interrupts. But problems of this kind for i.MX GPIOs are fixed in
1.13-03.

Do you have this problem with in-kernel or out-of-tree drivers? With
real-time or non real-time interrupts? Could you show us
/proc/interrupts before the system freezes?

-- 
                                          Gilles




^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-core] [PATCH v3 9/9] nucleus: Include all heaps in statistics
  2009-10-22 10:52     ` Jan Kiszka
@ 2009-11-11 12:59       ` Jan Kiszka
  2009-11-15 17:38         ` Philippe Gerum
  0 siblings, 1 reply; 33+ messages in thread
From: Jan Kiszka @ 2009-11-11 12:59 UTC (permalink / raw)
  Cc: xenomai

Jan Kiszka wrote:
> Philippe Gerum wrote:
>> On Tue, 2009-10-20 at 13:37 +0200, Jan Kiszka wrote:
>>> This extends /proc/xenomai/heap with statistics about all currently used
>>> heaps. It takes care to flush nklock while iterating of this potentially
>>> long list.
>>>
>>> Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
>>> ---
>>>
>>>  include/nucleus/heap.h    |   12 +++-
>>>  ksrc/drivers/ipc/iddp.c   |    3 +
>>>  ksrc/drivers/ipc/xddp.c   |    6 ++
>>>  ksrc/nucleus/heap.c       |  131 ++++++++++++++++++++++++++++++++++++++++-----
>>>  ksrc/nucleus/module.c     |    2 -
>>>  ksrc/nucleus/pod.c        |    5 +-
>>>  ksrc/nucleus/shadow.c     |    5 +-
>>>  ksrc/skins/native/heap.c  |    6 +-
>>>  ksrc/skins/native/pipe.c  |    4 +
>>>  ksrc/skins/native/queue.c |    6 +-
>>>  ksrc/skins/posix/shm.c    |    4 +
>>>  ksrc/skins/psos+/rn.c     |    6 +-
>>>  ksrc/skins/rtai/shm.c     |    7 ++
>>>  ksrc/skins/vrtx/heap.c    |    6 +-
>>>  ksrc/skins/vrtx/syscall.c |    3 +
>>>  15 files changed, 169 insertions(+), 37 deletions(-)
>>>
>>> diff --git a/include/nucleus/heap.h b/include/nucleus/heap.h
>>> index 44db738..f653cd7 100644
>>> --- a/include/nucleus/heap.h
>>> +++ b/include/nucleus/heap.h
>>> @@ -115,6 +115,10 @@ typedef struct xnheap {
>>>
>>>       XNARCH_DECL_DISPLAY_CONTEXT();
>>>
>>> +     xnholder_t stat_link;   /* Link in heapq */
>>> +
>>> +     char name[48];
>> s,48,XNOBJECT_NAME_LEN
> 
> OK, but XNOBJECT_NAME_LEN+16 (due to class prefixes and additional
> information like the minor ID).
> 
>>> +
>>>  } xnheap_t;
>>>
>>>  extern xnheap_t kheap;
>>> @@ -202,7 +206,8 @@ void xnheap_cleanup_proc(void);
>>>
>>>  int xnheap_init_mapped(xnheap_t *heap,
>>>                      u_long heapsize,
>>> -                    int memflags);
>>> +                    int memflags,
>>> +                    const char *name, ...);
>>>
>> The va_list is handy, but this breaks the common pattern used throughout
>> the rest of the nucleus, based on passing pre-formatted labels. So
>> either we make all creation calls use va_lists (but xnthread would need
>> more work), or we make xnheap_init_mapped use the not-so-handy current
>> form.
>>
>> Actually, providing xnheap_set_name() and a name parameter/va_list to
>> xnheap_init* is one too many, this clutters an inner interface
>> uselessly. The latter should go away, assuming that anon heaps may still
>> exist.
> 
> If we want to print all heaps, we should at least set a name indicating
> their class. And therefore we need to pass a descriptive name along with
> /every/ heap initialization. Forcing the majority of xnheap_init users
> to additionally issue xnheap_set_name is the clutter I wanted to
> avoid. Only a minority needs this split-up, and therefore you see both
> interfaces in my patch.
> 
>>>  void xnheap_destroy_mapped(xnheap_t *heap,
>>>                          void (*release)(struct xnheap *heap),
>>> @@ -224,7 +229,10 @@ void xnheap_destroy_mapped(xnheap_t *heap,
>>>  int xnheap_init(xnheap_t *heap,
>>>               void *heapaddr,
>>>               u_long heapsize,
>>> -             u_long pagesize);
>>> +             u_long pagesize,
>>> +             const char *name, ...);
>>> +
>>> +void xnheap_set_name(xnheap_t *heap, const char *name, ...);
>>>
>>>  void xnheap_destroy(xnheap_t *heap,
>>>                   void (*flushfn)(xnheap_t *heap,
>>> diff --git a/ksrc/drivers/ipc/iddp.c b/ksrc/drivers/ipc/iddp.c
>>> index a407946..b6382f1 100644
>>> --- a/ksrc/drivers/ipc/iddp.c
>>> +++ b/ksrc/drivers/ipc/iddp.c
>>> @@ -559,7 +559,8 @@ static int __iddp_bind_socket(struct rtipc_private *priv,
>>>               }
>>>
>>>               ret = xnheap_init(&sk->privpool,
>>> -                               poolmem, poolsz, XNHEAP_PAGE_SIZE);
>>> +                               poolmem, poolsz, XNHEAP_PAGE_SIZE,
>>> +                               "ippd: %d", port);
>>>               if (ret) {
>>>                       xnarch_free_host_mem(poolmem, poolsz);
>>>                       goto fail;
>>> diff --git a/ksrc/drivers/ipc/xddp.c b/ksrc/drivers/ipc/xddp.c
>>> index f62147a..a5dafef 100644
>>> --- a/ksrc/drivers/ipc/xddp.c
>>> +++ b/ksrc/drivers/ipc/xddp.c
>>> @@ -703,7 +703,7 @@ static int __xddp_bind_socket(struct rtipc_private *priv,
>>>               }
>>>
>>>               ret = xnheap_init(&sk->privpool,
>>> -                               poolmem, poolsz, XNHEAP_PAGE_SIZE);
>>> +                               poolmem, poolsz, XNHEAP_PAGE_SIZE, "");
>>>               if (ret) {
>>>                       xnarch_free_host_mem(poolmem, poolsz);
>>>                       goto fail;
>>> @@ -746,6 +746,10 @@ static int __xddp_bind_socket(struct rtipc_private *priv,
>>>       sk->minor = ret;
>>>       sa->sipc_port = ret;
>>>       sk->name = *sa;
>>> +
>>> +     if (poolsz > 0)
>>> +             xnheap_set_name(sk->bufpool, "xddp: %d", sa->sipc_port);
>>> +
>>>       /* Set default destination if unset at binding time. */
>>>       if (sk->peer.sipc_port < 0)
>>>               sk->peer = *sa;
>>> diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
>>> index 96c46f8..793d1c5 100644
>>> --- a/ksrc/nucleus/heap.c
>>> +++ b/ksrc/nucleus/heap.c
>>> @@ -76,6 +76,9 @@ EXPORT_SYMBOL_GPL(kheap);
>>>  xnheap_t kstacks;    /* Private stack pool */
>>>  #endif
>>>
>>> +static DEFINE_XNQUEUE(heapq);        /* Heap list for /proc reporting */
>>> +static unsigned long heapq_rev;
>>> +
>>>  static void init_extent(xnheap_t *heap, xnextent_t *extent)
>>>  {
>>>       caddr_t freepage;
>>> @@ -108,7 +111,7 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
>>>   */
>>>
>>>  /*!
>>> - * \fn xnheap_init(xnheap_t *heap,void *heapaddr,u_long heapsize,u_long pagesize)
>>> + * \fn xnheap_init(xnheap_t *heap,void *heapaddr,u_long heapsize,u_long pagesize,const char *name,...)
>>>   * \brief Initialize a memory heap.
>>>   *
>>>   * Initializes a memory heap suitable for time-bounded allocation
>>> @@ -145,6 +148,10 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
>>>   * best one for your needs. In the current implementation, pagesize
>>>   * must be a power of two in the range [ 8 .. 32768 ] inclusive.
>>>   *
>>> + * @param name Name displayed in statistic outputs. This parameter can
>>> + * be a format string, in which case succeeding parameters will be used
>>> + * to resolve the final name.
>>> + *
>>>   * @return 0 is returned upon success, or one of the following error
>>>   * codes:
>>>   *
>>> @@ -161,12 +168,13 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
>>>   * Rescheduling: never.
>>>   */
>>>
>>> -int xnheap_init(xnheap_t *heap,
>>> -             void *heapaddr, u_long heapsize, u_long pagesize)
>>> +static int xnheap_init_va(xnheap_t *heap, void *heapaddr, u_long heapsize,
>>> +                       u_long pagesize, const char *name, va_list args)
>>>  {
>>>       unsigned cpu, nr_cpus = xnarch_num_online_cpus();
>>>       u_long hdrsize, shiftsize, pageshift;
>>>       xnextent_t *extent;
>>> +     spl_t s;
>>>
>>>       /*
>>>        * Perform some parametrical checks first.
>>> @@ -232,12 +240,71 @@ int xnheap_init(xnheap_t *heap,
>>>
>>>       appendq(&heap->extents, &extent->link);
>>>
>>> +     vsnprintf(heap->name, sizeof(heap->name), name, args);
>>> +
>>> +     xnlock_get_irqsave(&nklock, s);
>>> +     appendq(&heapq, &heap->stat_link);
>>> +     heapq_rev++;
>>> +     xnlock_put_irqrestore(&nklock, s);
>>> +
>>>       xnarch_init_display_context(heap);
>>>
>>>       return 0;
>>>  }
>>> +
>>> +int xnheap_init(xnheap_t *heap, void *heapaddr, u_long heapsize,
>>> +             u_long pagesize, const char *name, ...)
>>> +{
>>> +     va_list args;
>>> +     int ret;
>>> +
>>> +     va_start(args, name);
>>> +     ret = xnheap_init_va(heap, heapaddr, heapsize, pagesize, name, args);
>>> +     va_end(args);
>>> +
>>> +     return ret;
>>> +}
>>>  EXPORT_SYMBOL_GPL(xnheap_init);
>>>
>>> +/*!
>>> + * \fn xnheap_set_name(xnheap_t *heap,const char *name,...)
>>> + * \brief Overwrite the heap's name.
>> At some point, there should be a common base struct containing
>> everything needed to manipulate named objects, we would include in
>> higher level containers. It seems we are over-specializing quite generic
>> work.
> 
> Probably right (though this case remains special to some degree because
> names may consist of numeric IDs that are formatted on visualization). I
> think I should rename 'name' to 'display_name' or so to clarify that we
> are not dealing with a generic object name here.
> 

There wasn't any further feedback regarding this. What shall be the
direction now? I would really like to see this analysis feature in the
final 2.5 release so that people can check which heap overflowed on ENOMEM.

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-help] Xenomai on ARMadeus
       [not found]                           ` <fbc4f538a6f4d84cfe514aba0985a525.squirrel@domain.hid>
@ 2009-11-12 14:59                             ` Gilles Chanteperdrix
  0 siblings, 0 replies; 33+ messages in thread
From: Gilles Chanteperdrix @ 2009-11-12 14:59 UTC (permalink / raw)
  To: Gwenhaël Goavec-Merou; +Cc: Xenomai help

Gwenhaël Goavec-Merou wrote:
> Sorry for the late reply.
>> gwenhael.goavec wrote:
>>> On Mon, 02 Nov 2009 23:29:31 +0100
>>> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org>
>>> wrote:
>>>
>>>> gwenhael.goavec wrote:
>>>>> On Mon, 02 Nov 2009 19:38:51 +0100
>>>>> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org>
>>>>> wrote:
>>>>>
>>>>>> By the way, did you make any change to the I-pipe patch to get it
>>>>>> running on this board?
>>>>> The only change realized, is made because Armadeus and
>>>>> I-pipe does the same change on
>>>>> mach-imx/include/mach/imxfb.h :
>>>>>
>>>>> --- a/arch/arm/mach-imx/include/mach/imxfb.h
>>>>> +++ b/arch/arm/mach-imx/include/mach/imxfb.h
>>>>> @@ -14,7 +14,6 @@
>>>>>  #define PCR_BPIX_8 (3 << 25)
>>>>>  #define PCR_BPIX_12    (4 << 25)
>>>>>  #define PCR_BPIX_16    (4 << 25)
>>>>> +#define PCR_BPIX_MASK   (7 << 25)
>>>>>  #define PCR_PIXPOL (1 << 24)
>>>>>  #define PCR_FLMPOL (1 << 23)
>>>>>  #define PCR_LPPOL  (1 << 22)
>>>>>
>>>>> I take this opportunity to ask a question:
>>>>> on APF27 with imx27, I found that the system freeze
>>>>> when an interrupt handler is in place and an
>>>>> interruption happens.
>>>>> if interrupt handler is in place without interruption no
>>>>> problems and if interruption happens without interrupt
>>>>> handler it's ok too.
>>>>>
>>>>> If someone could help or advise me.
>>>>> thank you very much
>> The reason I knew for this problem was if handle_edge_irq was used for
>> some interrupts. But problems of this kind for IMX GPIOs are fixed in
>> 1.3-03.
>>
>> Do you have this problem with in-kernel or out-of-tree drivers? With
>> real-time or non real-time interrupts? Could you show us
>> /proc/interrupts before the system freezes?
>>
> I use an RTDM real-time driver for testing.
> The GPIO interrupt doesn't appear in /proc/interrupts.
> 
> ~ # cat /proc/interrupts
>            CPU0
>  20:         69           -  IMX-uart
>  26:       1664           -  i.MX Timer Tick
>  29:      84439           -  mxc_nd
>  50:         75           -  fec
>  53:          0           -  VPU_CODEC_IRQ
> Err:          0

Only Linux domain interrupts appear in /proc/interrupts.
Xenomai domain interrupts appear in /proc/xenomai/irq.

Now, if you are sure you are using the latest I-pipe patch, you should try
to follow the course of the interrupt (masking, acking, calling the
handler, unmasking) to understand what happens. The most likely problem is
that your interrupt handler gets called in an infinite loop, for some
reason. The reason may be that the interrupt controller is configured
for the wrong type (edge/level) of irq, or simply that your interrupt
handler is bogus, or something else.

We could help you if you showed us the driver code, and told us what
kind of interrupt (edge, level) the hardware is using.
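
For reference, a bare-bones RTDM irq test module looks like the sketch
below; the irq number and the device-level acknowledge step are
placeholders for whatever your board code actually uses:

#include <linux/module.h>
#include <rtdm/rtdm_driver.h>

#define TEST_IRQ	42	/* placeholder: the GPIO irq under test */

static rtdm_irq_t test_irq;

static int test_irq_handler(rtdm_irq_t *irq_handle)
{
	/*
	 * Acknowledge/clear the interrupt at the device level here;
	 * forgetting this on a level-triggered line is a classic way
	 * of re-entering the handler forever.
	 */
	return RTDM_IRQ_HANDLED;
}

static int __init test_init(void)
{
	int err;

	err = rtdm_irq_request(&test_irq, TEST_IRQ, test_irq_handler,
			       0, "rtdm-irq-test", NULL);
	if (err)
		return err;

	/* Harmless if the line is already enabled by rtdm_irq_request(). */
	return rtdm_irq_enable(&test_irq);
}

static void __exit test_exit(void)
{
	rtdm_irq_free(&test_irq);
}

module_init(test_init);
module_exit(test_exit);
MODULE_LICENSE("GPL");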

You could also try to implement the driver as a plain Linux driver and
run it over both the Xenomai-patched kernel and a plain Linux kernel, to
see whether the problem is in your interrupt handling code, in the I-pipe
irq handling code, or somewhere else.

-- 
                                          Gilles



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-core] [PATCH v3 9/9] nucleus: Include all heaps in statistics
  2009-11-11 12:59       ` Jan Kiszka
@ 2009-11-15 17:38         ` Philippe Gerum
  2009-11-16 12:38           ` Jan Kiszka
  0 siblings, 1 reply; 33+ messages in thread
From: Philippe Gerum @ 2009-11-15 17:38 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: xenomai

On Wed, 2009-11-11 at 13:59 +0100, Jan Kiszka wrote:
> Jan Kiszka wrote:
> > Philippe Gerum wrote:
> >> On Tue, 2009-10-20 at 13:37 +0200, Jan Kiszka wrote:
> >>> This extends /proc/xenomai/heap with statistics about all currently used
> >>> heaps. It takes care to flush nklock while iterating of this potentially
> >>> long list.
> >>>
> >>> Signed-off-by: Jan Kiszka <jan.kiszka@domain.hid>
> >>> ---
> >>>
> >>>  include/nucleus/heap.h    |   12 +++-
> >>>  ksrc/drivers/ipc/iddp.c   |    3 +
> >>>  ksrc/drivers/ipc/xddp.c   |    6 ++
> >>>  ksrc/nucleus/heap.c       |  131 ++++++++++++++++++++++++++++++++++++++++-----
> >>>  ksrc/nucleus/module.c     |    2 -
> >>>  ksrc/nucleus/pod.c        |    5 +-
> >>>  ksrc/nucleus/shadow.c     |    5 +-
> >>>  ksrc/skins/native/heap.c  |    6 +-
> >>>  ksrc/skins/native/pipe.c  |    4 +
> >>>  ksrc/skins/native/queue.c |    6 +-
> >>>  ksrc/skins/posix/shm.c    |    4 +
> >>>  ksrc/skins/psos+/rn.c     |    6 +-
> >>>  ksrc/skins/rtai/shm.c     |    7 ++
> >>>  ksrc/skins/vrtx/heap.c    |    6 +-
> >>>  ksrc/skins/vrtx/syscall.c |    3 +
> >>>  15 files changed, 169 insertions(+), 37 deletions(-)
> >>>
> >>> diff --git a/include/nucleus/heap.h b/include/nucleus/heap.h
> >>> index 44db738..f653cd7 100644
> >>> --- a/include/nucleus/heap.h
> >>> +++ b/include/nucleus/heap.h
> >>> @@ -115,6 +115,10 @@ typedef struct xnheap {
> >>>
> >>>       XNARCH_DECL_DISPLAY_CONTEXT();
> >>>
> >>> +     xnholder_t stat_link;   /* Link in heapq */
> >>> +
> >>> +     char name[48];
> >> s,48,XNOBJECT_NAME_LEN
> > 
> > OK, but XNOBJECT_NAME_LEN+16 (due to class prefixes and additional
> > information like the minor ID).
> > 
> >>> +
> >>>  } xnheap_t;
> >>>
> >>>  extern xnheap_t kheap;
> >>> @@ -202,7 +206,8 @@ void xnheap_cleanup_proc(void);
> >>>
> >>>  int xnheap_init_mapped(xnheap_t *heap,
> >>>                      u_long heapsize,
> >>> -                    int memflags);
> >>> +                    int memflags,
> >>> +                    const char *name, ...);
> >>>
> >> The va_list is handy, but this breaks the common pattern used throughout
> >> the rest of the nucleus, based on passing pre-formatted labels. So
> >> either we make all creation calls use va_lists (but xnthread would need
> >> more work), or we make xnheap_init_mapped use the not-so-handy current
> >> form.
> >>
> >> Actually, providing xnheap_set_name() and a name parameter/va_list to
> >> xnheap_init* is one too many, this clutters an inner interface
> >> uselessly. The latter should go away, assuming that anon heaps may still
> >> exist.
> > 
> > If we want to print all heaps, we should at least set a name indicating
> > their class. And therefore we need to pass a descriptive name along with
> > /every/ heap initialization. Forcing the majority of xnheap_init users
> > to additionally issue xnheap_set_name is the cluttering a wanted to
> > avoid. Only a minority needs this split-up, and therefore you see both
> > interfaces in my patch.
> > 
> >>>  void xnheap_destroy_mapped(xnheap_t *heap,
> >>>                          void (*release)(struct xnheap *heap),
> >>> @@ -224,7 +229,10 @@ void xnheap_destroy_mapped(xnheap_t *heap,
> >>>  int xnheap_init(xnheap_t *heap,
> >>>               void *heapaddr,
> >>>               u_long heapsize,
> >>> -             u_long pagesize);
> >>> +             u_long pagesize,
> >>> +             const char *name, ...);
> >>> +
> >>> +void xnheap_set_name(xnheap_t *heap, const char *name, ...);
> >>>
> >>>  void xnheap_destroy(xnheap_t *heap,
> >>>                   void (*flushfn)(xnheap_t *heap,
> >>> diff --git a/ksrc/drivers/ipc/iddp.c b/ksrc/drivers/ipc/iddp.c
> >>> index a407946..b6382f1 100644
> >>> --- a/ksrc/drivers/ipc/iddp.c
> >>> +++ b/ksrc/drivers/ipc/iddp.c
> >>> @@ -559,7 +559,8 @@ static int __iddp_bind_socket(struct rtipc_private *priv,
> >>>               }
> >>>
> >>>               ret = xnheap_init(&sk->privpool,
> >>> -                               poolmem, poolsz, XNHEAP_PAGE_SIZE);
> >>> +                               poolmem, poolsz, XNHEAP_PAGE_SIZE,
> >>> +                               "ippd: %d", port);
> >>>               if (ret) {
> >>>                       xnarch_free_host_mem(poolmem, poolsz);
> >>>                       goto fail;
> >>> diff --git a/ksrc/drivers/ipc/xddp.c b/ksrc/drivers/ipc/xddp.c
> >>> index f62147a..a5dafef 100644
> >>> --- a/ksrc/drivers/ipc/xddp.c
> >>> +++ b/ksrc/drivers/ipc/xddp.c
> >>> @@ -703,7 +703,7 @@ static int __xddp_bind_socket(struct rtipc_private *priv,
> >>>               }
> >>>
> >>>               ret = xnheap_init(&sk->privpool,
> >>> -                               poolmem, poolsz, XNHEAP_PAGE_SIZE);
> >>> +                               poolmem, poolsz, XNHEAP_PAGE_SIZE, "");
> >>>               if (ret) {
> >>>                       xnarch_free_host_mem(poolmem, poolsz);
> >>>                       goto fail;
> >>> @@ -746,6 +746,10 @@ static int __xddp_bind_socket(struct rtipc_private *priv,
> >>>       sk->minor = ret;
> >>>       sa->sipc_port = ret;
> >>>       sk->name = *sa;
> >>> +
> >>> +     if (poolsz > 0)
> >>> +             xnheap_set_name(sk->bufpool, "xddp: %d", sa->sipc_port);
> >>> +
> >>>       /* Set default destination if unset at binding time. */
> >>>       if (sk->peer.sipc_port < 0)
> >>>               sk->peer = *sa;
> >>> diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
> >>> index 96c46f8..793d1c5 100644
> >>> --- a/ksrc/nucleus/heap.c
> >>> +++ b/ksrc/nucleus/heap.c
> >>> @@ -76,6 +76,9 @@ EXPORT_SYMBOL_GPL(kheap);
> >>>  xnheap_t kstacks;    /* Private stack pool */
> >>>  #endif
> >>>
> >>> +static DEFINE_XNQUEUE(heapq);        /* Heap list for /proc reporting */
> >>> +static unsigned long heapq_rev;
> >>> +
> >>>  static void init_extent(xnheap_t *heap, xnextent_t *extent)
> >>>  {
> >>>       caddr_t freepage;
> >>> @@ -108,7 +111,7 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
> >>>   */
> >>>
> >>>  /*!
> >>> - * \fn xnheap_init(xnheap_t *heap,void *heapaddr,u_long heapsize,u_long pagesize)
> >>> + * \fn xnheap_init(xnheap_t *heap,void *heapaddr,u_long heapsize,u_long pagesize,const char *name,...)
> >>>   * \brief Initialize a memory heap.
> >>>   *
> >>>   * Initializes a memory heap suitable for time-bounded allocation
> >>> @@ -145,6 +148,10 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
> >>>   * best one for your needs. In the current implementation, pagesize
> >>>   * must be a power of two in the range [ 8 .. 32768 ] inclusive.
> >>>   *
> >>> + * @param name Name displayed in statistic outputs. This parameter can
> >>> + * be a format string, in which case succeeding parameters will be used
> >>> + * to resolve the final name.
> >>> + *
> >>>   * @return 0 is returned upon success, or one of the following error
> >>>   * codes:
> >>>   *
> >>> @@ -161,12 +168,13 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
> >>>   * Rescheduling: never.
> >>>   */
> >>>
> >>> -int xnheap_init(xnheap_t *heap,
> >>> -             void *heapaddr, u_long heapsize, u_long pagesize)
> >>> +static int xnheap_init_va(xnheap_t *heap, void *heapaddr, u_long heapsize,
> >>> +                       u_long pagesize, const char *name, va_list args)
> >>>  {
> >>>       unsigned cpu, nr_cpus = xnarch_num_online_cpus();
> >>>       u_long hdrsize, shiftsize, pageshift;
> >>>       xnextent_t *extent;
> >>> +     spl_t s;
> >>>
> >>>       /*
> >>>        * Perform some parametrical checks first.
> >>> @@ -232,12 +240,71 @@ int xnheap_init(xnheap_t *heap,
> >>>
> >>>       appendq(&heap->extents, &extent->link);
> >>>
> >>> +     vsnprintf(heap->name, sizeof(heap->name), name, args);
> >>> +
> >>> +     xnlock_get_irqsave(&nklock, s);
> >>> +     appendq(&heapq, &heap->stat_link);
> >>> +     heapq_rev++;
> >>> +     xnlock_put_irqrestore(&nklock, s);
> >>> +
> >>>       xnarch_init_display_context(heap);
> >>>
> >>>       return 0;
> >>>  }
> >>> +
> >>> +int xnheap_init(xnheap_t *heap, void *heapaddr, u_long heapsize,
> >>> +             u_long pagesize, const char *name, ...)
> >>> +{
> >>> +     va_list args;
> >>> +     int ret;
> >>> +
> >>> +     va_start(args, name);
> >>> +     ret = xnheap_init_va(heap, heapaddr, heapsize, pagesize, name, args);
> >>> +     va_end(args);
> >>> +
> >>> +     return ret;
> >>> +}
> >>>  EXPORT_SYMBOL_GPL(xnheap_init);
> >>>
> >>> +/*!
> >>> + * \fn xnheap_set_name(xnheap_t *heap,const char *name,...)
> >>> + * \brief Overwrite the heap's name.
> >> At some point, there should be a common base struct containing
> >> everything needed to manipulate named objects, we would include in
> >> higher level containers. It seems we are over-specializing quite generic
> >> work.
> > 
> > Probably right (though this case remains special to some degree because
> > names may consist of numeric IDs that are formatted on visualization). I
> > think I should rename 'name' to 'display_name' or so to clarify that we
> > are not dealing with a generic object name here.
> > 
> 
> There wasn't any further feedback regarding this. What shall be the
> direction now? I would really like to see this analysis feature in final
> 2.5 so that people can check which heap overflew on ENOMEM.
> 

Same answer as previously I'm afraid. Adding a string format arg list to
xnheap_init(), which already takes 4 args, would just clutter the
interface, way more than introducing a dedicated routine to attach a
label to a heap. E.g.

-		err = xnheap_init(&pipe->privpool, poolmem, poolsize, XNHEAP_PAGE_SIZE);
+		err = xnheap_init(&pipe->privpool, poolmem, poolsize,
+				  XNHEAP_PAGE_SIZE,
+				  "rt_pipe: %d / %s", minor, name);

introduces a 7-args call form, which is not that nice, especially since
I'm trying now to reduce those abuses throughout the code (see
xnpod_init_thread/start_thread). 

Additionally, we want any "name" property to be XNOBJECT_NAME_LEN long, 
and nothing less or more, because this is a common assumption everywhere
in the code, and I see no reason for xnheap to diverge from that. Not to
speak of the fact that such a name is a natural candidate for index key
for registration purpose, and the descriptive labels introduced would
not be valid keys.

I would rather pick your suggestion to introduce a display-oriented name
we could call a label instead, to make clear that we are only dealing
with pretty-printing in /proc.

I.e.

- xnheap_init() remains unchanged
- xnheap_set_label(fmt, args...) is introduced
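
Something along these lines would do (sketch only; assuming the name[]
array in xnheap_t becomes label[], and that nklock is good enough to
serialize updates):

void xnheap_set_label(xnheap_t *heap, const char *label, ...)
{
	va_list args;
	spl_t s;

	va_start(args, label);
	xnlock_get_irqsave(&nklock, s);
	vsnprintf(heap->label, sizeof(heap->label), label, args);
	xnlock_put_irqrestore(&nklock, s);
	va_end(args);
}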

> Jan
> 


-- 
Philippe.




^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Xenomai-core] [PATCH v3 9/9] nucleus: Include all heaps in statistics
  2009-11-15 17:38         ` Philippe Gerum
@ 2009-11-16 12:38           ` Jan Kiszka
  0 siblings, 0 replies; 33+ messages in thread
From: Jan Kiszka @ 2009-11-16 12:38 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

Philippe Gerum wrote:
> ...
> I would rather pick your suggestion to introduce a display-oriented name
> we could call a label instead, to make clear that we are only dealing
> with pretty-printing in /proc.
> 
> I.e.
> 
> - xnheap_init() remains unchanged
> - xnheap_set_label(fmt, args...) is introduced
> 

OK, will take this route!

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2009-11-16 12:38 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-10-20 11:37 [Xenomai-core] [PATCH v3 0/9] heap setup/cleanup fixes, refactorings & more Jan Kiszka
2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 3/9] nucleus: Fix race window in heap mapping procedure Jan Kiszka
2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 4/9] nucleus: xnheap_destroy does not fail Jan Kiszka
2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 2/9] nucleus: Use Linux spin lock for heap list management Jan Kiszka
2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 1/9] native: Release fastlock to the proper heap Jan Kiszka
2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped Jan Kiszka
2009-10-24 17:22   ` Philippe Gerum
2009-11-02 16:04     ` Philippe Gerum
2009-11-02 16:41       ` Jan Kiszka
2009-11-02 16:51         ` Philippe Gerum
2009-11-02 16:57           ` Jan Kiszka
2009-11-02 18:01             ` Jan Kiszka
2009-11-02 18:19               ` [Xenomai-help] Xenomai on ARMadeus Pierre Ficheux
2009-11-02 18:22                 ` Gilles Chanteperdrix
2009-11-02 18:38                 ` Gilles Chanteperdrix
2009-11-02 19:19                   ` gwenhael.goavec
2009-11-02 22:29                     ` Gilles Chanteperdrix
2009-11-03  7:36                       ` gwenhael.goavec
     [not found]                       ` <20091103082204.248eed59@domain.hid>
2009-11-04 13:14                         ` Gilles Chanteperdrix
     [not found]                           ` <fbc4f538a6f4d84cfe514aba0985a525.squirrel@domain.hid>
2009-11-12 14:59                             ` Gilles Chanteperdrix
2009-11-02 18:26               ` [Xenomai-core] [PATCH v3 5/9] nucleus: Avoid returning errors from xnheap_destroy_mapped Philippe Gerum
2009-11-03  8:26                 ` Jan Kiszka
2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 6/9] rtai: Try to fix _shm_free Jan Kiszka
2009-10-24 17:25   ` Philippe Gerum
2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 7/9] native: Do not requeue on auto-cleanup errors Jan Kiszka
2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 9/9] nucleus: Include all heaps in statistics Jan Kiszka
2009-10-20 23:41   ` Philippe Gerum
2009-10-22 10:52     ` Jan Kiszka
2009-11-11 12:59       ` Jan Kiszka
2009-11-15 17:38         ` Philippe Gerum
2009-11-16 12:38           ` Jan Kiszka
2009-10-20 11:37 ` [Xenomai-core] [PATCH v3 8/9] native: Fix memory leak on heap/queue auto-deletion Jan Kiszka
2009-10-22 10:30   ` [Xenomai-core] [PATCH] native: Avoid double release on queue/heap auto-cleanup Jan Kiszka
