From: Ben Sanda <Ben.Sanda@dornerworks.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Xentrace on Xilinx ARM
Date: Fri, 4 Mar 2016 20:53:40 +0000
Message-ID: <A2A949C387F3A54C9B4DAC2DCD2E9A85C38D51D4@Quimby.dw.local>


Hello,

My name is Ben Sanda; I'm a kernel/firmware developer with DornerWorks
Engineering. Our team is working on support for Xen on the new Xilinx
UltraScale+ MPSoC platform (ARM Cortex-A53 cores), and I have specifically been
tasked with characterizing performance, particularly that of the schedulers. I
wanted to use the xentrace tool to give us some timing and performance
benchmarks, but from searching the Xen mailing lists it appears xentrace has
not yet been ported to ARM: when you run it, it crashes, complaining about
allocating memory buffers. While we could write a quick custom program to
collect the data we need, I would rather help get xentrace working on ARM so
it is generally available to everyone and usable for any benchmarking moving
forward.

In searching for existing topics on this, my main reference has been the
"[Xen-devel] xentrace, arm, hvm" email chain started by Pavlo Suikov here:
http://xen.markmail.org/thread/zochggqxcifs5cdi

I have been following that email chain, which made some suggestions as to how
xentrace could be ported to ARM and where things are going wrong, but it never
came to any concrete conclusion. I gathered from the thread that there are
issues with the memory allocation for the xentrace buffers, due to MFN and PFN
mapping differences between ARM and x86 when attempting to map from the Xen
heap. I followed the suggestions posed in the thread as follows (performed
against the Xilinx/master Git version of the Xen source here:
https://github.com/Xilinx/xen):

First, in mm.c, I modified xenmem_add_to_physmap_one(), replacing its call to
rcu_lock_domain_by_any_id() to provide a special case for DOM_XEN domain
requests. For this I created a new function, get_pg_owner(), which performs
the same domain checks as the x86 get_pg_owner(). This allows the correct
struct domain to be returned for DOMID_XEN.

--- xen-src/xen/arch/arm/mm.c   2016-03-04 10:44:31.364572302 -0800
+++ xen-src_mod/xen/arch/arm/mm.c   2016-02-24 09:41:43.000000000 -0800
@@ -41,6 +41,7 @@
#include <xen/pfn.h>
 struct domain *dom_xen, *dom_io, *dom_cow;
+static struct domain *get_pg_owner(domid_t domid);
 /* Static start-of-day pagetables that we use before the allocators
  * are up. These are used by all CPUs during bringup before switching
@@ -1099,7 +1100,8 @@
     {
         struct domain *od;
         p2m_type_t p2mt;
-        od = rcu_lock_domain_by_any_id(foreign_domid);
+        od = get_pg_owner(foreign_domid);
+
         if ( od == NULL )
             return -ESRCH;
@@ -1132,7 +1134,15 @@
             return -EINVAL;
         }
-        mfn = page_to_mfn(page);
+        if(od->domain_id != DOMID_XEN)
+        {
+            mfn = page_to_mfn(page);
+        }
+        else
+        {
+            mfn = idx;
+        }
+
         t = p2m_map_foreign;
         rcu_unlock_domain(od);
@@ -1312,6 +1321,42 @@
     unmap_domain_page(p);
}
+static struct domain *get_pg_owner(domid_t domid)
+{
+    struct domain *pg_owner = NULL, *curr = current->domain;
+
+    if ( likely(domid == DOMID_SELF) )
+    {
+        pg_owner = rcu_lock_current_domain();
+        goto out;
+    }
+
+    if ( unlikely(domid == curr->domain_id) )
+    {
+        goto out;
+    }
+
+    switch ( domid )
+    {
+    case DOMID_IO:
+        pg_owner = rcu_lock_domain(dom_io);
+        break;
+    case DOMID_XEN:
+        /*printk("DOM_XEN Selected\n");*/
+        pg_owner = rcu_lock_domain(dom_xen);
+        break;
+    default:
+        if ( (pg_owner = rcu_lock_domain_by_id(domid)) == NULL )
+        {
+            break;
+        }
+        break;
+    }
+
+ out:
+    return pg_owner;
+}
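
For reference, here is my rough reading of the XENMAPSPACE_gmfn_foreign case in
xenmem_add_to_physmap_one() that both hunks above sit in. This is a simplified
paraphrase from memory (error handling and the XSM check omitted), not the
actual source, but it shows where get_page_from_gfn() ends up calling
p2m_lookup() on dom_xen, which is what the second change below addresses:

    case XENMAPSPACE_gmfn_foreign:
    {
        struct domain *od;
        p2m_type_t p2mt;

        /* Previously rcu_lock_domain_by_any_id(), which cannot return
         * dom_xen; the first hunk swaps in get_pg_owner() so DOMID_XEN
         * resolves to the dom_xen structure. */
        od = get_pg_owner(foreign_domid);
        if ( od == NULL )
            return -ESRCH;

        /* get_page_from_gfn() translates idx through the foreign domain's
         * p2m via p2m_lookup(); dom_xen has no populated p2m, so without
         * the p2m.c change below this lookup fails for the trace pages. */
        page = get_page_from_gfn(od, idx, &p2mt, P2M_ALLOC);
        if ( !page )
        {
            rcu_unlock_domain(od);
            return -EINVAL;
        }

        /* For dom_xen the "gfn" xentrace passes down is already an MFN,
         * hence using idx directly in the second hunk above. */
        mfn = (od->domain_id != DOMID_XEN) ? page_to_mfn(page) : idx;
        t = p2m_map_foreign;

        rcu_unlock_domain(od);
        break;
    }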

Second, I modified p2m_lookup() in p2m.c to account for the fact that xentrace
provides an MFN, not a PFN, to the domain lookup calls. It now checks for
DOM_XEN and, if found, returns the MFN directly instead of trying to translate
it from a PFN. It also sets the page type to p2m_ram_rw. (I guessed that was
the correct type for read/write access to the Xen heap; I'm not sure if that's
right.)

@@ -228,10 +228,19 @@
     paddr_t ret;
     struct p2m_domain *p2m = &d->arch.p2m;
-    spin_lock(&p2m->lock);
-    ret = __p2m_lookup(d, paddr, t);
-    spin_unlock(&p2m->lock);
+    if(d->domain_id != DOMID_XEN)
+    {
+        spin_lock(&p2m->lock);
+        ret = __p2m_lookup(d, paddr, t);
+        spin_unlock(&p2m->lock);
+    }
+    else
+    {
+        *t = p2m_ram_rw;
+        ret = paddr;
+    }
+
     return ret;
}
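
For completeness, the reason DOMID_XEN reaches this path at all is, as far as I
can tell, the way the xentrace tool maps the trace buffers from dom0: it
enables tracing with xc_tbuf_enable() and then maps the Xen-owned pages by
passing DOMID_XEN to libxc's foreign-mapping calls, which on ARM go through
privcmd and end up in the XENMEM_add_to_physmap path patched above. A minimal
sketch of that user-space side is below; the libxc calls are real, but please
treat the exact usage (argument units in particular) as an approximation rather
than the actual xentrace.c code:

    /* Minimal dom0-side sketch, loosely modelled on tools/xentrace/xentrace.c. */
    #include <stdio.h>
    #include <sys/mman.h>
    #include <xenctrl.h>

    int main(void)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        unsigned long tbuf_mfn = 0, tbuf_size = 0;

        if ( !xch )
            return 1;

        /* Ask Xen to allocate the per-cpu trace buffers (size in pages)
         * and return the MFN and size of the t_info metadata area. */
        if ( xc_tbuf_enable(xch, 256, &tbuf_mfn, &tbuf_size) )
        {
            fprintf(stderr, "xc_tbuf_enable failed\n");
            return 1;
        }

        /* Map the Xen-owned t_info page(s) read-only.  DOMID_XEN is the
         * key: on ARM this becomes the XENMAPSPACE_gmfn_foreign path
         * above, and the "gfn" handed to Xen is really an MFN. */
        void *t_info = xc_map_foreign_range(xch, DOMID_XEN, tbuf_size,
                                            PROT_READ, tbuf_mfn);
        if ( !t_info )
        {
            fprintf(stderr, "mapping t_info mfn 0x%lx failed\n", tbuf_mfn);
            return 1;
        }

        printf("t_info mapped at %p (mfn 0x%lx, %lu bytes)\n",
               t_info, tbuf_mfn, tbuf_size);
        return 0;
    }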

The result is that now when I run xentrace no errors are reported; however, I
also don't observe any trace records actually being collected. I can invoke
xentrace (as xentrace -D -S 256 -T 10 -e all out.bin) and see xentrace start
and out.bin get created, but the file is always empty.

[root@xilinx-dom0 ~]# xentrace -D -S 256 -T 10 -e all out.bin
change evtmask to 0xffff000
(XEN) xentrace: requesting 2 t_info pages for 256 trace pages on 4 cpus
(XEN) xentrace: p0 mfn 7ffb6 offset 65
(XEN) xentrace: p1 mfn 7fd7c offset 321
(XEN) xentrace: p2 mfn 7fc7c offset 577
(XEN) xentrace: p3 mfn 7fb7c offset 833
(XEN) xentrace: initialised
[root@xilinx-dom0 ~]# ls -l
total 5257236
-rwxrwxr-x    1 root   root   9417104 Feb 10  2016 Dom1-Kernel*
-rw-rw-r--    1 root   root   1073741824 Mar  4  2016 Dom1.img
-rw-r--r--    1 root   root   3221225472 Mar  4  2016 linaro-openembedded-fs.img
-rw-r--r--    1 root   root   0 Jan  1 00:00 out.bin
-rw-r--r--    1 root   root   1073741824 Mar  4  2016 ubuntu-core-fs.img
-rwxrwxr-x    1 root   root   4104 Mar  4  2016 xzd_bare.img*
[root@xilinx-dom0 ~]#
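
A quick way to check whether Xen is producing records at all would be to map
one of the per-cpu buffer pages printed above (e.g. mfn 7ffb6 for p0) and watch
the consumer/producer indices. Something like this rough, untested sketch, with
the two-field header layout copied from struct t_buf in
xen/include/public/trace.h:

    /* Rough diagnostic sketch: map CPU0's trace buffer header page and
     * watch the consumer/producer indices.  The MFN is the one Xen
     * printed for p0 above and will differ from boot to boot. */
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <xenctrl.h>

    struct t_buf_hdr {              /* same layout as struct t_buf */
        uint32_t cons;              /* next record consumed by tools */
        uint32_t prod;              /* next record produced by Xen   */
    };

    int main(void)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        unsigned long mfn = 0x7ffb6;    /* "xentrace: p0 mfn 7ffb6" */

        if ( !xch )
            return 1;

        volatile struct t_buf_hdr *buf =
            xc_map_foreign_range(xch, DOMID_XEN, 4096, PROT_READ, mfn);
        if ( !buf )
        {
            fprintf(stderr, "mapping mfn 0x%lx failed\n", mfn);
            return 1;
        }

        /* If prod stays at 0 while tracing is active, Xen never wrote a
         * record and the empty out.bin is expected; if prod advances,
         * the problem is on the xentrace/consumer side instead. */
        for ( int i = 0; i < 10; i++ )
        {
            printf("cons=%u prod=%u\n", buf->cons, buf->prod);
            sleep(1);
        }
        return 0;
    }

Does that seem like a reasonable way to tell which side is failing, or is there
a better place to look?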

Thank you for any assistance,

Benjamin Sanda
Embedded Software Engineer
616.389.6138
Ben.Sanda@dornerworks.com<mailto:Ben.Sanda@dornerworks.com>

DornerWorks, Ltd.
3445 Lake Eastbrook Blvd. SE
Grand Rapids, MI 49546




