xen-devel.lists.xenproject.org archive mirror
* Question about mapping between domains
@ 2015-07-09 13:31 Oleksandr Dmytryshyn
  2015-07-13  9:04 ` Ian Campbell
  0 siblings, 1 reply; 12+ messages in thread
From: Oleksandr Dmytryshyn @ 2015-07-09 13:31 UTC (permalink / raw)
  To: xen-devel
  Cc: Keir Fraser, Ian Campbell, Tim Deegan, Ian Jackson,
	Stefano Stabellini, Jan Beulich

Hi all.

I'm trying to map and then unmap some memory from one domain to another,
for example from DomU to DomD. DomU is an unprivileged domain; DomD is a
privileged (driver) domain and is mapped 1:1. I use the typical approach:
allocate grant references and grant foreign access in DomU, then map by
grant references in DomD. Afterwards I unmap the mapped memory.
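
For reference, the DomU granting side looks roughly like this (just a
sketch; 'be_domid', 'gnt_refs' and 'first_pfn' are placeholders for my
real code):
--------------------------------------------------------------------------------
/* DomU side (sketch): grant each frame of the buffer to the backend */
int i, ref;

for (i = 0; i < n_mfns; i++) {
	ref = gnttab_grant_foreign_access(be_domid,
					  pfn_to_mfn(first_pfn + i),
					  0 /* read-write */);
	if (ref < 0)
		goto rollback;	/* revoke the refs granted so far */
	gnt_refs[i] = ref;
}
--------------------------------------------------------------------------------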

I want to map/unmap memory onto an existing buffer in DomD, and this
map/unmap procedure must be done many times. I've used the virtual block
device (VBD) driver as a reference, but there is one difference compared to
the VBD driver: I use a buffer which was previously allocated in another
driver (the DRM driver). I need to map a DRM dumb buffer from DomU into
DomD, whereas the VBD backend driver uses pages taken from
__get_free_pages().

Here is my mapping code (in DomD):
--------------------------------------------------------------------------------
/* map dumb fb: walk the physically contiguous CMA buffer frame by frame */
paddr = cma_obj->paddr;
for (i = 0; i < n_mfns; i++) {
	cur_pfn = __phys_to_pfn(paddr);
	vaddr = (unsigned long)pfn_to_kaddr(cur_pfn);

	/* struct page backing this frame of the existing buffer */
	pages_mfns[i] = pfn_to_page(cur_pfn);

	/* map the frontend's grant reference over this frame */
	gnttab_set_map_op(&map_mfns[i], vaddr, GNTMAP_host_map,
			  gnt_mfns[i], args->fe_domid);

	paddr += PAGE_SIZE;
}
ret = gnttab_map_refs(map_mfns, NULL, pages_mfns, n_mfns);
BUG_ON(ret);
--------------------------------------------------------------------------------
Here 'cma_obj' is the real object allocated in the DRM driver.

After mapping, everything works fine.

Here is my unmapping code (in DomD):
--------------------------------------------------------------------------------
paddr = cma_obj->paddr;
cur_idx = 0;
for (i = 0; i < n_mfns; i++) {
	if (handles_mfns[i] == DRMFRONT_INVALID_HANDLE) {
		/* for now */
		dev_err(dev->dev,
			"invalid handle[%d] -- could not use it\n", i);
		continue;
	}

	gnttab_set_unmap_op(&unmap_mfns[cur_idx],
			    (unsigned long)phys_to_virt(paddr),
			    GNTMAP_host_map,
			    handles_mfns[i]);

	handles_mfns[i] = DRMFRONT_INVALID_HANDLE;

	cur_idx++;
	paddr += PAGE_SIZE;

	/* flush a batch when it is full or we have reached the last frame */
	if (cur_idx == MAX_MAP_OP_COUNT || i == n_mfns - 1) {
		ret = gnttab_unmap_refs(unmap_mfns, NULL,
					&pages_mfns[i + 1 - cur_idx],
					cur_idx);
		BUG_ON(ret);

		cur_idx = 0;
	}
}
--------------------------------------------------------------------------------

The following crash appeared after the unmap (in DomD):
--------------------------------------------------------------------------------
Unhandled fault: terminal exception (0x002) at 0xcdbfb000
Internal error: : 2 [#1] PREEMPT SMP ARM
CPU: 1 PID: 853 Comm: drmback Not tainted 3.14.33-0-ivi-arm-rcar-m2-rt31-00060-g653c5ff-dirty #173
task: cfa9d800 ti: ce298000 task.ti: ce298000
PC is at __copy_from_user+0xcc/0x3b0
LR is at 0x6
pc : [<c019e73c>]    lr : [<00000006>]    psr: 00000013
sp : ce299ef4  ip : 0000001c  fp : ce299f44
r10: b652e9a4  r9 : ce298000  r8 : 00000004
r7 : cdbfb000  r6 : cfaa3580  r5 : b652e9a4  r4 : 00000004
r3 : 00000000  r2 : ffffffe4  r1 : b652e9a8  r0 : cdbfb000
Flags: nzcv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
Control: 10c5307d  Table: 5e23806a  DAC: 00000015
Process drmback (pid: 853, stack limit = 0xce298240)
Stack: (0xce299ef4 to 0xce29a000)
9ee0:                                              b652e9a4 cfaa3580 cdbfb000
9f00: 00000004 cdbfb000 00000004 00000000 00000004 c01efdec ce299f78 ce228700
9f20: 00000004 b652e9a4 ce299f78 00000004 ce298000 b652e9a4 ce299f74 ce299f48
9f40: c00ca158 c01efd88 c00e339c c00e28d8 00000000 00000000 ce228700 ce228701
9f60: 00000004 b652e9a4 ce299fa4 ce299f78 c00ca2cc c00ca094 00000000 00000000
9f80: 00018208 00000001 b652eb2c 00000004 c000f944 00000000 00000000 ce299fa8
9fa0: c000f7c0 c00ca294 00018208 00000001 00000006 b652e9a4 00000004 b652e9a4
9fc0: 00018208 00000001 b652eb2c 00000004 00000002 00000000 00000000 b652e9bc
9fe0: 00000000 b652e998 b6ea0f94 b6ea0fa4 80000010 00000006 18140681 076136f5
Backtrace: 
[<c01efd7c>] (evtchn_write) from [<c00ca158>] (vfs_write+0xd0/0x17c)
 r10:b652e9a4 r9:ce298000 r8:00000004 r7:ce299f78 r6:b652e9a4 r5:00000004
 r4:ce228700 r3:ce299f78
[<c00ca088>] (vfs_write) from [<c00ca2cc>] (SyS_write+0x44/0x84)
 r10:b652e9a4 r8:00000004 r7:ce228701 r6:ce228700 r5:00000000 r4:00000000
[<c00ca288>] (SyS_write) from [<c000f7c0>] (ret_fast_syscall+0x0/0x30)
 r10:00000000 r8:c000f944 r7:00000004 r6:b652eb2c r5:00000001 r4:00018208
Code: e4803004 e4804004 e4805004 e4806004 (e4807004) 
---[ end trace 0000000000000002 ]---
------------[ cut here ]------------
Unhandled fault: terminal exception (0x002) at 0xcd7fc000
Internal error: : 2 [#2] PREEMPT SMP ARM
CPU: 1 PID: 852 Comm: drmback Tainted: G      D W    3.14.33-0-ivi-arm-rcar-m2-rt31-00060-g653c5ff-dirty #173
task: cfa9ee00 ti: ce28a000 task.ti: ce28a000
PC is at __copy_from_user+0xcc/0x3b0
LR is at 0x8
pc : [<c019e73c>]    lr : [<00000008>]    psr: 00000013
sp : ce28bef4  ip : 0000001c  fp : ce28bf44
r10: b6d2e9a4  r9 : ce28a000  r8 : 00000004
r7 : cd7fc000  r6 : cfaa3800  r5 : b6d2e9a4  r4 : 00000004
r3 : 00000000  r2 : ffffffe4  r1 : b6d2e9a8  r0 : cd7fc000
Flags: nzcv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
Control: 10c5307d  Table: 5e23806a  DAC: 00000015
Process drmback (pid: 852, stack limit = 0xce28a240)
Stack: (0xce28bef4 to 0xce28c000)
bee0:                                              b6d2e9a4 cfaa3800 cd7fc000
bf00: 00000004 cd7fc000 00000004 00000000 00000004 c01efdec ce28bf78 ce228200
bf20: 00000004 b6d2e9a4 ce28bf78 00000004 ce28a000 b6d2e9a4 ce28bf74 ce28bf48
bf40: c00ca158 c01efd88 c00e339c c00e28d8 00000000 00000000 ce228200 ce228201
bf60: 00000004 b6d2e9a4 ce28bfa4 ce28bf78 c00ca2cc c00ca094 00000000 00000000
bf80: 00018018 00000001 b6d2eb2c 00000004 c000f944 00000000 00000000 ce28bfa8
bfa0: c000f7c0 c00ca294 00018018 00000001 00000009 b6d2e9a4 00000004 b6d2e9a4
bfc0: 00018018 00000001 b6d2eb2c 00000004 00000002 00000000 00000000 b6d2e9bc
bfe0: 00000000 b6d2e998 b6ea0f94 b6ea0fa4 80000010 00000009 48544044 08175716
Backtrace: 
[<c01efd7c>] (evtchn_write) from [<c00ca158>] (vfs_write+0xd0/0x17c)
 r10:b6d2e9a4 r9:ce28a000 r8:00000004 r7:ce28bf78 r6:b6d2e9a4 r5:00000004
 r4:ce228200 r3:ce28bf78
[<c00ca088>] (vfs_write) from [<c00ca2cc>] (SyS_write+0x44/0x84)
 r10:b6d2e9a4 r8:00000004 r7:ce228201 r6:ce228200 r5:00000000 r4:00000000
[<c00ca288>] (SyS_write) from [<c000f7c0>] (ret_fast_syscall+0x0/0x30)
 r10:00000000 r8:c000f944 r7:00000004 r6:b6d2eb2c r5:00000001 r4:00018018
Code: e4803004 e4804004 e4805004 e4806004 (e4807004)
--------------------------------------------------------------------------------

Here the addresses 0xcdbfb000 and 0xcd7fc000 are the addresses of the
unmapped DRM dumb buffers.

I've done some debugging. Before mapping, we have pages in DomD whose
pfns[] correspond to some mfns1[] (I checked this inside Xen). After
mapping, those pfns correspond to different mfns2[]. And finally, after
unmapping, the pfns[] correspond to invalid mfns.

Here is a short call chain from the hypervisor (unmap path):
--------------------------------------------------------------------------------
__gnttab_unmap_common() ->
	replace_grant_host_mapping() ->
		guest_physmap_remove_page()
--------------------------------------------------------------------------------

The problem is that Xen "takes a bite" of memory from DomD during the unmap
procedure, after which I see crashes (when the DRM driver tries to access
the now-missing pages).

Thus I need to restore the original mapping in DomD after the unmap.

We made a hack which restores the original mapping in DomD.

Here is diff for Xen:
--------------------------------------------------------------------------------
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index db5e5db..6c66734 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1060,12 +1060,13 @@ __gnttab_unmap_common(
     struct domain   *ld, *rd;
     struct grant_table *lgt, *rgt;
     struct active_grant_entry *act;
+    unsigned long    mfn;
     s16              rc = 0;
 
     ld = current->domain;
     lgt = ld->grant_table;
 
-    op->frame = (unsigned long)(op->dev_bus_addr >> PAGE_SHIFT);
+    op->frame = 0;
 
     if ( unlikely(op->handle >= lgt->maptrack_limit) )
     {
@@ -1145,11 +1146,21 @@ __gnttab_unmap_common(
 
     if ( (op->host_addr != 0) && (op->flags & GNTMAP_host_map) )
     {
-        if ( (rc = replace_grant_host_mapping(op->host_addr,
-                                              op->frame, op->new_addr, 
-                                              op->flags)) < 0 )
-            goto act_release_out;
+        if ( op->dev_bus_addr == 0 )
+        {
+            if ( (rc = replace_grant_host_mapping(op->host_addr,
+                                                  op->frame, op->new_addr,
+                                                  op->flags)) < 0 )
+                goto act_release_out;
 
+        }
+        else
+        {
+            mfn = (unsigned long)(op->dev_bus_addr >> PAGE_SHIFT);
+            if ( (rc = create_grant_host_mapping(op->host_addr, mfn,
+                                                 op->flags, 0)) < 0 )
+                goto act_release_out;
+        }
         ASSERT(act->pin & (GNTPIN_hstw_mask | GNTPIN_hstr_mask));
         op->map->flags &= ~GNTMAP_host_map;
         if ( op->flags & GNTMAP_readonly )
--------------------------------------------------------------------------------

Here is diff for Kernel:
--------------------------------------------------------------------------------
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index a5af2a2..c6fa51e 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -170,6 +170,22 @@ gnttab_set_unmap_op(struct gnttab_unmap_grant_ref *unmap, phys_addr_t addr,
 	unmap->dev_bus_addr = 0;
 }
 
+static inline void
+gnttab_set_unmap_restore_op(struct gnttab_unmap_grant_ref *unmap,
+			    phys_addr_t addr, phys_addr_t maddr, uint32_t flags,
+			    grant_handle_t handle)
+{
+	if (flags & GNTMAP_contains_pte)
+		unmap->host_addr = addr;
+	else if (xen_feature(XENFEAT_auto_translated_physmap))
+		unmap->host_addr = __pa(addr);
+	else
+		unmap->host_addr = addr;
+
+	unmap->handle = handle;
+	unmap->dev_bus_addr = maddr;
+}
+
 int arch_gnttab_map_shared(xen_pfn_t *frames, unsigned long nr_gframes,
 			   unsigned long max_nr_gframes,
 			   void **__shared);
--------------------------------------------------------------------------------

Xen side:
The main idea is that the 'dev_bus_addr' field is always '0' in the current
kernel implementation, so we reuse this field to pass an additional
parameter: the new machine address for the unmapped page.

Kernel side:
We introduced a new function which passes the additional 'maddr' parameter.
Xen will map the unmapped page back to this new maddr.

Here is my new unmap procedure (taking into account that DomD has a 1:1
mapping, where pfn == mfn):
--------------------------------------------------------------------------------
paddr = cma_obj->paddr;
cur_idx = 0;
for (i = 0; i < n_mfns; i++) {
	if (handles_mfns[i] == DRMFRONT_INVALID_HANDLE) {
		/* for now */
		dev_err(dev->dev,
			"invalid handle[%d] -- could not use it\n", i);
		continue;
	}

	gnttab_set_unmap_restore_op(&unmap_mfns[cur_idx],
				    (unsigned long)phys_to_virt(paddr),
				    paddr, GNTMAP_host_map,
				    handles_mfns[i]);

	handles_mfns[i] = DRMFRONT_INVALID_HANDLE;

	cur_idx++;
	paddr += PAGE_SIZE;

	/* flush a batch when it is full or we have reached the last frame */
	if (cur_idx == MAX_MAP_OP_COUNT || i == n_mfns - 1) {
		ret = gnttab_unmap_refs(unmap_mfns, NULL,
					&pages_mfns[i + 1 - cur_idx],
					cur_idx);
		BUG_ON(ret);

		cur_idx = 0;
	}
}
--------------------------------------------------------------------------------

With this hack I can map/unmap memory in DomD many times without any
crashes inside the DRM driver.

I have some questions:
1. Is this a correct solution?
2. Could this solution be considered normal (i.e. not a hack)?
3. If not, could anybody help me implement this the right way?


* Re: Question about mapping between domains
  2015-07-09 13:31 Question about mapping between domains Oleksandr Dmytryshyn
@ 2015-07-13  9:04 ` Ian Campbell
  2015-07-14 15:31   ` Oleksandr Dmytryshyn
  2015-07-17  7:43   ` Oleksandr Dmytryshyn
  0 siblings, 2 replies; 12+ messages in thread
From: Ian Campbell @ 2015-07-13  9:04 UTC (permalink / raw)
  To: Oleksandr Dmytryshyn
  Cc: Keir Fraser, Ian Jackson, Tim Deegan, xen-devel,
	Stefano Stabellini, Jan Beulich

On Thu, 2015-07-09 at 16:31 +0300, Oleksandr Dmytryshyn wrote:
> I have some questions:
> 1. Is this a correct solution?
> 2. Could this solution be considered as a normal (not hack)?
> 3. If not then could anybody help me to implement this in the right way?

The way we deal with this elsewhere in the kernel is that we only ever
do grant mappings over ballooned out pages, which are allocated via
gnttab_alloc_pages. That way when they are unmapped the page is expected
to be empty and no backing mfn is lost. The page can then subsequently
be ballooned back in as normal.
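
Roughly, for a single page (a sketch only, assuming a kernel new enough to
have gnttab_alloc_pages()/gnttab_free_pages(); names and error handling
here are illustrative, not the actual blkback code):
--------------------------------------------------------------------------------
#include <xen/grant_table.h>

/* Sketch: back a grant mapping with a ballooned-out page. */
static int map_one_grant(grant_ref_t ref, domid_t otherend,
			 struct page **pagep, grant_handle_t *handlep)
{
	struct gnttab_map_grant_ref map;
	struct page *page;
	unsigned long vaddr;
	int err;

	err = gnttab_alloc_pages(1, &page);	/* ballooned-out page */
	if (err)
		return err;

	vaddr = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
	gnttab_set_map_op(&map, vaddr, GNTMAP_host_map, ref, otherend);
	err = gnttab_map_refs(&map, NULL, &page, 1);
	if (err || map.status != GNTST_okay) {
		gnttab_free_pages(1, &page);
		return err ?: -EINVAL;
	}

	*pagep = page;
	*handlep = map.handle;
	return 0;
}

/* On unmap the page is empty again; gnttab_free_pages() balloons it
 * back in, so no backing mfn is lost. */
static void unmap_one_grant(struct page *page, grant_handle_t handle)
{
	struct gnttab_unmap_grant_ref unmap;
	unsigned long vaddr = (unsigned long)pfn_to_kaddr(page_to_pfn(page));

	gnttab_set_unmap_op(&unmap, vaddr, GNTMAP_host_map, handle);
	BUG_ON(gnttab_unmap_refs(&unmap, NULL, &page, 1));
	gnttab_free_pages(1, &page);
}
--------------------------------------------------------------------------------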

There is an additional quirk for a 1:1 mapped dom0 which is that we
don't actually decrease reservation when ballooning, but keep the 1:1
mfn in anticipation of ballooning it back in later.

If you can't arrange to use already ballooned buffers for your DMA
buffer then you will need to manually balloon it out before and balloon
it back in later.

You may also want to extend the dom0 1:1 quirk described above to your
1:1 mapped domD.

If you have sufficient control over/knowledge of the domD IPA space then
you could also try and arrange that the region used for these mappings
does not correspond to any real RAM in the guest (i.e. stick it in an
MMIO hole). That depends on you never needing to find an associated
struct page though, which will depend on your use case.

Ian.


* Re: Question about mapping between domains
  2015-07-13  9:04 ` Ian Campbell
@ 2015-07-14 15:31   ` Oleksandr Dmytryshyn
  2015-07-14 15:41     ` Oleksandr Dmytryshyn
  2015-07-14 15:49     ` Ian Campbell
  2015-07-17  7:43   ` Oleksandr Dmytryshyn
  1 sibling, 2 replies; 12+ messages in thread
From: Oleksandr Dmytryshyn @ 2015-07-14 15:31 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Keir Fraser, Ian Jackson, Tim Deegan, xen-devel,
	Stefano Stabellini, Jan Beulich

Hi, Ian. Thank you for the response.

We currently have 3 kernels: a thin Dom0 (privileged), DomD (a privileged
driver domain), and DomU (unprivileged).

On Mon, Jul 13, 2015 at 12:04 PM, Ian Campbell <ian.campbell@citrix.com> wrote:
> The way we deal with this elsewhere in the kernel is that we only ever
> do grant mappings over ballooned out pages, which are allocated via
> gnttab_alloc_pages. That way when they are unmapped the page is expected
> to be empty and no backing mfn is lost. The page can then subsequently
> be ballooned back in as normal.
We cannot use this approach because our DRM driver has already allocated
the memory which will be mapped later.

> There is an additional quirk for a 1:1 mapped dom0 which is that we
> don't actually decrease reservation when ballooning, but keep the 1:1
> mfn in anticipation of ballooning it back in later.
Could you please tell me a bit more about this quirk? How can it
be enabled?

> If you can't arrange to use already ballooned buffers for your DMA
> buffer then you will need to manually balloon it out before and balloon
> it back in later.
This is my case. I'll try to do this.

> You may also want to extend the dom0 1:1 quirk described above to your
> 1:1 mapped domD.
I will certainly do this.

> If you have sufficient control over/knowledge of the domD IPA space then
> you could also try and arrange that the region used for these mappings
> does not correspond to any real RAM in the guest (i.e. stick it in an
> MMIO hole). That depends on you never needing to find an associated
> struct page though, which will depend on your use case.
I will certainly do this.

> Ian.
>

Oleksandr Dmytryshyn | Product Engineering and Development
GlobalLogic
M +38.067.382.2525
www.globallogic.com



* Re: Question about mapping between domains
  2015-07-14 15:31   ` Oleksandr Dmytryshyn
@ 2015-07-14 15:41     ` Oleksandr Dmytryshyn
  2015-07-14 15:50       ` Ian Campbell
  2015-07-14 15:49     ` Ian Campbell
  1 sibling, 1 reply; 12+ messages in thread
From: Oleksandr Dmytryshyn @ 2015-07-14 15:41 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Keir Fraser, Ian Jackson, Tim Deegan, xen-devel,
	Stefano Stabellini, Jan Beulich

On Tue, Jul 14, 2015 at 6:31 PM, Oleksandr Dmytryshyn
<oleksandr.dmytryshyn@globallogic.com> wrote:
> > If you can't arrange to use already ballooned buffers for your DMA
> > buffer then you will need to manually balloon it out before and balloon
> > it back in later.
> This is my case. I'll try to do this.
Here is one question.
Could anybody tell me how to manually balloon a page in/out?


* Re: Question about mapping between domains
  2015-07-14 15:31   ` Oleksandr Dmytryshyn
  2015-07-14 15:41     ` Oleksandr Dmytryshyn
@ 2015-07-14 15:49     ` Ian Campbell
  1 sibling, 0 replies; 12+ messages in thread
From: Ian Campbell @ 2015-07-14 15:49 UTC (permalink / raw)
  To: Oleksandr Dmytryshyn
  Cc: Keir Fraser, Ian Jackson, Tim Deegan, xen-devel,
	Stefano Stabellini, Jan Beulich

On Tue, 2015-07-14 at 18:31 +0300, Oleksandr Dmytryshyn wrote:
> > There is an additional quirk for a 1:1 mapped dom0 which is that we
> > don't actually decrease reservation when ballooning, but keep the 1:1
> > mfn in anticipation of ballooning it back in later.
> Could you please tell me a bit more about this quirk? How can it
> be enabled?

It's enabled by the same dom0_11_mapping flag which the dom0 domain
builder uses; look for uses of is_domain_direct_mapped, in particular the
ones in xen/common/memory.c.

Ian.


* Re: Question about mapping between domains
  2015-07-14 15:41     ` Oleksandr Dmytryshyn
@ 2015-07-14 15:50       ` Ian Campbell
  2015-07-15  8:28         ` Oleksandr Dmytryshyn
  0 siblings, 1 reply; 12+ messages in thread
From: Ian Campbell @ 2015-07-14 15:50 UTC (permalink / raw)
  To: Oleksandr Dmytryshyn
  Cc: Keir Fraser, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Jan Beulich

On Tue, 2015-07-14 at 18:41 +0300, Oleksandr Dmytryshyn wrote:
> > > If you can't arrange to use already ballooned buffers for your DMA
> > > buffer then you will need to manually balloon it out before and balloon
> > > it back in later.
> > This is my case. I'll try to do this.
> Here is one question.
> Could anybody tell me how to manually balloon a page in/out?

Look at how the balloon driver does it, the hypercalls you want are
XENMEM_(increase|decrease)_reservation.
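
For a single pfn that is roughly (a sketch, not the actual balloon driver
code; note balloon.c brings a page back at a specific pfn with
XENMEM_populate_physmap rather than increase_reservation):
--------------------------------------------------------------------------------
#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>

/* Balloon a single pfn out of, or back into, this domain's physmap. */
static int balloon_one(unsigned int cmd, xen_pfn_t pfn)
{
	struct xen_memory_reservation r = {
		.nr_extents   = 1,
		.extent_order = 0,
		.domid        = DOMID_SELF,
	};

	set_xen_guest_handle(r.extent_start, &pfn);
	/* both ops return the number of extents actually processed */
	return HYPERVISOR_memory_op(cmd, &r) == 1 ? 0 : -EFAULT;
}

/* out: balloon_one(XENMEM_decrease_reservation, pfn);
 * in:  balloon_one(XENMEM_populate_physmap, pfn);    */
--------------------------------------------------------------------------------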

Ian.


* Re: Question about mapping between domains
  2015-07-14 15:50       ` Ian Campbell
@ 2015-07-15  8:28         ` Oleksandr Dmytryshyn
  2015-07-15 11:51           ` Stefano Stabellini
  0 siblings, 1 reply; 12+ messages in thread
From: Oleksandr Dmytryshyn @ 2015-07-15  8:28 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Keir Fraser, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Jan Beulich

Hi, Ian. Thank you for the response.

> Look at how the balloon driver does it, the hypercalls you want are
> XENMEM_(increase|decrease)_reservation.
I'll try to use those hypercalls.


* Re: Question about mapping between domains
  2015-07-15  8:28         ` Oleksandr Dmytryshyn
@ 2015-07-15 11:51           ` Stefano Stabellini
  2015-07-15 12:00             ` Ian Campbell
  0 siblings, 1 reply; 12+ messages in thread
From: Stefano Stabellini @ 2015-07-15 11:51 UTC (permalink / raw)
  To: Oleksandr Dmytryshyn
  Cc: Keir Fraser, Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Jan Beulich

On Wed, 15 Jul 2015, Oleksandr Dmytryshyn wrote:
> Hi, Ian. Thank You for the response.
> 
> > Look at how the balloon driver does it, the hypercalls you want are
> > XENMEM_(increase|decrease)_reservation.
> I'll try to use those hypercalls.

In modern Linux kernels, you just need to call gnttab_alloc_pages()
(see drivers/xen/grant-table.c:gnttab_alloc_pages).
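
E.g. (a minimal sketch):
--------------------------------------------------------------------------------
struct page *pages[16];

if (gnttab_alloc_pages(16, pages))	/* balloons 16 pages out */
	return -ENOMEM;
/* ... gnttab_map_refs()/gnttab_unmap_refs() over 'pages' ... */
gnttab_free_pages(16, pages);		/* balloons them back in */
--------------------------------------------------------------------------------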


* Re: Question about mapping between domains
  2015-07-15 11:51           ` Stefano Stabellini
@ 2015-07-15 12:00             ` Ian Campbell
  0 siblings, 0 replies; 12+ messages in thread
From: Ian Campbell @ 2015-07-15 12:00 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Keir Fraser, Ian Jackson, Oleksandr Dmytryshyn, Tim Deegan,
	xen-devel, Stefano Stabellini, Jan Beulich

On Wed, 2015-07-15 at 12:51 +0100, Stefano Stabellini wrote:
> On Wed, 15 Jul 2015, Oleksandr Dmytryshyn wrote:
> > Hi, Ian. Thank You for the response.
> > 
> > > Look at how the balloon driver does it, the hypercalls you want are
> > > XENMEM_(increase|decrease)_reservation.
> > I'll try to use those hypercalls.
> 
> In the modern Linux kernels, you just need to call gnttab_alloc_pages
> (see drivers/xen/grant-table.c:gnttab_alloc_pages).

The problem here is to grant-map pages to fill an existing buffer which
is already allocated/supplied elsewhere (in the GPU stack, I suppose).


* Re: Question about mapping between domains
  2015-07-13  9:04 ` Ian Campbell
  2015-07-14 15:31   ` Oleksandr Dmytryshyn
@ 2015-07-17  7:43   ` Oleksandr Dmytryshyn
  2015-07-17  8:59     ` Ian Campbell
  1 sibling, 1 reply; 12+ messages in thread
From: Oleksandr Dmytryshyn @ 2015-07-17  7:43 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Keir Fraser, Ian Jackson, Tim Deegan, xen-devel,
	Stefano Stabellini, Jan Beulich

Hi, Ian. Thank you for the tips.

On Mon, Jul 13, 2015 at 12:04 PM, Ian Campbell <ian.campbell@citrix.com> wrote:
> There is an additional quirk for a 1:1 mapped dom0 which is that we
> don't actually decrease reservation when ballooning, but keep the 1:1
> mfn in anticipation of ballooning it back in later.
We currently have this quirk enabled in DomD (the driver domain).

> If you can't arrange to use already ballooned buffers for your DMA
> buffer then you will need to manually balloon it out before and balloon
> it back in later.
I've tried this and everything works (I can map and then unmap memory in both
directions: DomU -> DomD and DomD -> DomU).

> You may also want to extend the dom0 1:1 quirk described above to your
> 1:1 mapped domD.
Currently this quirk is enabled in DomD. In this case I can map memory from
DomU to DomD (as is done in all PV drivers). But if this quirk is
enabled in DomU, I can also map memory from DomD to DomU.


* Re: Question about mapping between domains
  2015-07-17  7:43   ` Oleksandr Dmytryshyn
@ 2015-07-17  8:59     ` Ian Campbell
  2015-07-22 12:07       ` Oleksandr Dmytryshyn
  0 siblings, 1 reply; 12+ messages in thread
From: Ian Campbell @ 2015-07-17  8:59 UTC (permalink / raw)
  To: Oleksandr Dmytryshyn
  Cc: Keir Fraser, Ian Jackson, Tim Deegan, xen-devel,
	Stefano Stabellini, Jan Beulich

On Fri, 2015-07-17 at 10:43 +0300, Oleksandr Dmytryshyn wrote:
> Hi, Ian. Thank you for the tips.
> 
> On Mon, Jul 13, 2015 at 12:04 PM, Ian Campbell <ian.campbell@citrix.com> wrote:
> > There is an additional quirk for a 1:1 mapped dom0 which is that we
> > don't actually decrease reservation when ballooning, but keep the 1:1
> > mfn in anticipation of ballooning it back in later.
> We currently have this quirk enabled in DomD (the driver domain).
> 
> > If you can't arrange to use already ballooned buffers for your DMA
> > buffer then you will need to manually balloon it out before and balloon
> > it back in later.
> I've tried this and everything works (I can map and then unmap memory in both
> directions: DomU -> DomD and DomD -> DomU).
> 
> > You may also want to extend the dom0 1:1 quirk described above to your
> > 1:1 mapped domD.
> Currently this quirk is enabled in DomD. In this case I can map memory from
> DomU to DomD (as is done in all PV drivers). But if this quirk is
> enabled in DomU, I can also map memory from DomD to DomU.

Does this mean everything is working as you need, or is there a further
issue which needs addressing?

Ian.


* Re: Question about mapping between domains
  2015-07-17  8:59     ` Ian Campbell
@ 2015-07-22 12:07       ` Oleksandr Dmytryshyn
  0 siblings, 0 replies; 12+ messages in thread
From: Oleksandr Dmytryshyn @ 2015-07-22 12:07 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Keir Fraser, Ian Jackson, Tim Deegan, xen-devel,
	Stefano Stabellini, Jan Beulich

On Fri, Jul 17, 2015 at 11:59 AM, Ian Campbell <ian.campbell@citrix.com> wrote:
> Does this mean everything is working as you need, or is there a further
> issue which needs addressing?
Everything is working as needed. Thank you.

