* DomU application crashes while mmap'ing device memory on x86_64
@ 2016-11-14 16:07 Oleksandr Andrushchenko
From: Oleksandr Andrushchenko @ 2016-11-14 16:07 UTC (permalink / raw)
  To: xen-devel



Hi, there!

Sorry for the long read ahead, but it seems I've got stuck...

I am working on a PV driver and am facing an mmap issue.
It happens when user space tries to mmap
the memory allocated by the driver:

cma_obj->vaddr = dma_alloc_wc(drm->dev, size, &cma_obj->paddr,
      GFP_KERNEL | __GFP_NOWARN);

and mapping it:

vma->vm_flags &= ~VM_PFNMAP;
vma->vm_pgoff = 0;

ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
 cma_obj->paddr, vma->vm_end - vma->vm_start);

dma_mmap_wc returns 0, but I see the following in the DomU kernel logs:

Nov 14 10:30:18 DomU kernel: [ 1169.569909] ------------[ cut here
]------------
Nov 14 10:30:18 DomU kernel: [ 1169.569911] WARNING: CPU: 1 PID: 5146 at
/home/kernel/COD/linux/arch/x86/xen/multicalls.c:129
xen_mc_flush+0x19c/0x1b0
Nov 14 10:30:18 DomU kernel: [ 1169.569912] Modules linked in:
xen_drmfront(OE) drm_kms_helper(OE) drm(OE) fb_sys_fops syscopyarea
sysfillrect sysimgblt crct10dif_pclmul crc32_pclmul ghash_clmulni_intel
aesni_intel aes_x86_64 lrw glue_helper ablk_helper cryptd intel_rapl_perf
autofs4 [last unloaded: xen_drmfront]
Nov 14 10:30:18 DomU kernel: [ 1169.569919] CPU: 1 PID: 5146 Comm:
lt-modetest Tainted: G        W  OE   4.9.0-040900rc3-generic #201610291831
Nov 14 10:30:18 DomU kernel: [ 1169.569920]  ffffc900406ffb10
ffffffff81416bf2 0000000000000000 0000000000000000
Nov 14 10:30:18 DomU kernel: [ 1169.569923]  ffffc900406ffb50
ffffffff8108361b 00000081406ffb30 ffff88003f90b8e0
Nov 14 10:30:18 DomU kernel: [ 1169.569925]  0000000000000001
0000000000000010 0000000000000000 0000000000000201
Nov 14 10:30:18 DomU kernel: [ 1169.569928] Call Trace:
Nov 14 10:30:18 DomU kernel: [ 1169.569930]  [<ffffffff81416bf2>]
dump_stack+0x63/0x81
Nov 14 10:30:18 DomU kernel: [ 1169.569932]  [<ffffffff8108361b>]
__warn+0xcb/0xf0
Nov 14 10:30:18 DomU kernel: [ 1169.569934]  [<ffffffff8108374d>]
warn_slowpath_null+0x1d/0x20
Nov 14 10:30:18 DomU kernel: [ 1169.569936]  [<ffffffff8101d60c>]
xen_mc_flush+0x19c/0x1b0
Nov 14 10:30:18 DomU kernel: [ 1169.569938]  [<ffffffff8101d716>]
__xen_mc_entry+0xf6/0x150
Nov 14 10:30:18 DomU kernel: [ 1169.569940]  [<ffffffff81020476>]
xen_extend_mmu_update+0x56/0xd0
Nov 14 10:30:18 DomU kernel: [ 1169.569942]  [<ffffffff81021d67>]
xen_set_pte_at+0x177/0x2f0
Nov 14 10:30:18 DomU kernel: [ 1169.569944]  [<ffffffff811e064b>]
remap_pfn_range+0x30b/0x430
Nov 14 10:30:18 DomU kernel: [ 1169.569946]  [<ffffffff815a8267>]
dma_common_mmap+0x87/0xa0
Nov 14 10:30:18 DomU kernel: [ 1169.569953]  [<ffffffffc00ffa8f>]
drm_gem_cma_mmap_obj+0x8f/0xa0 [drm]
Nov 14 10:30:18 DomU kernel: [ 1169.569960]  [<ffffffffc00ffac5>]
drm_gem_cma_mmap+0x25/0x30 [drm]
Nov 14 10:30:18 DomU kernel: [ 1169.569962]  [<ffffffff811e79b5>]
mmap_region+0x3a5/0x640
Nov 14 10:30:18 DomU kernel: [ 1169.569964]  [<ffffffff811e8096>]
do_mmap+0x446/0x530
Nov 14 10:30:18 DomU kernel: [ 1169.569966]  [<ffffffff813b88b5>] ?
common_mmap+0x45/0x50
Nov 14 10:30:18 DomU kernel: [ 1169.569968]  [<ffffffff813b8906>] ?
apparmor_mmap_file+0x16/0x20
Nov 14 10:30:18 DomU kernel: [ 1169.569970]  [<ffffffff81377a5d>] ?
security_mmap_file+0xdd/0xf0
Nov 14 10:30:18 DomU kernel: [ 1169.569972]  [<ffffffff811c8faa>]
vm_mmap_pgoff+0xba/0xf0
Nov 14 10:30:18 DomU kernel: [ 1169.569974]  [<ffffffff811e5c01>]
SyS_mmap_pgoff+0x1c1/0x290
Nov 14 10:30:18 DomU kernel: [ 1169.569976]  [<ffffffff8103313b>]
SyS_mmap+0x1b/0x30
Nov 14 10:30:18 DomU kernel: [ 1169.569978]  [<ffffffff8188bbbb>]
entry_SYSCALL_64_fastpath+0x1e/0xad
Nov 14 10:30:18 DomU kernel: [ 1169.569979] ---[ end trace ce1796cb265ebe08
]---
Nov 14 10:30:18 DomU kernel: [ 1169.569982] ------------[ cut here
]------------


And the output of xl dmesg says:

(XEN) memory.c:226:d0v0 Could not allocate order=9 extent: id=31
memflags=0x40 (488 of 512)
(d31) mapping kernel into physical memory
(d31) about to get started...
(XEN) d31 attempted to change d31v0's CR4 flags 00000620 -> 00040660
(XEN) d31 attempted to change d31v1's CR4 flags 00000620 -> 00040660
(XEN) traps.c:3657: GPF (0000): ffff82d0801a1a09 -> ffff82d08024b970
(XEN) mm.c:1893:d31v0 Bad L1 flags 90
(XEN) mm.c:1893:d31v0 Bad L1 flags 90
(XEN) mm.c:1893:d31v0 Bad L1 flags 90
(XEN) mm.c:1893:d31v0 Bad L1 flags 90

My setup is a little bit tricky... I am using a Xen setup running
inside VirtualBox:

1. xl info:
host                   : Dom0
release                : 4.4.0-45-generic
version                : #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016
machine                : x86_64
nr_cpus                : 2
max_cpu_id             : 1
nr_nodes               : 1
cores_per_socket       : 2
threads_per_core       : 1
cpu_mhz                : 3408
hw_caps                :
178bfbff:d6d82203:28100800:00000121:00000000:00842000:00000000:00000100
virt_caps              :
total_memory           : 2047
free_memory            : 11
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 8
xen_extra              : .0-rc
xen_version            : 4.8.0-rc
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : placeholder
cc_compiler            : gcc (Debian 6.2.0-10) 6.2.0 20161027
cc_compile_by          : ijackson
cc_compile_domain      : chiark.greenend.org.uk
cc_compile_date        : Tue Nov  1 18:11:16 UTC 2016
build_id               : 3744fa5e7a5b01a0439ba4413e41a7a1c505d5ee
xend_config_format     : 4

2. DomU
Linux DomU 4.9.0-040900rc3-generic #201610291831 SMP Sat Oct 29 22:32:46
UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Could anyone please give me a hint on what needs to
be checked and how this can be resolved?

Thank you,
Oleksandr Andrushchenko


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: DomU application crashes while mmap'ing device memory on x86_64
From: Oleksandr Andrushchenko @ 2016-11-22 18:27 UTC (permalink / raw)
  To: xen-devel



Hi,

just wanted to bump this, as I now see the same issue on real HW
(x86_64):
Nov 14 10:30:18 DomU kernel: [ 1169.569936]  [<ffffffff8101d60c>]
xen_mc_flush+0x19c/0x1b0

Thank you in advance,
Oleksandr


On Mon, Nov 14, 2016 at 6:07 PM, Oleksandr Andrushchenko
<andr2000@gmail.com> wrote:

> [snip]



-- 
Best regards,
Oleksandr Andrushchenko


* Re: DomU application crashes while mmap'ing device memory on x86_64
From: Oleksandr Andrushchenko @ 2016-11-30 19:00 UTC (permalink / raw)
  To: xen-devel

I traced the problem down to vma->vm_page_prot, which in my case is set as:

vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

This sets the additional flags _PAGE_BIT_PSE/_PAGE_BIT_PAT + _PAGE_BIT_PCD,
so after that remap_pfn_range makes Xen complain.

(pgprot_noncached(vma->vm_page_prot) == 0x80000000000000b7)

If I change the prot to

vma->vm_page_prot = PAGE_SHARED;

(PAGE_SHARED == 0x8000000000000027), then I am able to mmap.

Can anyone please help me understand whether this (pgprot_noncached) is a
valid use-case for a DomU, and if so, why Xen cannot handle it?

Thank you,
Oleksandr

On 11/22/2016 08:27 PM, Oleksandr Andrushchenko wrote:
>
> Hi,
>
> just wanted to bump this as I also have the same issue on real HW now
> (x86_64)
>
> Nov 14 10:30:18 DomU kernel: [ 1169.569936]  [<ffffffff8101d60c>]
> xen_mc_flush+0x19c/0x1b0
>
> [snip]



* Re: DomU application crashes while mmap'ing device memory on x86_64
From: Andrew Cooper @ 2016-11-30 19:10 UTC (permalink / raw)
  To: Oleksandr Andrushchenko, xen-devel

On 30/11/16 19:00, Oleksandr Andrushchenko wrote:
> I traced the problem down to vma->vm_page_prot which
>
> in my case is set as:
>
> vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>
> This sets additional flags _PAGE_BIT_PSE/_PAGE_BIT_PAT +_PAGE_BIT_PCD
>
> so after that remap_pfn_range makes Xen complain.
>
> (pgprot_noncached(vma->vm_page_prot) == 0x80000000000000b7)
>
> If I change prot to
>
> vma->vm_page_prot = PAGE_SHARED;
>
> (PAGE_SHARED == 0x8000000000000027) then I am able to mmap.
>
> Can anyone please help me understand if this is a valid use-case for DomU
> (pgprot_noncached) and if so why Xen cannot make it?
>
> Thank you,
> Oleksandr

Superpages are not supported in a PV guest.

You can enable the use of 2MB superpages for PV guests by booting Xen
with the "allowsuperpage" command line option, but quite a few features
are broken in combination with PV superpages, and this area has been a
ripe source of security bugs.

Unprivileged guests (i.e. ones without hardware) are not permitted to
make mappings with anything other than a writeback memory type, because
all kinds of chaos can ensue if the guest constructs aliasing mappings
with different cacheabilities, and it is prohibitively expensive for Xen
to track for auditing purposes.

~Andrew


* Re: DomU application crashes while mmap'ing device memory on x86_64
From: Oleksandr Andrushchenko @ 2016-11-30 19:24 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel

Thank you for the explanation, now it is clear.

BTW, is PAGE_SHARED the right choice in my case,
or should I use something else instead?

Thank you,
Oleksandr

On 11/30/2016 09:10 PM, Andrew Cooper wrote:
> On 30/11/16 19:00, Oleksandr Andrushchenko wrote:
>> [snip]
> Superpages are not supported in a PV guest.
>
> You can enable the use of 2mb superpages for PV guests by booting Xen
> with the "allowsuperpage" command line option, but quite a few features
> are broken in combination with PV superpages, and this area has been a
> ripe source of security bugs.
>
> Unprivileged guests (i.e. ones without hardware) are not permitted to
> make mappings with anything other than a writeback memory type, because
> all kinds of chaos can ensue if the guest constructs aliasing mappings
> with different cacheabilities, and it is prohibitively expensive for Xen
> to track for auditing purposes.
>
> ~Andrew


