From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: xen-devel@lists.xen.org
Subject: Re: DomU application crashes while mmap'ing device memory on x86_64
Date: Tue, 22 Nov 2016 20:27:32 +0200 [thread overview]
Message-ID: <CANY+8dfwb98nyZkx+cxJjjLH002_h0wHn3cyyn32MpLMoqh_5Q@mail.gmail.com> (raw)
In-Reply-To: <CANY+8deC=H_McEb2jZsnceaVUcSbzTVoEY_Ysn5q4m4jLG4QYg@mail.gmail.com>
Hi,
just wanted to bump this, as I am now seeing the same issue on real HW
(x86_64):
Nov 14 10:30:18 DomU kernel: [ 1169.569936] [<ffffffff8101d60c>]
xen_mc_flush+0x19c/0x1b0
Thank you in advance,
Oleksandr
On Mon, Nov 14, 2016 at 6:07 PM, Oleksandr Andrushchenko <andr2000@gmail.com> wrote:
> Hi, there!
>
> Sorry for the long read ahead, but it seems I've got stuck...
>
> I am working on a PV driver and facing an mmap issue.
> This actually happens when user-space tries to mmap
> the memory allocated by the driver:
>
> cma_obj->vaddr = dma_alloc_wc(drm->dev, size, &cma_obj->paddr,
> GFP_KERNEL | __GFP_NOWARN);
>
> and the mapping is done with:
>
> vma->vm_flags &= ~VM_PFNMAP;
> vma->vm_pgoff = 0;
>
> ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
> cma_obj->paddr, vma->vm_end - vma->vm_start);
>
> dma_mmap_wc() returns 0 (success), but I see this in the DomU kernel log:
>
> Nov 14 10:30:18 DomU kernel: [ 1169.569909] ------------[ cut here
> ]------------
> Nov 14 10:30:18 DomU kernel: [ 1169.569911] WARNING: CPU: 1 PID: 5146 at
> /home/kernel/COD/linux/arch/x86/xen/multicalls.c:129
> xen_mc_flush+0x19c/0x1b0
> Nov 14 10:30:18 DomU kernel: [ 1169.569912] Modules linked in:
> xen_drmfront(OE) drm_kms_helper(OE) drm(OE) fb_sys_fops syscopyarea
> sysfillrect sysimgblt crct10dif_pclmul crc32_pclmul ghash_clmulni_intel
> aesni_intel aes_x86_64 lrw glue_helper ablk_helper cryptd intel_rapl_perf
> autofs4 [last unloaded: xen_drmfront]
> Nov 14 10:30:18 DomU kernel: [ 1169.569919] CPU: 1 PID: 5146 Comm:
> lt-modetest Tainted: G W OE 4.9.0-040900rc3-generic #201610291831
> Nov 14 10:30:18 DomU kernel: [ 1169.569920] ffffc900406ffb10
> ffffffff81416bf2 0000000000000000 0000000000000000
> Nov 14 10:30:18 DomU kernel: [ 1169.569923] ffffc900406ffb50
> ffffffff8108361b 00000081406ffb30 ffff88003f90b8e0
> Nov 14 10:30:18 DomU kernel: [ 1169.569925] 0000000000000001
> 0000000000000010 0000000000000000 0000000000000201
> Nov 14 10:30:18 DomU kernel: [ 1169.569928] Call Trace:
> Nov 14 10:30:18 DomU kernel: [ 1169.569930] [<ffffffff81416bf2>]
> dump_stack+0x63/0x81
> Nov 14 10:30:18 DomU kernel: [ 1169.569932] [<ffffffff8108361b>]
> __warn+0xcb/0xf0
> Nov 14 10:30:18 DomU kernel: [ 1169.569934] [<ffffffff8108374d>]
> warn_slowpath_null+0x1d/0x20
> Nov 14 10:30:18 DomU kernel: [ 1169.569936] [<ffffffff8101d60c>]
> xen_mc_flush+0x19c/0x1b0
> Nov 14 10:30:18 DomU kernel: [ 1169.569938] [<ffffffff8101d716>]
> __xen_mc_entry+0xf6/0x150
> Nov 14 10:30:18 DomU kernel: [ 1169.569940] [<ffffffff81020476>]
> xen_extend_mmu_update+0x56/0xd0
> Nov 14 10:30:18 DomU kernel: [ 1169.569942] [<ffffffff81021d67>]
> xen_set_pte_at+0x177/0x2f0
> Nov 14 10:30:18 DomU kernel: [ 1169.569944] [<ffffffff811e064b>]
> remap_pfn_range+0x30b/0x430
> Nov 14 10:30:18 DomU kernel: [ 1169.569946] [<ffffffff815a8267>]
> dma_common_mmap+0x87/0xa0
> Nov 14 10:30:18 DomU kernel: [ 1169.569953] [<ffffffffc00ffa8f>]
> drm_gem_cma_mmap_obj+0x8f/0xa0 [drm]
> Nov 14 10:30:18 DomU kernel: [ 1169.569960] [<ffffffffc00ffac5>]
> drm_gem_cma_mmap+0x25/0x30 [drm]
> Nov 14 10:30:18 DomU kernel: [ 1169.569962] [<ffffffff811e79b5>]
> mmap_region+0x3a5/0x640
> Nov 14 10:30:18 DomU kernel: [ 1169.569964] [<ffffffff811e8096>]
> do_mmap+0x446/0x530
> Nov 14 10:30:18 DomU kernel: [ 1169.569966] [<ffffffff813b88b5>] ?
> common_mmap+0x45/0x50
> Nov 14 10:30:18 DomU kernel: [ 1169.569968] [<ffffffff813b8906>] ?
> apparmor_mmap_file+0x16/0x20
> Nov 14 10:30:18 DomU kernel: [ 1169.569970] [<ffffffff81377a5d>] ?
> security_mmap_file+0xdd/0xf0
> Nov 14 10:30:18 DomU kernel: [ 1169.569972] [<ffffffff811c8faa>]
> vm_mmap_pgoff+0xba/0xf0
> Nov 14 10:30:18 DomU kernel: [ 1169.569974] [<ffffffff811e5c01>]
> SyS_mmap_pgoff+0x1c1/0x290
> Nov 14 10:30:18 DomU kernel: [ 1169.569976] [<ffffffff8103313b>]
> SyS_mmap+0x1b/0x30
> Nov 14 10:30:18 DomU kernel: [ 1169.569978] [<ffffffff8188bbbb>]
> entry_SYSCALL_64_fastpath+0x1e/0xad
> Nov 14 10:30:18 DomU kernel: [ 1169.569979] ---[ end trace
> ce1796cb265ebe08 ]---
> Nov 14 10:30:18 DomU kernel: [ 1169.569982] ------------[ cut here
> ]------------
>
>
> And the output of xl dmesg shows:
>
> (XEN) memory.c:226:d0v0 Could not allocate order=9 extent: id=31
> memflags=0x40 (488 of 512)
> (d31) mapping kernel into physical memory
> (d31) about to get started...
> (XEN) d31 attempted to change d31v0's CR4 flags 00000620 -> 00040660
> (XEN) d31 attempted to change d31v1's CR4 flags 00000620 -> 00040660
> (XEN) traps.c:3657: GPF (0000): ffff82d0801a1a09 -> ffff82d08024b970
> (XEN) mm.c:1893:d31v0 Bad L1 flags 90
> (XEN) mm.c:1893:d31v0 Bad L1 flags 90
> (XEN) mm.c:1893:d31v0 Bad L1 flags 90
> (XEN) mm.c:1893:d31v0 Bad L1 flags 90
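A quick way to make sense of those two hypervisor messages is to decode the hex values against the architectural x86 bit positions. This is a small sketch of mine using the bit names from the Intel SDM, not anything taken from Xen's sources:

```python
# Decode the CR4 delta and the rejected L1 PTE flags from the
# "xl dmesg" output above, using architectural x86 bit positions.

CR4_BITS = {6: "MCE", 18: "OSXSAVE"}        # only the bits relevant here
PTE_BITS = {3: "PWT", 4: "PCD", 7: "PAT"}   # x86 PTE cacheability bits

def decode(value, names):
    """Return the names of the known set bits, lowest bit first."""
    return [name for bit, name in sorted(names.items()) if value & (1 << bit)]

# "attempted to change ... CR4 flags 00000620 -> 00040660":
# the XOR is what the guest tried to flip and Xen filtered out.
print(decode(0x00040660 ^ 0x00000620, CR4_BITS))  # ['MCE', 'OSXSAVE']

# "Bad L1 flags 90": 0x90 = PCD | PAT, i.e. the rejected PTE carries
# the PAT bit that a write-combining pgprot uses on x86.
print(decode(0x90, PTE_BITS))                     # ['PCD', 'PAT']
```

If the PAT bit really is what Xen objects to here, that would point at the write-combining attribute requested by dma_alloc_wc/dma_mmap_wc rather than at the DRM code itself, but I may be misreading the check in mm.c.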
>
> My setup is a little bit tricky... I am using a Xen setup running
> inside VirtualBox:
>
> 1. xl info:
> host : Dom0
> release : 4.4.0-45-generic
> version : #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016
> machine : x86_64
> nr_cpus : 2
> max_cpu_id : 1
> nr_nodes : 1
> cores_per_socket : 2
> threads_per_core : 1
> cpu_mhz : 3408
> hw_caps : 178bfbff:d6d82203:28100800:00000121:00000000:00842000:00000000:00000100
> virt_caps :
> total_memory : 2047
> free_memory : 11
> sharing_freed_memory : 0
> sharing_used_memory : 0
> outstanding_claims : 0
> free_cpus : 0
> xen_major : 4
> xen_minor : 8
> xen_extra : .0-rc
> xen_version : 4.8.0-rc
> xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p
> xen_scheduler : credit
> xen_pagesize : 4096
> platform_params : virt_start=0xffff800000000000
> xen_changeset :
> xen_commandline : placeholder
> cc_compiler : gcc (Debian 6.2.0-10) 6.2.0 20161027
> cc_compile_by : ijackson
> cc_compile_domain : chiark.greenend.org.uk
> cc_compile_date : Tue Nov 1 18:11:16 UTC 2016
> build_id : 3744fa5e7a5b01a0439ba4413e41a7a1c505d5ee
> xend_config_format : 4
>
> 2. DomU
> Linux DomU 4.9.0-040900rc3-generic #201610291831 SMP Sat Oct 29 22:32:46
> UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>
> Could anyone please give me any hint on what needs to
> be checked and how this can be resolved?
>
> Thank you,
> Oleksandr Andrushchenko
>
--
Best regards,
Oleksandr Andrushchenko
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Thread overview (5 messages):
2016-11-14 16:07 DomU application crashes while mmap'ing device memory on x86_64 Oleksandr Andrushchenko
2016-11-22 18:27 ` Oleksandr Andrushchenko [this message]
2016-11-30 19:00 ` Oleksandr Andrushchenko
2016-11-30 19:10 ` Andrew Cooper
2016-11-30 19:24 ` Oleksandr Andrushchenko