* IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
@ 2020-05-15 20:29 Manuel Bouyer
  2020-05-15 21:00 ` Andrew Cooper
  0 siblings, 1 reply; 16+ messages in thread
From: Manuel Bouyer @ 2020-05-15 20:29 UTC (permalink / raw)
  To: xen-devel

Hello,
NetBSD works as dom0 up to Xen 4.11. I'm trying to get it working
on 4.13.0. I added support for gntdev operations, but I'm stuck with
privcmd IOCTL_PRIVCMD_MMAPBATCH. It seems to work fine for PV and PVH domUs,
but with HVM domUs, MMU_NORMAL_PT_UPDATE returns -22 (EINVAL) and
qemu-dm dumps core (as expected, since the page is not mapped).
Of course, this all works fine on 4.11.
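
For context: on the dom0 side, our MMAPBATCH path ultimately asks Xen to
install the foreign frame in the user PTE via a mmu_update hypercall,
roughly like the sketch below (simplified, not the actual NetBSD code;
the helper name and PTE flags are illustrative):

    /* Sketch only: install a mapping of 'gfn', owned by 'domid', into the
     * L1 entry at machine address 'pte_ma' (the PTE backing the user VA).
     * Kernel context; types as in xen/include/public/xen.h. */
    static int map_one_foreign_gfn(uint64_t pte_ma, xen_pfn_t gfn,
                                   domid_t domid)
    {
        mmu_update_t req;

        /* The low bits of .ptr select the sub-command. */
        req.ptr = pte_ma | MMU_NORMAL_PT_UPDATE;
        /* For a translated (HVM/PVH) guest the frame in .val is a gfn,
         * which mod_l1_entry() translates via get_page_from_gfn() -- the
         * code quoted below. */
        req.val = ((uint64_t)gfn << PAGE_SHIFT) |
                  _PAGE_PRESENT | _PAGE_RW | _PAGE_USER;

        /* The foreign domid tells Xen which domain owns the frame. */
        return HYPERVISOR_mmu_update(&req, 1, NULL, domid);
    }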

In the Xen kernel, I tracked it down to arch/x86/mm.c near line 2229,
in mod_l1_entry():
        /* Translate foreign guest address. */
        if ( cmd != MMU_PT_UPDATE_NO_TRANSLATE &&
             paging_mode_translate(pg_dom) )
        {
            p2m_type_t p2mt;
            p2m_query_t q = l1e_get_flags(nl1e) & _PAGE_RW ?
                            P2M_ALLOC | P2M_UNSHARE : P2M_ALLOC;

            page = get_page_from_gfn(pg_dom, l1e_get_pfn(nl1e), &p2mt, q);

            if ( p2m_is_paged(p2mt) )
            {
                if ( page )
                    put_page(page);
                p2m_mem_paging_populate(pg_dom, l1e_get_pfn(nl1e));
                return -ENOENT;
            }

            if ( p2mt == p2m_ram_paging_in && !page )
                return -ENOENT;

            /* Did our attempt to unshare fail? */
            if ( (q & P2M_UNSHARE) && p2m_is_shared(p2mt) )
            {
                /* We could not have obtained a page ref. */
                ASSERT(!page);
                /* And mem_sharing_notify has already been called. */
                return -ENOMEM;
            }

            if ( !page ) {
                gdprintk(XENLOG_WARNING, "translate but no page\n");
                return -EINVAL;
            }                        
            nl1e = l1e_from_page(page, l1e_get_flags(nl1e));
        }

The gdprintk() I added in the (!page) case fires, so this is the
cause of the EINVAL.
Is this expected for an HVM domU? If so, how should the dom0 code be
changed to get it working? I failed to see where our code differs
from Linux's ...

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--



* Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
  2020-05-15 20:29 IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0 Manuel Bouyer
@ 2020-05-15 21:00 ` Andrew Cooper
  2020-05-15 21:06   ` Manuel Bouyer
  0 siblings, 1 reply; 16+ messages in thread
From: Andrew Cooper @ 2020-05-15 21:00 UTC (permalink / raw)
  To: Manuel Bouyer, xen-devel

On 15/05/2020 21:29, Manuel Bouyer wrote:
> Hello,
> NetBSD works as dom0 up to Xen 4.11. I'm trying to get it working
> on 4.13.0. I added the support for gntdev operations,  but I'm stuck with
> privcmd IOCTL_PRIVCMD_MMAPBATCH. It seems to work fine for PV and PVH domUs,
> but with HVM domUs, MMU_NORMAL_PT_UPDATE returns -22 (EINVAL) and
> qemu-dm dumps core (as expected; the page is not mapped).
> Of course this works fine in 4.11
>
> In the Xen kernel, I tracked it down to arch/x86/mm.c near line 2229,
> in mod_l1_entry():
>         /* Translate foreign guest address. */
>         if ( cmd != MMU_PT_UPDATE_NO_TRANSLATE &&
>              paging_mode_translate(pg_dom) )
>         {
>             p2m_type_t p2mt;
>             p2m_query_t q = l1e_get_flags(nl1e) & _PAGE_RW ?
>                             P2M_ALLOC | P2M_UNSHARE : P2M_ALLOC;
>
>             page = get_page_from_gfn(pg_dom, l1e_get_pfn(nl1e), &p2mt, q);
>
>             if ( p2m_is_paged(p2mt) )
>             {
>                 if ( page )
>                     put_page(page);
>                 p2m_mem_paging_populate(pg_dom, l1e_get_pfn(nl1e));
>                 return -ENOENT;
>             }
>
>             if ( p2mt == p2m_ram_paging_in && !page )
>                 return -ENOENT;
>
>             /* Did our attempt to unshare fail? */
>             if ( (q & P2M_UNSHARE) && p2m_is_shared(p2mt) )
>             {
>                 /* We could not have obtained a page ref. */
>                 ASSERT(!page);
>                 /* And mem_sharing_notify has already been called. */
>                 return -ENOMEM;
>             }
>
>             if ( !page ) {
>                 gdprintk(XENLOG_WARNING, "translate but no page\n");
>                 return -EINVAL;
>             }                        
>             nl1e = l1e_from_page(page, l1e_get_flags(nl1e));
>         }
>
> the gdprintk() I added in the ( !page) case fires, so this is the
> cause of the EINVAL.
> Is it expected for a HVM domU ? If so, how should the dom0 code be
> changed to get it working ? I failed to see where our code is different
> from linux ...

What is qemu doing at the time?  Is it by any chance trying to map the
IOREQ server frame?

~Andrew



* Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
  2020-05-15 21:00 ` Andrew Cooper
@ 2020-05-15 21:06   ` Manuel Bouyer
  2020-05-15 21:38     ` Andrew Cooper
  0 siblings, 1 reply; 16+ messages in thread
From: Manuel Bouyer @ 2020-05-15 21:06 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel

On Fri, May 15, 2020 at 10:00:07PM +0100, Andrew Cooper wrote:
> What is qemu doing at the time?  Is it by any chance trying to map the
> IOREQ server frame?

Here's what gdb says about it:
Core was generated by `qemu-dm'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000000000046997d in cpu_x86_init (
    cpu_model=cpu_model@entry=0x4d622d "qemu32")
    at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/i386-dm/helper2.c:156
156                 rc = xenevtchn_bind_interdomain(
[Current thread is 1 (process 1480)]
(gdb) where
#0  0x000000000046997d in cpu_x86_init (
    cpu_model=cpu_model@entry=0x4d622d "qemu32")
    at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/i386-dm/helper2.c:156
#1  0x000000000043628d in pc_init1 (ram_size=<optimized out>, 
    vga_ram_size=4194304, boot_device=0x7f7fff460397 "cda", pci_enabled=1, 
    cpu_model=0x4d622d "qemu32", initrd_filename=<optimized out>, 
    kernel_cmdline=<optimized out>, kernel_filename=<optimized out>)
    at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/hw/pc.c:829
#2  0x00000000004636e7 in xen_init_fv (ram_size=0, vga_ram_size=4194304, 
    boot_device=0x7f7fff460397 "cda", kernel_filename=0x0, 
    kernel_cmdline=0x4abff6 "", initrd_filename=0x0, cpu_model=0x0, 
    direct_pci=0x0)
    at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/hw/xen_machine_fv.c:405
#3  0x00000000004a975b in main (argc=23, argv=0x7f7fff45fc78, 
    envp=<optimized out>)
    at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/vl.c:6014

Does it help ?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--



* Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
  2020-05-15 21:06   ` Manuel Bouyer
@ 2020-05-15 21:38     ` Andrew Cooper
  2020-05-15 21:53       ` Manuel Bouyer
  0 siblings, 1 reply; 16+ messages in thread
From: Andrew Cooper @ 2020-05-15 21:38 UTC (permalink / raw)
  To: Manuel Bouyer; +Cc: xen-devel

On 15/05/2020 22:06, Manuel Bouyer wrote:
> On Fri, May 15, 2020 at 10:00:07PM +0100, Andrew Cooper wrote:
>> What is qemu doing at the time?  Is it by any chance trying to map the
>> IOREQ server frame?
> Here's what gdb says about it:
> Core was generated by `qemu-dm'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  0x000000000046997d in cpu_x86_init (
>     cpu_model=cpu_model@entry=0x4d622d "qemu32")
>     at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/i386-dm/helper2.c:156
> 156                 rc = xenevtchn_bind_interdomain(
> [Current thread is 1 (process 1480)]
> (gdb) where
> #0  0x000000000046997d in cpu_x86_init (
>     cpu_model=cpu_model@entry=0x4d622d "qemu32")
>     at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/i386-dm/helper2.c:156
> #1  0x000000000043628d in pc_init1 (ram_size=<optimized out>, 
>     vga_ram_size=4194304, boot_device=0x7f7fff460397 "cda", pci_enabled=1, 
>     cpu_model=0x4d622d "qemu32", initrd_filename=<optimized out>, 
>     kernel_cmdline=<optimized out>, kernel_filename=<optimized out>)
>     at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/hw/pc.c:829
> #2  0x00000000004636e7 in xen_init_fv (ram_size=0, vga_ram_size=4194304, 
>     boot_device=0x7f7fff460397 "cda", kernel_filename=0x0, 
>     kernel_cmdline=0x4abff6 "", initrd_filename=0x0, cpu_model=0x0, 
>     direct_pci=0x0)
>     at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/hw/xen_machine_fv.c:405
> #3  0x00000000004a975b in main (argc=23, argv=0x7f7fff45fc78, 
>     envp=<optimized out>)
>     at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/vl.c:6014
>
> Does it help ?

Yes and no.  This is collateral damage of an earlier bug.

What failed was xen_init_fv()'s

    shared_page = xc_map_foreign_range(xc_handle, domid, XC_PAGE_SIZE,
                                       PROT_READ|PROT_WRITE, ioreq_pfn);
    if (shared_page == NULL) {
        fprintf(logfile, "map shared IO page returned error %d\n", errno);
        exit(-1);
    }

because we've ended up with a non-NULL pointer with no mapping behind
it, hence the SIGSEGV the first time we try to use the pointer.

Whatever logic is behind xc_map_foreign_range() should have returned
NULL or a real mapping.
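
i.e. whatever sits underneath xc_map_foreign_range() has to unwind and
fail cleanly when the ioctl doesn't succeed, rather than handing back the
reserved VA.  Roughly this shape (purely illustrative -- the request
struct here is made up and the NetBSD layout will differ):

    #include <stdint.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/ioctl.h>

    /* Hypothetical request layout, for illustration only. */
    struct privcmd_mmapbatch_req {
        unsigned int num;   /* number of pages             */
        uint16_t     dom;   /* domain whose gfns these are */
        uintptr_t    addr;  /* virtual address to map at   */
        uint64_t    *arr;   /* IN: gfns, OUT: error flags  */
    };

    void *map_foreign_pages(int privcmd_fd, uint16_t dom, uint64_t *gfns,
                            size_t npages, int prot)
    {
        size_t len = npages * 4096;
        void *addr = mmap(NULL, len, prot, MAP_SHARED, privcmd_fd, 0);

        if (addr == MAP_FAILED)
            return NULL;

        struct privcmd_mmapbatch_req req = {
            .num  = npages,
            .dom  = dom,
            .addr = (uintptr_t)addr,
            .arr  = gfns,
        };

        if (ioctl(privcmd_fd, IOCTL_PRIVCMD_MMAPBATCH, &req) < 0) {
            /* The important part: tear the reservation down and fail,
             * instead of returning a VA with nothing mapped behind it. */
            munmap(addr, len);
            return NULL;
        }
        return addr;
    }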

ioreq_pfn ought to be something just below the 4G boundary, and the
toolstack ought to put memory there in the first place.  Can you
identify what value ioreq_pfn has, and whether it matches up with the
magic gfn range as reported by `xl create -vvv` for the guest?

~Andrew



* Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
  2020-05-15 21:38     ` Andrew Cooper
@ 2020-05-15 21:53       ` Manuel Bouyer
  2020-05-16 16:18         ` Andrew Cooper
  0 siblings, 1 reply; 16+ messages in thread
From: Manuel Bouyer @ 2020-05-15 21:53 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel

[-- Attachment #1: Type: text/plain, Size: 1466 bytes --]

On Fri, May 15, 2020 at 10:38:13PM +0100, Andrew Cooper wrote:
> > [...]
> > Does it help ?
> 
> Yes and no.  This is collateral damage of earlier bug.
> 
> What failed was xen_init_fv()'s
> 
>     shared_page = xc_map_foreign_range(xc_handle, domid, XC_PAGE_SIZE,
>                                        PROT_READ|PROT_WRITE, ioreq_pfn);
>     if (shared_page == NULL) {
>         fprintf(logfile, "map shared IO page returned error %d\n", errno);
>         exit(-1);
>     }
> 
> because we've ended up with a non-NULL pointer with no mapping behind
> it, hence the SIGSEGV for the first time we try to use the pointer.
> 
> Whatever logic is behind xc_map_foreign_range() should have returned
> NULL or a real mapping.

What's strange is that the mapping is validated by mapping it in
the dom0 kernel space. But when we try to remap it in the process's
space, it fails.

> 
> ioreq_pfn ought to be something just below the 4G boundary, and the
> toolstack ought to put memory there in the first place.  Can you
> identify what value ioreq_pfn has,

You mean, something like:
(gdb) print/x ioreq_pfn
$2 = 0xfeff0

> and whether it matches up with the
> magic gfn range as reported by `xl create -vvv` for the guest?

I guess you mean
xl -vvv create
The output is attached.

The kernel says it tries to map 0xfeff0000 to virtual address 0x79656f951000.
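
(Just to confirm the numbers line up -- with 4k pages, gfn 0xfeff0 is
exactly that guest-physical address:)

    #include <assert.h>

    int main(void)
    {
        /* gfn 0xfeff0 with 4 KiB pages is guest-physical 0xfeff0000,
         * i.e. just below the 4G boundary as you expected. */
        assert((0xfeff0UL << 12) == 0xfeff0000UL);
        return 0;
    }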


-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--

[-- Attachment #2: typescript --]
[-- Type: text/plain, Size: 13735 bytes --]

Script started on Fri May 15 23:47:25 2020
# xl -vvv create nb1-hvm
Parsing config from nb1-hvm
libxl: debug: libxl_create.c:1819:do_domain_create: Domain 0:ao 0x75de79e9b000: create: how=0x0 callback=0x0 poller=0x75de79ec60a0
libxl: detail: libxl_create.c:584:libxl__domain_make: passthrough: disabled
libxl: debug: libxl_device.c:380:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:415:libxl__device_disk_set_backend: Disk vdev=hda, using backend phy
libxl: debug: libxl_create.c:1148:initiate_domain_create: Domain 5:running bootloader
libxl: debug: libxl_bootloader.c:328:libxl__bootloader_run: Domain 5:not a PV/PVH domain, skipping bootloader
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79e64cd0: deregister unregistered
domainbuilder: detail: xc_dom_allocate: cmdline="", features=""
domainbuilder: detail: xc_dom_kernel_file: filename="/usr/pkg/libexec/xen/boot/hvmloader"
domainbuilder: detail: xc_dom_malloc_filemap    : 337 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.13, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying ELF-generic loader ... 
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ... 
domainbuilder: detail: xc_dom_probe_bzimage_kernel: kernel is not a bzImage
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying HVM-generic loader ... 
domainbuilder: detail: loader probe OK
xc: detail: ELF: phdr: paddr=0x100000 memsz=0x5c844
xc: detail: ELF: memory: 0x100000 -> 0x15c844
domainbuilder: detail: xc_dom_mem_init: mem 1020 MB, pages 0x3fc00 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x3fc00 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: range: start=0x0 end=0x3fc00000
domainbuilder: detail: xc_dom_malloc            : 2040 kB
xc: detail: PHYSICAL MEMORY ALLOCATION:
xc: detail:   4KB PAGES: 0x0000000000000200
xc: detail:   2MB PAGES: 0x00000000000001fd
xc: detail:   1GB PAGES: 0x0000000000000000
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x100+0x5d at 0x75de79b31000
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x100000 -> 0x15d000  (pfn 0x100 + 0x5d pages)
xc: detail: ELF: phdr 0 at 0x75de79ad4000 -> 0x75de79b26ca0
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x15d+0x1 at 0x75de79e1d000
domainbuilder: detail: xc_dom_alloc_segment:   HVM start info : 0x15d000 -> 0x15e000  (pfn 0x15d + 0x1 pages)
domainbuilder: detail: alloc_pgtables_hvm: doing nothing
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x15e000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 2045 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 337 kB
domainbuilder: detail:       domU mmap          : 376 kB
domainbuilder: detail: vcpu_hvm: called
domainbuilder: detail: compat_gnttab_hvm_seed: d5: pfn=0xff000
domainbuilder: detail: xc_dom_set_gnttab_entry: d5 gnt[0] -> d0 0xfefff
domainbuilder: detail: xc_dom_set_gnttab_entry: d5 gnt[1] -> d0 0xfeffc
domainbuilder: detail: xc_dom_release: called
libxl: debug: libxl_device.c:380:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=phy
libxl: debug: libxl_event.c:639:libxl__ev_xswatch_register: watch w=0x75de79bfd4d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/0: register slotnum=3
libxl: debug: libxl_create.c:1856:do_domain_create: Domain 0:ao 0x75de79e9b000: inprogress: poller=0x75de79ec60a0, flags=i
libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x75de79bfd4d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/0: event epath=/local/domain/0/backend/vbd/5/768/state
libxl: debug: libxl_event.c:881:devstate_callback: backend /local/domain/0/backend/vbd/5/768/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x75de79bfd4d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/0: event epath=/local/domain/0/backend/vbd/5/768/state
libxl: debug: libxl_event.c:877:devstate_callback: backend /local/domain/0/backend/vbd/5/768/state wanted state 2 ok
libxl: debug: libxl_event.c:676:libxl__ev_xswatch_deregister: watch w=0x75de79bfd4d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/0: deregister slotnum=3
libxl: debug: libxl_device.c:1090:device_backend_callback: Domain 5:calling device_backend_cleanup
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79bfd4d0: deregister unregistered
libxl: debug: libxl_device.c:1191:device_hotplug: Domain 5:calling hotplug script: /usr/pkg/etc/xen/scripts/block /local/domain/0/backend/vbd/5/768
libxl: debug: libxl_device.c:1192:device_hotplug: Domain 5:extra args:
libxl: debug: libxl_device.c:1198:device_hotplug: Domain 5:     2
libxl: debug: libxl_device.c:1200:device_hotplug: Domain 5:env:
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /usr/pkg/etc/xen/scripts/block /local/domain/0/backend/vbd/5/768 
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79bfd5d0: deregister unregistered
libxl: debug: libxl_netbsd.c:74:libxl__get_hotplug_script_info: Domain 5:num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1176:device_hotplug: Domain 5:No hotplug script to execute
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79bfd5d0: deregister unregistered
libxl: debug: libxl_dm.c:2626:libxl__spawn_local_dm: Domain 5:Spawning device-model /usr/pkg/libexec/xen/bin/qemu-dm with arguments:
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  /usr/pkg/libexec/xen/bin/qemu-dm
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -d
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  5
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -domain-name
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  nb1
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -vnc
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  132.227.103.47:1
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -videoram
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  4
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -boot
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  cda
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -usb
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -usbdevice
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  tablet
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -acpi
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -vcpu_avail
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  0x01
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -net
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  nic,vlan=0,macaddr=00:16:3e:00:10:54,model=e1000
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -net
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  tap,vlan=0,ifname=xvif5i0-emu,bridge=bridge0,script=/usr/pkg/etc/xen/scripts/qemu-ifup,downscript=/usr/pkg/etc/xen/scripts/qemu-ifup
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -M
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  xenfv
libxl: debug: libxl_dm.c:2630:libxl__spawn_local_dm: Domain 5:Spawning device-model /usr/pkg/libexec/xen/bin/qemu-dm with additional environment:
libxl: debug: libxl_dm.c:2632:libxl__spawn_local_dm: Domain 5:  XEN_QEMU_CONSOLE_LIMIT=1048576
libxl: debug: libxl_event.c:639:libxl__ev_xswatch_register: watch w=0x75de79e64fc8 wpath=/local/domain/0/device-model/5/state token=3/1: register slotnum=3
libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x75de79e64fc8 wpath=/local/domain/0/device-model/5/state token=3/1: event epath=/local/domain/0/device-model/5/state
libxl: debug: libxl_exec.c:407:spawn_watch_event: domain 5 device model: spawn watch p=(null)
libxl: debug: libxl_event.c:676:libxl__ev_xswatch_deregister: watch w=0x75de79e64fc8 wpath=/local/domain/0/device-model/5/state token=3/1: deregister slotnum=3
libxl: error: libxl_dm.c:2783:device_model_spawn_outcome: Domain 5:domain 5 device model: spawn failed (rc=-3)
libxl: error: libxl_dm.c:2999:device_model_postconfig_done: Domain 5:Post DM startup configs failed, rc=-3
libxl: debug: libxl_qmp.c:1896:libxl__ev_qmp_dispose: Domain 0: ev 0x75de79e64fe0
libxl: error: libxl_create.c:1676:domcreate_devmodel_started: Domain 5:device model did not start: -3
libxl: debug: libxl_dm.c:3237:libxl__destroy_device_model: Domain 5:Didn't find dm UID; destroying by pid
libxl: error: libxl_dm.c:3103:kill_device_model: Device Model already exited
libxl: debug: libxl_event.c:639:libxl__ev_xswatch_register: watch w=0x75de79bfe0d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/2: register slotnum=3
libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x75de79bfe0d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/2: event epath=/local/domain/0/backend/vbd/5/768/state
libxl: debug: libxl_event.c:881:devstate_callback: backend /local/domain/0/backend/vbd/5/768/state wanted state 6 still waiting state 5
libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x75de79bfe0d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/2: event epath=/local/domain/0/backend/vbd/5/768/state
libxl: debug: libxl_event.c:877:devstate_callback: backend /local/domain/0/backend/vbd/5/768/state wanted state 6 ok
libxl: debug: libxl_event.c:676:libxl__ev_xswatch_deregister: watch w=0x75de79bfe0d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/2: deregister slotnum=3
libxl: debug: libxl_device.c:1090:device_backend_callback: Domain 5:calling device_backend_cleanup
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79bfe0d0: deregister unregistered
libxl: debug: libxl_device.c:1191:device_hotplug: Domain 5:calling hotplug script: /usr/pkg/etc/xen/scripts/block /local/domain/0/backend/vbd/5/768
libxl: debug: libxl_device.c:1192:device_hotplug: Domain 5:extra args:
libxl: debug: libxl_device.c:1198:device_hotplug: Domain 5:     6
libxl: debug: libxl_device.c:1200:device_hotplug: Domain 5:env:
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /usr/pkg/etc/xen/scripts/block /local/domain/0/backend/vbd/5/768 
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79bfe1d0: deregister unregistered
libxl: debug: libxl_netbsd.c:74:libxl__get_hotplug_script_info: Domain 5:num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1176:device_hotplug: Domain 5:No hotplug script to execute
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79bfe1d0: deregister unregistered
libxl: debug: libxl_device.c:1176:device_hotplug: Domain 5:No hotplug script to execute
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79bfe5d0: deregister unregistered
libxl: debug: libxl_domain.c:1355:devices_destroy_cb: Domain 5:Forked pid 2095 for destroy of domain
libxl: debug: libxl_event.c:1893:libxl__ao_complete: ao 0x75de79e9b000: complete, rc=-3
libxl: debug: libxl_event.c:1862:libxl__ao__destroy: ao 0x75de79e9b000: destroy
libxl: debug: libxl_domain.c:1040:libxl_domain_destroy: Domain 5:ao 0x75de79e9b000: create: how=0x0 callback=0x0 poller=0x75de79ec60a0
libxl: error: libxl_domain.c:1177:libxl__destroy_domid: Domain 5:Non-existant domain
libxl: error: libxl_domain.c:1131:domain_destroy_callback: Domain 5:Unable to destroy guest
libxl: error: libxl_domain.c:1058:domain_destroy_cb: Domain 5:Destruction of domain failed
libxl: debug: libxl_event.c:1893:libxl__ao_complete: ao 0x75de79e9b000: complete, rc=-21
libxl: debug: libxl_domain.c:1049:libxl_domain_destroy: Domain 5:ao 0x75de79e9b000: inprogress: poller=0x75de79ec60a0, flags=ic
libxl: debug: libxl_event.c:1862:libxl__ao__destroy: ao 0x75de79e9b000: destroy
xencall:buffer: debug: total allocations:400 total releases:400
xencall:buffer: debug: current allocations:0 maximum allocations:3
xencall:buffer: debug: cache current size:3
xencall:buffer: debug: cache hits:383 misses:3 toobig:14
xencall:buffer: debug: total allocations:0 total releases:0
xencall:buffer: debug: current allocations:0 maximum allocations:0
xencall:buffer: debug: cache current size:0
xencall:buffer: debug: cache hits:0 misses:0 toobig:0
# 
Script done on Fri May 15 23:47:46 2020


* Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
  2020-05-15 21:53       ` Manuel Bouyer
@ 2020-05-16 16:18         ` Andrew Cooper
  2020-05-17  9:30           ` Manuel Bouyer
  2020-05-17 17:32           ` Manuel Bouyer
  0 siblings, 2 replies; 16+ messages in thread
From: Andrew Cooper @ 2020-05-16 16:18 UTC (permalink / raw)
  To: Manuel Bouyer; +Cc: xen-devel

On 15/05/2020 22:53, Manuel Bouyer wrote:
> On Fri, May 15, 2020 at 10:38:13PM +0100, Andrew Cooper wrote:
>>> [...]
>>> Does it help ?
>> Yes and no.  This is collateral damage of earlier bug.
>>
>> What failed was xen_init_fv()'s
>>
>>     shared_page = xc_map_foreign_range(xc_handle, domid, XC_PAGE_SIZE,
>>                                        PROT_READ|PROT_WRITE, ioreq_pfn);
>>     if (shared_page == NULL) {
>>         fprintf(logfile, "map shared IO page returned error %d\n", errno);
>>         exit(-1);
>>     }
>>
>> because we've ended up with a non-NULL pointer with no mapping behind
>> it, hence the SIGSEGV for the first time we try to use the pointer.
>>
>> Whatever logic is behind xc_map_foreign_range() should have returned
>> NULL or a real mapping.
> What's strange is that the mapping is validated, by mapping it in
> the dom0 kernel space. But when we try to remap in in the process's
> space, it fails.

Hmm - this sounds like a kernel bug I'm afraid.

>> ioreq_pfn ought to be something just below the 4G boundary, and the
>> toolstack ought to put memory there in the first place.  Can you
>> identify what value ioreq_pfn has,
> You mean, something like:
> (gdb) print/x ioreq_pfn
> $2 = 0xfeff0
>
>> and whether it matches up with the
>> magic gfn range as reported by `xl create -vvv` for the guest?
> I guess you mean
> xl -vvv create
> the output is attached
>
> The kernel says it tries to map 0xfeff0000 to virtual address 0x79656f951000.

The value looks right, and the logs look normal.

~Andrew



* Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
  2020-05-16 16:18         ` Andrew Cooper
@ 2020-05-17  9:30           ` Manuel Bouyer
  2020-05-17 17:32           ` Manuel Bouyer
  1 sibling, 0 replies; 16+ messages in thread
From: Manuel Bouyer @ 2020-05-17  9:30 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel

On Sat, May 16, 2020 at 05:18:45PM +0100, Andrew Cooper wrote:
> On 15/05/2020 22:53, Manuel Bouyer wrote:
> > On Fri, May 15, 2020 at 10:38:13PM +0100, Andrew Cooper wrote:
> >>> [...]
> >>> Does it help ?
> >> Yes and no.  This is collateral damage of earlier bug.
> >>
> >> What failed was xen_init_fv()'s
> >>
> >>     shared_page = xc_map_foreign_range(xc_handle, domid, XC_PAGE_SIZE,
> >>                                        PROT_READ|PROT_WRITE, ioreq_pfn);
> >>     if (shared_page == NULL) {
> >>         fprintf(logfile, "map shared IO page returned error %d\n", errno);
> >>         exit(-1);
> >>     }
> >>
> >> because we've ended up with a non-NULL pointer with no mapping behind
> >> it, hence the SIGSEGV for the first time we try to use the pointer.
> >>
> >> Whatever logic is behind xc_map_foreign_range() should have returned
> >> NULL or a real mapping.
> > What's strange is that the mapping is validated, by mapping it in
> > the dom0 kernel space. But when we try to remap in in the process's
> > space, it fails.
> 
> Hmm - this sounds like a kernel bug I'm afraid.

No, I don't think it is. It works with Xen 4.11, and it works with 4.13 for
PV/PVH domUs. Maybe some flag is missing from the PTE that is mandatory for
userland PTEs on 4.13 but not on 4.11, but it doesn't look obvious.

The difference could be that the kernel page tables are active when
mapping the foreign page in the dom0's kernel space, but the
user process's page table is not (obviously, as we're in the kernel
when doing the mapping).

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--



* Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
  2020-05-16 16:18         ` Andrew Cooper
  2020-05-17  9:30           ` Manuel Bouyer
@ 2020-05-17 17:32           ` Manuel Bouyer
  2020-05-17 17:56             ` Manuel Bouyer
  1 sibling, 1 reply; 16+ messages in thread
From: Manuel Bouyer @ 2020-05-17 17:32 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel

I've been looking a bit deeper into the Xen kernel.
The mapping fails in ./arch/x86/mm/p2m.c:p2m_get_page_from_gfn():
        /* Error path: not a suitable GFN at all */
	if ( !p2m_is_ram(*t) && !p2m_is_paging(*t) && !p2m_is_pod(*t) ) {
	    gdprintk(XENLOG_ERR, "p2m_get_page_from_gfn2: %d is_ram %ld is_paging %ld is_pod %ld\n", *t, p2m_is_ram(*t), p2m_is_paging(*t), p2m_is_pod(*t) );
	    return NULL;
	}

*t is 4, which translates to p2m_mmio_dm.

It looks like p2m_get_page_from_gfn() is not ready to handle this case
for dom0.
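
For reference, the start of the type enum in xen/include/asm-x86/p2m.h is
roughly the following (quoting from memory, so check the 4.13 tree for the
authoritative list -- but value 4 is indeed p2m_mmio_dm):

    typedef enum {
        p2m_ram_rw = 0,             /* normal read/write guest RAM          */
        p2m_invalid = 1,            /* nothing mapped here                  */
        p2m_ram_logdirty = 2,       /* temporarily read-only for log-dirty  */
        p2m_ram_ro = 3,             /* read-only, writes are discarded      */
        p2m_mmio_dm = 4,            /* reads/writes go to the device model  */
        p2m_mmio_direct = 5,        /* read/write mapping of real MMIO      */
        p2m_populate_on_demand = 6, /* placeholder for empty memory         */
        /* ... */
    } p2m_type_t;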

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--



* Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
  2020-05-17 17:32           ` Manuel Bouyer
@ 2020-05-17 17:56             ` Manuel Bouyer
  2020-05-18  7:36               ` Paul Durrant
  2020-05-19  9:54               ` Roger Pau Monné
  0 siblings, 2 replies; 16+ messages in thread
From: Manuel Bouyer @ 2020-05-17 17:56 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel

On Sun, May 17, 2020 at 07:32:59PM +0200, Manuel Bouyer wrote:
> I've been looking a bit deeper in the Xen kernel.
> The mapping is failed in ./arch/x86/mm/p2m.c:p2m_get_page_from_gfn(),
>         /* Error path: not a suitable GFN at all */
> 	if ( !p2m_is_ram(*t) && !p2m_is_paging(*t) && !p2m_is_pod(*t) ) {
> 	    gdprintk(XENLOG_ERR, "p2m_get_page_from_gfn2: %d is_ram %ld is_paging %ld is_pod %ld\n", *t, p2m_is_ram(*t), p2m_is_paging(*t), p2m_is_pod(*t) );
> 	    return NULL;
> 	}
> 
> *t is 4, which translates to p2m_mmio_dm
> 
> it looks like p2m_get_page_from_gfn() is not ready to handle this case
> for dom0.

And so it looks like I need to implement osdep_xenforeignmemory_map_resource()
for NetBSD

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--



* RE: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
  2020-05-17 17:56             ` Manuel Bouyer
@ 2020-05-18  7:36               ` Paul Durrant
  2020-05-18 17:31                 ` Manuel Bouyer
  2020-05-19  9:54               ` Roger Pau Monné
  1 sibling, 1 reply; 16+ messages in thread
From: Paul Durrant @ 2020-05-18  7:36 UTC (permalink / raw)
  To: 'Manuel Bouyer', 'Andrew Cooper'; +Cc: xen-devel

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Manuel Bouyer
> Sent: 17 May 2020 18:56
> To: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: xen-devel@lists.xenproject.org
> Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
> 
> On Sun, May 17, 2020 at 07:32:59PM +0200, Manuel Bouyer wrote:
> > I've been looking a bit deeper in the Xen kernel.
> > The mapping is failed in ./arch/x86/mm/p2m.c:p2m_get_page_from_gfn(),
> >         /* Error path: not a suitable GFN at all */
> > 	if ( !p2m_is_ram(*t) && !p2m_is_paging(*t) && !p2m_is_pod(*t) ) {
> > 	    gdprintk(XENLOG_ERR, "p2m_get_page_from_gfn2: %d is_ram %ld is_paging %ld is_pod %ld\n", *t,
> p2m_is_ram(*t), p2m_is_paging(*t), p2m_is_pod(*t) );
> > 	    return NULL;
> > 	}
> >
> > *t is 4, which translates to p2m_mmio_dm
> >
> > it looks like p2m_get_page_from_gfn() is not ready to handle this case
> > for dom0.
> 
> And so it looks like I need to implement osdep_xenforeignmemory_map_resource()
> for NetBSD
> 

It would be a good idea but you shouldn't have to. Also, qemu-trad won't use it even if it is there.

  Paul




* Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
  2020-05-18  7:36               ` Paul Durrant
@ 2020-05-18 17:31                 ` Manuel Bouyer
  2020-05-19  7:34                   ` Jan Beulich
  0 siblings, 1 reply; 16+ messages in thread
From: Manuel Bouyer @ 2020-05-18 17:31 UTC (permalink / raw)
  To: paul; +Cc: 'Andrew Cooper', xen-devel

On Mon, May 18, 2020 at 08:36:24AM +0100, Paul Durrant wrote:
> > -----Original Message-----
> > From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Manuel Bouyer
> > Sent: 17 May 2020 18:56
> > To: Andrew Cooper <andrew.cooper3@citrix.com>
> > Cc: xen-devel@lists.xenproject.org
> > Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
> > 
> > On Sun, May 17, 2020 at 07:32:59PM +0200, Manuel Bouyer wrote:
> > > I've been looking a bit deeper in the Xen kernel.
> > > The mapping is failed in ./arch/x86/mm/p2m.c:p2m_get_page_from_gfn(),
> > >         /* Error path: not a suitable GFN at all */
> > > 	if ( !p2m_is_ram(*t) && !p2m_is_paging(*t) && !p2m_is_pod(*t) ) {
> > > 	    gdprintk(XENLOG_ERR, "p2m_get_page_from_gfn2: %d is_ram %ld is_paging %ld is_pod %ld\n", *t,
> > p2m_is_ram(*t), p2m_is_paging(*t), p2m_is_pod(*t) );
> > > 	    return NULL;
> > > 	}
> > >
> > > *t is 4, which translates to p2m_mmio_dm
> > >
> > > it looks like p2m_get_page_from_gfn() is not ready to handle this case
> > > for dom0.
> > 
> > And so it looks like I need to implement osdep_xenforeignmemory_map_resource()
> > for NetBSD
> > 
> 
> It would be a good idea but you shouldn't have to.

This is how I read the code too: it should fall back to mmapbatch.
But mmapbatch doesn't work in all cases on 4.13.0.

> Also, qemu-trad won't use it even if it is there.

Looks like it still uses mmapbatch for some mappings, indeed.
It now gets a bit further, but still ends up failing to map the guest's
memory.

Also, with some fixes I got qemu-xen building on NetBSD; it starts
but fails to load the BIOS ROM. Once again p2m_get_page_from_gfn() fails
to find a page and thinks the type is p2m_mmio_dm.

From what I found, it seems that all unallocated memory is tagged p2m_mmio_dm;
is that right? That would point to some issue allocating RAM for
the domU in qemu; I would need to find where this happens in qemu.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--



* Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
  2020-05-18 17:31                 ` Manuel Bouyer
@ 2020-05-19  7:34                   ` Jan Beulich
  2020-05-19  8:46                     ` Manuel Bouyer
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2020-05-19  7:34 UTC (permalink / raw)
  To: Manuel Bouyer; +Cc: 'Andrew Cooper', xen-devel, paul

On 18.05.2020 19:31, Manuel Bouyer wrote:
> From what I found it seems that all unallocated memory is tagged p2m_mmio_dm,
> is it right ?

Yes. For many years there has been a plan to better separate this from
p2m_invalid ...

Jan



* Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
  2020-05-19  7:34                   ` Jan Beulich
@ 2020-05-19  8:46                     ` Manuel Bouyer
  2020-05-19  8:51                       ` Jan Beulich
  0 siblings, 1 reply; 16+ messages in thread
From: Manuel Bouyer @ 2020-05-19  8:46 UTC (permalink / raw)
  To: Jan Beulich; +Cc: 'Andrew Cooper', xen-devel, paul

On Tue, May 19, 2020 at 09:34:30AM +0200, Jan Beulich wrote:
> On 18.05.2020 19:31, Manuel Bouyer wrote:
> > From what I found it seems that all unallocated memory is tagged p2m_mmio_dm,
> > is it right ?
> 
> Yes. For many years there has been a plan to better separate this from
> p2m_invalid ...

thanks.

So for some reason, MMU_NORMAL_PT_UPDATE thinks that the memory is not
allocated for this domain. This is true both for the ioreq page and
when trying to load the BIOS ROM into guest memory.
I traced the hypercalls in the tools, and the memory is allocated with
XENMEM_populate_physmap (the gfns returned by XENMEM_populate_physmap
and the ones passed to MMU_NORMAL_PT_UPDATE do match).
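
To be explicit about what I traced, the tools side boils down to something
like this (a simplified sketch using the libxc calls involved, not a
verbatim extract; error handling stripped):

    /* The compat define is needed, I believe, to get the old
     * xc_map_foreign_range() wrapper on recent Xen. */
    #define XC_WANT_COMPAT_MAP_FOREIGN_API
    #include <xenctrl.h>
    #include <sys/mman.h>

    static void *alloc_and_map_one_gfn(xc_interface *xch, uint32_t domid,
                                       xen_pfn_t gfn)
    {
        /* XENMEM_populate_physmap: give the domain a RAM page at 'gfn'. */
        if ( xc_domain_populate_physmap_exact(xch, domid, 1, 0, 0, &gfn) )
            return NULL;

        /* ... and this is the mapping that then fails for me on 4.13:
         * privcmd MMAPBATCH -> MMU_NORMAL_PT_UPDATE on the same gfn. */
        return xc_map_foreign_range(xch, domid, XC_PAGE_SIZE,
                                    PROT_READ | PROT_WRITE, gfn);
    }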

Still looking ...

Note that I'm using the 4.13.0 release sources, not the top of the branch.
Is this something that could have been fixed after the release?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--



* Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
  2020-05-19  8:46                     ` Manuel Bouyer
@ 2020-05-19  8:51                       ` Jan Beulich
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Beulich @ 2020-05-19  8:51 UTC (permalink / raw)
  To: Manuel Bouyer; +Cc: 'Andrew Cooper', paul, xen-devel

On 19.05.2020 10:46, Manuel Bouyer wrote:
> Note that I'm using the 4.13.0 release sources, not the top of branch.
> Is it something that could have been fixed after the release ?

I don't recall anything, but switching to 4.13.1 would still seem like
a helpful thing for you to do.

Jan



* Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
  2020-05-17 17:56             ` Manuel Bouyer
  2020-05-18  7:36               ` Paul Durrant
@ 2020-05-19  9:54               ` Roger Pau Monné
  2020-05-19 10:28                 ` Manuel Bouyer
  1 sibling, 1 reply; 16+ messages in thread
From: Roger Pau Monné @ 2020-05-19  9:54 UTC (permalink / raw)
  To: Manuel Bouyer; +Cc: Andrew Cooper, xen-devel

On Sun, May 17, 2020 at 07:56:07PM +0200, Manuel Bouyer wrote:
> On Sun, May 17, 2020 at 07:32:59PM +0200, Manuel Bouyer wrote:
> > I've been looking a bit deeper in the Xen kernel.
> > The mapping is failed in ./arch/x86/mm/p2m.c:p2m_get_page_from_gfn(),
> >         /* Error path: not a suitable GFN at all */
> > 	if ( !p2m_is_ram(*t) && !p2m_is_paging(*t) && !p2m_is_pod(*t) ) {
> > 	    gdprintk(XENLOG_ERR, "p2m_get_page_from_gfn2: %d is_ram %ld is_paging %ld is_pod %ld\n", *t, p2m_is_ram(*t), p2m_is_paging(*t), p2m_is_pod(*t) );
> > 	    return NULL;
> > 	}
> > 
> > *t is 4, which translates to p2m_mmio_dm
> > 
> > it looks like p2m_get_page_from_gfn() is not ready to handle this case
> > for dom0.
> 
> And so it looks like I need to implement osdep_xenforeignmemory_map_resource()
> for NetBSD

FWIW, FreeBSD doesn't have osdep_xenforeignmemory_map_resource
implemented and still works fine with 4.13.0 (is able to create HVM
guests), but that's a PVH dom0, not a PV one.

Regards, Roger.



* Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
  2020-05-19  9:54               ` Roger Pau Monné
@ 2020-05-19 10:28                 ` Manuel Bouyer
  0 siblings, 0 replies; 16+ messages in thread
From: Manuel Bouyer @ 2020-05-19 10:28 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Andrew Cooper, xen-devel

On Tue, May 19, 2020 at 11:54:07AM +0200, Roger Pau Monné wrote:
> FWIW, FreeBSD doesn't have osdep_xenforeignmemory_map_resource
> implemented and still works fine with 4.13.0 (is able to create HVM
> guests), but that's a PVH dom0, not a PV one.

Yes, FreeBSD is PVH-only. This implies different code paths (the dom0
kernel has to map the foreign pages into its own physical address space,
which a PV dom0 doesn't have to do, and can't do).
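
i.e. on a PVH dom0 the kernel first pulls the foreign gfn into its own p2m
and then maps that like ordinary local memory, something along these lines
(sketch only, kernel context; struct and field names are from my reading of
xen/include/public/memory.h):

    /* Make the foreign page appear at one of our own gfns. */
    static int add_foreign_to_our_p2m(domid_t foreign_dom,
                                      xen_ulong_t foreign_gfn,
                                      xen_pfn_t local_gfn)
    {
        int err = 0, rc;
        struct xen_add_to_physmap_batch xatpb = {
            .domid = DOMID_SELF,               /* target: our own p2m   */
            .space = XENMAPSPACE_gmfn_foreign, /* source: a foreign gfn */
            .size  = 1,
            .u.foreign_domid = foreign_dom,
        };

        set_xen_guest_handle(xatpb.idxs,  &foreign_gfn);
        set_xen_guest_handle(xatpb.gpfns, &local_gfn);
        set_xen_guest_handle(xatpb.errs,  &err);

        rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap_batch, &xatpb);
        return rc ? rc : err;
    }

    /* A PV dom0 has no such step: it writes the foreign frame straight
     * into the user PTE with MMU_NORMAL_PT_UPDATE, which is the path that
     * now fails for me on 4.13. */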

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--



Thread overview: 16 messages
2020-05-15 20:29 IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0 Manuel Bouyer
2020-05-15 21:00 ` Andrew Cooper
2020-05-15 21:06   ` Manuel Bouyer
2020-05-15 21:38     ` Andrew Cooper
2020-05-15 21:53       ` Manuel Bouyer
2020-05-16 16:18         ` Andrew Cooper
2020-05-17  9:30           ` Manuel Bouyer
2020-05-17 17:32           ` Manuel Bouyer
2020-05-17 17:56             ` Manuel Bouyer
2020-05-18  7:36               ` Paul Durrant
2020-05-18 17:31                 ` Manuel Bouyer
2020-05-19  7:34                   ` Jan Beulich
2020-05-19  8:46                     ` Manuel Bouyer
2020-05-19  8:51                       ` Jan Beulich
2020-05-19  9:54               ` Roger Pau Monné
2020-05-19 10:28                 ` Manuel Bouyer
