* [BUG] blkback reporting incorrect number of sectors, unable to boot
@ 2017-11-04  4:48 Mike Reardon
  2017-11-06 12:33 ` Jan Beulich
  0 siblings, 1 reply; 16+ messages in thread
From: Mike Reardon @ 2017-11-04  4:48 UTC (permalink / raw)
  To: xen-devel



Hello,

I had originally posted about this issue to win-pv-devel but it was
suggested this is actually an issue in blkback.

I added some additional storage to my server with some native 4k sector
size disks.  The LVM volumes on that array seem to work fine when mounted
by the host, and when passed through to any of the Linux guests, but
Windows guests aren't able to use them when using PV drivers.  They work
fine to install when I first install Windows (Windows 10, latest build), but
once I install the PV drivers it will no longer boot, giving an
inaccessible boot device error.  If I assign the storage to a different
Windows guest that already has the drivers installed (as secondary storage,
not as the boot device) I see the disk listed in disk management, but the
size of the disk is 8x larger than it should be.  After looking into it a
bit, the disk is reporting 8x the number of sectors it should have when I
run xenstore-ls.  Here is the info from xenstore-ls for the relevant volume:

      51712 = ""
       frontend = "/local/domain/8/device/vbd/51712"
       params = "/dev/tv_storage/main-storage"
       script = "/etc/xen/scripts/block"
       frontend-id = "8"
       online = "1"
       removable = "0"
       bootable = "1"
       state = "2"
       dev = "xvda"
       type = "phy"
       mode = "w"
       device-type = "disk"
       discard-enable = "1"
       feature-max-indirect-segments = "256"
       multi-queue-max-queues = "12"
       max-ring-page-order = "4"
       physical-device = "fe:0"
       physical-device-path = "/dev/dm-0"
       hotplug-status = "connected"
       feature-flush-cache = "1"
       feature-discard = "0"
       feature-barrier = "1"
       feature-persistent = "1"
       sectors = "34359738368"
       info = "0"
       sector-size = "4096"
       physical-sector-size = "4096"


Here are the numbers for the volume as reported by fdisk:

Disk /dev/tv_storage/main-storage: 16 TiB, 17592186044416 bytes, 4294967296
sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device                        Boot Start        End    Sectors Size Id Type
/dev/tv_storage/main-storage1          1 4294967295 4294967295  16T ee GPT


As with the size reported in Windows disk management, the number of sectors
from xenstore is 8x higher than it should be.  The disks aren't using
512-byte sector emulation; they are natively 4k, so I have no idea where
the 8x increase is coming from.
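
(As a quick check of that factor: 34359738368 * 512 bytes = 17592186044416
bytes = 16 TiB, matching the fdisk output above, whereas 34359738368 * 4096
bytes would be 128 TiB, i.e. eight times the real size.)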


Here is some additional info from the system:

Xen version is 4.10.0-rc3

xl info:
host                   : localhost
release                : 4.13.3-gentoo
version                : #1 SMP Sat Sep 23 00:48:14 MDT 2017
machine                : x86_64
nr_cpus                : 12
max_cpu_id             : 11
nr_nodes               : 1
cores_per_socket       : 6
threads_per_core       : 2
cpu_mhz                : 3200
hw_caps                :
bfebfbff:17bee3ff:2c100800:00000001:00000001:00000000:00000000:00000100
virt_caps              : hvm hvm_directio
total_memory           : 65486
free_memory            : 27511
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 10
xen_extra              : .0-rc
xen_version            : 4.10.0-rc
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : placeholder dom0_mem=4G,max:4G
cc_compiler            : x86_64-pc-linux-gnu-gcc (Gentoo 5.4.0-r3 p1.3,
pie-0.6.5) 5.4.0
cc_compile_by          :
cc_compile_domain      : localdomain
cc_compile_date        : Fri Nov  3 17:56:23 MDT 2017
build_id               : 518460cc025ca13ae79e3b971cfa0df2b1285323
xend_config_format     : 4



xl -v create:
libxl: detail: libxl_dom.c:264:hvm_set_viridian_features: base group enabled
libxl: detail: libxl_dom.c:264:hvm_set_viridian_features: freq group enabled
libxl: detail: libxl_dom.c:264:hvm_set_viridian_features: time_ref_count
group enabled
libxl: detail: libxl_dom.c:264:hvm_set_viridian_features: apic_assist group
enabled
libxl: detail: libxl_dom.c:264:hvm_set_viridian_features: crash_ctl group
enabled
domainbuilder: detail: xc_dom_allocate: cmdline="", features=""
domainbuilder: detail: xc_dom_kernel_file:
filename="/usr/libexec/xen/boot/hvmloader"
domainbuilder: detail: xc_dom_malloc_filemap    : 208 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.10, caps xen-3.0-x86_64
xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader
...
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying HVM-generic loader ...
domainbuilder: detail: loader probe OK
xc: detail: ELF: phdr: paddr=0x100000 memsz=0x3dc64
xc: detail: ELF: memory: 0x100000 -> 0x13dc64
domainbuilder: detail: xc_dom_mem_init: mem 4080 MB, pages 0xff000 pages,
4k each
domainbuilder: detail: xc_dom_mem_init: 0xff000 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: xc_dom_malloc            : 8672 kB
xc: detail: PHYSICAL MEMORY ALLOCATION:
xc: detail:   4KB PAGES: 0x0000000000000200
xc: detail:   2MB PAGES: 0x00000000000003f7
xc: detail:   1GB PAGES: 0x0000000000000002
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn
0x100+0x3e at 0x7f4882ac9000
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x100000 ->
0x13e000  (pfn 0x100 + 0x3e pages)
xc: detail: ELF: phdr 0 at 0x7f4882a8b000 -> 0x7f4882abf1c8
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn
0x13e+0x200 at 0x7f487eeda000
domainbuilder: detail: xc_dom_alloc_segment:   System Firmware module :
0x13e000 -> 0x33e000  (pfn 0x13e + 0x200 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn
0x33e+0x1 at 0x7f4882b4c000
domainbuilder: detail: xc_dom_alloc_segment:   HVM start info : 0x33e000 ->
0x33f000  (pfn 0x33e + 0x1 pages)
domainbuilder: detail: alloc_pgtables_hvm: doing nothing
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x33f000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: xc_dom_compat_check: supported guest type:
xen-3.0-x86_64
domainbuilder: detail: xc_dom_compat_check: supported guest type:
xen-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type:
hvm-3.0-x86_32 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type:
hvm-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type:
hvm-3.0-x86_64
domainbuilder: detail: clear_page: pfn 0xfefff, mfn 0xfefff
domainbuilder: detail: clear_page: pfn 0xfeffc, mfn 0xfeffc
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 8688 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 208 kB
domainbuilder: detail:       domU mmap          : 2300 kB
domainbuilder: detail: vcpu_hvm: called
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0x10f000
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0x10f001
domainbuilder: detail: xc_dom_release: called


Thank you!


* Re: [BUG] blkback reporting incorrect number of sectors, unable to boot
  2017-11-04  4:48 [BUG] blkback reporting incorrect number of sectors, unable to boot Mike Reardon
@ 2017-11-06 12:33 ` Jan Beulich
  2017-11-07 10:30   ` Roger Pau Monné
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2017-11-06 12:33 UTC (permalink / raw)
  To: Roger Pau Monne, Mike Reardon, Konrad Rzeszutek Wilk; +Cc: xen-devel

>>> On 04.11.17 at 05:48, <mule@inso.org> wrote:
> I added some additional storage to my server with some native 4k sector
> size disks.  The LVM volumes on that array seem to work fine when mounted
> by the host, and when passed through to any of the Linux guests, but
> Windows guests aren't able to use them when using PV drivers.  The work
> fine to install when I first install Windows (Windows 10, latest build) but
> once I install the PV drivers it will no longer boot and give an
> inaccessible boot device error.  If I assign the storage to a different
> Windows guest that already has the drivers installed (as secondary storage,
> not as the boot device) I see the disk listed in disk management, but the
> size of the disk is 8x larger than it should be.  After looking into it a
> bit, the disk is reporting 8x the number of sectors it should have when I
> run xenstore-ls.  Here is the info from xenstore-ls for the relevant volume:
> 
>       51712 = ""
>        frontend = "/local/domain/8/device/vbd/51712"
>        params = "/dev/tv_storage/main-storage"
>        script = "/etc/xen/scripts/block"
>        frontend-id = "8"
>        online = "1"
>        removable = "0"
>        bootable = "1"
>        state = "2"
>        dev = "xvda"
>        type = "phy"
>        mode = "w"
>        device-type = "disk"
>        discard-enable = "1"
>        feature-max-indirect-segments = "256"
>        multi-queue-max-queues = "12"
>        max-ring-page-order = "4"
>        physical-device = "fe:0"
>        physical-device-path = "/dev/dm-0"
>        hotplug-status = "connected"
>        feature-flush-cache = "1"
>        feature-discard = "0"
>        feature-barrier = "1"
>        feature-persistent = "1"
>        sectors = "34359738368"
>        info = "0"
>        sector-size = "4096"
>        physical-sector-size = "4096"
> 
> 
> Here are the numbers for the volume as reported by fdisk:
> 
> Disk /dev/tv_storage/main-storage: 16 TiB, 17592186044416 bytes, 4294967296
> sectors
> Units: sectors of 1 * 4096 = 4096 bytes
> Sector size (logical/physical): 4096 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disklabel type: dos
> Disk identifier: 0x00000000
> 
> Device                        Boot Start        End    Sectors Size Id Type
> /dev/tv_storage/main-storage1          1 4294967295 4294967295  16T ee GPT
> 
> 
> As with the size reported in Windows disk management, the number of sectors
> from xenstore seems is 8x higher than what it should be.  The disks aren't
> using 512b sector emulation, they are natively 4k, so I have no idea where
> the 8x increase is coming from.

Hmm, looks like a backend problem indeed: struct hd_struct's
nr_sects (which get_capacity() returns) looks to be in 512-byte
units, regardless of actual sector size. Hence the plain
get_capacity() use as well as the (wrongly open-coded) use of
part_nr_sects_read() look insufficient in vbd_sz(). Roger,
Konrad? Question of course is whether the Linux frontend then
also needs adjustment, and hence whether the backend can
be corrected in a compatible way in the first place.
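
As a rough illustration (not the actual xen-blkback code) of how that
mismatch yields the 8x factor seen in the report:

    /* The block layer tracks capacity in 512-byte units: */
    sector_t nr_512b = get_capacity(bdev->bd_disk);    /* 34359738368 here */
    unsigned int lbs = bdev_logical_block_size(bdev);  /* 4096 on native 4k */

    /* Advertising both values unchanged ... */
    xenbus_printf(xbt, dev->nodename, "sectors", "%llu",
                  (unsigned long long)nr_512b);
    xenbus_printf(xbt, dev->nodename, "sector-size", "%u", lbs);

    /* ... lets a frontend that computes sectors * sector-size arrive at
     * 34359738368 * 4096 = 128 TiB instead of 16 TiB.  Consistent pairs
     * would be (nr_512b, 512) or (nr_512b / (lbs / 512), lbs). */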

Jan



* Re: [BUG] blkback reporting incorrect number of sectors, unable to boot
  2017-11-06 12:33 ` Jan Beulich
@ 2017-11-07 10:30   ` Roger Pau Monné
  2017-11-07 10:51     ` Paul Durrant
  2017-11-07 11:31     ` Jan Beulich
  0 siblings, 2 replies; 16+ messages in thread
From: Roger Pau Monné @ 2017-11-07 10:30 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Mike Reardon, xen-devel, Konrad Rzeszutek Wilk

On Mon, Nov 06, 2017 at 05:33:37AM -0700, Jan Beulich wrote:
> >>> On 04.11.17 at 05:48, <mule@inso.org> wrote:
> > I added some additional storage to my server with some native 4k sector
> > size disks.  The LVM volumes on that array seem to work fine when mounted
> > by the host, and when passed through to any of the Linux guests, but
> > Windows guests aren't able to use them when using PV drivers.  The work
> > fine to install when I first install Windows (Windows 10, latest build) but
> > once I install the PV drivers it will no longer boot and give an
> > inaccessible boot device error.  If I assign the storage to a different
> > Windows guest that already has the drivers installed (as secondary storage,
> > not as the boot device) I see the disk listed in disk management, but the
> > size of the disk is 8x larger than it should be.  After looking into it a
> > bit, the disk is reporting 8x the number of sectors it should have when I
> > run xenstore-ls.  Here is the info from xenstore-ls for the relevant volume:
> > 
> >       51712 = ""
> >        frontend = "/local/domain/8/device/vbd/51712"
> >        params = "/dev/tv_storage/main-storage"
> >        script = "/etc/xen/scripts/block"
> >        frontend-id = "8"
> >        online = "1"
> >        removable = "0"
> >        bootable = "1"
> >        state = "2"
> >        dev = "xvda"
> >        type = "phy"
> >        mode = "w"
> >        device-type = "disk"
> >        discard-enable = "1"
> >        feature-max-indirect-segments = "256"
> >        multi-queue-max-queues = "12"
> >        max-ring-page-order = "4"
> >        physical-device = "fe:0"
> >        physical-device-path = "/dev/dm-0"
> >        hotplug-status = "connected"
> >        feature-flush-cache = "1"
> >        feature-discard = "0"
> >        feature-barrier = "1"
> >        feature-persistent = "1"
> >        sectors = "34359738368"
> >        info = "0"
> >        sector-size = "4096"
> >        physical-sector-size = "4096"
> > 
> > 
> > Here are the numbers for the volume as reported by fdisk:
> > 
> > Disk /dev/tv_storage/main-storage: 16 TiB, 17592186044416 bytes, 4294967296
> > sectors
> > Units: sectors of 1 * 4096 = 4096 bytes
> > Sector size (logical/physical): 4096 bytes / 4096 bytes
> > I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> > Disklabel type: dos
> > Disk identifier: 0x00000000
> > 
> > Device                        Boot Start        End    Sectors Size Id Type
> > /dev/tv_storage/main-storage1          1 4294967295 4294967295  16T ee GPT
> > 
> > 
> > As with the size reported in Windows disk management, the number of sectors
> > from xenstore seems is 8x higher than what it should be.  The disks aren't
> > using 512b sector emulation, they are natively 4k, so I have no idea where
> > the 8x increase is coming from.
> 
> Hmm, looks like a backend problem indeed: struct hd_struct's
> nr_sects (which get_capacity() returns) looks to be in 512-byte
> units, regardless of actual sector size. Hence the plain
> get_capacity() use as well the (wrongly open coded) use of
> part_nr_sects_read() looks insufficient in vbd_sz(). Roger,
> Konrad?

Hm, AFAICT sector-size should always be set to 512.

> Question of course is whether the Linux frontend then
> also needs adjustment, and hence whether the backend can
> be corrected in a compatible way in the first place.

blkfront uses set_capacity(), which also expects the sector count to be
in 512-byte units.

Roger.


* Re: [BUG] blkback reporting incorrect number of sectors, unable to boot
  2017-11-07 10:30   ` Roger Pau Monné
@ 2017-11-07 10:51     ` Paul Durrant
  2017-11-07 11:31     ` Jan Beulich
  1 sibling, 0 replies; 16+ messages in thread
From: Paul Durrant @ 2017-11-07 10:51 UTC (permalink / raw)
  To: Roger Pau Monne, Jan Beulich
  Cc: Mike Reardon, Konrad Rzeszutek Wilk, xen-devel

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of
> Roger Pau Monné
> Sent: 07 November 2017 10:30
> To: Jan Beulich <JBeulich@suse.com>
> Cc: Mike Reardon <mule@inso.org>; xen-devel@lists.xen.org; Konrad
> Rzeszutek Wilk <konrad.wilk@oracle.com>
> Subject: Re: [Xen-devel] [BUG] blkback reporting incorrect number of
> sectors, unable to boot
> 
> On Mon, Nov 06, 2017 at 05:33:37AM -0700, Jan Beulich wrote:
> > >>> On 04.11.17 at 05:48, <mule@inso.org> wrote:
> > > I added some additional storage to my server with some native 4k sector
> > > size disks.  The LVM volumes on that array seem to work fine when
> mounted
> > > by the host, and when passed through to any of the Linux guests, but
> > > Windows guests aren't able to use them when using PV drivers.  The
> work
> > > fine to install when I first install Windows (Windows 10, latest build) but
> > > once I install the PV drivers it will no longer boot and give an
> > > inaccessible boot device error.  If I assign the storage to a different
> > > Windows guest that already has the drivers installed (as secondary
> storage,
> > > not as the boot device) I see the disk listed in disk management, but the
> > > size of the disk is 8x larger than it should be.  After looking into it a
> > > bit, the disk is reporting 8x the number of sectors it should have when I
> > > run xenstore-ls.  Here is the info from xenstore-ls for the relevant
> volume:
> > >
> > >       51712 = ""
> > >        frontend = "/local/domain/8/device/vbd/51712"
> > >        params = "/dev/tv_storage/main-storage"
> > >        script = "/etc/xen/scripts/block"
> > >        frontend-id = "8"
> > >        online = "1"
> > >        removable = "0"
> > >        bootable = "1"
> > >        state = "2"
> > >        dev = "xvda"
> > >        type = "phy"
> > >        mode = "w"
> > >        device-type = "disk"
> > >        discard-enable = "1"
> > >        feature-max-indirect-segments = "256"
> > >        multi-queue-max-queues = "12"
> > >        max-ring-page-order = "4"
> > >        physical-device = "fe:0"
> > >        physical-device-path = "/dev/dm-0"
> > >        hotplug-status = "connected"
> > >        feature-flush-cache = "1"
> > >        feature-discard = "0"
> > >        feature-barrier = "1"
> > >        feature-persistent = "1"
> > >        sectors = "34359738368"
> > >        info = "0"
> > >        sector-size = "4096"
> > >        physical-sector-size = "4096"
> > >
> > >
> > > Here are the numbers for the volume as reported by fdisk:
> > >
> > > Disk /dev/tv_storage/main-storage: 16 TiB, 17592186044416 bytes,
> 4294967296
> > > sectors
> > > Units: sectors of 1 * 4096 = 4096 bytes
> > > Sector size (logical/physical): 4096 bytes / 4096 bytes
> > > I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> > > Disklabel type: dos
> > > Disk identifier: 0x00000000
> > >
> > > Device                        Boot Start        End    Sectors Size Id Type
> > > /dev/tv_storage/main-storage1          1 4294967295 4294967295  16T ee
> GPT
> > >
> > >
> > > As with the size reported in Windows disk management, the number of
> sectors
> > > from xenstore seems is 8x higher than what it should be.  The disks aren't
> > > using 512b sector emulation, they are natively 4k, so I have no idea where
> > > the 8x increase is coming from.
> >
> > Hmm, looks like a backend problem indeed: struct hd_struct's
> > nr_sects (which get_capacity() returns) looks to be in 512-byte
> > units, regardless of actual sector size. Hence the plain
> > get_capacity() use as well the (wrongly open coded) use of
> > part_nr_sects_read() looks insufficient in vbd_sz(). Roger,
> > Konrad?
> 
> Hm, AFAICT sector-size should always be set to 512.
> 
> > Question of course is whether the Linux frontend then
> > also needs adjustment, and hence whether the backend can
> > be corrected in a compatible way in the first place.
> 
> blkfront uses set_capacity, which also seems to expect the sectors to
> be hardcoded to 512.
> 

Oh dear. No wonder it's all quite broken.

  Paul

> Roger.
> 

* Re: [BUG] blkback reporting incorrect number of sectors, unable to boot
  2017-11-07 10:30   ` Roger Pau Monné
  2017-11-07 10:51     ` Paul Durrant
@ 2017-11-07 11:31     ` Jan Beulich
  2017-11-07 12:41       ` Roger Pau Monné
  1 sibling, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2017-11-07 11:31 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Mike Reardon, Konrad Rzeszutek Wilk, xen-devel

>>> On 07.11.17 at 11:30, <roger.pau@citrix.com> wrote:
> On Mon, Nov 06, 2017 at 05:33:37AM -0700, Jan Beulich wrote:
>> >>> On 04.11.17 at 05:48, <mule@inso.org> wrote:
>> > I added some additional storage to my server with some native 4k sector
>> > size disks.  The LVM volumes on that array seem to work fine when mounted
>> > by the host, and when passed through to any of the Linux guests, but
>> > Windows guests aren't able to use them when using PV drivers.  The work
>> > fine to install when I first install Windows (Windows 10, latest build) but
>> > once I install the PV drivers it will no longer boot and give an
>> > inaccessible boot device error.  If I assign the storage to a different
>> > Windows guest that already has the drivers installed (as secondary storage,
>> > not as the boot device) I see the disk listed in disk management, but the
>> > size of the disk is 8x larger than it should be.  After looking into it a
>> > bit, the disk is reporting 8x the number of sectors it should have when I
>> > run xenstore-ls.  Here is the info from xenstore-ls for the relevant volume:
>> > 
>> >       51712 = ""
>> >        frontend = "/local/domain/8/device/vbd/51712"
>> >        params = "/dev/tv_storage/main-storage"
>> >        script = "/etc/xen/scripts/block"
>> >        frontend-id = "8"
>> >        online = "1"
>> >        removable = "0"
>> >        bootable = "1"
>> >        state = "2"
>> >        dev = "xvda"
>> >        type = "phy"
>> >        mode = "w"
>> >        device-type = "disk"
>> >        discard-enable = "1"
>> >        feature-max-indirect-segments = "256"
>> >        multi-queue-max-queues = "12"
>> >        max-ring-page-order = "4"
>> >        physical-device = "fe:0"
>> >        physical-device-path = "/dev/dm-0"
>> >        hotplug-status = "connected"
>> >        feature-flush-cache = "1"
>> >        feature-discard = "0"
>> >        feature-barrier = "1"
>> >        feature-persistent = "1"
>> >        sectors = "34359738368"
>> >        info = "0"
>> >        sector-size = "4096"
>> >        physical-sector-size = "4096"
>> > 
>> > 
>> > Here are the numbers for the volume as reported by fdisk:
>> > 
>> > Disk /dev/tv_storage/main-storage: 16 TiB, 17592186044416 bytes, 4294967296
>> > sectors
>> > Units: sectors of 1 * 4096 = 4096 bytes
>> > Sector size (logical/physical): 4096 bytes / 4096 bytes
>> > I/O size (minimum/optimal): 4096 bytes / 4096 bytes
>> > Disklabel type: dos
>> > Disk identifier: 0x00000000
>> > 
>> > Device                        Boot Start        End    Sectors Size Id Type
>> > /dev/tv_storage/main-storage1          1 4294967295 4294967295  16T ee GPT
>> > 
>> > 
>> > As with the size reported in Windows disk management, the number of sectors
>> > from xenstore seems is 8x higher than what it should be.  The disks aren't
>> > using 512b sector emulation, they are natively 4k, so I have no idea where
>> > the 8x increase is coming from.
>> 
>> Hmm, looks like a backend problem indeed: struct hd_struct's
>> nr_sects (which get_capacity() returns) looks to be in 512-byte
>> units, regardless of actual sector size. Hence the plain
>> get_capacity() use as well the (wrongly open coded) use of
>> part_nr_sects_read() looks insufficient in vbd_sz(). Roger,
>> Konrad?
> 
> Hm, AFAICT sector-size should always be set to 512.

Which would mean that bdev_logical_block_size() can't be used by
blkback to set this value. Yet then - what's the point of the xenstore
setting if it's always the same value anyway?

Jan



* Re: [BUG] blkback reporting incorrect number of sectors, unable to boot
  2017-11-07 11:31     ` Jan Beulich
@ 2017-11-07 12:41       ` Roger Pau Monné
  2017-11-09  3:27         ` Mike Reardon
  0 siblings, 1 reply; 16+ messages in thread
From: Roger Pau Monné @ 2017-11-07 12:41 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Mike Reardon, Konrad Rzeszutek Wilk, xen-devel

On Tue, Nov 07, 2017 at 04:31:06AM -0700, Jan Beulich wrote:
> >>> On 07.11.17 at 11:30, <roger.pau@citrix.com> wrote:
> > On Mon, Nov 06, 2017 at 05:33:37AM -0700, Jan Beulich wrote:
> >> >>> On 04.11.17 at 05:48, <mule@inso.org> wrote:
> >> > I added some additional storage to my server with some native 4k sector
> >> > size disks.  The LVM volumes on that array seem to work fine when mounted
> >> > by the host, and when passed through to any of the Linux guests, but
> >> > Windows guests aren't able to use them when using PV drivers.  The work
> >> > fine to install when I first install Windows (Windows 10, latest build) but
> >> > once I install the PV drivers it will no longer boot and give an
> >> > inaccessible boot device error.  If I assign the storage to a different
> >> > Windows guest that already has the drivers installed (as secondary storage,
> >> > not as the boot device) I see the disk listed in disk management, but the
> >> > size of the disk is 8x larger than it should be.  After looking into it a
> >> > bit, the disk is reporting 8x the number of sectors it should have when I
> >> > run xenstore-ls.  Here is the info from xenstore-ls for the relevant volume:
> >> > 
> >> >       51712 = ""
> >> >        frontend = "/local/domain/8/device/vbd/51712"
> >> >        params = "/dev/tv_storage/main-storage"
> >> >        script = "/etc/xen/scripts/block"
> >> >        frontend-id = "8"
> >> >        online = "1"
> >> >        removable = "0"
> >> >        bootable = "1"
> >> >        state = "2"
> >> >        dev = "xvda"
> >> >        type = "phy"
> >> >        mode = "w"
> >> >        device-type = "disk"
> >> >        discard-enable = "1"
> >> >        feature-max-indirect-segments = "256"
> >> >        multi-queue-max-queues = "12"
> >> >        max-ring-page-order = "4"
> >> >        physical-device = "fe:0"
> >> >        physical-device-path = "/dev/dm-0"
> >> >        hotplug-status = "connected"
> >> >        feature-flush-cache = "1"
> >> >        feature-discard = "0"
> >> >        feature-barrier = "1"
> >> >        feature-persistent = "1"
> >> >        sectors = "34359738368"
> >> >        info = "0"
> >> >        sector-size = "4096"
> >> >        physical-sector-size = "4096"
> >> > 
> >> > 
> >> > Here are the numbers for the volume as reported by fdisk:
> >> > 
> >> > Disk /dev/tv_storage/main-storage: 16 TiB, 17592186044416 bytes, 4294967296
> >> > sectors
> >> > Units: sectors of 1 * 4096 = 4096 bytes
> >> > Sector size (logical/physical): 4096 bytes / 4096 bytes
> >> > I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> >> > Disklabel type: dos
> >> > Disk identifier: 0x00000000
> >> > 
> >> > Device                        Boot Start        End    Sectors Size Id Type
> >> > /dev/tv_storage/main-storage1          1 4294967295 4294967295  16T ee GPT
> >> > 
> >> > 
> >> > As with the size reported in Windows disk management, the number of sectors
> >> > from xenstore seems is 8x higher than what it should be.  The disks aren't
> >> > using 512b sector emulation, they are natively 4k, so I have no idea where
> >> > the 8x increase is coming from.
> >> 
> >> Hmm, looks like a backend problem indeed: struct hd_struct's
> >> nr_sects (which get_capacity() returns) looks to be in 512-byte
> >> units, regardless of actual sector size. Hence the plain
> >> get_capacity() use as well the (wrongly open coded) use of
> >> part_nr_sects_read() looks insufficient in vbd_sz(). Roger,
> >> Konrad?
> > 
> > Hm, AFAICT sector-size should always be set to 512.
> 
> Which would mean that bdev_logical_block_size() can't be used by
> blkback to set this value. Yet then - what's the point of the xenstore
> setting if it's always the same value anyway?

Some frontends (at least FreeBSD) will choke if sector-size is not
set. So we have the following scenario:

 - Windows: uses sector-size * sectors to set the disk capacity.
 - Linux: sets the disk capacity to sectors * 512.
 - FreeBSD: sets the disk capacity to sector-size * sectors, and will choke
   if sector-size is not set.

In order to keep compatibility with all of them AFAICT the only option
is to hardcode sector-size to 512 in xenstore.
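
To make that concrete with the values from this report, a minimal
standalone sketch (illustrative userspace code, not any frontend's actual
implementation):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* xenstore values from the report above */
        uint64_t sectors = 34359738368ULL;
        uint64_t sector_size = 4096ULL;

        /* Windows PV / FreeBSD interpretation: sectors * sector-size */
        printf("sectors * sector-size = %llu bytes (128 TiB, 8x too big)\n",
               (unsigned long long)(sectors * sector_size));

        /* Linux blkfront interpretation: sectors * 512 */
        printf("sectors * 512         = %llu bytes (16 TiB, the real size)\n",
               (unsigned long long)(sectors * 512ULL));
        return 0;
    }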

Roger.


* Re: [BUG] blkback reporting incorrect number of sectors, unable to boot
  2017-11-07 12:41       ` Roger Pau Monné
@ 2017-11-09  3:27         ` Mike Reardon
  2017-11-09  9:30           ` Roger Pau Monné
  0 siblings, 1 reply; 16+ messages in thread
From: Mike Reardon @ 2017-11-09  3:27 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Konrad Rzeszutek Wilk, Jan Beulich, xen-devel



So am I correct in reading this that for at least the foreseeable future
storage using 4k sector sizes is not gonna happen?  I'm just trying to
figure out if I need to get some different hardware.

Thank you!

On Tue, Nov 7, 2017 at 5:41 AM, Roger Pau Monné <roger.pau@citrix.com>
wrote:

> On Tue, Nov 07, 2017 at 04:31:06AM -0700, Jan Beulich wrote:
> > >>> On 07.11.17 at 11:30, <roger.pau@citrix.com> wrote:
> > > On Mon, Nov 06, 2017 at 05:33:37AM -0700, Jan Beulich wrote:
> > >> >>> On 04.11.17 at 05:48, <mule@inso.org> wrote:
> > >> > I added some additional storage to my server with some native 4k
> sector
> > >> > size disks.  The LVM volumes on that array seem to work fine when
> mounted
> > >> > by the host, and when passed through to any of the Linux guests, but
> > >> > Windows guests aren't able to use them when using PV drivers.  The
> work
> > >> > fine to install when I first install Windows (Windows 10, latest
> build) but
> > >> > once I install the PV drivers it will no longer boot and give an
> > >> > inaccessible boot device error.  If I assign the storage to a
> different
> > >> > Windows guest that already has the drivers installed (as secondary
> storage,
> > >> > not as the boot device) I see the disk listed in disk management,
> but the
> > >> > size of the disk is 8x larger than it should be.  After looking
> into it a
> > >> > bit, the disk is reporting 8x the number of sectors it should have
> when I
> > >> > run xenstore-ls.  Here is the info from xenstore-ls for the
> relevant volume:
> > >> >
> > >> >       51712 = ""
> > >> >        frontend = "/local/domain/8/device/vbd/51712"
> > >> >        params = "/dev/tv_storage/main-storage"
> > >> >        script = "/etc/xen/scripts/block"
> > >> >        frontend-id = "8"
> > >> >        online = "1"
> > >> >        removable = "0"
> > >> >        bootable = "1"
> > >> >        state = "2"
> > >> >        dev = "xvda"
> > >> >        type = "phy"
> > >> >        mode = "w"
> > >> >        device-type = "disk"
> > >> >        discard-enable = "1"
> > >> >        feature-max-indirect-segments = "256"
> > >> >        multi-queue-max-queues = "12"
> > >> >        max-ring-page-order = "4"
> > >> >        physical-device = "fe:0"
> > >> >        physical-device-path = "/dev/dm-0"
> > >> >        hotplug-status = "connected"
> > >> >        feature-flush-cache = "1"
> > >> >        feature-discard = "0"
> > >> >        feature-barrier = "1"
> > >> >        feature-persistent = "1"
> > >> >        sectors = "34359738368"
> > >> >        info = "0"
> > >> >        sector-size = "4096"
> > >> >        physical-sector-size = "4096"
> > >> >
> > >> >
> > >> > Here are the numbers for the volume as reported by fdisk:
> > >> >
> > >> > Disk /dev/tv_storage/main-storage: 16 TiB, 17592186044416 bytes,
> 4294967296
> > >> > sectors
> > >> > Units: sectors of 1 * 4096 = 4096 bytes
> > >> > Sector size (logical/physical): 4096 bytes / 4096 bytes
> > >> > I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> > >> > Disklabel type: dos
> > >> > Disk identifier: 0x00000000
> > >> >
> > >> > Device                        Boot Start        End    Sectors Size
> Id Type
> > >> > /dev/tv_storage/main-storage1          1 4294967295 4294967295  16T
> ee GPT
> > >> >
> > >> >
> > >> > As with the size reported in Windows disk management, the number of
> sectors
> > >> > from xenstore seems is 8x higher than what it should be.  The disks
> aren't
> > >> > using 512b sector emulation, they are natively 4k, so I have no
> idea where
> > >> > the 8x increase is coming from.
> > >>
> > >> Hmm, looks like a backend problem indeed: struct hd_struct's
> > >> nr_sects (which get_capacity() returns) looks to be in 512-byte
> > >> units, regardless of actual sector size. Hence the plain
> > >> get_capacity() use as well the (wrongly open coded) use of
> > >> part_nr_sects_read() looks insufficient in vbd_sz(). Roger,
> > >> Konrad?
> > >
> > > Hm, AFAICT sector-size should always be set to 512.
> >
> > Which would mean that bdev_logical_block_size() can't be used by
> > blkback to set this value. Yet then - what's the point of the xenstore
> > setting if it's always the same value anyway?
>
> Some frontends (at least FreeBSD) will choke if sector-size is not
> set. So we have the following scenario:
>
>  - Windows: acknowledges sector-size * sectors in order to set disk
>    capacity.
>  - Linux: sets disk capacity to sectors * 512.
>  - FreeBSD: sets disk capacity to sector-size * sectors, will choke if
>    sector-size is not set.
>
> In order to keep compatibility with all of them AFAICT the only option
> is to hardcode sector-size to 512 in xenstore.
>
> Roger.
>


* Re: [BUG] blkback reporting incorrect number of sectors, unable to boot
  2017-11-09  3:27         ` Mike Reardon
@ 2017-11-09  9:30           ` Roger Pau Monné
  2017-11-09  9:37             ` Paul Durrant
  2017-11-09 15:15             ` Mike Reardon
  0 siblings, 2 replies; 16+ messages in thread
From: Roger Pau Monné @ 2017-11-09  9:30 UTC (permalink / raw)
  To: Mike Reardon; +Cc: Konrad Rzeszutek Wilk, Jan Beulich, xen-devel

Please try to avoid top-posting.

On Wed, Nov 08, 2017 at 08:27:17PM -0700, Mike Reardon wrote:
> So am I correct in reading this that for at least the foreseeable future
> storage using 4k sector sizes is not gonna happen?  I'm just trying to
> figure out if I need to get some different hardware.

Have you tried to use qdisk instead of blkback for the storage
backend?

You will have to change your disk configuration line to add
backendtype=qdisk.
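
For example (an untested sketch using the volume path from the original
report; the other key=value settings are just illustrative):

    disk = [ 'target=/dev/tv_storage/main-storage, format=raw, vdev=xvda, access=rw, backendtype=qdisk' ]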

Roger.


* Re: [BUG] blkback reporting incorrect number of sectors, unable to boot
  2017-11-09  9:30           ` Roger Pau Monné
@ 2017-11-09  9:37             ` Paul Durrant
  2017-11-09 15:15             ` Mike Reardon
  1 sibling, 0 replies; 16+ messages in thread
From: Paul Durrant @ 2017-11-09  9:37 UTC (permalink / raw)
  To: Roger Pau Monne, Mike Reardon
  Cc: xen-devel, Jan Beulich, Konrad Rzeszutek Wilk

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of
> Roger Pau Monné
> Sent: 09 November 2017 09:30
> To: Mike Reardon <mule@inso.org>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Jan Beulich
> <JBeulich@suse.com>; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [BUG] blkback reporting incorrect number of
> sectors, unable to boot
> 
> Please try to avoid top-posting.
> 
> On Wed, Nov 08, 2017 at 08:27:17PM -0700, Mike Reardon wrote:
> > So am I correct in reading this that for at least the foreseeable future
> > storage using 4k sector sizes is not gonna happen?  I'm just trying to
> > figure out if I need to get some different hardware.
> 
> Have you tried to use qdisk instead of blkback for the storage
> backend?
> 
> You will have to change your disk configuration line to add
> backendtype=qdisk.

From my reading qdisk (i.e. xen_disk.c in the QEMU source) hard-codes its
block size to 512, but at least it looks like it won't mis-report the
number of sectors.

  Paul

> 
> Roger.
> 

* Re: [BUG] blkback reporting incorrect number of sectors, unable to boot
  2017-11-09  9:30           ` Roger Pau Monné
  2017-11-09  9:37             ` Paul Durrant
@ 2017-11-09 15:15             ` Mike Reardon
  2017-11-09 17:03               ` Roger Pau Monné
  1 sibling, 1 reply; 16+ messages in thread
From: Mike Reardon @ 2017-11-09 15:15 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Konrad Rzeszutek Wilk, Jan Beulich, xen-devel



On Thu, Nov 9, 2017 at 2:30 AM, Roger Pau Monné <roger.pau@citrix.com>
wrote:

> Please try to avoid top-posting.
>
> On Wed, Nov 08, 2017 at 08:27:17PM -0700, Mike Reardon wrote:
> > So am I correct in reading this that for at least the foreseeable future
> > storage using 4k sector sizes is not gonna happen?  I'm just trying to
> > figure out if I need to get some different hardware.
>
> Have you tried to use qdisk instead of blkback for the storage
> backend?
>
> You will have to change your disk configuration line to add
> backendtype=qdisk.
>
> Roger.
>

Sorry I didn't realize my client was defaulting to top post.

If I add that to the disk config line, the system just hangs on the OVMF
BIOS screen.  This appears in the qemu-dm log:


xen be: qdisk-51712: xen be: qdisk-51712: error: Failed to get "write" lock
error: Failed to get "write" lock
xen be: qdisk-51712: xen be: qdisk-51712: initialise() failed
initialise() failed


* Re: [BUG] blkback reporting incorrect number of sectors, unable to boot
  2017-11-09 15:15             ` Mike Reardon
@ 2017-11-09 17:03               ` Roger Pau Monné
  2017-11-09 17:49                 ` Anthony PERARD
  2017-11-09 18:25                 ` Mike Reardon
  0 siblings, 2 replies; 16+ messages in thread
From: Roger Pau Monné @ 2017-11-09 17:03 UTC (permalink / raw)
  To: Mike Reardon
  Cc: Anthony PERARD, xen-devel, Stefano Stabellini, Jan Beulich,
	Konrad Rzeszutek Wilk

On Thu, Nov 09, 2017 at 08:15:52AM -0700, Mike Reardon wrote:
> On Thu, Nov 9, 2017 at 2:30 AM, Roger Pau Monné <roger.pau@citrix.com>
> wrote:
> 
> > Please try to avoid top-posting.
> >
> > On Wed, Nov 08, 2017 at 08:27:17PM -0700, Mike Reardon wrote:
> > > So am I correct in reading this that for at least the foreseeable future
> > > storage using 4k sector sizes is not gonna happen?  I'm just trying to
> > > figure out if I need to get some different hardware.
> >
> > Have you tried to use qdisk instead of blkback for the storage
> > backend?
> >
> > You will have to change your disk configuration line to add
> > backendtype=qdisk.
> >
> > Roger.
> >
> 
> Sorry I didn't realize my client was defaulting to top post.
> 
> If I add that to the disk config line, the system just hangs on the ovmf
> bios screen.  This appears in the qemu-dm log:
> 
> 
> xen be: qdisk-51712: xen be: qdisk-51712: error: Failed to get "write" lock
> error: Failed to get "write" lock
> xen be: qdisk-51712: xen be: qdisk-51712: initialise() failed
> initialise() failed

Hm, that doesn't seem related to the issue at hand. Adding Anthony and
Stefano (the QEMU maintainers).

Is there a known issue when booting an HVM guest with qdisk and UEFI?

Roger.


* Re: [BUG] blkback reporting incorrect number of sectors, unable to boot
  2017-11-09 17:03               ` Roger Pau Monné
@ 2017-11-09 17:49                 ` Anthony PERARD
  2017-11-10  9:40                   ` Paul Durrant
  2017-11-09 18:25                 ` Mike Reardon
  1 sibling, 1 reply; 16+ messages in thread
From: Anthony PERARD @ 2017-11-09 17:49 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Mike Reardon, xen-devel, Stefano Stabellini, Jan Beulich,
	Konrad Rzeszutek Wilk

On Thu, Nov 09, 2017 at 05:03:18PM +0000, Roger Pau Monné wrote:
> On Thu, Nov 09, 2017 at 08:15:52AM -0700, Mike Reardon wrote:
> > On Thu, Nov 9, 2017 at 2:30 AM, Roger Pau Monné <roger.pau@citrix.com>
> > wrote:
> > 
> > > Please try to avoid top-posting.
> > >
> > > On Wed, Nov 08, 2017 at 08:27:17PM -0700, Mike Reardon wrote:
> > > > So am I correct in reading this that for at least the foreseeable future
> > > > storage using 4k sector sizes is not gonna happen?  I'm just trying to
> > > > figure out if I need to get some different hardware.
> > >
> > > Have you tried to use qdisk instead of blkback for the storage
> > > backend?
> > >
> > > You will have to change your disk configuration line to add
> > > backendtype=qdisk.
> > >
> > > Roger.
> > >
> > 
> > Sorry I didn't realize my client was defaulting to top post.
> > 
> > If I add that to the disk config line, the system just hangs on the ovmf
> > bios screen.  This appears in the qemu-dm log:
> > 
> > 
> > xen be: qdisk-51712: xen be: qdisk-51712: error: Failed to get "write" lock
> > error: Failed to get "write" lock
> > xen be: qdisk-51712: xen be: qdisk-51712: initialise() failed
> > initialise() failed

:(, I have never seen those error messages; maybe we should increase the
verbosity of the QEMU backends.

> Hm, that doesn't seem related to the issue at hand. Adding Anthony and
> Stefano (the QEMU maintainers).
> 
> Is there a know issue when booting a HVM guest with qdisk and UEFI?

I know of the issue, but I don't know what to do about it yet.

The problem is that the QEMU shipped with Xen 4.10 (QEMU 2.10) takes a lock
on the disk image.  When booting an HVM guest with a qdisk backend, the disk
is opened twice, but can only be locked once, so initialisation of the PV
disk fails.
Unfortunately, OVMF will wait indefinitely until the PV disk is
initialised.
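
The same kind of failure can be reproduced with a tiny userspace sketch
(generic advisory locking for illustration only, not QEMU's actual
image-locking code):

    #include <stdio.h>
    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main(void)
    {
        /* Two independent opens of the same image, as with the emulated
         * and PV paths; only the first exclusive lock can succeed. */
        int fd1 = open("disk.img", O_RDWR | O_CREAT, 0600);
        int fd2 = open("disk.img", O_RDWR);

        if (flock(fd1, LOCK_EX | LOCK_NB) != 0)
            perror("first lock");
        if (flock(fd2, LOCK_EX | LOCK_NB) != 0)
            perror("second lock");   /* fails: EWOULDBLOCK */

        close(fd2);
        close(fd1);
        return 0;
    }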

-- 
Anthony PERARD


* Re: [BUG] blkback reporting incorrect number of sectors, unable to boot
  2017-11-09 17:03               ` Roger Pau Monné
  2017-11-09 17:49                 ` Anthony PERARD
@ 2017-11-09 18:25                 ` Mike Reardon
  1 sibling, 0 replies; 16+ messages in thread
From: Mike Reardon @ 2017-11-09 18:25 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Anthony PERARD, xen-devel, Stefano Stabellini, Jan Beulich,
	Konrad Rzeszutek Wilk



On Thu, Nov 9, 2017 at 10:03 AM, Roger Pau Monné <roger.pau@citrix.com>
wrote:

> On Thu, Nov 09, 2017 at 08:15:52AM -0700, Mike Reardon wrote:
> > On Thu, Nov 9, 2017 at 2:30 AM, Roger Pau Monné <roger.pau@citrix.com>
> > wrote:
> >
> > > Please try to avoid top-posting.
> > >
> > > On Wed, Nov 08, 2017 at 08:27:17PM -0700, Mike Reardon wrote:
> > > > So am I correct in reading this that for at least the foreseeable
> future
> > > > storage using 4k sector sizes is not gonna happen?  I'm just trying
> to
> > > > figure out if I need to get some different hardware.
> > >
> > > Have you tried to use qdisk instead of blkback for the storage
> > > backend?
> > >
> > > You will have to change your disk configuration line to add
> > > backendtype=qdisk.
> > >
> > > Roger.
> > >
> >
> > Sorry I didn't realize my client was defaulting to top post.
> >
> > If I add that to the disk config line, the system just hangs on the ovmf
> > bios screen.  This appears in the qemu-dm log:
> >
> >
> > xen be: qdisk-51712: xen be: qdisk-51712: error: Failed to get "write"
> lock
> > error: Failed to get "write" lock
> > xen be: qdisk-51712: xen be: qdisk-51712: initialise() failed
> > initialise() failed
>
> Hm, that doesn't seem related to the issue at hand. Adding Anthony and
> Stefano (the QEMU maintainers).
>
> Is there a know issue when booting a HVM guest with qdisk and UEFI?
>
> Roger.
>

So I added the disk to a domU that was running Windows and not using UEFI
and it appears to be working as expected.  The disk shows up, I can
read/write to the disk just fine.


* Re: [BUG] blkback reporting incorrect number of sectors, unable to boot
  2017-11-09 17:49                 ` Anthony PERARD
@ 2017-11-10  9:40                   ` Paul Durrant
  2017-11-10  9:52                     ` Jan Beulich
  0 siblings, 1 reply; 16+ messages in thread
From: Paul Durrant @ 2017-11-10  9:40 UTC (permalink / raw)
  To: Anthony Perard, Roger Pau Monne
  Cc: Konrad Rzeszutek Wilk, Mike Reardon, Stefano Stabellini,
	Jan Beulich, xen-devel

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of
> Anthony PERARD
> Sent: 09 November 2017 17:50
> To: Roger Pau Monne <roger.pau@citrix.com>
> Cc: Mike Reardon <mule@inso.org>; xen-devel@lists.xen.org; Stefano
> Stabellini <sstabellini@kernel.org>; Jan Beulich <JBeulich@suse.com>;
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Subject: Re: [Xen-devel] [BUG] blkback reporting incorrect number of
> sectors, unable to boot
> 
> On Thu, Nov 09, 2017 at 05:03:18PM +0000, Roger Pau Monné wrote:
> > On Thu, Nov 09, 2017 at 08:15:52AM -0700, Mike Reardon wrote:
> > > On Thu, Nov 9, 2017 at 2:30 AM, Roger Pau Monné
> <roger.pau@citrix.com>
> > > wrote:
> > >
> > > > Please try to avoid top-posting.
> > > >
> > > > On Wed, Nov 08, 2017 at 08:27:17PM -0700, Mike Reardon wrote:
> > > > > So am I correct in reading this that for at least the foreseeable future
> > > > > storage using 4k sector sizes is not gonna happen?  I'm just trying to
> > > > > figure out if I need to get some different hardware.
> > > >
> > > > Have you tried to use qdisk instead of blkback for the storage
> > > > backend?
> > > >
> > > > You will have to change your disk configuration line to add
> > > > backendtype=qdisk.
> > > >
> > > > Roger.
> > > >
> > >
> > > Sorry I didn't realize my client was defaulting to top post.
> > >
> > > If I add that to the disk config line, the system just hangs on the ovmf
> > > bios screen.  This appears in the qemu-dm log:
> > >
> > >
> > > xen be: qdisk-51712: xen be: qdisk-51712: error: Failed to get "write" lock
> > > error: Failed to get "write" lock
> > > xen be: qdisk-51712: xen be: qdisk-51712: initialise() failed
> > > initialise() failed
> 
> :(, I never saw those error messages, maybe we should increase the
> verbosity of the qemu backends.
> 
> > Hm, that doesn't seem related to the issue at hand. Adding Anthony and
> > Stefano (the QEMU maintainers).
> >
> > Is there a know issue when booting a HVM guest with qdisk and UEFI?
> 
> I know of the issue, I don't know what to do about it yet.
> 
> The problem is that QEMU 4.10 have a lock on the disk image. When
> booting an HVM guest with a qdisk backend, the disk is open twice, but
> can only be locked once, so when the pv disk is been initialized, the
> initialisation kind of fail.
> Unfortunatly, OVMF will wait indefinitly until the PV disk is
> initialized.

That's presumably because the OVMF frontend leaves the emulated disk plugged in despite talking via PV?

  Paul

> 
> --
> Anthony PERARD
> 

* Re: [BUG] blkback reporting incorrect number of sectors, unable to boot
  2017-11-10  9:40                   ` Paul Durrant
@ 2017-11-10  9:52                     ` Jan Beulich
  2017-11-10  9:58                       ` Paul Durrant
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2017-11-10  9:52 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Mike Reardon, Konrad Rzeszutek Wilk, xen-devel,
	Stefano Stabellini, Anthony Perard, Roger Pau Monne

>>> On 10.11.17 at 10:40, <Paul.Durrant@citrix.com> wrote:
>> Anthony PERARD
>> Sent: 09 November 2017 17:50
>> The problem is that QEMU 4.10 have a lock on the disk image. When
>> booting an HVM guest with a qdisk backend, the disk is open twice, but
>> can only be locked once, so when the pv disk is been initialized, the
>> initialisation kind of fail.
>> Unfortunatly, OVMF will wait indefinitly until the PV disk is
>> initialized.
> 
> That's presumably because the OVMF frontend leaves the emulated disk plugged 
> in despite talking via PV?

Well, how could it not? It can't know whether the OS to be booted
is going to have PV drivers, and iirc the unplug is not reversible.
Shouldn't OVMF close the blkif connection, with the backend
responding to this by unlocking (and maybe closing) the image?

Jan



* Re: [BUG] blkback reporting incorrect number of sectors, unable to boot
  2017-11-10  9:52                     ` Jan Beulich
@ 2017-11-10  9:58                       ` Paul Durrant
  0 siblings, 0 replies; 16+ messages in thread
From: Paul Durrant @ 2017-11-10  9:58 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Mike Reardon, Konrad Rzeszutek Wilk, xen-devel,
	Stefano Stabellini, Anthony Perard, Roger Pau Monne

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 10 November 2017 09:53
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Anthony Perard <anthony.perard@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>; Mike Reardon <mule@inso.org>; Stefano Stabellini
> <sstabellini@kernel.org>; xen-devel@lists.xen.org; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>
> Subject: RE: [Xen-devel] [BUG] blkback reporting incorrect number of
> sectors, unable to boot
> 
> >>> On 10.11.17 at 10:40, <Paul.Durrant@citrix.com> wrote:
> >> Anthony PERARD
> >> Sent: 09 November 2017 17:50
> >> The problem is that QEMU 4.10 have a lock on the disk image. When
> >> booting an HVM guest with a qdisk backend, the disk is open twice, but
> >> can only be locked once, so when the pv disk is been initialized, the
> >> initialisation kind of fail.
> >> Unfortunatly, OVMF will wait indefinitly until the PV disk is
> >> initialized.
> >
> > That's presumably because the OVMF frontend leaves the emulated disk
> plugged
> > in despite talking via PV?
> 
> Well, how could it not? It can't know whether the OS to be booted
> is going to have PV drivers, and iirc the unplug is not reversible.

Oh, quite, but this is a fundamental problem if QEMU believes that it is not
safe to open the underlying storage shared read-write (which would be the
case for a qcow or vhd, where there is metadata to worry about).  QEMU opens
the storage as soon as the emulated device is realised, so when xen_disk
tries to open it again at connect time, it's always going to fail.

  Paul

> Shouldn't OVMF close the blkif connection, with the backend
> responding to this by unlocking (and maybe closing) the image?
> 
> Jan



end of thread

Thread overview: 16+ messages
2017-11-04  4:48 [BUG] blkback reporting incorrect number of sectors, unable to boot Mike Reardon
2017-11-06 12:33 ` Jan Beulich
2017-11-07 10:30   ` Roger Pau Monné
2017-11-07 10:51     ` Paul Durrant
2017-11-07 11:31     ` Jan Beulich
2017-11-07 12:41       ` Roger Pau Monné
2017-11-09  3:27         ` Mike Reardon
2017-11-09  9:30           ` Roger Pau Monné
2017-11-09  9:37             ` Paul Durrant
2017-11-09 15:15             ` Mike Reardon
2017-11-09 17:03               ` Roger Pau Monné
2017-11-09 17:49                 ` Anthony PERARD
2017-11-10  9:40                   ` Paul Durrant
2017-11-10  9:52                     ` Jan Beulich
2017-11-10  9:58                       ` Paul Durrant
2017-11-09 18:25                 ` Mike Reardon
