* [Bug 1926231] [NEW] SCSI passthrough of SATA cdrom -> errors & performance issues
@ 2021-04-27  0:39 Michael Slade
  2021-04-27 13:34 ` [Bug 1926231] " Michael Slade
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Michael Slade @ 2021-04-27  0:39 UTC (permalink / raw)
  To: qemu-devel

Public bug reported:

qemu 5.0, compiled from git

I am passing through a SATA cdrom via SCSI passthrough, with this
libvirt XML:

    <hostdev mode='subsystem' type='scsi' managed='no' sgio='unfiltered' rawio='yes'>
      <source>
        <adapter name='scsi_host3'/>
        <address bus='0' target='0' unit='0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </hostdev>
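
For reference, a quick way to confirm on the host which device node that
adapter/address corresponds to (a minimal sketch; the 3:0:0:0 address
and sysfs path are assumptions based on the XML above):

    # Show the device at SCSI address 3:0:0:0, including its sg node
    lsscsi -g 3:0:0:0
    # Show which kernel driver is bound to that host adapter
    cat /sys/class/scsi_host/host3/proc_name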

It mostly works (I have written discs with it), but under certain
circumstances I get errors that cause reads to take about 5x as long as
they should.  The problem appears to depend on the guest's read block
size.

I found that if, on the guest, I run e.g. `dd if=$some_large_file
bs=262144 | pv > /dev/null`, then `iostat` and `pv` disagree by a factor
of about 2 on how much data is being read.  The guest also logs many
kernel messages like these:

[  190.919684] sr 0:0:0:0: [sr0] tag#160 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
[  190.919687] sr 0:0:0:0: [sr0] tag#160 Sense Key : Aborted Command [current] 
[  190.919689] sr 0:0:0:0: [sr0] tag#160 Add. Sense: I/O process terminated
[  190.919691] sr 0:0:0:0: [sr0] tag#160 CDB: Read(10) 28 00 00 18 a5 5a 00 00 80 00
[  190.919694] blk_update_request: I/O error, dev sr0, sector 6460776 op 0x0:(READ) flags 0x80700 phys_seg 5 prio class 0

If I change to bs=131072, the errors stop and performance is back to
normal.

(262144 happens to be the block size ultimately used by md5sum, which is
how I ran into this in the first place.)
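
For completeness, the comparison described above boils down to roughly
the following (a sketch; $some_large_file is any large file on the
mounted disc, and the guest drive is assumed to appear as sr0):

    # 256 KiB reads trigger the errors; 128 KiB reads do not
    dd if=$some_large_file bs=262144 | pv > /dev/null
    dd if=$some_large_file bs=131072 | pv > /dev/null

    # In parallel, compare what the block layer reports with what pv
    # reports, and watch for the sr0 errors
    iostat -d sr0 1
    dmesg --follow | grep sr0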

I also ran strace on the qemu process while this was happening and
noticed SG_IO calls like these:

21748 10:06:29.330910 ioctl(22, SG_IO, {interface_id='S', dxfer_direction=SG_DXFER_FROM_DEV, cmd_len=10, cmdp="\x28\x00\x00\x12\x95\x5a\x00\x00\x80\x00", mx_sb_len=252, iovec_count=0, dxfer_len=262144, timeout=4294967295, flags=SG_FLAG_DIRECT_IO <unfinished ...>
21751 10:06:29.330976 ioctl(22, SG_IO, {interface_id='S', dxfer_direction=SG_DXFER_FROM_DEV, cmd_len=10, cmdp="\x28\x00\x00\x12\x94\xda\x00\x00\x02\x00", mx_sb_len=252, iovec_count=0, dxfer_len=4096, timeout=4294967295, flags=SG_FLAG_DIRECT_IO <unfinished ...>
21749 10:06:29.331586 ioctl(22, SG_IO, {interface_id='S', dxfer_direction=SG_DXFER_FROM_DEV, cmd_len=10, cmdp="\x28\x00\x00\x12\x94\xdc\x00\x00\x02\x00", mx_sb_len=252, iovec_count=0, dxfer_len=4096, timeout=4294967295, flags=SG_FLAG_DIRECT_IO <unfinished ...>
[etc]

I suspect qemu is the culprit because I have tried a 4.19 guest kernel
as well as a 5.9 one, with the same result.
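
Decoding the first CDB above: 28h is READ(10), the LBA is 0x0012955a and
the transfer length is 0x0080 = 128 blocks, which at 2048 bytes per
block matches the 262144-byte dxfer_len exactly.  The trailing
0x0002-block (4096-byte) reads start 0x80 blocks earlier and look like
an earlier 128-block read being re-issued in small pieces.  If it helps,
the same READ(10) could be replayed directly against the host sg node to
check whether the drive itself accepts a 262144-byte transfer without
qemu in the path.  A sketch (the /dev/sg3 and /dev/sr0 nodes are
assumptions):

    # Issue the same 128-block READ(10) straight at the drive, bypassing
    # qemu; -r 262144 requests 262144 bytes of data-in
    sg_raw -r 262144 /dev/sg3 28 00 00 12 95 5a 00 00 80 00
    # Compare against the transfer limits the host kernel advertises
    cat /sys/block/sr0/queue/max_sectors_kb
    cat /sys/block/sr0/queue/max_hw_sectors_kb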

** Affects: qemu
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of
qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1926231

Title:
  SCSI passthrough of SATA cdrom -> errors & performance issues

Status in QEMU:
  New


* [Bug 1926231] Re: SCSI passthrough of SATA cdrom -> errors & performance issues
  2021-04-27  0:39 [Bug 1926231] [NEW] SCSI passthrough of SATA cdrom -> errors & performance issues Michael Slade
@ 2021-04-27 13:34 ` Michael Slade
  2021-05-15 11:17 ` Thomas Huth
  2021-07-15  4:17 ` Launchpad Bug Tracker
  2 siblings, 0 replies; 4+ messages in thread
From: Michael Slade @ 2021-04-27 13:34 UTC (permalink / raw)
  To: qemu-devel

Just discovered that lowering the drive's read-ahead with `hdparm -a 128`
works around the issue (it was 256 at boot).
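
For reference, `hdparm -a` reads and sets the filesystem read-ahead in
units of 512-byte sectors (so 256 = 128 KiB and 128 = 64 KiB).  A
minimal sketch of checking and applying the workaround (the /dev/sr0
device node is an assumption):

    hdparm -a /dev/sr0                      # show current read-ahead
    hdparm -a 128 /dev/sr0                  # the workaround: 128 sectors
    blockdev --getra /dev/sr0               # same value via blockdev
    cat /sys/block/sr0/queue/read_ahead_kb  # same setting, in KiB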


* [Bug 1926231] Re: SCSI passthrough of SATA cdrom -> errors & performance issues
  2021-04-27  0:39 [Bug 1926231] [NEW] SCSI passthrough of SATA cdrom -> errors & performance issues Michael Slade
  2021-04-27 13:34 ` [Bug 1926231] " Michael Slade
@ 2021-05-15 11:17 ` Thomas Huth
  2021-07-15  4:17 ` Launchpad Bug Tracker
  2 siblings, 0 replies; 4+ messages in thread
From: Thomas Huth @ 2021-05-15 11:17 UTC (permalink / raw)
  To: qemu-devel

The QEMU project is currently moving its bug tracking to another system.
For this we need to know which bugs are still valid and which could be
closed already. Thus we are setting the bug state to "Incomplete" now.

If the bug has already been fixed in the latest upstream version of QEMU,
then please close this ticket as "Fix released".

If it is not fixed yet and you think that this bug report here is still
valid, then you have two options:

1) If you already have an account on gitlab.com, please open a new ticket
for this problem in our new tracker here:

    https://gitlab.com/qemu-project/qemu/-/issues

and then close this ticket here on Launchpad (or let it expire
automatically after 60 days). Please mention the URL of this bug ticket
on Launchpad in the new ticket on GitLab.

2) If you don't have an account on gitlab.com and don't intend to get
one, but still would like to keep this ticket open, then please switch
the state back to "New" or "Confirmed" within the next 60 days
(otherwise it will get closed as "Expired"). We will then eventually
migrate the ticket automatically to the new system (but you won't be
the reporter of the bug in the new system, and thus you won't get
notified of changes anymore).

Thank you and sorry for the inconvenience.


** Changed in: qemu
       Status: New => Incomplete


* [Bug 1926231] Re: SCSI passthrough of SATA cdrom -> errors & performance issues
  2021-04-27  0:39 [Bug 1926231] [NEW] SCSI passthrough of SATA cdrom -> errors & performance issues Michael Slade
  2021-04-27 13:34 ` [Bug 1926231] " Michael Slade
  2021-05-15 11:17 ` Thomas Huth
@ 2021-07-15  4:17 ` Launchpad Bug Tracker
  2 siblings, 0 replies; 4+ messages in thread
From: Launchpad Bug Tracker @ 2021-07-15  4:17 UTC (permalink / raw)
  To: qemu-devel

[Expired for QEMU because there has been no activity for 60 days.]

** Changed in: qemu
       Status: Incomplete => Expired

