Linux-USB Archive on lore.kernel.org
* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
       [not found] <307581a490b610c3025ee80f79a465a89d68ed19.camel@unipv.it>
@ 2019-08-20 17:13 ` Alan Stern
  2019-08-23 10:39   ` Andrea Vai
  0 siblings, 1 reply; 23+ messages in thread
From: Alan Stern @ 2019-08-20 17:13 UTC (permalink / raw)
  To: Andrea Vai
  Cc: Johannes Thumshirn, Jens Axboe, linux-usb, linux-scsi,
	Himanshu Madhani, Hannes Reinecke, Ming Lei, Omar Sandoval,
	Martin K. Petersen, Greg KH

On Mon, 19 Aug 2019, Andrea Vai wrote:

> Hi Alan,
>   I attach the two traces, collected as follows:
> 
> - start the trace;
> - wait 10 seconds;
> - plug the drive;
> - wait 5 seconds;
> - mount the drive;
> - wait 5 seconds;
> - copy a 500 byte file;
> - wait 5 seconds;
> - unmount the drive;
> - wait 5 seconds;
> - stop the trace.

Still no noticeable differences between the two traces.  They both 
include a 1.2 second delay shortly after the writing starts, and the 
initialization sequences are the same.

I really don't know where to look for this.  The only thing I can think
of at this point is to repeat this test, but using a file large enough
for the difference in writing speed to show up plainly.

By the way, it would be best to run the tests with the smallest
possible number of other USB devices plugged in.  None at all, if you
can arrange it.

Alan Stern


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-08-20 17:13 ` Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6 Alan Stern
@ 2019-08-23 10:39   ` Andrea Vai
  2019-08-23 20:42     ` Alan Stern
  0 siblings, 1 reply; 23+ messages in thread
From: Andrea Vai @ 2019-08-23 10:39 UTC (permalink / raw)
  To: Alan Stern
  Cc: Johannes Thumshirn, Jens Axboe, linux-usb, linux-scsi,
	Himanshu Madhani, Hannes Reinecke, Ming Lei, Omar Sandoval,
	Martin K. Petersen, Greg KH

[-- Attachment #1: Type: text/plain, Size: 4851 bytes --]

Il giorno mar, 20/08/2019 alle 13.13 -0400, Alan Stern ha scritto:
> On Mon, 19 Aug 2019, Andrea Vai wrote:
> > [...]
> 
> I really don't know where to look for this.  The only thing I can
> think of at this point is to repeat this test, but using a file large
> enough for the difference in writing speed to show up plainly.
> 
> By the way, it would be best to run the tests with the smallest
> possible number of other USB devices plugged in.  None at all, if you
> can arrange it.

Thanks, I have made some further progress on this.
The following observations all apply to the "bad" kernel.

Increasing the file size led me to find that with files smaller than
roughly 10MB the problem does not happen.

I found these results by running sets of 10 tries for each file size,
using file sizes of 1kB, 10kB, 100kB, 1MB, 10MB, 100MB, and 500MB (so
there are 70 usbmon logs for these). If we call a copy "fast" when it
takes (roughly(*)) no more time to complete than any other try in its
set, and "slow" otherwise (i.e. one or more tries in its set are
(noticeably(*)) faster), then in each set with a file size of 10MB or
more the behavior varies widely: sometimes the copy is still "fast",
sometimes it is "slow". The proportion of "slow" copies increases with
the file size. Also, among the "slow" copies within a set, the times
can differ greatly.

Also, I found that if the file is not already present at the target
location (i.e. on the USB pendrive), the problem does not happen (I
have ten usbmon logs of this, taken in the worst case, the 500MB file
size).

Tell me which log(s) you would like me to send you. To sum up, here
are all the sets of tries and the times their copies took to complete
(in seconds):

1kB: 26, 27, 26, 26, 27, 26, 26, 27, 26, 27
10kB: 27, 27, 26, 26, 27, 26, 27, 26, 27, 27
100kB: 26, 26, 26, 27, 26, 26, 26, 27, 27, 27
1MB: 26, 27, 27, 27, 27, 27, 27, 27, 27, 26
10MB: 27, 31, 37, 27, 38, 27, 39, 27, 30, 28
100MB: 32, 32, 144, 32, 145, 32, 123, 32, 153, 123
500MB: 56, 1396, 747, 131, 795, 764, 292, 1021, 807, 516
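The eyeball "fast"/"slow" split described above can be sketched as a
small shell helper. The 1.5x-of-minimum cutoff below is my assumption,
not part of the original report, which classified the tries by eye:

```shell
# Label each copy time in a set as "fast" or "slow".  A try counts as
# "slow" here if it took more than 1.5x the fastest try in its set;
# the 1.5x threshold is an assumption (the sets above were judged by
# eye, not by a fixed cutoff).
classify() {
    min=$(printf '%s\n' "$@" | sort -n | head -n 1)
    for t in "$@"; do
        # use awk for the comparison, since plain sh has no float math
        if awk -v t="$t" -v m="$min" 'BEGIN { exit !(t > 1.5 * m) }'
        then echo "$t slow"
        else echo "$t fast"
        fi
    done
}

classify 32 32 144 32 145 32 123 32 153 123   # the 100MB set
```

On the 100MB set this marks the five 32-second tries "fast" and the
other five "slow", matching the split described in the footnote.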

Also, note that the first copy in each set is always "fast", because
each file was initially not present on the pendrive. As mentioned, I
also ran one test of 10 tries in which the file was deleted from the
pendrive before each copy, with these results:

500MB: 56, 56, 57, 57, 56, 56, 60, 25***, 55, 56 (***the 25s value is
bogus and can be ignored: I forgot to plug in the pendrive :-/ )

I wrote a script to semi-automate all the tests. I attach it here so
anyone interested can check it for mistakes (remember I am not very
skilled, so I may have written buggy code, made wrong assumptions,
etc.). Please note that I decreased the wait between starting the
trace and plugging in the drive from 10s to 5s (simply to shorten the
countdown I had to watch). Of course I can rerun any test you need
with a larger value of $wait.

The script was run with the command

# for k in {1..10}; do size=1000; ./test_usbmon $size && ping -a -c 5 8.8.8.8 ; done
(example for the 1kB file size)

or, in the set of "delete before copy",

# for k in {1..10}; do size=500000000; ./cancellaTestFile $size && ./test_usbmon $size && ping -a -c 5 8.8.8.8 ; done

The ping command is there just to sound an audible alarm when the run
finishes.

I also attach the script to delete the file ("cancellaTestFile").

I took care to plug in the pendrive exactly at the end of the
countdown, to make the times in the logs easier for you to identify
and work with.

I have also logged all the terminal output of the script.

One last note: I ran all the tests with no USB device connected other
than the pendrive (well, actually there is a card reader attached to
an internal USB connector, but on another bus; I didn't want to open
the case to disconnect it, but of course I can if needed). Thanks for
pointing that out.

Thanks, and bye
Andrea

(*) as an example, in a set whose total elapsed times in seconds are

26, 27, 27, 27, 27, 27, 27, 27, 27, 26

I have considered all of the copies "fast", while in the set

32, 32, 144, 32, 145, 32, 123, 32, 153, 123

I have considered 5 of the copies "fast" (the ones that took 32
seconds) and the others "slow". I am not going to get into standard
deviation calculations and the like, but if you'd like I can provide
more detailed, scientific data :-)


[-- Attachment #2: test_usbmon --]
[-- Type: application/x-shellscript, Size: 1607 bytes --]

[-- Attachment #3: cancellaTestFile --]
[-- Type: application/x-shellscript, Size: 296 bytes --]


* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-08-23 10:39   ` Andrea Vai
@ 2019-08-23 20:42     ` Alan Stern
  2019-08-26  6:09       ` Andrea Vai
  0 siblings, 1 reply; 23+ messages in thread
From: Alan Stern @ 2019-08-23 20:42 UTC (permalink / raw)
  To: Andrea Vai
  Cc: Johannes Thumshirn, Jens Axboe, linux-usb, linux-scsi,
	Himanshu Madhani, Hannes Reinecke, Ming Lei, Omar Sandoval,
	Martin K. Petersen, Greg KH

On Fri, 23 Aug 2019, Andrea Vai wrote:

> Il giorno mar, 20/08/2019 alle 13.13 -0400, Alan Stern ha scritto:
> > [...]
> 
> Thanks, I went some steps further on this.
> The following considerations all apply to the "bad" kernel.
> 
> [...]
> 
> 10MB: 27, 31, 37, 27, 38, 27, 39, 27, 30, 28
> 100MB: 32, 32, 144, 32, 145, 32, 123, 32, 153, 123
> 500MB: 56, 1396, 747, 131, 795, 764, 292, 1021, 807, 516
> 
> Also, I found that if the file is not present on the target location
> (i.e. the USB pendrive), the problem does not happen (I have ten
> usbmon logs here, taken in the worst scenario (500MB filesize)).
> 
> [...]

Wow, that sounds like a lot of work.

Let's start with the 39-second run for the 10-MB file.  If you can put 
the trace files on a server somewhere, available for downloading, that 
would avoid sending a lot of uninteresting data to the mailing list.

Odd that the delays never occur when you're writing a new file.  (If
nothing else, that gives you a way to work around the problem!)  It's
hard to say what it means, though.  Maybe the flash drive doesn't like 
overwriting used blocks.

Alan Stern



* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-08-23 20:42     ` Alan Stern
@ 2019-08-26  6:09       ` Andrea Vai
  2019-08-26 16:33         ` Alan Stern
  0 siblings, 1 reply; 23+ messages in thread
From: Andrea Vai @ 2019-08-26  6:09 UTC (permalink / raw)
  To: Alan Stern
  Cc: Johannes Thumshirn, Jens Axboe, linux-usb, linux-scsi,
	Himanshu Madhani, Hannes Reinecke, Ming Lei, Omar Sandoval,
	Martin K. Petersen, Greg KH

[Sorry, my previous attempt to send this message to the list was
rejected; apologies for any duplicates.]

Il giorno ven, 23/08/2019 alle 16.42 -0400, Alan Stern ha scritto:
> On Fri, 23 Aug 2019, Andrea Vai wrote:
> 
> > [...]
> 
> Wow, that sounds like a lot of work.

just a drop in the ocean, compared to yours :-)

> Let's start with the 39-second run for the 10-MB file.  If you can
> put the trace files on a server somewhere, available for downloading,
> that would avoid sending a lot of uninteresting data to the mailing
> list.

ok, so you can grab them at

http://fisica.unipv.it/transfer/usbmon_logs.zip

(they will be automatically removed from there in a couple of weeks).

For each size there is a .txt file (containing the terminal output)
and 10 bad.mon.out_.... trace files. The file suffix "NonCanc" means
the file was not deleted before the copy; "Canc" means it was.

Each trace file name contains a timestamp that is also referenced
inside the .txt, so to get e.g. the 39-second trial for the 10MB file
size you have to open the ...10MB....txt file, search for the total
time string ("Dopo stop trace: 39"), look at the beginning of that
trial a dozen rows earlier, take note of the timestamp, and open the
corresponding bad.mon.out file (of course, if more trials share the
same time, you have to identify the right one by counting its
position: 7th in the example above).

To make it simpler:

$ seconds=39; size=10MB; grep -B14 "Dopo stop trace: $seconds" log_10trials_"$size"_NonCanc.txt

should show you the part(s) you need more directly.

> Odd that the delays never occur when you're writing a new file.  (If
> nothing else, that gives you a way to work around the problem!) 

Thank you, I hadn't realized that :-) I will try it.

Thanks, and bye
Andrea


-- 
Andrea Vai
University of Pavia
Department of Physics
Via Bassi, 6
27100 Pavia PV
Tel. +39 0382 987489
Mob. +39 328 3354086

http://fisica.unipv.it
http://www.andreavai.it



* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-08-26  6:09       ` Andrea Vai
@ 2019-08-26 16:33         ` Alan Stern
  0 siblings, 0 replies; 23+ messages in thread
From: Alan Stern @ 2019-08-26 16:33 UTC (permalink / raw)
  To: Andrea Vai
  Cc: Johannes Thumshirn, Jens Axboe, linux-usb, linux-scsi,
	Himanshu Madhani, Hannes Reinecke, Ming Lei, Omar Sandoval,
	Martin K. Petersen, Greg KH

On Mon, 26 Aug 2019, Andrea Vai wrote:

> ok, so you can grab them at
> 
> http://fisica.unipv.it/transfer/usbmon_logs.zip
> 
> (they will be automatically removed from there in a couple of weeks).
> 
> For each size there is a .txt file (which contains the terminal
> output) and 10 bad.mon.out_.... trace files. The file suffix "NonCanc"
> means there has not been file deletion before copy; while "Canc" means
> the opposite.
> 
> [...]
> 
> > Odd that the delays never occur when you're writing a new file.  (If
> > nothing else, that gives you a way to work around the problem!) 
> 
> Thank you, didn't realize that :-) I will try it.

In fact, even the traces where the file doesn't exist beforehand show 
some delays.  Just not as many delays as the traces where the file does 
exist.  And again, each delay is in the middle of a write command, not 
between commands.

I suppose changes to the upper software layers could affect which
blocks are assigned when a new file is written.  Perhaps one kernel
re-uses the same old blocks that had been previously occupied and the
other kernel allocates a completely new set of blocks.  That might
change the drive's behavior.  The quick way to tell is to record two
usbmon traces, one under the "good" kernel and one under the "bad"  
kernel, where each test involves writing over a file that already
exists (say, 50 MB) -- the same file for both tests.  The block numbers
will appear in the traces.

Also, I wonder whether changing the size of the data transfers would
make any difference.  This is easy to try; just write "64" to
/sys/block/sd?/queue/max_sectors_kb (where the ? is the appropriate
drive letter) after the drive is plugged in but before the test starts.
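As a concrete sketch of that tweak (hedged: the device letter varies
from system to system; "sdf" below is only an example, not necessarily
the right name on the test machine):

```shell
# Cap the transfer size for the pendrive to 64 KB per command.
# The device name (sdf) is an assumption -- substitute whatever
# letter the pendrive is assigned on your system.
dev=sdf
echo 64 > /sys/block/$dev/queue/max_sectors_kb
cat /sys/block/$dev/queue/max_sectors_kb    # verify the new limit
```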

Alan Stern



* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
       [not found] <e3f87757f7a0fdf551e911ad32fc8122eebe04c7.camel@unipv.it>
@ 2019-08-13 19:52 ` Alan Stern
  0 siblings, 0 replies; 23+ messages in thread
From: Alan Stern @ 2019-08-13 19:52 UTC (permalink / raw)
  To: Andrea Vai
  Cc: Johannes Thumshirn, Jens Axboe, linux-usb, linux-scsi,
	Himanshu Madhani, Hannes Reinecke, Ming Lei, Omar Sandoval,
	Martin K. Petersen, Greg KH

On Fri, 9 Aug 2019, Andrea Vai wrote:

> Il giorno lun, 08/07/2019 alle 11.38 -0400, Alan Stern ha scritto:
> > 
> > [...]
> > > > Andrea, another thing you could try is to collect a usbmon trace
> > > > under one of the "slow" kernels.  Follow the instructions in
> > > > Documentation/usb/usbmon.txt.  I think you could kill the file-copy
> > > > operation after just a couple of seconds; that should provide enough
> > > > trace information to help see what causes the slowdown.
> > > > 
> > > > (If you want, do the same test with a "fast" kernel and then we'll
> > > > compare the results.)
> > > > 
> > > > Alan Stern
> > > 
> > > Thanks Alan,
> > >   so I attach two outputs, one for a "good" and one for a "bad"
> > > kernel.
> > 
> > [...]
> > I don't know what the results mean, but I _can_ tell you what's
> > happening.  Every so often (at intervals of about a second) the pen
> > drive completely stops communicating with the "bad" kernel for about
> > one second and then starts up again.
> > 
> > Here's a short example from the "bad" trace:
> > 
> > ffff9169f0d399c0 513072808 S Bo:6:008:2 -115 122880 = 00000000
> > 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> > ffff9169f0d399c0 514262176 C Bo:6:008:2 0 122880 >
> > 
> > The second column is a timestamp (in microseconds).  This little
> > extract shows a 120-KB write starting at time 513.072 and ending at
> > 514.262, more than a second later.  Normally such a write would
> > complete in about 0.06 s.
> > 
> > The cumulative effect of all these delays is to slow the transfer
> > drastically.  The "good" kernel trace shows a few delays like this,
> > but only one or two.
> > 
> > I have no idea how commit f664a3cc17b7, getting rid of the legacy
> > IO path, could have caused these delays.  It seems more likely that
> > the pen drive itself is the cause, perhaps because it is flushing
> > buffers more often under the "bad" kernel.
> > 
> > I'd like you to try doing another pair of usbmon tests.  This time,
> > start collecting the usbmon trace _before_ you plug in the pen
> > drive, and stop the trace shortly after the pen drive has been
> > mounted.  Don't try to transfer any data.  Perhaps the two kernels
> > are initializing the pen drive with different settings and that
> > accounts for the different behaviors.
> 
> Thanks Alan, sorry for the delay (I haven't had physical access to the
> device for a while). I attach the two new usbmon logs.
> 
> Meanwhile, I can try other pendrives, although I have already tried
> two more and both behave "bad" (one is identical to the first, and
> the other is similar).

I looked through your two traces.  Unfortunately they don't help --
they are essentially identical.  Exactly the same sequence of commands
was sent to the device in both traces, except for a couple of
inconsequential TEST UNIT READY commands.

There are some timing differences, but they occur in between commands,
not during a command.  They probably are the times you spent typing
stuff into the terminal.
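The timestamp arithmetic in the quoted excerpt above (usbmon's second
column is a timestamp in microseconds; a stall shows up as a large gap
between an S line and its matching C line with the same URB tag in the
first column) can be sketched with a small awk helper. The field
layout is assumed from the excerpt and Documentation/usb/usbmon.txt:

```shell
# Print the elapsed time, in seconds, of each URB in a usbmon text
# trace: column 1 is the URB tag, column 2 the timestamp in
# microseconds, column 3 the event type (S = submission, C =
# completion callback).
urb_delays() {
    awk '$3 == "S" { start[$1] = $2 }
         $3 == "C" && ($1 in start) {
             printf "%s %.6f\n", $1, ($2 - start[$1]) / 1000000
             delete start[$1]
         }'
}

# The slow 120-KB write from the excerpt above:
urb_delays <<'EOF'
ffff9169f0d399c0 513072808 S Bo:6:008:2 -115 122880 = 00000000
ffff9169f0d399c0 514262176 C Bo:6:008:2 0 122880 >
EOF
# prints: ffff9169f0d399c0 1.189368
```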

Can you run another test?  Similar to the last one, but this time start
collecting the usbmon trace at least 10 seconds after the drive is
plugged in.  Then mount the drive, copy a small file (say less than 500
bytes) to it, unmount it, and then stop the trace.

Alan Stern



* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-09 21:18               ` Andrea Vai
@ 2019-07-10  2:44                 ` Ming Lei
  0 siblings, 0 replies; 23+ messages in thread
From: Ming Lei @ 2019-07-10  2:44 UTC (permalink / raw)
  To: Andrea Vai
  Cc: Jens Axboe, linux-usb, linux-scsi, Himanshu Madhani,
	Hannes Reinecke, Omar Sandoval, Martin K. Petersen, Greg KH,
	Alan Stern

On Tue, Jul 09, 2019 at 11:18:38PM +0200, Andrea Vai wrote:
> On 08/07/19 09:01:35, Ming Lei wrote:
> > > > > > > > > 
> > > > > > > > > commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
> > > > > > > > > 
> > > > > > > > >  [...]    
> > 
> > 1) run the bcc biosnoop trace in one terminal after mounting the fs on the USB drive
> > 
> > 2) start the write test in another terminal
> > 
> > 3) wait 10 seconds, then stop the bcc trace with Ctrl-C and post the bcc biosnoop
> > trace log
> >
> 
> Done, attached.
> 
> Thanks,
> Andrea

> TIME(s)        COMM           PID    DISK    T  SECTOR    BYTES   LAT(ms)
> 5.420198000    jbd2/sdf1-8    6775   sdf     W  29624320  4096       0.37
> 5.423601000    jbd2/sdf1-8    6775   sdf     W  29624328  12288      3.29
> 5.424009000    jbd2/sdf1-8    6775   sdf     W  29624352  4096       0.30
> 6.665732000    kworker/u8:0   6401   sdf     R  10248     4096     841.64
> 6.666891000    kworker/u8:0   6401   sdf     R  10256     4096       0.98
> 6.668141000    kworker/u8:0   6401   sdf     R  4196352   4096       1.07
> 6.669312000    kworker/u8:0   6401   sdf     R  8390656   4096       0.99
> 6.670568000    kworker/u8:0   6401   sdf     R  12584960  4096       1.06
> 6.674257000    kworker/u8:0   6401   sdf     R  16779264  4096       3.50
> 6.675510000    kworker/u8:0   6401   sdf     R  20973568  4096       1.06
> 6.678344000    kworker/u8:0   6401   sdf     R  25167872  4096       2.64
> 6.680384000    kworker/u8:0   6401   sdf     R  29362176  4096       1.83
> 6.681622000    kworker/u8:0   6401   sdf     R  33556480  4096       1.03
> 6.682713000    kworker/u8:0   6401   sdf     R  37750784  4096       0.92
> 6.686360000    kworker/u8:0   6401   sdf     R  41945088  4096       3.48
> 6.687589000    kworker/u8:0   6401   sdf     R  46139392  4096       1.04
> 6.688639000    kworker/u8:0   6401   sdf     R  46139424  4096       0.88
> 6.766336000    kworker/u8:0   6401   sdf     W  47351808  122880    77.17
> 6.773563000    kworker/u8:0   6401   sdf     W  47352048  122880     7.08
> 6.780980000    kworker/u8:0   6401   sdf     W  47352288  122880     7.29
> 6.786366000    kworker/u8:0   6401   sdf     W  47352528  122880     5.27
> 6.793831000    kworker/u8:0   6401   sdf     W  47352768  122880     7.38
> 6.801085000    kworker/u8:0   6401   sdf     W  47353008  122880     7.20
> 6.808405000    kworker/u8:0   6401   sdf     W  47353248  122880     7.27
> 6.813799000    kworker/u8:0   6401   sdf     W  47353488  122880     5.35
> 6.821103000    kworker/u8:0   6401   sdf     W  47353728  118784     7.25
> 6.828459000    kworker/u8:0   6401   sdf     W  47353960  122880     7.09
> 6.835722000    kworker/u8:0   6401   sdf     W  47354200  122880     7.20
> 6.842813000    kworker/u8:0   6401   sdf     W  47354440  122880     7.05
> 6.850184000    kworker/u8:0   6401   sdf     W  47354680  122880     7.32
> 6.857452000    kworker/u8:0   6401   sdf     W  47354920  122880     7.22
> 6.864874000    kworker/u8:0   6401   sdf     W  47355160  122880     7.38
> 6.871841000    kworker/u8:0   6401   sdf     W  47355400  122880     6.92
> 6.879380000    kworker/u8:0   6401   sdf     W  47355640  122880     7.50
> 6.886497000    kworker/u8:0   6401   sdf     W  47355880  122880     7.04
> 6.893940000    kworker/u8:0   6401   sdf     W  47356120  122880     7.40
> 6.899304000    kworker/u8:0   6401   sdf     W  47356360  122880     5.33
> 6.906635000    kworker/u8:0   6401   sdf     W  47356600  122880     7.29
> 6.913957000    kworker/u8:0   6401   sdf     W  47356840  122880     7.29
> 6.921308000    kworker/u8:0   6401   sdf     W  47357080  122880     7.31
> 6.924391000    kworker/u8:0   6401   sdf     W  47357320  53248      3.03
> 6.931971000    kworker/u8:0   6401   sdf     W  47357424  122880     7.22
> 6.939169000    kworker/u8:0   6401   sdf     W  47357664  122880     7.14
> 6.944985000    kworker/u8:0   6401   sdf     W  47357904  122880     5.76
> 6.952113000    kworker/u8:0   6401   sdf     W  47358144  122880     7.06
> 6.959608000    kworker/u8:0   6401   sdf     W  47358384  122880     7.45
> 6.966820000    kworker/u8:0   6401   sdf     W  47358624  122880     7.16
> 6.972393000    kworker/u8:0   6401   sdf     W  47358864  122880     5.53
> 6.979550000    kworker/u8:0   6401   sdf     W  47359104  122880     7.09
> 6.986879000    kworker/u8:0   6401   sdf     W  47359344  122880     7.28
> 6.994306000    kworker/u8:0   6401   sdf     W  47359584  122880     7.40
> 7.001338000    kworker/u8:0   6401   sdf     W  47359824  122880     6.98
> 7.008740000    kworker/u8:0   6401   sdf     W  47360064  122880     7.37
> 7.016114000    kworker/u8:0   6401   sdf     W  47360304  122880     7.31
> 7.023546000    kworker/u8:0   6401   sdf     W  47360544  122880     7.39
> 7.030437000    kworker/u8:0   6401   sdf     W  47360784  122880     6.82
> 7.037834000    kworker/u8:0   6401   sdf     W  47361024  122880     7.35
> 7.045224000    kworker/u8:0   6401   sdf     W  47361264  122880     7.32
> 7.052687000    kworker/u8:0   6401   sdf     W  47361504  122880     7.39
> 7.057977000    kworker/u8:0   6401   sdf     W  47361744  122880     5.01
> 7.065390000    kworker/u8:0   6401   sdf     W  47361984  122880     7.35
> 7.072749000    kworker/u8:0   6401   sdf     W  47362224  122880     7.28
> 7.080174000    kworker/u8:0   6401   sdf     W  47362464  122880     7.36
> 7.085335000    kworker/u8:0   6401   sdf     W  47362704  122880     5.08
> 7.092881000    kworker/u8:0   6401   sdf     W  47362944  122880     7.50
> 7.100197000    kworker/u8:0   6401   sdf     W  47363184  122880     7.23
> 7.107507000    kworker/u8:0   6401   sdf     W  47363424  122880     7.24
> 7.114684000    kworker/u8:0   6401   sdf     W  47363664  122880     7.09
> 7.121936000    kworker/u8:0   6401   sdf     W  47363904  122880     7.18
> 7.129299000    kworker/u8:0   6401   sdf     W  47364144  122880     7.29
> 7.136645000    kworker/u8:0   6401   sdf     W  47364384  122880     7.27
> 7.143733000    kworker/u8:0   6401   sdf     W  47364624  122880     7.02
> 7.151112000    kworker/u8:0   6401   sdf     W  47364864  122880     7.31
> 7.158509000    kworker/u8:0   6401   sdf     W  47365104  122880     7.33
> 7.165735000    kworker/u8:0   6401   sdf     W  47365344  122880     7.14
> 7.171205000    kworker/u8:0   6401   sdf     W  47365584  122880     5.40
> 7.178625000    kworker/u8:0   6401   sdf     W  47365824  122880     7.37
> 7.185980000    kworker/u8:0   6401   sdf     W  47366064  122880     7.31
> 7.193593000    kworker/u8:0   6401   sdf     W  47366304  122880     7.57
> 7.198674000    kworker/u8:0   6401   sdf     W  47366544  122880     5.02
> 7.205996000    kworker/u8:0   6401   sdf     W  47366784  122880     7.25
> 7.213419000    kworker/u8:0   6401   sdf     W  47367024  122880     7.37
> 7.221005000    kworker/u8:0   6401   sdf     W  47367264  122880     7.52
> 7.227906000    kworker/u8:0   6401   sdf     W  47367504  122880     6.83
> 7.235251000    kworker/u8:0   6401   sdf     W  47367744  122880     7.28
> 7.241892000    kworker/u8:0   6401   sdf     W  47367984  106496     6.59
> 7.249617000    kworker/u8:0   6401   sdf     W  47368192  122880     6.88
> 7.256892000    kworker/u8:0   6401   sdf     W  47368432  122880     7.20
> 7.264223000    kworker/u8:0   6401   sdf     W  47368672  122880     7.26
> 7.269672000    kworker/u8:0   6401   sdf     W  47368912  122880     5.39
> 7.277066000    kworker/u8:0   6401   sdf     W  47369152  122880     7.36
> 7.284630000    kworker/u8:0   6401   sdf     W  47369392  122880     7.52
> 7.291891000    kworker/u8:0   6401   sdf     W  47369632  122880     7.20
> 7.297211000    kworker/u8:0   6401   sdf     W  47369872  122880     5.29
> 7.304665000    kworker/u8:0   6401   sdf     W  47370112  122880     7.41
> 7.312050000    kworker/u8:0   6401   sdf     W  47370352  122880     7.33
> 7.319520000    kworker/u8:0   6401   sdf     W  47370592  122880     7.41
> 7.326494000    kworker/u8:0   6401   sdf     W  47370832  122880     6.89
> 7.334082000    kworker/u8:0   6401   sdf     W  47371072  122880     7.52
> 7.341505000    kworker/u8:0   6401   sdf     W  47371312  122880     7.34
> 7.348600000    kworker/u8:0   6401   sdf     W  47371552  122880     7.01
> 7.355806000    kworker/u8:0   6401   sdf     W  47371792  122880     7.16
> 7.363147000    kworker/u8:0   6401   sdf     W  47372032  122880     7.27
> 7.370377000    kworker/u8:0   6401   sdf     W  47372272  122880     7.16
> 7.377770000    kworker/u8:0   6401   sdf     W  47372512  122880     7.36
> 7.383323000    kworker/u8:0   6401   sdf     W  47372752  122880     5.50
> 7.390651000    kworker/u8:0   6401   sdf     W  47372992  122880     7.27
> 7.398245000    kworker/u8:0   6401   sdf     W  47373232  122880     7.52
> 7.405347000    kworker/u8:0   6401   sdf     W  47373472  122880     7.04
> 7.410761000    kworker/u8:0   6401   sdf     W  47373712  122880     5.36
> 7.418075000    kworker/u8:0   6401   sdf     W  47373952  122880     7.26
> 7.425451000    kworker/u8:0   6401   sdf     W  47374192  122880     7.33
> 7.432864000    kworker/u8:0   6401   sdf     W  47374432  122880     7.36
> 7.439913000    kworker/u8:0   6401   sdf     W  47374672  122880     6.99
> 7.447312000    kworker/u8:0   6401   sdf     W  47374912  122880     7.35
> 7.454721000    kworker/u8:0   6401   sdf     W  47375152  122880     7.35
> 7.461955000    kworker/u8:0   6401   sdf     W  47375392  122880     7.19
> 7.469040000    kworker/u8:0   6401   sdf     W  47375632  122880     7.04
> 7.476416000    kworker/u8:0   6401   sdf     W  47375872  122880     7.31
> 7.483856000    kworker/u8:0   6401   sdf     W  47376112  122880     7.38
> 7.491270000    kworker/u8:0   6401   sdf     W  47376352  122880     7.37
> 7.496452000    kworker/u8:0   6401   sdf     W  47376592  122880     5.12
> 7.504011000    kworker/u8:0   6401   sdf     W  47376832  122880     7.52
> 7.511217000    kworker/u8:0   6401   sdf     W  47377072  122880     7.12
> 7.518511000    kworker/u8:0   6401   sdf     W  47377312  122880     7.23
> 7.523956000    kworker/u8:0   6401   sdf     W  47377552  122880     5.41
> 7.531197000    kworker/u8:0   6401   sdf     W  47377792  122880     7.21
> 7.538546000    kworker/u8:0   6401   sdf     W  47378032  122880     7.31
> 7.545757000    kworker/u8:0   6401   sdf     W  47378272  122880     7.17
> 7.552919000    kworker/u8:0   6401   sdf     W  47378512  122880     7.13
> 7.560269000    kworker/u8:0   6401   sdf     W  47378752  122880     7.31
> 7.567555000    kworker/u8:0   6401   sdf     W  47378992  122880     7.25
> 7.574863000    kworker/u8:0   6401   sdf     W  47379232  122880     7.27
> 7.581802000    kworker/u8:0   6401   sdf     W  47379472  122880     6.87
> 7.588979000    kworker/u8:0   6401   sdf     W  47379712  122880     7.13
> 7.596343000    kworker/u8:0   6401   sdf     W  47379952  122880     7.30
> 7.603224000    kworker/u8:0   6401   sdf     W  47380192  122880     6.84
> 7.608655000    kworker/u8:0   6401   sdf     W  47380432  122880     5.38
> 7.615811000    kworker/u8:0   6401   sdf     W  47380672  122880     7.12
> 7.623028000    kworker/u8:0   6401   sdf     W  47380912  122880     7.18
> 7.630413000    kworker/u8:0   6401   sdf     W  47381152  122880     7.34
> 7.635223000    kworker/u8:0   6401   sdf     W  47381392  122880     4.75
> 7.642268000    kworker/u8:0   6401   sdf     W  47381632  122880     6.98
> 7.649383000    kworker/u8:0   6401   sdf     W  47381872  122880     7.04
> 7.656619000    kworker/u8:0   6401   sdf     W  47382112  122880     7.19
> 7.661918000    kworker/u8:0   6401   sdf     W  47382352  122880     5.25
> 8.268039000    kworker/u8:0   6401   sdf     W  47382592  122880   606.07
> 8.280414000    kworker/u8:0   6401   sdf     W  47382832  122880    12.30
> 8.286503000    kworker/u8:0   6401   sdf     W  47383072  122880     6.02
> 8.293627000    kworker/u8:0   6401   sdf     W  47383312  122880     7.07
> 8.300777000    kworker/u8:0   6401   sdf     W  47383552  122880     7.08
> 8.307733000    kworker/u8:0   6401   sdf     W  47383792  122880     6.89
> 8.315212000    kworker/u8:0   6401   sdf     W  47384032  122880     7.43
> 8.320697000    kworker/u8:0   6401   sdf     W  47384272  122880     5.41
> 8.323335000    kworker/u8:0   6401   sdf     W  47384512  32768      2.58
> 8.331116000    kworker/u8:0   6401   sdf     W  47384576  122880     6.58
> 8.338586000    kworker/u8:0   6401   sdf     W  47384816  122880     7.40
> 8.346113000    kworker/u8:0   6401   sdf     W  47385056  122880     7.46
> 8.351558000    kworker/u8:0   6401   sdf     W  47385296  122880     5.37
> 8.359030000    kworker/u8:0   6401   sdf     W  47385536  122880     7.43
> 8.366425000    kworker/u8:0   6401   sdf     W  47385776  122880     7.34
> 8.373712000    kworker/u8:0   6401   sdf     W  47386016  122880     7.22
> 8.940689000    cp             6828   sdf     W  47400960  122880   566.92
> 9.614647000    kworker/u8:0   6401   sdf     W  47386256  122880   673.87
> 10.179120000   cp             6828   sdf     W  47401200  122880   564.38

5 of the 165 IO requests (1 4096-byte READ and 4 120K WRITEs) take more than
500 ms, while most of the others need less than 10 ms. That is the reason why
the performance drops, and it is highly related to your USB pen drive.
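The per-request latencies quoted above can be scanned mechanically. A minimal
sketch, assuming the standard biosnoop column layout (TIME COMM PID DISK T
SECTOR BYTES LAT_ms); the sample lines are taken from the trace above:

```python
# Flag outlier requests in a biosnoop trace (columns assumed:
# TIME COMM PID DISK T SECTOR BYTES LAT_ms, whitespace-separated).
def slow_requests(lines, threshold_ms=500.0):
    slow = []
    for line in lines:
        fields = line.split()
        if len(fields) != 8:
            continue  # skip headers and blank lines
        try:
            lat = float(fields[-1])
        except ValueError:
            continue
        if lat > threshold_ms:
            slow.append((fields[4], lat))  # (R/W, latency in ms)
    return slow

sample = [
    "6.665732000 kworker/u8:0 6401 sdf R 10248 4096 841.64",
    "6.766336000 kworker/u8:0 6401 sdf W 47351808 122880 77.17",
    "8.268039000 kworker/u8:0 6401 sdf W 47382592 122880 606.07",
]
print(slow_requests(sample))  # [('R', 841.64), ('W', 606.07)]
```

Applied to the full trace this reports exactly the 5 outliers (1 READ and 4
WRITEs over 500 ms) that dominate the total copy time.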

biosnoop collects the trace data in blk_mq_start_request() and
blk_account_io_completion(), both of which happen while the request is being
dispatched to the LLD & device.

From my understanding, scsi-mq should only change the timing, or maybe
some setting is changed (still very unlikely). You may follow Alan's
suggestion to collect usbmon traces on the 'bad' & 'good' kernels, and
check whether any setting is changed.

If the initialization settings are the same, maybe you can try the
US_FL_GO_SLOW quirk.
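Such a quirk is normally applied through an entry in
drivers/usb/storage/unusual_devs.h. A hedged sketch of what an entry
might look like; the 0x1234/0x5678 IDs, bcdDevice range, and name strings
are placeholders, and the real values would have to come from lsusb for
the affected drive:

```c
/* Hypothetical unusual_devs.h entry -- the vendor/product IDs and
 * name strings below are placeholders, not the reporter's device. */
UNUSUAL_DEV(0x1234, 0x5678, 0x0000, 0x9999,
		"SomeVendor",
		"SomePendrive",
		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
		US_FL_GO_SLOW),
```

The entries in that file are kept sorted by vendor/product ID, so a new
entry would be placed accordingly.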

Thanks,
Ming

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-08  1:01             ` Ming Lei
@ 2019-07-09 21:18               ` Andrea Vai
  2019-07-10  2:44                 ` Ming Lei
  0 siblings, 1 reply; 23+ messages in thread
From: Andrea Vai @ 2019-07-09 21:18 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-usb, linux-scsi, Himanshu Madhani,
	Hannes Reinecke, Omar Sandoval, Martin K. Petersen, Greg KH,
	Alan Stern

[-- Attachment #1: Type: text/plain, Size: 442 bytes --]

On 08/07/19 09:01:35, Ming Lei wrote:
> > > > > > > > 
> > > > > > > > commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
> > > > > > > > 
> > > > > > > >  [...]    
> 
> 1) run the bcc biosnoop trace in one terminal after mounting the fs on the USB drive
> 
> 2) start the write test in another terminal
> 
> 3) wait for 10 seconds, and stop the bcc trace via Ctrl-C, then post the bcc biosnoop
> trace log
>

Done, attached.

Thanks,
Andrea

[-- Attachment #2: biosnoop_sdf.txt --]
[-- Type: text/plain, Size: 12210 bytes --]

5.420198000    jbd2/sdf1-8    6775   sdf     W  29624320  4096       0.37
5.423601000    jbd2/sdf1-8    6775   sdf     W  29624328  12288      3.29
5.424009000    jbd2/sdf1-8    6775   sdf     W  29624352  4096       0.30
6.665732000    kworker/u8:0   6401   sdf     R  10248     4096     841.64
6.666891000    kworker/u8:0   6401   sdf     R  10256     4096       0.98
6.668141000    kworker/u8:0   6401   sdf     R  4196352   4096       1.07
6.669312000    kworker/u8:0   6401   sdf     R  8390656   4096       0.99
6.670568000    kworker/u8:0   6401   sdf     R  12584960  4096       1.06
6.674257000    kworker/u8:0   6401   sdf     R  16779264  4096       3.50
6.675510000    kworker/u8:0   6401   sdf     R  20973568  4096       1.06
6.678344000    kworker/u8:0   6401   sdf     R  25167872  4096       2.64
6.680384000    kworker/u8:0   6401   sdf     R  29362176  4096       1.83
6.681622000    kworker/u8:0   6401   sdf     R  33556480  4096       1.03
6.682713000    kworker/u8:0   6401   sdf     R  37750784  4096       0.92
6.686360000    kworker/u8:0   6401   sdf     R  41945088  4096       3.48
6.687589000    kworker/u8:0   6401   sdf     R  46139392  4096       1.04
6.688639000    kworker/u8:0   6401   sdf     R  46139424  4096       0.88
6.766336000    kworker/u8:0   6401   sdf     W  47351808  122880    77.17
6.773563000    kworker/u8:0   6401   sdf     W  47352048  122880     7.08
6.780980000    kworker/u8:0   6401   sdf     W  47352288  122880     7.29
6.786366000    kworker/u8:0   6401   sdf     W  47352528  122880     5.27
6.793831000    kworker/u8:0   6401   sdf     W  47352768  122880     7.38
6.801085000    kworker/u8:0   6401   sdf     W  47353008  122880     7.20
6.808405000    kworker/u8:0   6401   sdf     W  47353248  122880     7.27
6.813799000    kworker/u8:0   6401   sdf     W  47353488  122880     5.35
6.821103000    kworker/u8:0   6401   sdf     W  47353728  118784     7.25
6.828459000    kworker/u8:0   6401   sdf     W  47353960  122880     7.09
6.835722000    kworker/u8:0   6401   sdf     W  47354200  122880     7.20
6.842813000    kworker/u8:0   6401   sdf     W  47354440  122880     7.05
6.850184000    kworker/u8:0   6401   sdf     W  47354680  122880     7.32
6.857452000    kworker/u8:0   6401   sdf     W  47354920  122880     7.22
6.864874000    kworker/u8:0   6401   sdf     W  47355160  122880     7.38
6.871841000    kworker/u8:0   6401   sdf     W  47355400  122880     6.92
6.879380000    kworker/u8:0   6401   sdf     W  47355640  122880     7.50
6.886497000    kworker/u8:0   6401   sdf     W  47355880  122880     7.04
6.893940000    kworker/u8:0   6401   sdf     W  47356120  122880     7.40
6.899304000    kworker/u8:0   6401   sdf     W  47356360  122880     5.33
6.906635000    kworker/u8:0   6401   sdf     W  47356600  122880     7.29
6.913957000    kworker/u8:0   6401   sdf     W  47356840  122880     7.29
6.921308000    kworker/u8:0   6401   sdf     W  47357080  122880     7.31
6.924391000    kworker/u8:0   6401   sdf     W  47357320  53248      3.03
6.931971000    kworker/u8:0   6401   sdf     W  47357424  122880     7.22
6.939169000    kworker/u8:0   6401   sdf     W  47357664  122880     7.14
6.944985000    kworker/u8:0   6401   sdf     W  47357904  122880     5.76
6.952113000    kworker/u8:0   6401   sdf     W  47358144  122880     7.06
6.959608000    kworker/u8:0   6401   sdf     W  47358384  122880     7.45
6.966820000    kworker/u8:0   6401   sdf     W  47358624  122880     7.16
6.972393000    kworker/u8:0   6401   sdf     W  47358864  122880     5.53
6.979550000    kworker/u8:0   6401   sdf     W  47359104  122880     7.09
6.986879000    kworker/u8:0   6401   sdf     W  47359344  122880     7.28
6.994306000    kworker/u8:0   6401   sdf     W  47359584  122880     7.40
7.001338000    kworker/u8:0   6401   sdf     W  47359824  122880     6.98
7.008740000    kworker/u8:0   6401   sdf     W  47360064  122880     7.37
7.016114000    kworker/u8:0   6401   sdf     W  47360304  122880     7.31
7.023546000    kworker/u8:0   6401   sdf     W  47360544  122880     7.39
7.030437000    kworker/u8:0   6401   sdf     W  47360784  122880     6.82
7.037834000    kworker/u8:0   6401   sdf     W  47361024  122880     7.35
7.045224000    kworker/u8:0   6401   sdf     W  47361264  122880     7.32
7.052687000    kworker/u8:0   6401   sdf     W  47361504  122880     7.39
7.057977000    kworker/u8:0   6401   sdf     W  47361744  122880     5.01
7.065390000    kworker/u8:0   6401   sdf     W  47361984  122880     7.35
7.072749000    kworker/u8:0   6401   sdf     W  47362224  122880     7.28
7.080174000    kworker/u8:0   6401   sdf     W  47362464  122880     7.36
7.085335000    kworker/u8:0   6401   sdf     W  47362704  122880     5.08
7.092881000    kworker/u8:0   6401   sdf     W  47362944  122880     7.50
7.100197000    kworker/u8:0   6401   sdf     W  47363184  122880     7.23
7.107507000    kworker/u8:0   6401   sdf     W  47363424  122880     7.24
7.114684000    kworker/u8:0   6401   sdf     W  47363664  122880     7.09
7.121936000    kworker/u8:0   6401   sdf     W  47363904  122880     7.18
7.129299000    kworker/u8:0   6401   sdf     W  47364144  122880     7.29
7.136645000    kworker/u8:0   6401   sdf     W  47364384  122880     7.27
7.143733000    kworker/u8:0   6401   sdf     W  47364624  122880     7.02
7.151112000    kworker/u8:0   6401   sdf     W  47364864  122880     7.31
7.158509000    kworker/u8:0   6401   sdf     W  47365104  122880     7.33
7.165735000    kworker/u8:0   6401   sdf     W  47365344  122880     7.14
7.171205000    kworker/u8:0   6401   sdf     W  47365584  122880     5.40
7.178625000    kworker/u8:0   6401   sdf     W  47365824  122880     7.37
7.185980000    kworker/u8:0   6401   sdf     W  47366064  122880     7.31
7.193593000    kworker/u8:0   6401   sdf     W  47366304  122880     7.57
7.198674000    kworker/u8:0   6401   sdf     W  47366544  122880     5.02
7.205996000    kworker/u8:0   6401   sdf     W  47366784  122880     7.25
7.213419000    kworker/u8:0   6401   sdf     W  47367024  122880     7.37
7.221005000    kworker/u8:0   6401   sdf     W  47367264  122880     7.52
7.227906000    kworker/u8:0   6401   sdf     W  47367504  122880     6.83
7.235251000    kworker/u8:0   6401   sdf     W  47367744  122880     7.28
7.241892000    kworker/u8:0   6401   sdf     W  47367984  106496     6.59
7.249617000    kworker/u8:0   6401   sdf     W  47368192  122880     6.88
7.256892000    kworker/u8:0   6401   sdf     W  47368432  122880     7.20
7.264223000    kworker/u8:0   6401   sdf     W  47368672  122880     7.26
7.269672000    kworker/u8:0   6401   sdf     W  47368912  122880     5.39
7.277066000    kworker/u8:0   6401   sdf     W  47369152  122880     7.36
7.284630000    kworker/u8:0   6401   sdf     W  47369392  122880     7.52
7.291891000    kworker/u8:0   6401   sdf     W  47369632  122880     7.20
7.297211000    kworker/u8:0   6401   sdf     W  47369872  122880     5.29
7.304665000    kworker/u8:0   6401   sdf     W  47370112  122880     7.41
7.312050000    kworker/u8:0   6401   sdf     W  47370352  122880     7.33
7.319520000    kworker/u8:0   6401   sdf     W  47370592  122880     7.41
7.326494000    kworker/u8:0   6401   sdf     W  47370832  122880     6.89
7.334082000    kworker/u8:0   6401   sdf     W  47371072  122880     7.52
7.341505000    kworker/u8:0   6401   sdf     W  47371312  122880     7.34
7.348600000    kworker/u8:0   6401   sdf     W  47371552  122880     7.01
7.355806000    kworker/u8:0   6401   sdf     W  47371792  122880     7.16
7.363147000    kworker/u8:0   6401   sdf     W  47372032  122880     7.27
7.370377000    kworker/u8:0   6401   sdf     W  47372272  122880     7.16
7.377770000    kworker/u8:0   6401   sdf     W  47372512  122880     7.36
7.383323000    kworker/u8:0   6401   sdf     W  47372752  122880     5.50
7.390651000    kworker/u8:0   6401   sdf     W  47372992  122880     7.27
7.398245000    kworker/u8:0   6401   sdf     W  47373232  122880     7.52
7.405347000    kworker/u8:0   6401   sdf     W  47373472  122880     7.04
7.410761000    kworker/u8:0   6401   sdf     W  47373712  122880     5.36
7.418075000    kworker/u8:0   6401   sdf     W  47373952  122880     7.26
7.425451000    kworker/u8:0   6401   sdf     W  47374192  122880     7.33
7.432864000    kworker/u8:0   6401   sdf     W  47374432  122880     7.36
7.439913000    kworker/u8:0   6401   sdf     W  47374672  122880     6.99
7.447312000    kworker/u8:0   6401   sdf     W  47374912  122880     7.35
7.454721000    kworker/u8:0   6401   sdf     W  47375152  122880     7.35
7.461955000    kworker/u8:0   6401   sdf     W  47375392  122880     7.19
7.469040000    kworker/u8:0   6401   sdf     W  47375632  122880     7.04
7.476416000    kworker/u8:0   6401   sdf     W  47375872  122880     7.31
7.483856000    kworker/u8:0   6401   sdf     W  47376112  122880     7.38
7.491270000    kworker/u8:0   6401   sdf     W  47376352  122880     7.37
7.496452000    kworker/u8:0   6401   sdf     W  47376592  122880     5.12
7.504011000    kworker/u8:0   6401   sdf     W  47376832  122880     7.52
7.511217000    kworker/u8:0   6401   sdf     W  47377072  122880     7.12
7.518511000    kworker/u8:0   6401   sdf     W  47377312  122880     7.23
7.523956000    kworker/u8:0   6401   sdf     W  47377552  122880     5.41
7.531197000    kworker/u8:0   6401   sdf     W  47377792  122880     7.21
7.538546000    kworker/u8:0   6401   sdf     W  47378032  122880     7.31
7.545757000    kworker/u8:0   6401   sdf     W  47378272  122880     7.17
7.552919000    kworker/u8:0   6401   sdf     W  47378512  122880     7.13
7.560269000    kworker/u8:0   6401   sdf     W  47378752  122880     7.31
7.567555000    kworker/u8:0   6401   sdf     W  47378992  122880     7.25
7.574863000    kworker/u8:0   6401   sdf     W  47379232  122880     7.27
7.581802000    kworker/u8:0   6401   sdf     W  47379472  122880     6.87
7.588979000    kworker/u8:0   6401   sdf     W  47379712  122880     7.13
7.596343000    kworker/u8:0   6401   sdf     W  47379952  122880     7.30
7.603224000    kworker/u8:0   6401   sdf     W  47380192  122880     6.84
7.608655000    kworker/u8:0   6401   sdf     W  47380432  122880     5.38
7.615811000    kworker/u8:0   6401   sdf     W  47380672  122880     7.12
7.623028000    kworker/u8:0   6401   sdf     W  47380912  122880     7.18
7.630413000    kworker/u8:0   6401   sdf     W  47381152  122880     7.34
7.635223000    kworker/u8:0   6401   sdf     W  47381392  122880     4.75
7.642268000    kworker/u8:0   6401   sdf     W  47381632  122880     6.98
7.649383000    kworker/u8:0   6401   sdf     W  47381872  122880     7.04
7.656619000    kworker/u8:0   6401   sdf     W  47382112  122880     7.19
7.661918000    kworker/u8:0   6401   sdf     W  47382352  122880     5.25
8.268039000    kworker/u8:0   6401   sdf     W  47382592  122880   606.07
8.280414000    kworker/u8:0   6401   sdf     W  47382832  122880    12.30
8.286503000    kworker/u8:0   6401   sdf     W  47383072  122880     6.02
8.293627000    kworker/u8:0   6401   sdf     W  47383312  122880     7.07
8.300777000    kworker/u8:0   6401   sdf     W  47383552  122880     7.08
8.307733000    kworker/u8:0   6401   sdf     W  47383792  122880     6.89
8.315212000    kworker/u8:0   6401   sdf     W  47384032  122880     7.43
8.320697000    kworker/u8:0   6401   sdf     W  47384272  122880     5.41
8.323335000    kworker/u8:0   6401   sdf     W  47384512  32768      2.58
8.331116000    kworker/u8:0   6401   sdf     W  47384576  122880     6.58
8.338586000    kworker/u8:0   6401   sdf     W  47384816  122880     7.40
8.346113000    kworker/u8:0   6401   sdf     W  47385056  122880     7.46
8.351558000    kworker/u8:0   6401   sdf     W  47385296  122880     5.37
8.359030000    kworker/u8:0   6401   sdf     W  47385536  122880     7.43
8.366425000    kworker/u8:0   6401   sdf     W  47385776  122880     7.34
8.373712000    kworker/u8:0   6401   sdf     W  47386016  122880     7.22
8.940689000    cp             6828   sdf     W  47400960  122880   566.92
9.614647000    kworker/u8:0   6401   sdf     W  47386256  122880   673.87
10.179120000   cp             6828   sdf     W  47401200  122880   564.38

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-06 22:06         ` Andrea Vai
@ 2019-07-08 15:38           ` Alan Stern
  0 siblings, 0 replies; 23+ messages in thread
From: Alan Stern @ 2019-07-08 15:38 UTC (permalink / raw)
  To: Andrea Vai
  Cc: Johannes Thumshirn, Jens Axboe, linux-usb, linux-scsi,
	Himanshu Madhani, Hannes Reinecke, Ming Lei, Omar Sandoval,
	Martin K. Petersen, Greg KH

On Sun, 7 Jul 2019, Andrea Vai wrote:

> On 03/07/19 10:23:13, Alan Stern wrote:
> > 
> > [...]
> > Andrea, another thing you could try is to collect a usbmon trace under 
> > one of the "slow" kernels.  Follow the instructions in 
> > Documentation/usb/usbmon.txt.  I think you could kill the file-copy 
> > operation after just a couple of seconds; that should provide enough 
> > trace information to help see what causes the slowdown.
> > 
> > (If you want, do the same test with a "fast" kernel and then we'll 
> > compare the results.)
> > 
> > Alan Stern
> > 
> 
> Thanks Alan,
> >   so I attach two outputs, one for a "good" and one for a "bad" kernel.
> 
> Both killed after roughly 20 seconds, using the command
> 
> cat /sys/kernel/debug/usb/usbmon/6u > file
> 
> (my pendrive turns out to be attached to bus #6)
> 
> Hope it helps.

I don't know what the results mean, but I _can_ tell you what's
happening.  Every so often (at intervals of about a second) the pen
drive completely stops communicating with the "bad" kernel for about
one second and then starts up again.

Here's a short example from the "bad" trace:

ffff9169f0d399c0 513072808 S Bo:6:008:2 -115 122880 = 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ffff9169f0d399c0 514262176 C Bo:6:008:2 0 122880 >

The second column is a timestamp (in microseconds).  This little
extract shows a 120-KB write starting at time 513.072 and ending at
514.262, more than a second later.  Normally such a write would
complete in about 0.06 s.
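The arithmetic on those two timestamps can be checked directly (both values
are microseconds from the usbmon lines quoted above):

```python
# usbmon timestamps, in microseconds, from the two trace lines above
submit_us   = 513072808  # S Bo:6:008:2 ... (URB submitted)
complete_us = 514262176  # C Bo:6:008:2 ... (URB completed)

delay_s = (complete_us - submit_us) / 1_000_000
print(round(delay_s, 3))  # 1.189 s: "more than a second"

# Throughput implied by a normal 0.06 s completion of a 120 KiB write:
normal_mb_s = (122880 / 0.06) / 1e6
print(round(normal_mb_s, 2))  # about 2.05 MB/s
```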

The cumulative effect of all these delays is to slow the transfer 
drastically.  The "good" kernel trace shows a few delays like this, but 
only one or two.

I have no idea how commit f664a3cc17b7, getting rid of the legacy IO 
path, could have caused these delays.  It seems more likely that the 
pen drive itself is the cause, perhaps because it is flushing buffers 
more often under the "bad" kernel.

I'd like you to try doing another pair of usbmon tests.  This time,
start collecting the usbmon trace _before_ you plug in the pen drive,
and stop the trace shortly after the pen drive has been mounted.  
Don't try to transfer any data.  Perhaps the two kernels are
initializing the pen drive with different settings and that accounts
for the different behaviors.

Alan Stern


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-06  9:33           ` Andrea Vai
@ 2019-07-08  1:01             ` Ming Lei
  2019-07-09 21:18               ` Andrea Vai
  0 siblings, 1 reply; 23+ messages in thread
From: Ming Lei @ 2019-07-08  1:01 UTC (permalink / raw)
  To: Andrea Vai
  Cc: Jens Axboe, linux-usb, linux-scsi, Himanshu Madhani,
	Hannes Reinecke, Omar Sandoval, Martin K. Petersen, Greg KH,
	Alan Stern

On Sat, Jul 06, 2019 at 11:33:27AM +0200, Andrea Vai wrote:
> On 03/07/19 14:36:05, Ming Lei wrote:
> > On Wed, Jul 03, 2019 at 07:11:17AM +0200, Andrea Vai wrote:
> > > On 03/07/19 10:01:23, Ming Lei wrote:
> > > > On Wed, Jul 03, 2019 at 12:39:31AM +0200, Andrea Vai wrote:
> > > > > On 02/07/19 20:01:13, Ming Lei wrote:
> > > > > > On Tue, Jul 02, 2019 at 12:46:45PM +0200, Andrea Vai wrote:
> > > > > > > Hi,
> > > > > > >   I have a problem writing data to a USB pendrive, and it seems
> > > > > > > > kernel-related. With the help of Greg and Alan (thanks) and some
> > > > > > > bisect, I found out the offending commit being
> > > > > > > 
> > > > > > > commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
> > > > > > > 
> > > > > > >  [...]    
> > > > > > >     
> > [...]
> > Then could you install bcc package and collect the IO trace?
> > 
> > 	sudo /usr/share/bcc/tools/biosnoop | grep sdN
> > 
> > sdN is your USB disk device name.
> 
> The command runs forever (or at least for hours) without giving any output through "|grep sdf". The device is connected, but not mounted. Maybe I should run the command with the device mounted? Or while performing the test?
> The command itself seems to work, as /usr/share/bcc/tools/biosnoop | tee -a biosnoop.txt produces an output file of a few MB over several hours.
> 
> What should I do?

1) run the bcc biosnoop trace in one terminal after mounting the fs on the USB drive

2) start the write test in another terminal

3) wait for 10 seconds, and stop the bcc trace via Ctrl-C, then post the bcc biosnoop
trace log


Thanks, 
Ming

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-03 14:23       ` Alan Stern
@ 2019-07-06 22:06         ` Andrea Vai
  2019-07-08 15:38           ` Alan Stern
  0 siblings, 1 reply; 23+ messages in thread
From: Andrea Vai @ 2019-07-06 22:06 UTC (permalink / raw)
  To: Alan Stern
  Cc: Johannes Thumshirn, Jens Axboe, linux-usb, linux-scsi,
	Himanshu Madhani, Hannes Reinecke, Ming Lei, Omar Sandoval,
	Martin K. Petersen, Greg KH

[-- Attachment #1: Type: text/plain, Size: 768 bytes --]

On 03/07/19 10:23:13, Alan Stern wrote:
> 
> [...]
> Andrea, another thing you could try is to collect a usbmon trace under 
> one of the "slow" kernels.  Follow the instructions in 
> Documentation/usb/usbmon.txt.  I think you could kill the file-copy 
> operation after just a couple of seconds; that should provide enough 
> trace information to help see what causes the slowdown.
> 
> (If you want, do the same test with a "fast" kernel and then we'll 
> compare the results.)
> 
> Alan Stern
> 

Thanks Alan,
  so I attach two outputs, one for a "good" and one for a "bad" kernel.

Both killed after roughly 20 seconds, using the command

cat /sys/kernel/debug/usb/usbmon/6u > file

(my pendrive turns out to be attached to bus #6)

Hope it helps.

Thanks,
Andrea

[-- Attachment #2: GOOD_4.20.0-rc1+.mon.out.tar --]
[-- Type: application/x-tar, Size: 66712 bytes --]

[-- Attachment #3: BAD_5.2.0-rc7.mon.out.tar --]
[-- Type: application/x-tar, Size: 9548 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-03  6:36         ` Ming Lei
  2019-07-03 15:27           ` Chris Murphy
@ 2019-07-06  9:33           ` Andrea Vai
  2019-07-08  1:01             ` Ming Lei
  1 sibling, 1 reply; 23+ messages in thread
From: Andrea Vai @ 2019-07-06  9:33 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-usb, linux-scsi, Himanshu Madhani,
	Hannes Reinecke, Omar Sandoval, Martin K. Petersen, Greg KH,
	Alan Stern

On 03/07/19 14:36:05, Ming Lei wrote:
> On Wed, Jul 03, 2019 at 07:11:17AM +0200, Andrea Vai wrote:
> > On 03/07/19 10:01:23, Ming Lei wrote:
> > > On Wed, Jul 03, 2019 at 12:39:31AM +0200, Andrea Vai wrote:
> > > > On 02/07/19 20:01:13, Ming Lei wrote:
> > > > > On Tue, Jul 02, 2019 at 12:46:45PM +0200, Andrea Vai wrote:
> > > > > > Hi,
> > > > > >   I have a problem writing data to a USB pendrive, and it seems
> > > > > > kernel-related. With the help of Greg and Alan (thanks) and some
> > > > > > bisect, I found out the offending commit being
> > > > > > 
> > > > > > commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
> > > > > > 
> > > > > >  [...]    
> > > > > >     
> [...]
> Then could you install bcc package and collect the IO trace?
> 
> 	sudo /usr/share/bcc/tools/biosnoop | grep sdN
> 
> sdN is your USB disk device name.

The command runs forever (or at least for hours) without giving any output through "|grep sdf". The device is connected, but not mounted. Maybe I should run the command with the device mounted? Or while performing the test?
The command itself seems to work, as /usr/share/bcc/tools/biosnoop | tee -a biosnoop.txt produces an output file of a few MB over several hours.

What should I do?

Thanks,
Andrea


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-03  6:36         ` Ming Lei
@ 2019-07-03 15:27           ` Chris Murphy
  2019-07-06  9:33           ` Andrea Vai
  1 sibling, 0 replies; 23+ messages in thread
From: Chris Murphy @ 2019-07-03 15:27 UTC (permalink / raw)
  To: Ming Lei
  Cc: Andrea Vai, Jens Axboe, linux-usb, linux-scsi, Himanshu Madhani,
	Hannes Reinecke, Omar Sandoval, Martin K. Petersen, Greg KH,
	Alan Stern

On Wed, Jul 3, 2019 at 12:36 AM Ming Lei <ming.lei@redhat.com> wrote:
>
> BTW, 'rotational' shouldn't be set for a USB flash drive (only for a
> USB HDD), but that shouldn't be related to your issue.

I get
/sys/block/sdb/queue/rotational:1
for all USB flash drives on all computers with every kernel since
forever, including 5.2rc7. Lexar, Samsung, Kingston.


-- 
Chris Murphy

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-03  7:29     ` Johannes Thumshirn
@ 2019-07-03 14:23       ` Alan Stern
  2019-07-06 22:06         ` Andrea Vai
  0 siblings, 1 reply; 23+ messages in thread
From: Alan Stern @ 2019-07-03 14:23 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: Andrea Vai, Jens Axboe, linux-usb, linux-scsi, Himanshu Madhani,
	Hannes Reinecke, Ming Lei, Omar Sandoval, Martin K. Petersen,
	Greg KH

On Wed, 3 Jul 2019, Johannes Thumshirn wrote:

> On Wed, Jul 03, 2019 at 12:36:30AM +0200, Andrea Vai wrote:
> > On 02/07/19 13:51:17, Johannes Thumshirn wrote:
> > > On Tue, Jul 02, 2019 at 12:46:45PM +0200, Andrea Vai wrote:
> > > > Hi,
> > > >   I have a problem writing data to a USB pendrive, and it seems
> > > > kernel-related. With the help of Greg and Alan (thanks) and some
> > > > bisect, I found out the offending commit being
> > > > 
> > > > commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
> > > >
> > > > [...]
> > > 
> > > Hi,
> > > 
> > > Can you please check what IO scheduler you have set for your USB pendrive?
> > > 
> > > i.e. with:
> > > cat /sys/block/$DISK/queue/scheduler
> > >
> > 
> > # cat /sys/block/sdf/queue/scheduler
> > [mq-deadline] none
> 
> One thing you can try as well is building a kernel with CONFIG_IOSCHED_BFQ
> and using it. Deadline is probably not the best choice for a slow drive.

Andrea, another thing you could try is to collect a usbmon trace under 
one of the "slow" kernels.  Follow the instructions in 
Documentation/usb/usbmon.txt.  I think you could kill the file-copy 
operation after just a couple of seconds; that should provide enough 
trace information to help see what causes the slowdown.

(If you want, do the same test with a "fast" kernel and then we'll 
compare the results.)
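As a rough sketch (usbmon.txt remains the authoritative procedure), the capture could be scripted like this; the usbmon node name is an assumption and depends on the bus number shown by lsusb:

```shell
# Sketch: stream a usbmon text node to a file for a few seconds.
# The node name (e.g. 3u for bus 3) is a placeholder -- check
# /sys/kernel/debug/usb/usbmon/ after "modprobe usbmon".
capture_usbmon() {
    local node="$1" out="$2" secs="$3"
    cat "$node" > "$out" &     # stream the trace in the background
    local pid=$!
    sleep "$secs"              # let the file copy run meanwhile
    kill "$pid" 2>/dev/null
    wait "$pid" 2>/dev/null
}

# Real usage (as root), with a hypothetical bus number:
#   capture_usbmon /sys/kernel/debug/usb/usbmon/3u mon.out 5 &
#   cp /NoBackup/buttare/testfile /mnt/pendrive
```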

Alan Stern


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-02 22:36   ` Andrea Vai
@ 2019-07-03  7:29     ` Johannes Thumshirn
  2019-07-03 14:23       ` Alan Stern
  0 siblings, 1 reply; 23+ messages in thread
From: Johannes Thumshirn @ 2019-07-03  7:29 UTC (permalink / raw)
  To: Andrea Vai
  Cc: Jens Axboe, linux-usb, linux-scsi, Himanshu Madhani,
	Hannes Reinecke, Ming Lei, Omar Sandoval, Martin K. Petersen,
	Greg KH, Alan Stern

On Wed, Jul 03, 2019 at 12:36:30AM +0200, Andrea Vai wrote:
> On 02/07/19 13:51:17, Johannes Thumshirn wrote:
> > On Tue, Jul 02, 2019 at 12:46:45PM +0200, Andrea Vai wrote:
> > > Hi,
> > >   I have a problem writing data to a USB pendrive, and it seems
> > > kernel-related. With the help of Greg and Alan (thanks) and some
> > > bisect, I found out the offending commit being
> > > 
> > > commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
> > >
> > > [...]
> > 
> > Hi,
> > 
> > Can you please check what IO scheduler you have set for your USB pendrive?
> > 
> > i.e. with:
> > cat /sys/block/$DISK/queue/scheduler
> >
> 
> # cat /sys/block/sdf/queue/scheduler
> [mq-deadline] none

One thing you can try as well is building a kernel with CONFIG_IOSCHED_BFQ
and using it. Deadline is probably not the best choice for a slow drive.
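If bfq is available (built in, or loaded with "modprobe bfq"), switching the scheduler at runtime is just a sysfs write. A minimal sketch, with the queue path as a placeholder:

```shell
# Sketch: select an I/O scheduler for one block device queue and
# echo back what the scheduler file now contains.
set_scheduler() {
    local queue="$1" sched="$2"
    echo "$sched" > "$queue/scheduler"
    cat "$queue/scheduler"   # on real sysfs the active one is shown in [brackets]
}

# Real usage (as root): set_scheduler /sys/block/sdf/queue bfq
```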

Byte,
	Johannes
-- 
Johannes Thumshirn                            SUSE Labs Filesystems
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-03  5:11       ` Andrea Vai
@ 2019-07-03  6:36         ` Ming Lei
  2019-07-03 15:27           ` Chris Murphy
  2019-07-06  9:33           ` Andrea Vai
  0 siblings, 2 replies; 23+ messages in thread
From: Ming Lei @ 2019-07-03  6:36 UTC (permalink / raw)
  To: Andrea Vai
  Cc: Jens Axboe, linux-usb, linux-scsi, Himanshu Madhani,
	Hannes Reinecke, Omar Sandoval, Martin K. Petersen, Greg KH,
	Alan Stern

On Wed, Jul 03, 2019 at 07:11:17AM +0200, Andrea Vai wrote:
> On 03/07/19 10:01:23, Ming Lei wrote:
> > On Wed, Jul 03, 2019 at 12:39:31AM +0200, Andrea Vai wrote:
> > > On 02/07/19 20:01:13, Ming Lei wrote:
> > > > On Tue, Jul 02, 2019 at 12:46:45PM +0200, Andrea Vai wrote:
> > > > > Hi,
> > > > >   I have a problem writing data to a USB pendrive, and it seems
> > > > > kernel-related. With the help of Greg and Alan (thanks) and some
> > > > > bisect, I found out the offending commit being
> > > > > 
> > > > > commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
> > > > > 
> > > > >  [...]    
> > > > >     
> > > > 
> > > > One possible reason may be related with too small 'nr_requests', could
> > > > you apply the following command and see if any difference can be made?
> > > > 
> > > > echo 32 > /sys/block/sdN/queue/nr_requests
> > > 
> > > I applied it (echo 32 > /sys/block/sdf/queue/nr_requests), ran the test again, and still failed. I assumed I didn't have to build the kernel again, did I? (sorry but I am not skilled)
> > > 
> > 
> > You don't need to build kernel.
> > 
> > I just ran the same write test on a slow USB drive on my laptop, which
> > runs '5.1.11-200.fc29.x86_64', and can't reproduce your issue; maybe it
> > depends on your drive.
> > 
> > Could you collect the queue limits sysfs log via the following command?
> > 
> > 	find /sys/block/sdN/queue -type f -exec grep -aH . {} \;
> >
> 
> # find /sys/block/sdf/queue -type f -exec grep -aH . {} ;
> /sys/block/sdf/queue/io_poll_delay:-1
> /sys/block/sdf/queue/max_integrity_segments:0
> /sys/block/sdf/queue/zoned:none
> /sys/block/sdf/queue/scheduler:[mq-deadline] none
> /sys/block/sdf/queue/io_poll:0
> /sys/block/sdf/queue/discard_zeroes_data:0
> /sys/block/sdf/queue/minimum_io_size:512
> /sys/block/sdf/queue/nr_zones:0
> /sys/block/sdf/queue/write_same_max_bytes:0
> /sys/block/sdf/queue/max_segments:2048
> /sys/block/sdf/queue/dax:0
> /sys/block/sdf/queue/physical_block_size:512
> /sys/block/sdf/queue/logical_block_size:512
> /sys/block/sdf/queue/io_timeout:30000
> /sys/block/sdf/queue/nr_requests:2
> /sys/block/sdf/queue/write_cache:write through
> /sys/block/sdf/queue/max_segment_size:4294967295
> /sys/block/sdf/queue/rotational:1
> /sys/block/sdf/queue/discard_max_bytes:0
> /sys/block/sdf/queue/add_random:1
> /sys/block/sdf/queue/discard_max_hw_bytes:0
> /sys/block/sdf/queue/optimal_io_size:0
> /sys/block/sdf/queue/chunk_sectors:0
> /sys/block/sdf/queue/iosched/front_merges:1
> /sys/block/sdf/queue/iosched/read_expire:500
> /sys/block/sdf/queue/iosched/fifo_batch:16
> /sys/block/sdf/queue/iosched/write_expire:5000
> /sys/block/sdf/queue/iosched/writes_starved:2
> /sys/block/sdf/queue/read_ahead_kb:128
> /sys/block/sdf/queue/max_discard_segments:1
> /sys/block/sdf/queue/write_zeroes_max_bytes:0
> /sys/block/sdf/queue/nomerges:0
> /sys/block/sdf/queue/wbt_lat_usec:75000
> /sys/block/sdf/queue/fua:0
> /sys/block/sdf/queue/discard_granularity:0
> /sys/block/sdf/queue/rq_affinity:1
> /sys/block/sdf/queue/max_sectors_kb:120
> /sys/block/sdf/queue/hw_sector_size:512
> /sys/block/sdf/queue/max_hw_sectors_kb:120
> /sys/block/sdf/queue/iostats:1 

The above is basically the same as my USB drive's queue settings, and
they all look fine.

BTW, 'rotational' shouldn't be set for a USB flash drive (only for a
USB HDD), but that shouldn't be related to your issue.

Then could you install bcc package and collect the IO trace?

	sudo /usr/share/bcc/tools/biosnoop | grep sdN

sdN is your USB disk device name.
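Once a log is collected, a quick summary can be pulled out with awk. This is a sketch that assumes the usual biosnoop column layout (TIME COMM PID DISK T SECTOR BYTES LAT), which may differ between bcc versions:

```shell
# Sketch: count requests, sum bytes, and average latency for one disk
# in a saved biosnoop log. Assumes DISK is column 4, BYTES column 7,
# LAT(ms) column 8.
biosnoop_summary() {
    local disk="$1" log="$2"
    awk -v d="$disk" '$4 == d { n++; bytes += $7; lat += $8 }
        END { printf "%d requests, %d bytes, avg %.2f ms\n",
                     n, bytes, (n ? lat / n : 0) }' "$log"
}

# Real usage:
#   sudo /usr/share/bcc/tools/biosnoop | tee biosnoop.txt  # run the copy meanwhile
#   biosnoop_summary sdf biosnoop.txt
```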

Thanks,
Ming

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-03  2:01     ` Ming Lei
@ 2019-07-03  5:11       ` Andrea Vai
  2019-07-03  6:36         ` Ming Lei
  0 siblings, 1 reply; 23+ messages in thread
From: Andrea Vai @ 2019-07-03  5:11 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-usb, linux-scsi, Himanshu Madhani,
	Hannes Reinecke, Omar Sandoval, Martin K. Petersen, Greg KH,
	Alan Stern

On 03/07/19 10:01:23, Ming Lei wrote:
> On Wed, Jul 03, 2019 at 12:39:31AM +0200, Andrea Vai wrote:
> > On 02/07/19 20:01:13, Ming Lei wrote:
> > > On Tue, Jul 02, 2019 at 12:46:45PM +0200, Andrea Vai wrote:
> > > > Hi,
> > > >   I have a problem writing data to a USB pendrive, and it seems
> > > > kernel-related. With the help of Greg and Alan (thanks) and some
> > > > bisect, I found out the offending commit being
> > > > 
> > > > commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
> > > > 
> > > >  [...]    
> > > >     
> > > 
> > > One possible reason may be related with too small 'nr_requests', could
> > > you apply the following command and see if any difference can be made?
> > > 
> > > echo 32 > /sys/block/sdN/queue/nr_requests
> > 
> > I applied it (echo 32 > /sys/block/sdf/queue/nr_requests), ran the test again, and still failed. I assumed I didn't have to build the kernel again, did I? (sorry but I am not skilled)
> > 
> 
> You don't need to build kernel.
> 
> I just ran the same write test on a slow USB drive on my laptop, which
> runs '5.1.11-200.fc29.x86_64', and can't reproduce your issue; maybe it
> depends on your drive.
> 
> Could you collect the queue limits sysfs log via the following command?
> 
> 	find /sys/block/sdN/queue -type f -exec grep -aH . {} \;
>

# find /sys/block/sdf/queue -type f -exec grep -aH . {} ;
/sys/block/sdf/queue/io_poll_delay:-1
/sys/block/sdf/queue/max_integrity_segments:0
/sys/block/sdf/queue/zoned:none
/sys/block/sdf/queue/scheduler:[mq-deadline] none
/sys/block/sdf/queue/io_poll:0
/sys/block/sdf/queue/discard_zeroes_data:0
/sys/block/sdf/queue/minimum_io_size:512
/sys/block/sdf/queue/nr_zones:0
/sys/block/sdf/queue/write_same_max_bytes:0
/sys/block/sdf/queue/max_segments:2048
/sys/block/sdf/queue/dax:0
/sys/block/sdf/queue/physical_block_size:512
/sys/block/sdf/queue/logical_block_size:512
/sys/block/sdf/queue/io_timeout:30000
/sys/block/sdf/queue/nr_requests:2
/sys/block/sdf/queue/write_cache:write through
/sys/block/sdf/queue/max_segment_size:4294967295
/sys/block/sdf/queue/rotational:1
/sys/block/sdf/queue/discard_max_bytes:0
/sys/block/sdf/queue/add_random:1
/sys/block/sdf/queue/discard_max_hw_bytes:0
/sys/block/sdf/queue/optimal_io_size:0
/sys/block/sdf/queue/chunk_sectors:0
/sys/block/sdf/queue/iosched/front_merges:1
/sys/block/sdf/queue/iosched/read_expire:500
/sys/block/sdf/queue/iosched/fifo_batch:16
/sys/block/sdf/queue/iosched/write_expire:5000
/sys/block/sdf/queue/iosched/writes_starved:2
/sys/block/sdf/queue/read_ahead_kb:128
/sys/block/sdf/queue/max_discard_segments:1
/sys/block/sdf/queue/write_zeroes_max_bytes:0
/sys/block/sdf/queue/nomerges:0
/sys/block/sdf/queue/wbt_lat_usec:75000
/sys/block/sdf/queue/fua:0
/sys/block/sdf/queue/discard_granularity:0
/sys/block/sdf/queue/rq_affinity:1
/sys/block/sdf/queue/max_sectors_kb:120
/sys/block/sdf/queue/hw_sector_size:512
/sys/block/sdf/queue/max_hw_sectors_kb:120
/sys/block/sdf/queue/iostats:1 

Thanks, and bye,
Andrea

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-02 22:39   ` Andrea Vai
@ 2019-07-03  2:01     ` Ming Lei
  2019-07-03  5:11       ` Andrea Vai
  0 siblings, 1 reply; 23+ messages in thread
From: Ming Lei @ 2019-07-03  2:01 UTC (permalink / raw)
  To: Andrea Vai
  Cc: Jens Axboe, linux-usb, linux-scsi, Himanshu Madhani,
	Hannes Reinecke, Omar Sandoval, Martin K. Petersen, Greg KH,
	Alan Stern

On Wed, Jul 03, 2019 at 12:39:31AM +0200, Andrea Vai wrote:
> On 02/07/19 20:01:13, Ming Lei wrote:
> > On Tue, Jul 02, 2019 at 12:46:45PM +0200, Andrea Vai wrote:
> > > Hi,
> > >   I have a problem writing data to a USB pendrive, and it seems
> > > kernel-related. With the help of Greg and Alan (thanks) and some
> > > bisect, I found out the offending commit being
> > > 
> > > commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
> > > 
> > >  [...]    
> > >     
> > 
> > One possible reason may be related with too small 'nr_requests', could
> > you apply the following command and see if any difference can be made?
> > 
> > echo 32 > /sys/block/sdN/queue/nr_requests
> 
> I applied it (echo 32 > /sys/block/sdf/queue/nr_requests), ran the test again, and still failed. I assumed I didn't have to build the kernel again, did I? (sorry but I am not skilled)
> 

You don't need to build kernel.

I just ran the same write test on a slow USB drive on my laptop, which
runs '5.1.11-200.fc29.x86_64', and can't reproduce your issue; maybe it
depends on your drive.

Could you collect the queue limits sysfs log via the following command?

	find /sys/block/sdN/queue -type f -exec grep -aH . {} \;

Thanks,
Ming

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-02 12:01 ` Ming Lei
@ 2019-07-02 22:39   ` Andrea Vai
  2019-07-03  2:01     ` Ming Lei
  0 siblings, 1 reply; 23+ messages in thread
From: Andrea Vai @ 2019-07-02 22:39 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-usb, linux-scsi, Himanshu Madhani,
	Hannes Reinecke, Omar Sandoval, Martin K. Petersen, Greg KH,
	Alan Stern

On 02/07/19 20:01:13, Ming Lei wrote:
> On Tue, Jul 02, 2019 at 12:46:45PM +0200, Andrea Vai wrote:
> > Hi,
> >   I have a problem writing data to a USB pendrive, and it seems
> > kernel-related. With the help of Greg and Alan (thanks) and some
> > bisect, I found out the offending commit being
> > 
> > commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
> > 
> >  [...]    
> >     
> 
> One possible reason may be related with too small 'nr_requests', could
> you apply the following command and see if any difference can be made?
> 
> echo 32 > /sys/block/sdN/queue/nr_requests

I applied it (echo 32 > /sys/block/sdf/queue/nr_requests), ran the test again, and still failed. I assumed I didn't have to build the kernel again, did I? (sorry but I am not skilled)

Many thanks,
Andrea


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-02 11:51 ` Johannes Thumshirn
@ 2019-07-02 22:36   ` Andrea Vai
  2019-07-03  7:29     ` Johannes Thumshirn
  0 siblings, 1 reply; 23+ messages in thread
From: Andrea Vai @ 2019-07-02 22:36 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: Jens Axboe, linux-usb, linux-scsi, Himanshu Madhani,
	Hannes Reinecke, Ming Lei, Omar Sandoval, Martin K. Petersen,
	Greg KH, Alan Stern

On 02/07/19 13:51:17, Johannes Thumshirn wrote:
> On Tue, Jul 02, 2019 at 12:46:45PM +0200, Andrea Vai wrote:
> > Hi,
> >   I have a problem writing data to a USB pendrive, and it seems
> > kernel-related. With the help of Greg and Alan (thanks) and some
> > bisect, I found out the offending commit being
> > 
> > commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
> >
> > [...]
> 
> Hi,
> 
> Can you please check what IO scheduler you have set for your USB pendrive?
> 
> i.e. with:
> cat /sys/block/$DISK/queue/scheduler
>

# cat /sys/block/sdf/queue/scheduler
[mq-deadline] none

Many thanks,
Andrea

 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-02 10:46 Andrea Vai
  2019-07-02 11:51 ` Johannes Thumshirn
@ 2019-07-02 12:01 ` Ming Lei
  2019-07-02 22:39   ` Andrea Vai
  1 sibling, 1 reply; 23+ messages in thread
From: Ming Lei @ 2019-07-02 12:01 UTC (permalink / raw)
  To: Andrea Vai
  Cc: Jens Axboe, linux-usb, linux-scsi, Himanshu Madhani,
	Hannes Reinecke, Omar Sandoval, Martin K. Petersen, Greg KH,
	Alan Stern

On Tue, Jul 02, 2019 at 12:46:45PM +0200, Andrea Vai wrote:
> Hi,
>   I have a problem writing data to a USB pendrive, and it seems
> kernel-related. With the help of Greg and Alan (thanks) and some
> bisect, I found out the offending commit being
> 
> commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
> Author: Jens Axboe <axboe@kernel.dk>
> Date:   Thu Nov 1 16:36:27 2018 -0600
> 
>     scsi: kill off the legacy IO path
>     
>     This removes the legacy (non-mq) IO path for SCSI.
>     
> So, here I am to notify you about the problem and ask you if I can
> help in any way to work it out and fix it.
> 
> > The problem is that if I copy a file from the internal SATA HD to the
> > pendrive, it takes ~10 times longer to complete (compared with the
> > time needed with the patch reverted).
> 
> The test script, which I use to detect if the problem triggers or not,
> is:
> 
> #!/bin/bash
> logfile=...
> uname -a | tee -a $logfile
> echo -n "Begin: " | tee -a $logfile
> date | tee -a $logfile
> touch inizio
> SECONDS=0
> mount UUID="05141239-4ea5-494d-aa91-acd67db89ce5" /mnt/pendrive
> cp /NoBackup/buttare/testfile /mnt/pendrive
> umount /mnt/pendrive
> tempo=$SECONDS
> touch fine
> echo -n "...end: " | tee -a $logfile
> date | tee -a $logfile
> echo "It took $tempo seconds!" | tee -a $logfile
> 
> If I run the test with a 512MB file it takes >10min vs. half a minute.
> 
> > The problem is still present in the latest git (cloned this morning).
> 
> You can see the previous discussion that lead to these results at
> 
> https://marc.info/?t=155922230700001&r=1&w=2

One possible reason may be related with too small 'nr_requests', could
you apply the following command and see if any difference can be made?

echo 32 > /sys/block/sdN/queue/nr_requests
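As a small helper (a sketch; the queue path is whatever /sys/block/sdN/queue is on your system), the change can be applied and verified in one step:

```shell
# Sketch: write a new queue depth to nr_requests and read it back,
# so a silently rejected value is noticed immediately.
set_nr_requests() {
    local queue="$1" depth="$2"
    echo "$depth" > "$queue/nr_requests"
    cat "$queue/nr_requests"
}

# Real usage (as root): set_nr_requests /sys/block/sdf/queue 32
```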


Thanks,
Ming

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
  2019-07-02 10:46 Andrea Vai
@ 2019-07-02 11:51 ` Johannes Thumshirn
  2019-07-02 22:36   ` Andrea Vai
  2019-07-02 12:01 ` Ming Lei
  1 sibling, 1 reply; 23+ messages in thread
From: Johannes Thumshirn @ 2019-07-02 11:51 UTC (permalink / raw)
  To: Andrea Vai
  Cc: Jens Axboe, linux-usb, linux-scsi, Himanshu Madhani,
	Hannes Reinecke, Ming Lei, Omar Sandoval, Martin K. Petersen,
	Greg KH, Alan Stern

On Tue, Jul 02, 2019 at 12:46:45PM +0200, Andrea Vai wrote:
> Hi,
>   I have a problem writing data to a USB pendrive, and it seems
> kernel-related. With the help of Greg and Alan (thanks) and some
> bisect, I found out the offending commit being
> 
> commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
> Author: Jens Axboe <axboe@kernel.dk>
> Date:   Thu Nov 1 16:36:27 2018 -0600
> 
>     scsi: kill off the legacy IO path
>     
>     This removes the legacy (non-mq) IO path for SCSI.
>     
> So, here I am to notify you about the problem and ask you if I can
> help in any way to work it out and fix it.
> 
> > The problem is that if I copy a file from the internal SATA HD to the
> > pendrive, it takes ~10 times longer to complete (compared with the
> > time needed with the patch reverted).
> 
> The test script, which I use to detect if the problem triggers or not,
> is:
> 
> #!/bin/bash
> logfile=...
> uname -a | tee -a $logfile
> echo -n "Begin: " | tee -a $logfile
> date | tee -a $logfile
> touch inizio
> SECONDS=0
> mount UUID="05141239-4ea5-494d-aa91-acd67db89ce5" /mnt/pendrive
> cp /NoBackup/buttare/testfile /mnt/pendrive
> umount /mnt/pendrive
> tempo=$SECONDS
> touch fine
> echo -n "...end: " | tee -a $logfile
> date | tee -a $logfile
> echo "It took $tempo seconds!" | tee -a $logfile
> 
> If I run the test with a 512MB file it takes >10min vs. half a minute.
> 
> > The problem is still present in the latest git (cloned this morning).
> 
> You can see the previous discussion that lead to these results at
> 
> https://marc.info/?t=155922230700001&r=1&w=2

Hi,

Can you please check what IO scheduler you have set for your USB pendrive?

i.e. with:
cat /sys/block/$DISK/queue/scheduler

Thanks,
	Johannes
-- 
Johannes Thumshirn                            SUSE Labs Filesystems
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
@ 2019-07-02 10:46 Andrea Vai
  2019-07-02 11:51 ` Johannes Thumshirn
  2019-07-02 12:01 ` Ming Lei
  0 siblings, 2 replies; 23+ messages in thread
From: Andrea Vai @ 2019-07-02 10:46 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-usb, linux-scsi, Himanshu Madhani, Hannes Reinecke,
	Ming Lei, Omar Sandoval, Martin K. Petersen, Greg KH, Alan Stern

Hi,
  I have a problem writing data to a USB pendrive, and it seems
kernel-related. With the help of Greg and Alan (thanks) and some
bisect, I found out the offending commit being

commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6
Author: Jens Axboe <axboe@kernel.dk>
Date:   Thu Nov 1 16:36:27 2018 -0600

    scsi: kill off the legacy IO path
    
    This removes the legacy (non-mq) IO path for SCSI.
    
So, here I am to notify you about the problem and ask you if I can
help in any way to work it out and fix it.

The problem is that if I copy a file from the internal SATA HD to the
pendrive, it takes ~10 times longer to complete (compared with the
time needed with the patch reverted).

The test script, which I use to detect if the problem triggers or not,
is:

#!/bin/bash
logfile=...
uname -a | tee -a $logfile
echo -n "Begin: " | tee -a $logfile
date | tee -a $logfile
touch inizio
SECONDS=0
mount UUID="05141239-4ea5-494d-aa91-acd67db89ce5" /mnt/pendrive
cp /NoBackup/buttare/testfile /mnt/pendrive
umount /mnt/pendrive
tempo=$SECONDS
touch fine
echo -n "...end: " | tee -a $logfile
date | tee -a $logfile
echo "It took $tempo seconds!" | tee -a $logfile
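A variant that times only the copy and flush, leaving mount/umount out of the measurement, might look like this (a sketch; the paths are placeholders):

```shell
# Sketch: time dd with conv=fsync so the figure includes flushing the
# data to the device, not just dirtying the page cache.
timed_copy() {
    local src="$1" dst="$2" t0 t1
    t0=$(date +%s)
    dd if="$src" of="$dst" bs=1M conv=fsync status=none
    t1=$(date +%s)
    echo "copied in $((t1 - t0)) s"
}

# Real usage: timed_copy /NoBackup/buttare/testfile /mnt/pendrive/testfile
```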

If I run the test with a 512MB file it takes >10min vs. half a minute.

The problem is still present in the latest git (cloned this morning).

You can see the previous discussion that lead to these results at

https://marc.info/?t=155922230700001&r=1&w=2

Thanks, and bye
Andrea


^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, back to index

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <307581a490b610c3025ee80f79a465a89d68ed19.camel@unipv.it>
2019-08-20 17:13 ` Slow I/O on USB media after commit f664a3cc17b7d0a2bc3b3ab96181e1029b0ec0e6 Alan Stern
2019-08-23 10:39   ` Andrea Vai
2019-08-23 20:42     ` Alan Stern
2019-08-26  6:09       ` Andrea Vai
2019-08-26 16:33         ` Alan Stern
     [not found] <e3f87757f7a0fdf551e911ad32fc8122eebe04c7.camel@unipv.it>
2019-08-13 19:52 ` Alan Stern
2019-07-02 10:46 Andrea Vai
2019-07-02 11:51 ` Johannes Thumshirn
2019-07-02 22:36   ` Andrea Vai
2019-07-03  7:29     ` Johannes Thumshirn
2019-07-03 14:23       ` Alan Stern
2019-07-06 22:06         ` Andrea Vai
2019-07-08 15:38           ` Alan Stern
2019-07-02 12:01 ` Ming Lei
2019-07-02 22:39   ` Andrea Vai
2019-07-03  2:01     ` Ming Lei
2019-07-03  5:11       ` Andrea Vai
2019-07-03  6:36         ` Ming Lei
2019-07-03 15:27           ` Chris Murphy
2019-07-06  9:33           ` Andrea Vai
2019-07-08  1:01             ` Ming Lei
2019-07-09 21:18               ` Andrea Vai
2019-07-10  2:44                 ` Ming Lei

Linux-USB Archive on lore.kernel.org

Archives are clonable:
	git clone --mirror https://lore.kernel.org/linux-usb/0 linux-usb/git/0.git

	# If you have public-inbox 1.1+ installed, you may
	# initialize and index your mirror using the following commands:
	public-inbox-init -V2 linux-usb linux-usb/ https://lore.kernel.org/linux-usb \
		linux-usb@vger.kernel.org linux-usb@archiver.kernel.org
	public-inbox-index linux-usb


Newsgroup available over NNTP:
	nntp://nntp.lore.kernel.org/org.kernel.vger.linux-usb


AGPL code for this site: git clone https://public-inbox.org/ public-inbox