* could fio be used to wipe disks?
@ 2017-03-15 21:42 Antoine Beaupre
  2017-03-16  7:42 ` Sitsofe Wheeler
  0 siblings, 1 reply; 5+ messages in thread
From: Antoine Beaupre @ 2017-03-15 21:42 UTC (permalink / raw)
  To: fio

Hi,

I'm writing a stress-testing tool and I'm looking at using fio to
stress-test disks. The point is not exactly to benchmark the disks, but
to put sustained load on them to make sure they are generally in
working order.

Right now, I came up with something like this:

      fio --name=stressant --readwrite=randrw --filename=/dev/sdX \
          --size=100% --numjobs=4 --sync=1 --direct=1 --group_reporting

My questions are:

 1. will this reliably wipe the whole drive? I know that some data can
    remain due to magnetic properties of the drive or nasty SSD tricks,
    but assume we don't do crazy forensics.

 2. if not, is there a way to test write I/O directly through the
    device (to ignore filesystem-related issues) non-destructively?

Thanks!

A.

PS: for those curious, my prototype is available here:

https://gitlab.com/anarcat/stressant/blob/master/stressant.py

Nothing serious so far...

-- 
Advertising is the invisible dictatorship of our society.
                        - Jacques Ellul



* Re: could fio be used to wipe disks?
  2017-03-15 21:42 could fio be used to wipe disks? Antoine Beaupre
@ 2017-03-16  7:42 ` Sitsofe Wheeler
  2017-03-16 12:30   ` Antoine Beaupré
       [not found]   ` <CAGpXXZ+bWSkOp3KqUvycgHY1gaonriGX95-7tUQ-nND-sGbaeA@mail.gmail.com>
  0 siblings, 2 replies; 5+ messages in thread
From: Sitsofe Wheeler @ 2017-03-16  7:42 UTC (permalink / raw)
  To: Antoine Beaupre; +Cc: fio

Hi,

1. It will try to use the whole area of the disk, but if fio's
blocksize turns out not to be the same as the disk's block size it may
miss the very end of the disk. Also, if an error is encountered fio may
stop part way through and fail to overwrite the rest of the data. If
you're not paranoid about disk wiping then it is unlikely that fio
will be any faster than doing something like
dd if=/dev/zero of=/dev/mydisk bs=1M oflag=direct

Bear in mind that if you're worried about wiping disks properly you
will have to write a particular pattern over the disk. Further, it may
be harder than you think to get at stale internal mappings the "disk"
might have, which is why you use SECURE ERASE on SSDs. fio does none of
this.

Since you mentioned stress, you may want to look at using a deeper
iodepth with the libaio engine (assuming you're on Linux; other
platforms have asynchronous engines too) and only doing the sync at
the end.
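
To sketch what I mean (untested; /dev/sdX is a placeholder and the
numbers are only illustrative, not tuned values):

      fio --name=stressant --filename=/dev/sdX --rw=randwrite \
          --ioengine=libaio --iodepth=32 --direct=1 --end_fsync=1

Here --iodepth=32 keeps up to 32 I/Os in flight at once, and
--end_fsync=1 syncs once when the job finishes rather than after
every write.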

2. What you are proposing is very dangerous. Assuming the filesystem
is unmounted and you wanted to do writes, you would need to read the
data and then write that same data back, but fio has no facility for
such a thing at present. If the filesystem is mounted then I think
what you're asking for is impossible without introducing the
possibility of filesystem corruption.

On 15 March 2017 at 21:42, Antoine Beaupre <anarcat@orangeseeds.org> wrote:
> Hi,
>
> I'm writing a stress-testing tool and I'm looking at using fio to
> stress-test disks. The point is not exactly to benchmark the disks, but
> to put sustained load on them to make sure they are generally in
> working order.
>
> Right now, I came up with something like this:
>
>       fio --name=stressant --readwrite=randrw --filename=/dev/sdX \
>           --size=100% --numjobs=4 --sync=1 --direct=1 --group_reporting
>
> My questions are:
>
>  1. will this reliably wipe the whole drive? I know that some data can
>     remain due to magnetic properties of the drive or nasty SSD tricks,
>     but assume we don't do crazy forensics.
>
>  2. if not, is there a way to test write I/O directly through the
>     device (to ignore filesystem-related issues) non-destructively?
>
> Thanks!
>
> A.
>
> PS: for those curious, my prototype is available here:
>
> https://gitlab.com/anarcat/stressant/blob/master/stressant.py
>
> Nothing serious so far...
>
> --
> Advertising is the invisible dictatorship of our society.
>                         - Jacques Ellul



-- 
Sitsofe | http://sucs.org/~sits/


* Re: could fio be used to wipe disks?
  2017-03-16  7:42 ` Sitsofe Wheeler
@ 2017-03-16 12:30   ` Antoine Beaupré
  2017-03-16 17:35     ` Sitsofe Wheeler
       [not found]   ` <CAGpXXZ+bWSkOp3KqUvycgHY1gaonriGX95-7tUQ-nND-sGbaeA@mail.gmail.com>
  1 sibling, 1 reply; 5+ messages in thread
From: Antoine Beaupré @ 2017-03-16 12:30 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: fio

On 2017-03-16 07:42:48, Sitsofe Wheeler wrote:
> Hi,
>
> 1. It will try to use the whole area of the disk, but if fio's
> blocksize turns out not to be the same as the disk's block size it may
> miss the very end of the disk.

I see.

> Also, if an error is encountered fio may
> stop part way through and fail to overwrite the rest of the data.

That's not much of an issue for me.

> If you're not paranoid about disk wiping then it is unlikely that fio
> will be any faster than doing something like
> dd if=/dev/zero of=/dev/mydisk bs=1M oflag=direct

The idea is not to be fast, but to do the two things at once: a more
extensive stress test *and* a disk wipe, in optimal time.

> Bear in mind that if you're worried about wiping disks properly you
> will have to write a particular pattern over the disk.

Yeah, I know about nwipe and everything. As I mentioned originally, I am
not sure this is in the scope of my project...

> Further, it may be harder than you think to get at stale internal
> mappings the "disk" might have, which is why you use SECURE ERASE on
> SSDs. fio does none of this.

... I also mentioned I was aware of SSD specifics, thanks. :)

> Since you mentioned stress, you may want to look at using a deeper
> iodepth with the libaio engine (assuming you're on Linux; other
> platforms have asynchronous engines too) and only doing the sync at
> the end.

Even after reading this:

http://fio.readthedocs.io/en/latest/fio_doc.html#i-o-depth

I don't quite understand what iodepth does or how to use it. Could you
expand on this?

> 2. What you are proposing is very dangerous. Assuming the filesystem
> is unmounted and you wanted to do writes, you would need to read the
> data and then write that same data back, but fio has no facility for
> such a thing at present. If the filesystem is mounted then I think
> what you're asking for is impossible without introducing the
> possibility of filesystem corruption.

Right, that's what I figured as well, thanks for the confirmation!

A.

-- 
The Net treats censorship as damage and routes around it.
                         - John Gilmore


* Re: could fio be used to wipe disks?
       [not found]   ` <CAGpXXZ+bWSkOp3KqUvycgHY1gaonriGX95-7tUQ-nND-sGbaeA@mail.gmail.com>
@ 2017-03-16 17:14     ` Sitsofe Wheeler
  0 siblings, 0 replies; 5+ messages in thread
From: Sitsofe Wheeler @ 2017-03-16 17:14 UTC (permalink / raw)
  To: Greg Freemyer; +Cc: Antoine Beaupre, fio

Hi Greg,

You make a fair point and perhaps I should have used scare quotes. I'm
aware of the "can't recover anything after the first pass on disks
larger than 100 GByte" debate, at least with respect to hard disks, and
the challenges people have posed over the years that no one has taken
up. I do wonder if SSDs (coupled with all their fancy compression)
complicate things a bit though (but the original poster has stated
that's a non-issue).

On 16 March 2017 at 09:58, Greg Freemyer <greg.freemyer@gmail.com> wrote:
>
>
> On Thursday, March 16, 2017, Sitsofe Wheeler <sitsofe@gmail.com> wrote:
>>
>>
>> Bear in mind that if you're worried about wiping disks properly you
>> will have to write a particular pattern over the disk.
>
>
> I apologize, but I feel the need to correct that.
>
> That was (and is) true of disks manufactured in the 20th century, when
> tolerances on disk head movement were loose. When drives hit densities
> of 40 GB per platter it quit being true.
>
> The U.S. NIST media sanitization document, addressing media holding
> government classified (but not top secret) data, allowed a single pass
> of zeros for wiping drives larger than 20 GB a decade ago.
>
> dc3dd is a forensic tool maintained by the U.S. DoD computer forensics lab.
>
> dc3dd wipe=/dev/sda writes a single pass of nulls to all bytes/sectors and
> calls it done.
>
> Certainly you can find 20+ year old studies that contradict the above, but I
> don't believe you can find anything from this century that does.
>
> Sorry to be pedantic,
> Greg
>
>
>
> --
> --
> Greg Freemyer
>



-- 
Sitsofe | http://sucs.org/~sits/


* Re: could fio be used to wipe disks?
  2017-03-16 12:30   ` Antoine Beaupré
@ 2017-03-16 17:35     ` Sitsofe Wheeler
  0 siblings, 0 replies; 5+ messages in thread
From: Sitsofe Wheeler @ 2017-03-16 17:35 UTC (permalink / raw)
  To: Antoine Beaupré; +Cc: fio

On 16 March 2017 at 12:30, Antoine Beaupré <anarcat@orangeseeds.org> wrote:
> On 2017-03-16 07:42:48, Sitsofe Wheeler wrote:
>
> Yeah, I know about nwipe and everything. As I mentioned originally, I am
> not sure this is in the scope of my project...

OK.

>> Further, it may be harder than you think to get at stale internal
>> mappings the "disk" might have, which is why you use SECURE ERASE on
>> SSDs. fio does none of this.
>
> ... I also mentioned I was aware of SSD specifics, thanks. :)

OK - my bad :-)

>> Since you mentioned stress, you may want to look at using a deeper
>> iodepth with the libaio engine (assuming you're on Linux; other
>> platforms have asynchronous engines too) and only doing the sync at
>> the end.
>
> Even after reading this:
>
> http://fio.readthedocs.io/en/latest/fio_doc.html#i-o-depth
>
> I don't quite understand what iodepth does or how to use it. Could you
> expand on this?

Sure. If you're using an fio ioengine that can submit I/O
*asynchronously* (i.e. it doesn't have to wait for an I/O to come back
as completed before submitting another I/O) you have a potentially
cheaper (because you don't need to use so many CPUs) way of submitting
LOTS of I/O. The iodepth parameter controls the maximum amount of
in-flight I/O to submit before you wait for some of it to complete. To
give you a rough figure, modern SATA disks can accept up to 32
outstanding commands at once (although you may find the real
achievable depth is at least one less than that due to how things
work), so if you only end up submitting four simultaneous commands the
disk might not find that too stressful (but this is highly job
dependent).

A different way of putting it is: if you're on Linux, take a look at
the output shown by "iostat -x 1". One of the columns will be avgqu-sz,
and the deeper this gets the more simultaneous I/O is being submitted
to the disk. If you vary the "--numjobs=" of your original example you
will hopefully see this changing. What I'm suggesting is: a different
I/O engine may let you achieve the same effect while using less CPU.
Generally a higher depth is desirable but there are a lot of things
that influence the actual queue depth achieved. Also bear in mind that
you might stress your kernel/CPU more before you "stress out" your
disk with certain job types.
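
As a rough illustration (untested; /dev/sdX is a placeholder and the
job names are made up), you could run your original synchronous job
and an asynchronous variant while watching avgqu-sz:

      # four synchronous writers: at most ~4 I/Os in flight
      fio --name=sync-stress --filename=/dev/sdX --rw=randwrite \
          --numjobs=4 --sync=1 --direct=1 --group_reporting
      # one asynchronous writer keeping up to 32 I/Os in flight
      fio --name=aio-stress --filename=/dev/sdX --rw=randwrite \
          --ioengine=libaio --iodepth=32 --direct=1
      # in another terminal, watch the achieved queue depth
      iostat -x 1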

-- 
Sitsofe | http://sucs.org/~sits/

