* Deterministic thrashing
From: Loic Dachary @ 2014-04-06 10:29 UTC
  To: Ceph Development


Hi Ceph,

It would be nice to have a way to replay the random events injected by stanzas such as

- thrashosds:
    chance_pgnum_grow: 2
    chance_pgpnum_fix: 1

When a teuthology workload (such as tracker.ceph.com/issues/7914#note-34) crashes once a week and the error is not obvious, replaying the same events would increase the probability of reproducing the crash. Instead of "thrashosds" we could have something like "recorded-thrashosds: thrashosd.events", and instead of being random the events would happen more deterministically (same number of events and same number of seconds between events?).
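
To make that concrete, here is a minimal sketch of the kind of
record/replay helper I have in mind (all names here are hypothetical,
nothing like this exists in teuthology today):

import json
import random
import time

class RecordedThrash:
    """Record each random decision and the seconds elapsed before it,
    so that a failed run can be replayed later. Hypothetical sketch,
    not an existing teuthology interface."""

    def __init__(self, replay_path=None):
        self.log = []
        self.last = time.time()
        self.replay = None
        if replay_path:
            with open(replay_path) as f:
                self.replay = json.load(f)

    def decide(self, action, choices):
        if self.replay:
            event = self.replay.pop(0)
            time.sleep(event['delay'])  # reproduce the original pacing
            return event['value']
        value = random.choice(choices)
        now = time.time()
        self.log.append({'action': action,
                         'delay': round(now - self.last, 1),
                         'value': value})
        self.last = now
        return value

    def save(self, path):
        with open(path, 'w') as f:
            json.dump(self.log, f, indent=2)

# First (recording) run: decisions are random and get logged.
thrash = RecordedThrash()
osd = thrash.decide('kill_osd', choices=[0, 1, 2, 3])
thrash.save('thrashosd.events')

# Replay run: the same OSD is picked after the same delay.
replay = RecordedThrash(replay_path='thrashosd.events')
osd = replay.decide('kill_osd', choices=[0, 1, 2, 3])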

I realize this is non-trivial to implement, but maybe someone already thought about it and has a better idea?

Cheers

-- 
Loïc Dachary, Artisan Logiciel Libre




* Re: Deterministic thrashing
From: Gregory Farnum @ 2014-04-07 16:55 UTC
  To: Loic Dachary; +Cc: Ceph Development

This would be really nice but there are unfortunately even more
hiccups than you've noted here:
1) Thrashing is both time- and disk-access-sensitive, and hardware differs
2) The teuthology thrashing is triggered largely based on PG state
events (eg, "all PGs are clean, so restart an OSD"; a simplified
sketch follows after this list)
3) The actual failures tend to involve a combination of PG state and
inbound client operations, and I can't think of any realistic way to
coordinate those.
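
For what it's worth, here's a stripped-down illustration of point 2;
the "manager" object and its methods are placeholders loosely modeled
on what the thrasher does, not the real teuthology code:

import random
import time

def thrash_loop(manager, stop):
    # "stop" is a threading.Event; "manager" and its methods are
    # placeholders for whatever exposes PG state and OSD control.
    # The point: the trigger depends on live cluster state and
    # timing, not on a fixed schedule we could replay.
    while not stop.is_set():
        # Wait for a PG state event, e.g. "all PGs are clean".
        while manager.num_active_clean() < manager.num_pgs():
            time.sleep(5)
        # Then inject a failure: restart a randomly chosen OSD.
        osd = random.choice(manager.osd_ids())
        manager.kill_osd(osd)
        time.sleep(random.uniform(5, 30))
        manager.revive_osd(osd)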

Those problems look technically insurmountable to me, but maybe I'm
missing something?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Sun, Apr 6, 2014 at 3:29 AM, Loic Dachary <loic@dachary.org> wrote:
> Hi Ceph,
>
> It would be nice to have a way to replay the random events injected by stanzas such as
>
> - thrashosds:
>     chance_pgnum_grow: 2
>     chance_pgpnum_fix: 1
>
> When a teuthology workload (such as tracker.ceph.com/issues/7914#note-34) crashes once a week and the error is not obvious, replaying the same events would increase the probability of reproducing the crash. Instead of "thrashosds" we could have something like "recorded-thrashosds: thrashosd.events", and instead of being random the events would happen more deterministically (same number of events and same number of seconds between events?).
>
> I realize this is non-trivial to implement, but maybe someone already thought about it and has a better idea?
>
> Cheers
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>


* Re: Deterministic thrashing
From: Loic Dachary @ 2014-04-07 17:13 UTC
  To: Gregory Farnum; +Cc: Ceph Development




On 07/04/2014 18:55, Gregory Farnum wrote:
> This would be really nice but there are unfortunately even more
> hiccups than you've noted here:
> 1) Thrashing is both time- and disk-access-sensitive, and hardware differs
> 2) The teuthology thrashing is triggered largely based on PG state
> events (eg, "all PGs are clean, so restart an OSD")
> 3) The actual failures tend to involve a combination of PG state and
> inbound client operations, and I can't think of any realistic way to
> coordinate those.
> 
> Those problems look technically insurmountable to me, but maybe I'm
> missing something?

Is there no easy way to use the logs / events to significantly reduce the randomness of the workload? I honestly have no clue ;-)

Cheers

> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
> 
> 
> On Sun, Apr 6, 2014 at 3:29 AM, Loic Dachary <loic@dachary.org> wrote:
>> Hi Ceph,
>>
>> It would be nice to have a way to replay the random events injected by stanzas such as
>>
>> - thrashosds:
>>     chance_pgnum_grow: 2
>>     chance_pgpnum_fix: 1
>>
>> When a teuthology workload (such as tracker.ceph.com/issues/7914#note-34) crashes once a week and the error is not obvious, replaying the same events would increase the probability of reproducing the crash. Instead of "thrashosds" we could have something like "recorded-thrashosds: thrashosd.events", and instead of being random the events would happen more deterministically (same number of events and same number of seconds between events?).
>>
>> I realize this is non-trivial to implement, but maybe someone already thought about it and has a better idea?
>>
>> Cheers
>>
>> --
>> Loïc Dachary, Artisan Logiciel Libre
>>

-- 
Loïc Dachary, Artisan Logiciel Libre




* Re: Deterministic thrashing
From: Gregory Farnum @ 2014-04-07 17:16 UTC
  To: Loic Dachary; +Cc: Ceph Development

On Mon, Apr 7, 2014 at 10:13 AM, Loic Dachary <loic@dachary.org> wrote:
>
>
> On 07/04/2014 18:55, Gregory Farnum wrote:
>> This would be really nice but there are unfortunately even more
>> hiccups than you've noted here:
>> 1) Thrashing is both time- and disk-access-sensitive, and hardware differs
>> 2) The teuthology thrashing is triggered largely based on PG state
>> events (eg, "all PGs are clean, so restart an OSD")
>> 3) The actual failures tend to involve a combination of PG state and
>> inbound client operations, and I can't think of any realistic way to
>> coordinate those.
>>
>> Those problems look technically insurmountable to me, but maybe I'm
>> missing something?
>
> Is there no easy way to use the logs / events to significantly reduce the randomness of the workload? I honestly have no clue ;-)

I don't think so, no. :( We'd have to somehow order every event in the
system while not losing any of the system race conditions that were
previously triggered!
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com

