* [Draft] Testing Requirements for drm/i915 Patches
@ 2013-10-29 19:00 Daniel Vetter
From: Daniel Vetter @ 2013-10-29 19:00 UTC (permalink / raw)
  To: Intel Graphics Development

Hi all,

So in the past half year we've had tons of sometimes rather heated discussions
about getting patches merged. Often these discussions have been in the context
of specific patch series, which meant that people are already invested. Which
contributed to the boiling emotions. I'd like to avoid that here by making this
a free-standing discussion.

There's a bunch of smaller process tuning going on, but the big thing I'd like
to instate henceforth is that automated test coverage is a primary consideration
for anything going upstream. In this write up I'll detail my reasons,
considerations and expectations. My plan is to solicit feedback over the next
few days and then publish an edited and polished version to my blog.

After that I'll put down my foot on this process so that we can go back to
coding and stop blowing through so much time and energy on waging flamewars.

Feedback and critique highly welcome.

Cheers, Daniel

Testing Requirements for Upstreaming (Draft)
============================================

I want to make automated test coverage an integral part of our feature and bugfix
development process. For features this means that, starting with the design phase,
testability needs to be treated as an integral part of any feature. This needs
to carry through the entire development process, up to the point where the
implementation is submitted together with the proposed tests. For bugfixes it
means the fix is
only complete once the automated testcase for it is also done, if we need a new
one.

This specifically excludes testing with humans somewhere in the loop. We are
extremely limited in our validation resources; every time we put something new
onto the "manual testing" plate, something else _will_ fall off.

Why?
----

- More predictability. Right now test coverage often only comes up as a topic
  when I drop my maintainer review onto a patch series. Which is too late, since
  it'll delay the otherwise working patches and so massively frustrates people.
  I hope by making test requirements clear and up-front we can make the
  upstreaming process more predictable. Also, if we have good tests from the get-go
  there should be much less need for me to drop patches from my trees
  after having them merged.

- Less bikeshedding. In my opinion test cases are an excellent means to settle
  bikesheds - in the past we've seen endless back-and-forths where writing a
  simple testcase would have shown that _all_ proposed color flavours are
  actually broken.

  The even more important thing is that fully automated tests allow us to
  legitimately postpone cleanups. If the only testing we have is manual testing
  then we have only one shot at getting a feature tested, namely when the
  developer tests it. So it better be perfect. But with automated tests we can postpone
  cleanups with too high risks of regressions until a really clear need is
  established. And since that need often never materializes we'll save work.

- Better review. For me it often helps more to review the tests than to review
  the actual code in-depth. This is especially true for reviewing userspace
  interface additions.

- Actionable regression reports. Only if we have a fully automated testcase do
  we have a good chance that QA reports a regression within just a few days.
  Everything else can easily take weeks (for platforms and features which are
  explicitly tested) to months (for stuff only users from the community notice).
  And especially now that many more shipping products depend upon a working
  i915.ko driver we just can't afford that any more.

- Better tests. A lot of our code is really hard to test in an automated
  fashion, and pushing the frontier of what is testable often requires a lot of
  work. I hope that by making tests an integral part of any feature work and so
  forcing more people to work on them and think about testing we'll
  advance the state of the art at a brisker pace.

Risks and Buts
--------------

- Bikeshedding on tests. This plan is obviously not too useful if we just
  replace massive bikeshedding on patches with massive bikeshedding on
  testcases. But right now we do almost no review on i-g-t patches so the risk
  is small. Long-term the review requirements for testcases will certainly
  increase, but as with everything else we simply need to strike a good balance
  for just the right amount of review.

  Also if we really start discussing tests _before_ having written massive patch
  series we'll do the bikeshedding while there's no real rebase pain. So even if
  the bikeshedding just shifts we'll benefit I think, especially for
  really big features.

- Technical debt in test coverage. We have a lot of old code which still
  completely lacks testcases. Which means that even small feature work might be
  on the hook for a big pile of debt restructuring. I think this is inevitable
  occasionally. But I think that doing an assessment of the current state of
  test coverage of the existing code _before_ starting a feature, instead of
  when the patches are ready for merging, should help a lot - before everyone
  is already invested in the patches and mounting rebase pain looms large.

  Again we need to strive for a good balance between "too many tests to write
  up-front for old code" and "needs for tests that only the final review
  uncovers creating process bubbles".

- Upstreaming of product stuff. Product guys are notoriously busy and writing
  tests is actual work. Otoh the upstream codebase feeds back into _all_ product
  trees (and the upstream kernel), so requirements are simply a bit higher. And
  I also don't think that we can push the testing of some features fully to
  product teams, since they'll be pissed really quickly if every update they get
  from us breaks their stuff. So if these additional test requirements (compared
  to the past) mean that some product patches won't get merged, then I think
  that's the right choice.

- But ... all the other kernel drivers don't do this. We're also one of the
  biggest drivers in the kernel, with a code churn rate roughly 5x higher than
  anything else and a pretty big (and growing) team. Also, we're often the
  critical path in enabling new platforms in the fast-paced mobile space.
  Different standards apply.

Expectations
------------

Since the point here is to make the actual test requirements known up-front we
need to settle on clear expectations. Since this is the part that actually
matters in practice I'll really welcome close scrutiny and comments here.

- Tests must fully cover userspace interfaces. By this I mean exercising all the
  possible options, especially the usual tricky corner cases (e.g. off-by-one
  array sizes, overflows). It also needs to include tests for all the
  userspace input validation (i.e. correctly rejecting invalid input,
  including checks for the error codes). For userspace interface additions
  technical debt really must be addressed. This means that if you add a
  new flag and we currently don't have any tests for the existing flags, I'll
  ask for a testcase which fully exercises all the flag values we currently
  support on top of the new interface addition. See the sketch after this
  list for the rough shape of such a test.

- Tests need to provide a reasonable baseline coverage of the internal driver
  state. The idea here isn't to aim for full coverage, that's an impossible and
  pointless endeavor. The goal is to have a good starting point of tests so that
  when a tricky corner case pops up in review or validation it's not a terribly
  big effort to add a specific testcase for it.

- Issues discovered in review and final validation need automated test coverage.
  The reasoning is that anything which slipped the developer's attention is
  tricky enough to warrant an explicit testcase, since in a later refactoring
  there's a good chance that it'll be missed again. This carries a bit of a risk
  of delaying patches, but if the basic test coverage is good enough as per the
  previous point it really shouldn't be an issue.

- Finally we need to push the testable frontier with new ideas like pipe CRCs,
  modeset state cross checking or arbitrary monitor configuration injection
  (with fixed EDIDs and connector state forcing). The point here is to foster
  new crazy ideas, and the expectation is very much _not_ that developers then
  need to write testcases for all the old bugfixes that suddenly became
  testable. That workload needs to be spread out over a bunch of features
  touching the relevant area. This only really applies to features and
  code paths which are currently in the "not testable" bucket anyway.
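
To make the first expectation a bit more concrete, below is a bare-bones sketch
of the shape such a negative test takes: feed the ioctl invalid input and check
both that it fails and that it fails with the errno we've committed to. A real
i-g-t test would use the usual helpers (drm_open_any() and friends) and build
against the libdrm include path; the choice of ioctl and the exact errno
(EINVAL for a zero-sized GEM create) are my assumptions for illustration only.

/* negative uapi test sketch: invalid input must fail with the documented errno */
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <i915_drm.h>

int main(void)
{
	struct drm_i915_gem_create create;
	int fd = open("/dev/dri/card0", O_RDWR);

	assert(fd >= 0);

	memset(&create, 0, sizeof(create));
	create.size = 0;	/* invalid on purpose */

	/* must be rejected, and rejected with the right error code */
	assert(ioctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create) == -1);
	assert(errno == EINVAL);

	return 0;
}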

This should specify the "what" decently enough, but we also need to look at how
tests should work.

Specific testcases in i-g-t are obviously the preferred form, but for some
features that's just not possible. In such cases in-kernel self-checks like
the modeset state checker or fifo underrun reporting are really good
approaches. Two caveats apply:

- The test infrastructure really should be orthogonal to the code being tested.
  In-line asserts that check for preconditions are really nice and useful, but
  since they're closely tied to the code itself they have a good chance of
  being broken in the same ways.

- The debug feature needs to be enabled by default, and it needs to be loud.
  Otherwise no one will notice that something is amiss. So currently the fifo
  underrun reporting doesn't really count since it only causes debug level
  output when something goes wrong. Of course it's still a really good tool for
  developers, just not yet for catching regressions.
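
As a toy illustration of both caveats - the check reads the state back on its
own instead of trusting the code under test, and it complains via WARN so it's
loud on any kernel - something like the fragment below is what I have in mind.
Register and macro names are from memory and this is only a sketch, not an
actual patch.

static void check_pipe_enabled(struct drm_i915_private *dev_priv,
			       enum pipe pipe, bool expected)
{
	/* read the state back from the hardware, don't trust the sw tracking */
	bool enabled = I915_READ(PIPECONF(pipe)) & PIPECONF_ENABLE;

	/* WARN fires on production configs too, so QA and users will see it */
	WARN(enabled != expected,
	     "pipe %c state mismatch (expected %s, found %s)\n",
	     pipe_name(pipe), expected ? "on" : "off",
	     enabled ? "on" : "off");

	/* a DRM_DEBUG_KMS() here instead would be the "too quiet" variant */
}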

Finally, the short list of excuses that don't count as proper test coverage for
a feature.

- Manual testing. We are ridiculously limited on our QA manpower. Every time we
  drop something onto the "manual testing" plate something else _will_ drop off.
  Which means in the end that we don't really have any test coverage. So
  if patches don't come with automated tests, in-kernel cross-checking or
  some other form of validation attached they need to have really good
  reasons for doing so.

- Testing by product teams. The entire point of Intel OTC's "upstream first"
  strategy is to have a common codebase for everyone. If we break product trees
  every time we feed an update into them because we can't properly regression
  test a given feature then the value of upstreaming features is greatly
  diminished in my opinion and could potentially doom collaborations with
  product teams. We just can't have that.

  This means that when product teams submit patches upstream they also need
  to submit the relevant testcases to i-g-t.
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

* Re: [Draft] Testing Requirements for drm/i915 Patches
@ 2013-10-29 23:38 ` Jesse Barnes
From: Jesse Barnes @ 2013-10-29 23:38 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: Intel Graphics Development

Since a number of people internally are also involved in i915
development but not on the mailing list, I think we'll need to have an
internal meeting or two to cover this stuff and get buy in.

Overall, developing tests along with code is a good goal.  A few
comments below.

On Tue, 29 Oct 2013 20:00:49 +0100
Daniel Vetter <daniel@ffwll.ch> wrote:

> - Tests must fully cover userspace interfaces. By this I mean exercising all the
[snip]
> - Tests need to provide a reasonable baseline coverage of the internal driver
>   state. The idea here isn't to aim for full coverage, that's an impossible and
[snip]

What you've described here is basically full validation.  Something
that most groups at Intel have large teams to dedicate all their time
to.  I'm not sure how far we can go down this path with just the
development resources we have today (though maybe we'll get some help
from the validation teams in the product groups).

> Finally, the short list of excuses that don't count as proper test coverage for
> a feature.
> 
> - Manual testing. We are ridiculously limited on our QA manpower. Every time we
>   drop something onto the "manual testing" plate something else _will_ drop off.
>   Which means in the end that we don't really have any test coverage. So
>   if patches don't come with automated tests, in-kernel cross-checking or
>   some other form of validation attached they need to have really good
>   reasons for doing so.

Some things are only testable manually at this point, since we don't
have a sophisticated webcam structure set up for everything (and in
fact, the webcam tests we do have are fairly manual at this point, in
that they have to be set up specially each time).

> - Testing by product teams. The entire point of Intel OTC's "upstream first"
>   strategy is to have a common codebase for everyone. If we break product trees
>   every time we feed an update into them because we can't properly regression
>   test a given feature then the value of upstreaming features is greatly
>   diminished in my opinion and could potentially doom collaborations with
>   product teams. We just can't have that.
> 
>   This means that when product teams submit patches upstream they also need
>   to submit the relevant testcases to i-g-t.

So what I'm hearing here is that even if someone submits a tested
patch, with tests available (and passing) somewhere other than i-g-t,
you'll reject them until they port/write a new test for i-g-t.  Is that
what you meant?  I think a more reasonable criterion would be that tests
from non-i-g-t test suites are available and run by our QA, or run
against upstream kernels by groups other than our QA.  That should keep
a lid on regressions just as well.

One thing you didn't mention here is that our test suite is starting to
see as much churn as (and more breakage than) upstream.  If you look at
recent results from QA, you'd think SNB was totally broken based on
i-g-t results.  But despite that, desktops come up fine and things
generally work.  So we need to take care that our tests are simple and
that our test library code doesn't see massive churn causing false
positive breakage all the time.  In other words, tests are just as
likely to be broken (reporting false breakage or false passing) as the
code they're testing.  The best way to avoid that is to keep the tests
very small, simple, and targeted.  Converting and refactoring code in
i-g-t to allow that will be a big chunk of work.

-- 
Jesse Barnes, Intel Open Source Technology Center

* Re: [Draft] Testing Requirements for drm/i915 Patches
@ 2013-10-30 11:38   ` Daniel Vetter
From: Daniel Vetter @ 2013-10-30 11:38 UTC (permalink / raw)
  To: Jesse Barnes; +Cc: Intel Graphics Development

On Wed, Oct 30, 2013 at 12:38 AM, Jesse Barnes <jbarnes@virtuousgeek.org> wrote:
> Since a number of people internally are also involved in i915
> development but not on the mailing list, I think we'll need to have an
> internal meeting or two to cover this stuff and get buy in.

Yeah I'll do that. I simply didn't get around to spamming internal
mailing lists with this yesterday evening.

> Overall, developing tests along with code is a good goal.  A few
> comments below.
>
> On Tue, 29 Oct 2013 20:00:49 +0100
> Daniel Vetter <daniel@ffwll.ch> wrote:
>
>> - Tests must fully cover userspace interfaces. By this I mean exercising all the
> [snip]
>> - Tests need to provide a reasonable baseline coverage of the internal driver
>>   state. The idea here isn't to aim for full coverage, that's an impossible and
> [snip]
>
> What you've described here is basically full validation.  Something
> that most groups at Intel have large teams to dedicate all their time
> to.  I'm not sure how far we can go down this path with just the
> development resources we have today (though maybe we'll get some help
> from the validation teams in the product groups)

I guess you can call it validation, I'd go with test driven
development instead. Also my main focus is on userspace interfaces
(due to the security relevance and that we need to keep them working
forever) and bugs that review/testing/users caught. The general
coverage is just to avoid people getting royally pissed off since
starting to write tests once everything is essentially ready for
merging (like we've done with ppgtt) is just too late.

>> Finally, the short list of excuses that don't count as proper test coverage for
>> a feature.
>>
>> - Manual testing. We are ridiculously limited on our QA manpower. Every time we
>>   drop something onto the "manual testing" plate something else _will_ drop off.
>>   Which means in the end that we don't really have any test coverage. So
>>   if patches don't come with automated tests, in-kernel cross-checking or
>>   some other form of validation attached they need to have really good
>>   reasons for doing so.
>
> Some things are only testable manually at this point, since we don't
> have a sophisticated webcam structure set up for everything (and in
> fact, the webcam tests we do have are fairly manual at this point, in
> that they have to be set up specially each time).

I agree that not everything is testable in an automated way. And I
also think that anything which requires special setup like webcams
isn't really useful, since it's a pain to replicate and for developers
it's easier to just look at the screen. But if you look at the past 2
years of i-g-t progress we've come up with some neat tricks:
- fake gpu hang injection. Before that our reset code was essentially
untested; since then we've spent 1+ years fixing bugs. Nowadays gpu
hang handling actually works, which is great for developers and also
improves the user experience a lot.
- exercising gem slowpath: We've added tricks like using gtt mmaps to
precisely provoke pagefaults and then added prefault disabling and
Chris' drop_caches to debugfs to exercise more cornercases. Nowadays
i-g-t exercises the full slowpath-in-slowpath madness in execbuf.
- CRC checksums: It's still very early, but Ville has already pushed a
testcase for all the cursor bugs he's fixed in the past (and
discovered yet another one on top).
- fifo underrun reporting: Unfortunately not yet enabled by default
(since our watermark code is still buggy), but it seems like Ville and
Paulo have found this to be a really powerful tool for catching
watermark bugs.
- modeset state checker in the kernel: I don't think we could have
pushed through all the big reworks we've done in the past year in our
modeset code without this.
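
To give a feel for how simple some of these tricks are on the userspace side,
here's roughly what the hang injection boils down to - the debugfs path and
file name are from memory, so treat them as an assumption rather than gospel:

#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/sys/kernel/debug/dri/0/i915_wedged", O_WRONLY);

	if (fd < 0) {
		perror("open i915_wedged (debugfs mounted?)");
		return 77; /* skip */
	}

	/* writing a non-zero value asks the driver to treat the gpu as hung */
	assert(write(fd, "1", 1) == 1);
	close(fd);

	/*
	 * a real i-g-t test then submits batches and checks that they either
	 * complete or fail gracefully once the reset has run its course
	 */
	return 0;
}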

There's more, and I already have crazy ideas for more infrastructure
to make more bugs and features testable (like all the ugly 3 pipe
resource sharing issues we have on ivb/hsw).

Essentially I just want people to spend a bit of their brain power on
coming up with new ideas for stuff that's currently untestable. That
doesn't mean that it's a hard requirement since very often all the
ideas just suck (or are way too much effort to be able to inject the
required state and then check it). But occasionally something really
good will come out of this.

>> - Testing by product teams. The entire point of Intel OTC's "upstream first"
>>   strategy is to have a common codebase for everyone. If we break product trees
>>   every time we feed an update into them because we can't properly regression
>>   test a given feature then the value of upstreaming features is greatly
>>   diminished in my opinion and could potentially doom collaborations with
>>   product teams. We just can't have that.
>>
>>   This means that when product teams submit patches upstream they also need
>>   to submit the relevant testcases to i-g-t.
>
> So what I'm hearing here is that even if someone submits a tested
> patch, with tests available (and passing) somewhere other than i-g-t,
> you'll reject them until they port/write a new test for i-g-t.  Is that
> what you meant?  I think a more reasonable criterion would be that tests
> from non-i-g-t test suites are available and run by our QA, or run
> against upstream kernels by groups other than our QA.  That should keep
> a lid on regressions just as well.

Essentially yes, I'd like to reject stuff which we can't test in
upstream. If we have other testsuites and they're integrated into both
QA and available to developers (e.g. how the mesa teams have a piglit
wrapper for oglc tests), then that's also ok.

But writing good tests is pretty hard, so I think actually
collaborating is much better than every product group having their own
set of testcases for bare-metal kernel testing (integration testing is
a different matter ofc). We might need to massage i-g-t a bit to make
that possible though. Also for developers it's nice if all the tests
and tools are in the same repo, to avoid the need to set things up
first when our qa reports a regression.

Note that we already have such an exception of sorts: The functional
tests for hw contexts are explicitly done through mesa/piglit since it
would be simply insane to do that on bare metal. So I'm not
fundamentally opposed to external tests.

> One thing you didn't mention here is that our test suite is starting to
> see as much churn as (and more breakage than) upstream.  If you look at
> recent results from QA, you'd think SNB was totally broken based on
> i-g-t results.  But despite that, desktops come up fine and things
> generally work.  So we need to take care that our tests are simple and
> that our test library code doesn't see massive churn causing false
> positive breakage all the time.  In other words, tests are just as
> likely to be broken (reporting false breakage or false passing) as the
> code they're testing.  The best way to avoid that is to keep the tests
> very small, simple, and targeted.  Converting and refactoring code in
> i-g-t to allow that will be a big chunk of work.

Well long term I think we need to do the same thing the mesa guys are
doing with piglits and require review on every i-g-t patch. For now
the commit&fix-later approach seems to still be workable (and I admit
to having abused QA a bit for the big infrastructure rework I've done in
the past few months).

So right now I don't see any real need for refactoring i-g-t. Also
I've done a quick bug scrub (some old reports I've failed to close)
and run a full i-g-t on my snb and stuff works. So I also don't see
your massive pile of failures.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

* Re: [Draft] Testing Requirements for drm/i915 Patches
@ 2013-10-30 17:30 ` Jesse Barnes
From: Jesse Barnes @ 2013-10-30 17:30 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: Intel Graphics Development

On Tue, 29 Oct 2013 20:00:49 +0100
Daniel Vetter <daniel@ffwll.ch> wrote:

> Since the point here is to make the actual test requirements known up-front we
> need to settle on clear expectations. Since this is the part that actually
> matters in practice I'll really welcome close scrutiny and comments here.

Another thought on these lines.

The expectations come from more than just the test side of things, but
also from the design of new features, or how code is refactored for a
new platform or to fix a bug.

So it may make sense, before starting big work, to propose both the
tests required for the feature/refactor/bug fix, as well as the
proposed design of the change.  That would let us get any potential
test & feature bikeshedding out of the way before tons of time is
invested in the code, hopefully saving a lot of rage, especially on the
big features.

Note that this runs contrary to a lot of other subsystems, which are
sometimes allergic to up front design thinking, and prefer to churn
through dozens of implementations to settle on something.  Both
approaches have value, and some combination is probably best.  Some
well thought out discussion up front, followed by testing, review, and
adjustment of an actual implementation.

Getting back to your points:
  - tests must cover userspace interfaces
  - tests need to provide coverage of internal driver state
  - tests need to be written following review for any bugs caught in
    review (I don't fully agree here; we can look at this on a case by
    case basis; e.g. in some cases an additional BUG_ON or state check
    may be all that's needed)
  - test infrastructure and scope should increase over time

I think we can discuss the above points as part of any new proposed
feature or rework.  For reworks in particular, we can probably start to
address some of the "technical debt" you mentioned, where the bar for a
rework is relatively high, requiring additional tests and
infrastructure scope.

-- 
Jesse Barnes, Intel Open Source Technology Center

* Re: [Draft] Testing Requirements for drm/i915 Patches
@ 2013-10-30 18:22   ` Daniel Vetter
From: Daniel Vetter @ 2013-10-30 18:22 UTC (permalink / raw)
  To: Jesse Barnes; +Cc: Intel Graphics Development

On Wed, Oct 30, 2013 at 6:30 PM, Jesse Barnes <jbarnes@virtuousgeek.org> wrote:
> On Tue, 29 Oct 2013 20:00:49 +0100
> Daniel Vetter <daniel@ffwll.ch> wrote:
>
>> Since the point here is to make the actual test requirements known up-front we
>> need to settle on clear expectations. Since this is the part that actually
>> matters in practice I'll really welcome close scrutiny and comments here.
>
> Another thought on these lines.
>
> The expectations come from more than just the test side of things, but
> also from the design of new features, or how code is refactored for a
> new platform or to fix a bug.
>
> So it may make sense, before starting big work, to propose both the
> tests required for the feature/refactor/bug fix, as well as the
> proposed design of the change.  That would let us get any potential
> test & feature bikeshedding out of the way before tons of time is
> invested in the code, hopefully saving a lot of rage, especially on the
> big features.

Yeah, that's actually very much the core idea I have. Using ppgtt as
an example I think a reasonable test coverage we could have discussed
upfront would have been:
- make sure all the slowpaths in execbuf are really covered with
faulting testcases (maybe even concurrent thrashing ones), iirc we've
still had a bunch of gaping holes there
- filling in testcases for specific corner cases we've had to fix in
the execbuf/gem code in the past (e.g. the gen6 ppgtt w/a)
- a generic "let's thrash the crap out of gem" test as a baseline,
which will be fleshed out while developing the patches to hit the
low-mem corner-cases when vma setup fails midway through an execbuf
somewhere.
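
For the last point, the baseline thrasher really doesn't need to be clever to
be useful. Something along these lines, just scaled up and randomized, is
roughly what I have in mind - buffer count and size are picked arbitrarily
here, and it's a sketch compiled against libdrm_intel, not an actual i-g-t
testcase:

#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <intel_bufmgr.h>

#define NUM_BOS  4096
#define BO_SIZE  (1024 * 1024)

int main(void)
{
	int fd = open("/dev/dri/card0", O_RDWR);
	drm_intel_bufmgr *bufmgr;
	drm_intel_bo *bo[NUM_BOS];
	int i;

	assert(fd >= 0);
	bufmgr = drm_intel_bufmgr_gem_init(fd, 4096);
	assert(bufmgr);

	for (i = 0; i < NUM_BOS; i++) {
		bo[i] = drm_intel_bo_alloc(bufmgr, "thrash", BO_SIZE, 4096);
		assert(bo[i]);

		/* gtt mmap + write forces binding and, under pressure, eviction */
		assert(drm_intel_gem_bo_map_gtt(bo[i]) == 0);
		memset(bo[i]->virtual, i & 0xff, BO_SIZE);
		drm_intel_gem_bo_unmap_gtt(bo[i]);
	}

	for (i = 0; i < NUM_BOS; i++)
		drm_intel_bo_unreference(bo[i]);

	drm_intel_bufmgr_destroy(bufmgr);
	return 0;
}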

Later on (and this is actually what happened, since I'd totally forgotten
about it) we'd discover that we need a testcase for the secure
dispatch stuff. But since we already had a few nice testcases to throw
crazy execbufs at the kernel that was just a little work on top.

> Note that this runs contrary to a lot of other subsystems, which are
> sometimes allergic to up front design thinking, and prefer to churn
> through dozens of implementations to settle on something.  Both
> approaches have value, and some combination is probably best.  Some
> well thought out discussion up front, followed by testing, review, and
> adjustment of an actual implementation.

Yeah, I agree that we need more planning than what's the usual approach.

> Getting back to your points:
>   - tests must cover userspace interfaces
>   - tests need to provide coverage of internal driver state
>   - tests need to be written following review for any bugs caught in
>     review (I don't fully agree here; we can look at this on a case by
>     case basis; e.g. in some cases an additional BUG_ON or state check
>     may be all that's needed)

I guess I need to clarify this: Tests don't need to be special i-g-ts,
in-kernel self-checks are also good (and often the best approach). For
WARN_ONs we just need to try to make them as orthogonal as possible to
the code itself (refcount checks are always great at that) to minimize
the chances of both the code and the check being broken in the same
way.
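
A made-up example of what I mean by orthogonal: instead of asserting on the
pointers the modeset code just touched, recount the relationship from the
other side and compare against the bookkeeping. The walk below is purely
illustrative, not a proposed check:

static void check_encoder_sharing(struct drm_device *dev)
{
	struct drm_crtc *crtc;
	struct drm_encoder *encoder;
	int users;

	list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
		users = 0;

		/* recompute the encoder usage from the crtc side */
		list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
			if (crtc->enabled && encoder->crtc == crtc)
				users++;

		/* loud on purpose: a mismatch means our bookkeeping lies */
		WARN(users > 1,
		     "encoder %u claimed by %d enabled crtcs\n",
		     encoder->base.id, users);
	}
}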

>   - test infrastructure and scope should increase over time
>
> I think we can discuss the above points as part of any new proposed
> feature or rework.  For reworks in particular, we can probably start to
> address some of the "technical debt" you mentioned, where the bar for a
> rework is relatively high, requiring additional tests and
> infrastructure scope.

Yeah I think having an upfront discussion about testing and what should
be done for big features/reworks would be really useful. Both to make
sure that we have an optimal set of tests (not too much or too little)
to get the patches merged as smoothly as possible, but also to evenly
distribute plugging test gaps for existing features.

Especially when people add new test infrastructure (like EDID
injection or pipe CRC support) we should be really lenient about the
coverage for new features. Otherwise doing the feature, the test
infrastructure _and_ all the tests for the feature is too much. So for
completely new test approaches I think it's more than good enough to
just deliver that plus a small proof-of-concept testcase - advancing
the testable scope itself is extremely valuable. Later on extension
work can then start to fill in the gaps.
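
As an aside, the proof-of-concept really can be tiny. For pipe CRCs something
like the below would already be a start - the debugfs file names
(i915_display_crc_ctl and i915_pipe_crc_A) and the control syntax are from
memory, so double-check them before relying on this sketch:

#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char crc_before[128], crc_after[128];
	int ctl, data;

	ctl = open("/sys/kernel/debug/dri/0/i915_display_crc_ctl", O_WRONLY);
	if (ctl < 0)
		return 77; /* skip: no crc support */

	/* arm crc capture on pipe A with the default source */
	assert(write(ctl, "pipe A auto", 11) == 11);

	data = open("/sys/kernel/debug/dri/0/i915_pipe_crc_A", O_RDONLY);
	assert(data >= 0);

	/* one crc per frame; identical frames must give identical crcs */
	assert(read(data, crc_before, sizeof(crc_before)) > 0);
	/* ... draw the reference frame and the test frame in between ... */
	assert(read(data, crc_after, sizeof(crc_after)) > 0);

	printf("before: %.32s\nafter:  %.32s\n", crc_before, crc_after);

	close(data);
	close(ctl);
	return 0;
}
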
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
