From mboxrd@z Thu Jan 1 00:00:00 1970 From: Brendan Higgins Subject: [PATCH v1 14/17] Documentation: kunit: add documentation for KUnit Date: Thu, 4 Apr 2019 15:06:49 -0700 Message-ID: <20190404220652.19765-15-brendanhiggins@google.com> References: <20190404220652.19765-1-brendanhiggins@google.com> Mime-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: base64 Return-path: In-Reply-To: <20190404220652.19765-1-brendanhiggins@google.com> List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" To: corbet@lwn.net, frowand.list@gmail.com, keescook@google.com, kieran.bingham@ideasonboard.com, mcgrof@kernel.org, robh@kernel.org, shuah@kernel.org, yamada.masahiro@socionext.com Cc: pmladek@suse.com, linux-doc@vger.kernel.org, amir73il@gmail.com, Brendan Higgins , dri-devel@lists.freedesktop.org, Alexander.Levin@microsoft.com, linux-kselftest@vger.kernel.org, linux-nvdimm@lists.01.org, khilman@baylibre.com, knut.omang@oracle.com, Felix Guo , wfg@linux.intel.com, joel@jms.id.au, jdike@addtoit.com, dan.carpenter@oracle.com, devicetree@vger.kernel.org, linux-kbuild@vger.kernel.org, Tim.Bird@sony.com, linux-um@lists.infradead.org, rostedt@goodmis.org, julia.lawall@lip6.fr, dan.j.williams@intel.com, kunit-dev@googlegroups.com, richard@nod.at, gregkh@linuxfoundation.org, linux-kernel@vger.kernel.org, mpe@ellerman.id.au, linux-fsdevel@vger.kernel.org List-Id: linux-nvdimm@lists.01.org QWRkIGRvY3VtZW50YXRpb24gZm9yIEtVbml0LCB0aGUgTGludXgga2VybmVsIHVuaXQgdGVzdGlu ZyBmcmFtZXdvcmsuCi0gQWRkIGludHJvIGFuZCB1c2FnZSBndWlkZSBmb3IgS1VuaXQKLSBBZGQg QVBJIHJlZmVyZW5jZQoKU2lnbmVkLW9mZi1ieTogRmVsaXggR3VvIDxmZWxpeGd1b3hpdXBpbmdA Z21haWwuY29tPgpTaWduZWQtb2ZmLWJ5OiBCcmVuZGFuIEhpZ2dpbnMgPGJyZW5kYW5oaWdnaW5z QGdvb2dsZS5jb20+Ci0tLQogRG9jdW1lbnRhdGlvbi9pbmRleC5yc3QgICAgICAgICAgIHwgICAx ICsKIERvY3VtZW50YXRpb24va3VuaXQvYXBpL2luZGV4LnJzdCB8ICAxNiArKwogRG9jdW1lbnRh 
dGlvbi9rdW5pdC9hcGkvdGVzdC5yc3QgIHwgIDE1ICsKIERvY3VtZW50YXRpb24va3VuaXQvZmFx LnJzdCAgICAgICB8ICA0NiArKysKIERvY3VtZW50YXRpb24va3VuaXQvaW5kZXgucnN0ICAgICB8 ICA4MCArKysrKysKIERvY3VtZW50YXRpb24va3VuaXQvc3RhcnQucnN0ICAgICB8IDE4MCArKysr KysrKysrKysKIERvY3VtZW50YXRpb24va3VuaXQvdXNhZ2UucnN0ICAgICB8IDQ0NyArKysrKysr KysrKysrKysrKysrKysrKysrKysrKysKIDcgZmlsZXMgY2hhbmdlZCwgNzg1IGluc2VydGlvbnMo KykKIGNyZWF0ZSBtb2RlIDEwMDY0NCBEb2N1bWVudGF0aW9uL2t1bml0L2FwaS9pbmRleC5yc3QK IGNyZWF0ZSBtb2RlIDEwMDY0NCBEb2N1bWVudGF0aW9uL2t1bml0L2FwaS90ZXN0LnJzdAogY3Jl YXRlIG1vZGUgMTAwNjQ0IERvY3VtZW50YXRpb24va3VuaXQvZmFxLnJzdAogY3JlYXRlIG1vZGUg MTAwNjQ0IERvY3VtZW50YXRpb24va3VuaXQvaW5kZXgucnN0CiBjcmVhdGUgbW9kZSAxMDA2NDQg RG9jdW1lbnRhdGlvbi9rdW5pdC9zdGFydC5yc3QKIGNyZWF0ZSBtb2RlIDEwMDY0NCBEb2N1bWVu dGF0aW9uL2t1bml0L3VzYWdlLnJzdAoKZGlmZiAtLWdpdCBhL0RvY3VtZW50YXRpb24vaW5kZXgu cnN0IGIvRG9jdW1lbnRhdGlvbi9pbmRleC5yc3QKaW5kZXggODBhNDIxY2I5MzVlNy4uMjY0Y2Zk NjEzYTc3NCAxMDA2NDQKLS0tIGEvRG9jdW1lbnRhdGlvbi9pbmRleC5yc3QKKysrIGIvRG9jdW1l bnRhdGlvbi9pbmRleC5yc3QKQEAgLTY1LDYgKzY1LDcgQEAgbWVyZ2VkIG11Y2ggZWFzaWVyLgog ICAga2VybmVsLWhhY2tpbmcvaW5kZXgKICAgIHRyYWNlL2luZGV4CiAgICBtYWludGFpbmVyL2lu ZGV4CisgICBrdW5pdC9pbmRleAogCiBLZXJuZWwgQVBJIGRvY3VtZW50YXRpb24KIC0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLQpkaWZmIC0tZ2l0IGEvRG9jdW1lbnRhdGlvbi9rdW5pdC9hcGkvaW5k ZXgucnN0IGIvRG9jdW1lbnRhdGlvbi9rdW5pdC9hcGkvaW5kZXgucnN0Cm5ldyBmaWxlIG1vZGUg MTAwNjQ0CmluZGV4IDAwMDAwMDAwMDAwMDAuLmMzMWM1MzAwODgxNTMKLS0tIC9kZXYvbnVsbAor KysgYi9Eb2N1bWVudGF0aW9uL2t1bml0L2FwaS9pbmRleC5yc3QKQEAgLTAsMCArMSwxNiBAQAor Li4gU1BEWC1MaWNlbnNlLUlkZW50aWZpZXI6IEdQTC0yLjAKKworPT09PT09PT09PT09PQorQVBJ IFJlZmVyZW5jZQorPT09PT09PT09PT09PQorLi4gdG9jdHJlZTo6CisKKwl0ZXN0CisKK1RoaXMg c2VjdGlvbiBkb2N1bWVudHMgdGhlIEtVbml0IGtlcm5lbCB0ZXN0aW5nIEFQSS4gSXQgaXMgZGl2 aWRlZCBpbnRvIDMKK3NlY3Rpb25zOgorCis9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT0gPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQorOmRvYzpg 
dGVzdGAgICAgICAgICAgICAgICAgICAgICAgIGRvY3VtZW50cyBhbGwgb2YgdGhlIHN0YW5kYXJk IHRlc3RpbmcgQVBJCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZXhjbHVkaW5n IG1vY2tpbmcgb3IgbW9ja2luZyByZWxhdGVkIGZlYXR1cmVzLgorPT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09ID09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT0KZGlmZiAtLWdpdCBhL0RvY3VtZW50YXRpb24va3VuaXQvYXBpL3Rlc3QucnN0IGIvRG9j dW1lbnRhdGlvbi9rdW5pdC9hcGkvdGVzdC5yc3QKbmV3IGZpbGUgbW9kZSAxMDA2NDQKaW5kZXgg MDAwMDAwMDAwMDAwMC4uN2M5MjYwMTRmMDQ3YwotLS0gL2Rldi9udWxsCisrKyBiL0RvY3VtZW50 YXRpb24va3VuaXQvYXBpL3Rlc3QucnN0CkBAIC0wLDAgKzEsMTUgQEAKKy4uIFNQRFgtTGljZW5z ZS1JZGVudGlmaWVyOiBHUEwtMi4wCisKKz09PT09PT09CitUZXN0IEFQSQorPT09PT09PT0KKwor VGhpcyBmaWxlIGRvY3VtZW50cyBhbGwgb2YgdGhlIHN0YW5kYXJkIHRlc3RpbmcgQVBJIGV4Y2x1 ZGluZyBtb2NraW5nIG9yIG1vY2tpbmcKK3JlbGF0ZWQgZmVhdHVyZXMuCisKKy4uIGtlcm5lbC1k b2M6OiBpbmNsdWRlL2t1bml0L3Rlc3QuaAorICAgOmludGVybmFsOgorCisuLiBrZXJuZWwtZG9j OjogaW5jbHVkZS9rdW5pdC9rdW5pdC1zdHJlYW0uaAorICAgOmludGVybmFsOgorCmRpZmYgLS1n aXQgYS9Eb2N1bWVudGF0aW9uL2t1bml0L2ZhcS5yc3QgYi9Eb2N1bWVudGF0aW9uL2t1bml0L2Zh cS5yc3QKbmV3IGZpbGUgbW9kZSAxMDA2NDQKaW5kZXggMDAwMDAwMDAwMDAwMC4uY2I4ZTRmYjIy NTdhMAotLS0gL2Rldi9udWxsCisrKyBiL0RvY3VtZW50YXRpb24va3VuaXQvZmFxLnJzdApAQCAt MCwwICsxLDQ2IEBACisuLiBTUERYLUxpY2Vuc2UtSWRlbnRpZmllcjogR1BMLTIuMAorCis9PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQorRnJlcXVlbnRseSBBc2tlZCBR dWVzdGlvbnMKKz09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CisKK0hv dyBpcyB0aGlzIGRpZmZlcmVudCBmcm9tIEF1dG90ZXN0LCBrc2VsZnRlc3QsIGV0Yz8KKz09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KK0tVbml0IGlz IGEgdW5pdCB0ZXN0aW5nIGZyYW1ld29yay4gQXV0b3Rlc3QsIGtzZWxmdGVzdCAoYW5kIHNvbWUg b3RoZXJzKSBhcmUKK25vdC4KKworQSBgdW5pdCB0ZXN0IDxodHRwczovL21hcnRpbmZvd2xlci5j b20vYmxpa2kvVW5pdFRlc3QuaHRtbD5gXyBpcyBzdXBwb3NlZCB0bwordGVzdCBhIHNpbmdsZSB1 bml0IG9mIGNvZGUgaW4gaXNvbGF0aW9uLCBoZW5jZSB0aGUgbmFtZS4gQSB1bml0IHRlc3Qgc2hv 
dWxkIGJlCit0aGUgZmluZXN0IGdyYW51bGFyaXR5IG9mIHRlc3RpbmcgYW5kIGFzIHN1Y2ggc2hv dWxkIGFsbG93IGFsbCBwb3NzaWJsZSBjb2RlCitwYXRocyB0byBiZSB0ZXN0ZWQgaW4gdGhlIGNv ZGUgdW5kZXIgdGVzdDsgdGhpcyBpcyBvbmx5IHBvc3NpYmxlIGlmIHRoZSBjb2RlCit1bmRlciB0 ZXN0IGlzIHZlcnkgc21hbGwgYW5kIGRvZXMgbm90IGhhdmUgYW55IGV4dGVybmFsIGRlcGVuZGVu Y2llcyBvdXRzaWRlIG9mCit0aGUgdGVzdCdzIGNvbnRyb2wgbGlrZSBoYXJkd2FyZS4KKworVGhl cmUgYXJlIG5vIHRlc3RpbmcgZnJhbWV3b3JrcyBjdXJyZW50bHkgYXZhaWxhYmxlIGZvciB0aGUg a2VybmVsIHRoYXQgZG8gbm90CityZXF1aXJlIGluc3RhbGxpbmcgdGhlIGtlcm5lbCBvbiBhIHRl c3QgbWFjaGluZSBvciBpbiBhIFZNIGFuZCBhbGwgcmVxdWlyZQordGVzdHMgdG8gYmUgd3JpdHRl biBpbiB1c2Vyc3BhY2UgYW5kIHJ1biBvbiB0aGUga2VybmVsIHVuZGVyIHRlc3Q7IHRoaXMgaXMg dHJ1ZQorZm9yIEF1dG90ZXN0LCBrc2VsZnRlc3QsIGFuZCBzb21lIG90aGVycywgZGlzcXVhbGlm eWluZyBhbnkgb2YgdGhlbSBmcm9tIGJlaW5nCitjb25zaWRlcmVkIHVuaXQgdGVzdGluZyBmcmFt ZXdvcmtzLgorCitXaGF0IGlzIHRoZSBkaWZmZXJlbmNlIGJldHdlZW4gYSB1bml0IHRlc3QgYW5k IHRoZXNlIG90aGVyIGtpbmRzIG9mIHRlc3RzPworPT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KK01vc3QgZXhp c3RpbmcgdGVzdHMgZm9yIHRoZSBMaW51eCBrZXJuZWwgd291bGQgYmUgY2F0ZWdvcml6ZWQgYXMg YW4gaW50ZWdyYXRpb24KK3Rlc3QsIG9yIGFuIGVuZC10by1lbmQgdGVzdC4KKworLSBBIHVuaXQg dGVzdCBpcyBzdXBwb3NlZCB0byB0ZXN0IGEgc2luZ2xlIHVuaXQgb2YgY29kZSBpbiBpc29sYXRp b24sIGhlbmNlIHRoZQorICBuYW1lLiBBIHVuaXQgdGVzdCBzaG91bGQgYmUgdGhlIGZpbmVzdCBn cmFudWxhcml0eSBvZiB0ZXN0aW5nIGFuZCBhcyBzdWNoCisgIHNob3VsZCBhbGxvdyBhbGwgcG9z c2libGUgY29kZSBwYXRocyB0byBiZSB0ZXN0ZWQgaW4gdGhlIGNvZGUgdW5kZXIgdGVzdDsgdGhp cworICBpcyBvbmx5IHBvc3NpYmxlIGlmIHRoZSBjb2RlIHVuZGVyIHRlc3QgaXMgdmVyeSBzbWFs bCBhbmQgZG9lcyBub3QgaGF2ZSBhbnkKKyAgZXh0ZXJuYWwgZGVwZW5kZW5jaWVzIG91dHNpZGUg b2YgdGhlIHRlc3QncyBjb250cm9sIGxpa2UgaGFyZHdhcmUuCistIEFuIGludGVncmF0aW9uIHRl c3QgdGVzdHMgdGhlIGludGVyYWN0aW9uIGJldHdlZW4gYSBtaW5pbWFsIHNldCBvZiBjb21wb25l bnRzLAorICB1c3VhbGx5IGp1c3QgdHdvIG9yIHRocmVlLiBGb3IgZXhhbXBsZSwgc29tZW9uZSBt 
aWdodCB3cml0ZSBhbiBpbnRlZ3JhdGlvbgorICB0ZXN0IHRvIHRlc3QgdGhlIGludGVyYWN0aW9u IGJldHdlZW4gYSBkcml2ZXIgYW5kIGEgcGllY2Ugb2YgaGFyZHdhcmUsIG9yIHRvCisgIHRlc3Qg dGhlIGludGVyYWN0aW9uIGJldHdlZW4gdGhlIHVzZXJzcGFjZSBsaWJyYXJpZXMgdGhlIGtlcm5l bCBwcm92aWRlcyBhbmQKKyAgdGhlIGtlcm5lbCBpdHNlbGY7IGhvd2V2ZXIsIG9uZSBvZiB0aGVz ZSB0ZXN0cyB3b3VsZCBwcm9iYWJseSBub3QgdGVzdCB0aGUKKyAgZW50aXJlIGtlcm5lbCBhbG9u ZyB3aXRoIGhhcmR3YXJlIGludGVyYWN0aW9ucyBhbmQgaW50ZXJhY3Rpb25zIHdpdGggdGhlCisg IHVzZXJzcGFjZS4KKy0gQW4gZW5kLXRvLWVuZCB0ZXN0IHVzdWFsbHkgdGVzdHMgdGhlIGVudGly ZSBzeXN0ZW0gZnJvbSB0aGUgcGVyc3BlY3RpdmUgb2YgdGhlCisgIGNvZGUgdW5kZXIgdGVzdC4g Rm9yIGV4YW1wbGUsIHNvbWVvbmUgbWlnaHQgd3JpdGUgYW4gZW5kLXRvLWVuZCB0ZXN0IGZvciB0 aGUKKyAga2VybmVsIGJ5IGluc3RhbGxpbmcgYSBwcm9kdWN0aW9uIGNvbmZpZ3VyYXRpb24gb2Yg dGhlIGtlcm5lbCBvbiBwcm9kdWN0aW9uCisgIGhhcmR3YXJlIHdpdGggYSBwcm9kdWN0aW9uIHVz ZXJzcGFjZSBhbmQgdGhlbiB0cnlpbmcgdG8gZXhlcmNpc2Ugc29tZSBiZWhhdmlvcgorICB0aGF0 IGRlcGVuZHMgb24gaW50ZXJhY3Rpb25zIGJldHdlZW4gdGhlIGhhcmR3YXJlLCB0aGUga2VybmVs LCBhbmQgdXNlcnNwYWNlLgpkaWZmIC0tZ2l0IGEvRG9jdW1lbnRhdGlvbi9rdW5pdC9pbmRleC5y c3QgYi9Eb2N1bWVudGF0aW9uL2t1bml0L2luZGV4LnJzdApuZXcgZmlsZSBtb2RlIDEwMDY0NApp bmRleCAwMDAwMDAwMDAwMDAwLi5jNjcxMDIxMWI2NDdmCi0tLSAvZGV2L251bGwKKysrIGIvRG9j dW1lbnRhdGlvbi9rdW5pdC9pbmRleC5yc3QKQEAgLTAsMCArMSw4MCBAQAorLi4gU1BEWC1MaWNl bnNlLUlkZW50aWZpZXI6IEdQTC0yLjAKKworPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT0KK0tVbml0IC0gVW5pdCBUZXN0aW5nIGZvciB0aGUgTGludXggS2VybmVsCis9 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQorCisuLiB0b2N0cmVlOjoK Kwk6bWF4ZGVwdGg6IDIKKworCXN0YXJ0CisJdXNhZ2UKKwlhcGkvaW5kZXgKKwlmYXEKKworV2hh dCBpcyBLVW5pdD8KKz09PT09PT09PT09PT09CisKK0tVbml0IGlzIGEgbGlnaHR3ZWlnaHQgdW5p dCB0ZXN0aW5nIGFuZCBtb2NraW5nIGZyYW1ld29yayBmb3IgdGhlIExpbnV4IGtlcm5lbC4KK1Ro ZXNlIHRlc3RzIGFyZSBhYmxlIHRvIGJlIHJ1biBsb2NhbGx5IG9uIGEgZGV2ZWxvcGVyJ3Mgd29y a3N0YXRpb24gd2l0aG91dCBhIFZNCitvciBzcGVjaWFsIGhhcmR3YXJlLgorCitLVW5pdCBpcyBo 
ZWF2aWx5IGluc3BpcmVkIGJ5IEpVbml0LCBQeXRob24ncyB1bml0dGVzdC5tb2NrLCBhbmQKK0dv b2dsZXRlc3QvR29vZ2xlbW9jayBmb3IgQysrLiBLVW5pdCBwcm92aWRlcyBmYWNpbGl0aWVzIGZv ciBkZWZpbmluZyB1bml0IHRlc3QKK2Nhc2VzLCBncm91cGluZyByZWxhdGVkIHRlc3QgY2FzZXMg aW50byB0ZXN0IHN1aXRlcywgcHJvdmlkaW5nIGNvbW1vbgoraW5mcmFzdHJ1Y3R1cmUgZm9yIHJ1 bm5pbmcgdGVzdHMsIGFuZCBtdWNoIG1vcmUuCisKK0dldCBzdGFydGVkIG5vdzogOmRvYzpgc3Rh cnRgCisKK1doeSBLVW5pdD8KKz09PT09PT09PT0KKworQSB1bml0IHRlc3QgaXMgc3VwcG9zZWQg dG8gdGVzdCBhIHNpbmdsZSB1bml0IG9mIGNvZGUgaW4gaXNvbGF0aW9uLCBoZW5jZSB0aGUKK25h bWUuIEEgdW5pdCB0ZXN0IHNob3VsZCBiZSB0aGUgZmluZXN0IGdyYW51bGFyaXR5IG9mIHRlc3Rp bmcgYW5kIGFzIHN1Y2ggc2hvdWxkCithbGxvdyBhbGwgcG9zc2libGUgY29kZSBwYXRocyB0byBi ZSB0ZXN0ZWQgaW4gdGhlIGNvZGUgdW5kZXIgdGVzdDsgdGhpcyBpcyBvbmx5Citwb3NzaWJsZSBp ZiB0aGUgY29kZSB1bmRlciB0ZXN0IGlzIHZlcnkgc21hbGwgYW5kIGRvZXMgbm90IGhhdmUgYW55 IGV4dGVybmFsCitkZXBlbmRlbmNpZXMgb3V0c2lkZSBvZiB0aGUgdGVzdCdzIGNvbnRyb2wgbGlr ZSBoYXJkd2FyZS4KKworT3V0c2lkZSBvZiBLVW5pdCwgdGhlcmUgYXJlIG5vIHRlc3RpbmcgZnJh bWV3b3JrcyBjdXJyZW50bHkKK2F2YWlsYWJsZSBmb3IgdGhlIGtlcm5lbCB0aGF0IGRvIG5vdCBy ZXF1aXJlIGluc3RhbGxpbmcgdGhlIGtlcm5lbCBvbiBhIHRlc3QKK21hY2hpbmUgb3IgaW4gYSBW TSBhbmQgYWxsIHJlcXVpcmUgdGVzdHMgdG8gYmUgd3JpdHRlbiBpbiB1c2Vyc3BhY2UgcnVubmlu ZyBvbgordGhlIGtlcm5lbDsgdGhpcyBpcyB0cnVlIGZvciBBdXRvdGVzdCwgYW5kIGtzZWxmdGVz dCwgZGlzcXVhbGlmeWluZworYW55IG9mIHRoZW0gZnJvbSBiZWluZyBjb25zaWRlcmVkIHVuaXQg dGVzdGluZyBmcmFtZXdvcmtzLgorCitLVW5pdCBhZGRyZXNzZXMgdGhlIHByb2JsZW0gb2YgYmVp bmcgYWJsZSB0byBydW4gdGVzdHMgd2l0aG91dCBuZWVkaW5nIGEgdmlydHVhbAorbWFjaGluZSBv ciBhY3R1YWwgaGFyZHdhcmUgd2l0aCBVc2VyIE1vZGUgTGludXguIFVzZXIgTW9kZSBMaW51eCBp cyBhIExpbnV4CithcmNoaXRlY3R1cmUsIGxpa2UgQVJNIG9yIHg4NjsgaG93ZXZlciwgdW5saWtl IG90aGVyIGFyY2hpdGVjdHVyZXMgaXQgY29tcGlsZXMKK3RvIGEgc3RhbmRhbG9uZSBwcm9ncmFt IHRoYXQgY2FuIGJlIHJ1biBsaWtlIGFueSBvdGhlciBwcm9ncmFtIGRpcmVjdGx5IGluc2lkZQor b2YgYSBob3N0IG9wZXJhdGluZyBzeXN0ZW07IHRvIGJlIGNsZWFyLCBpdCBkb2VzIG5vdCByZXF1 
aXJlIGFueSB2aXJ0dWFsaXphdGlvbgorc3VwcG9ydDsgaXQgaXMganVzdCBhIHJlZ3VsYXIgcHJv Z3JhbS4KKworS1VuaXQgaXMgZmFzdC4gRXhjbHVkaW5nIGJ1aWxkIHRpbWUsIGZyb20gaW52b2Nh dGlvbiB0byBjb21wbGV0aW9uIEtVbml0IGNhbiBydW4KK3NldmVyYWwgZG96ZW4gdGVzdHMgaW4g b25seSAxMCB0byAyMCBzZWNvbmRzOyB0aGlzIG1pZ2h0IG5vdCBzb3VuZCBsaWtlIGEgYmlnCitk ZWFsIHRvIHNvbWUgcGVvcGxlLCBidXQgaGF2aW5nIHN1Y2ggZmFzdCBhbmQgZWFzeSB0byBydW4g dGVzdHMgZnVuZGFtZW50YWxseQorY2hhbmdlcyB0aGUgd2F5IHlvdSBnbyBhYm91dCB0ZXN0aW5n IGFuZCBldmVuIHdyaXRpbmcgY29kZSBpbiB0aGUgZmlyc3QgcGxhY2UuCitMaW51cyBoaW1zZWxm IHNhaWQgaW4gaGlzIGBnaXQgdGFsayBhdCBHb29nbGUKKzxodHRwczovL2dpc3QuZ2l0aHViLmNv bS9sb3JuLzEyNzI2ODYvcmV2aXNpb25zI2RpZmYtNTNjNjU1NzIxMjc4NTVmMWIwMDNkYjQwNjRh OTQ1NzNSODc0PmBfOgorCisJIi4uLiBhIGxvdCBvZiBwZW9wbGUgc2VlbSB0byB0aGluayB0aGF0 IHBlcmZvcm1hbmNlIGlzIGFib3V0IGRvaW5nIHRoZQorCXNhbWUgdGhpbmcsIGp1c3QgZG9pbmcg aXQgZmFzdGVyLCBhbmQgdGhhdCBpcyBub3QgdHJ1ZS4gVGhhdCBpcyBub3Qgd2hhdAorCXBlcmZv cm1hbmNlIGlzIGFsbCBhYm91dC4gSWYgeW91IGNhbiBkbyBzb21ldGhpbmcgcmVhbGx5IGZhc3Qs IHJlYWxseQorCXdlbGwsIHBlb3BsZSB3aWxsIHN0YXJ0IHVzaW5nIGl0IGRpZmZlcmVudGx5LiIK KworSW4gdGhpcyBjb250ZXh0IExpbnVzIHdhcyB0YWxraW5nIGFib3V0IGJyYW5jaGluZyBhbmQg bWVyZ2luZywKK2J1dCB0aGlzIHBvaW50IGFsc28gYXBwbGllcyB0byB0ZXN0aW5nLiBJZiB5b3Vy IHRlc3RzIGFyZSBzbG93LCB1bnJlbGlhYmxlLCBhcmUKK2RpZmZpY3VsdCB0byB3cml0ZSwgYW5k IHJlcXVpcmUgYSBzcGVjaWFsIHNldHVwIG9yIHNwZWNpYWwgaGFyZHdhcmUgdG8gcnVuLAordGhl biB5b3Ugd2FpdCBhIGxvdCBsb25nZXIgdG8gd3JpdGUgdGVzdHMsIGFuZCB5b3Ugd2FpdCBhIGxv dCBsb25nZXIgdG8gcnVuCit0ZXN0czsgdGhpcyBtZWFucyB0aGF0IHRlc3RzIGFyZSBsaWtlbHkg dG8gYnJlYWssIHVubGlrZWx5IHRvIHRlc3QgYSBsb3Qgb2YKK3RoaW5ncywgYW5kIGFyZSB1bmxp a2VseSB0byBiZSByZXJ1biBvbmNlIHRoZXkgcGFzcy4gSWYgeW91ciB0ZXN0cyBhcmUgcmVhbGx5 CitmYXN0LCB5b3UgcnVuIHRoZW0gYWxsIHRoZSB0aW1lLCBldmVyeSB0aW1lIHlvdSBtYWtlIGEg Y2hhbmdlLCBhbmQgZXZlcnkgdGltZQorc29tZW9uZSBzZW5kcyB5b3Ugc29tZSBjb2RlLiBXaHkg dHJ1c3QgdGhhdCBzb21lb25lIHJhbiBhbGwgdGhlaXIgdGVzdHMKK2NvcnJlY3RseSBvbiBldmVy 
eSBjaGFuZ2Ugd2hlbiB5b3UgY2FuIGp1c3QgcnVuIHRoZW0geW91cnNlbGYgaW4gbGVzcyB0aW1l IHRoYW4KK2l0IHRha2VzIHRvIHJlYWQgaGlzIC8gaGVyIHRlc3QgbG9nPworCitIb3cgZG8gSSB1 c2UgaXQ/Cis9PT09PT09PT09PT09PT09PT09CisKKyogICA6ZG9jOmBzdGFydGAgLSBmb3IgbmV3 IHVzZXJzIG9mIEtVbml0CisqICAgOmRvYzpgdXNhZ2VgIC0gZm9yIGEgbW9yZSBkZXRhaWxlZCBl eHBsYW5hdGlvbiBvZiBLVW5pdCBmZWF0dXJlcworKiAgIDpkb2M6YGFwaS9pbmRleGAgLSBmb3Ig dGhlIGxpc3Qgb2YgS1VuaXQgQVBJcyB1c2VkIGZvciB0ZXN0aW5nCisKZGlmZiAtLWdpdCBhL0Rv Y3VtZW50YXRpb24va3VuaXQvc3RhcnQucnN0IGIvRG9jdW1lbnRhdGlvbi9rdW5pdC9zdGFydC5y c3QKbmV3IGZpbGUgbW9kZSAxMDA2NDQKaW5kZXggMDAwMDAwMDAwMDAwMC4uNWNkYmE1MDkxOTA1 ZQotLS0gL2Rldi9udWxsCisrKyBiL0RvY3VtZW50YXRpb24va3VuaXQvc3RhcnQucnN0CkBAIC0w LDAgKzEsMTgwIEBACisuLiBTUERYLUxpY2Vuc2UtSWRlbnRpZmllcjogR1BMLTIuMAorCis9PT09 PT09PT09PT09PT0KK0dldHRpbmcgU3RhcnRlZAorPT09PT09PT09PT09PT09CisKK0luc3RhbGxp bmcgZGVwZW5kZW5jaWVzCis9PT09PT09PT09PT09PT09PT09PT09PQorS1VuaXQgaGFzIHRoZSBz YW1lIGRlcGVuZGVuY2llcyBhcyB0aGUgTGludXgga2VybmVsLiBBcyBsb25nIGFzIHlvdSBjYW4g YnVpbGQKK3RoZSBrZXJuZWwsIHlvdSBjYW4gcnVuIEtVbml0LgorCitLVW5pdCBXcmFwcGVyCis9 PT09PT09PT09PT09CitJbmNsdWRlZCB3aXRoIEtVbml0IGlzIGEgc2ltcGxlIFB5dGhvbiB3cmFw cGVyIHRoYXQgaGVscHMgZm9ybWF0IHRoZSBvdXRwdXQgdG8KK2Vhc2lseSB1c2UgYW5kIHJlYWQg S1VuaXQgb3V0cHV0LiBJdCBoYW5kbGVzIGJ1aWxkaW5nIGFuZCBydW5uaW5nIHRoZSBrZXJuZWws IGFzCit3ZWxsIGFzIGZvcm1hdHRpbmcgdGhlIG91dHB1dC4KKworVGhlIHdyYXBwZXIgY2FuIGJl IHJ1biB3aXRoOgorCisuLiBjb2RlLWJsb2NrOjogYmFzaAorCisgICAuL3Rvb2xzL3Rlc3Rpbmcv a3VuaXQva3VuaXQucHkKKworQ3JlYXRpbmcgYSBrdW5pdGNvbmZpZworPT09PT09PT09PT09PT09 PT09PT09PQorVGhlIFB5dGhvbiBzY3JpcHQgaXMgYSB0aGluIHdyYXBwZXIgYXJvdW5kIEtidWls ZCBhcyBzdWNoLCBpdCBuZWVkcyB0byBiZQorY29uZmlndXJlZCB3aXRoIGEgYGBrdW5pdGNvbmZp Z2BgIGZpbGUuIFRoaXMgZmlsZSBlc3NlbnRpYWxseSBjb250YWlucyB0aGUKK3JlZ3VsYXIgS2Vy bmVsIGNvbmZpZywgd2l0aCB0aGUgc3BlY2lmaWMgdGVzdCB0YXJnZXRzIGFzIHdlbGwuCisKKy4u IGNvZGUtYmxvY2s6OiBiYXNoCisKKwlnaXQgY2xvbmUgLWIgbWFzdGVyIGh0dHBzOi8va3VuaXQu 
Z29vZ2xlc291cmNlLmNvbS9rdW5pdGNvbmZpZyAkUEFUSF9UT19LVU5JVENPTkZJR19SRVBPCisJ Y2QgJFBBVEhfVE9fTElOVVhfUkVQTworCWxuIC1zICRQQVRIX1RPX0tVTklUX0NPTkZJR19SRVBP L2t1bml0Y29uZmlnIGt1bml0Y29uZmlnCisKK1lvdSBtYXkgd2FudCB0byBhZGQga3VuaXRjb25m aWcgdG8geW91ciBsb2NhbCBnaXRpZ25vcmUuCisKK1ZlcmlmeWluZyBLVW5pdCBXb3JrcworLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLQorCitUbyBtYWtlIHN1cmUgdGhhdCBldmVyeXRoaW5nIGlz IHNldCB1cCBjb3JyZWN0bHksIHNpbXBseSBpbnZva2UgdGhlIFB5dGhvbgord3JhcHBlciBmcm9t IHlvdXIga2VybmVsIHJlcG86CisKKy4uIGNvZGUtYmxvY2s6OiBiYXNoCisKKwkuL3Rvb2xzL3Rl c3Rpbmcva3VuaXQva3VuaXQucHkKKworLi4gbm90ZTo6CisgICBZb3UgbWF5IHdhbnQgdG8gcnVu IGBgbWFrZSBtcnByb3BlcmBgIGZpcnN0LgorCitJZiBldmVyeXRoaW5nIHdvcmtlZCBjb3JyZWN0 bHksIHlvdSBzaG91bGQgc2VlIHRoZSBmb2xsb3dpbmc6CisKKy4uIGNvZGUtYmxvY2s6OiBiYXNo CisKKwlHZW5lcmF0aW5nIC5jb25maWcgLi4uCisJQnVpbGRpbmcgS1VuaXQgS2VybmVsIC4uLgor CVN0YXJ0aW5nIEtVbml0IEtlcm5lbCAuLi4KKworZm9sbG93ZWQgYnkgYSBsaXN0IG9mIHRlc3Rz IHRoYXQgYXJlIHJ1bi4gQWxsIG9mIHRoZW0gc2hvdWxkIGJlIHBhc3NpbmcuCisKKy4uIG5vdGU6 OgorICAgQmVjYXVzZSBpdCBpcyBidWlsZGluZyBhIGxvdCBvZiBzb3VyY2VzIGZvciB0aGUgZmly c3QgdGltZSwgdGhlIGBgQnVpbGRpbmcKKyAgIGt1bml0IGtlcm5lbGBgIHN0ZXAgbWF5IHRha2Ug YSB3aGlsZS4KKworV3JpdGluZyB5b3VyIGZpcnN0IHRlc3QKKz09PT09PT09PT09PT09PT09PT09 PT09PT09CisKK0luIHlvdXIga2VybmVsIHJlcG8gbGV0J3MgYWRkIHNvbWUgY29kZSB0aGF0IHdl IGNhbiB0ZXN0LiBDcmVhdGUgYSBmaWxlCitgYGRyaXZlcnMvbWlzYy9leGFtcGxlLmhgYCB3aXRo IHRoZSBjb250ZW50czoKKworLi4gY29kZS1ibG9jazo6IGMKKworCWludCBtaXNjX2V4YW1wbGVf YWRkKGludCBsZWZ0LCBpbnQgcmlnaHQpOworCitjcmVhdGUgYSBmaWxlIGBgZHJpdmVycy9taXNj L2V4YW1wbGUuY2BgOgorCisuLiBjb2RlLWJsb2NrOjogYworCisJI2luY2x1ZGUgPGxpbnV4L2Vy cm5vLmg+CisKKwkjaW5jbHVkZSAiZXhhbXBsZS5oIgorCisJaW50IG1pc2NfZXhhbXBsZV9hZGQo aW50IGxlZnQsIGludCByaWdodCkKKwl7CisJCXJldHVybiBsZWZ0ICsgcmlnaHQ7CisJfQorCitO b3cgYWRkIHRoZSBmb2xsb3dpbmcgbGluZXMgdG8gYGBkcml2ZXJzL21pc2MvS2NvbmZpZ2BgOgor CisuLiBjb2RlLWJsb2NrOjoga2NvbmZpZworCisJY29uZmlnIE1JU0NfRVhBTVBMRQorCQlib29s 
ICJNeSBleGFtcGxlIgorCithbmQgdGhlIGZvbGxvd2luZyBsaW5lcyB0byBgYGRyaXZlcnMvbWlz Yy9NYWtlZmlsZWBgOgorCisuLiBjb2RlLWJsb2NrOjogbWFrZQorCisJb2JqLSQoQ09ORklHX01J U0NfRVhBTVBMRSkgKz0gZXhhbXBsZS5vCisKK05vdyB3ZSBhcmUgcmVhZHkgdG8gd3JpdGUgdGhl IHRlc3QuIFRoZSB0ZXN0IHdpbGwgYmUgaW4KK2BgZHJpdmVycy9taXNjL2V4YW1wbGUtdGVzdC5j YGA6CisKKy4uIGNvZGUtYmxvY2s6OiBjCisKKwkjaW5jbHVkZSA8a3VuaXQvdGVzdC5oPgorCSNp bmNsdWRlICJleGFtcGxlLmgiCisKKwkvKiBEZWZpbmUgdGhlIHRlc3QgY2FzZXMuICovCisKKwlz dGF0aWMgdm9pZCBtaXNjX2V4YW1wbGVfYWRkX3Rlc3RfYmFzaWMoc3RydWN0IGt1bml0ICp0ZXN0 KQorCXsKKwkJS1VOSVRfRVhQRUNUX0VRKHRlc3QsIDEsIG1pc2NfZXhhbXBsZV9hZGQoMSwgMCkp OworCQlLVU5JVF9FWFBFQ1RfRVEodGVzdCwgMiwgbWlzY19leGFtcGxlX2FkZCgxLCAxKSk7CisJ CUtVTklUX0VYUEVDVF9FUSh0ZXN0LCAwLCBtaXNjX2V4YW1wbGVfYWRkKC0xLCAxKSk7CisJCUtV TklUX0VYUEVDVF9FUSh0ZXN0LCBJTlRfTUFYLCBtaXNjX2V4YW1wbGVfYWRkKDAsIElOVF9NQVgp KTsKKwkJS1VOSVRfRVhQRUNUX0VRKHRlc3QsIC0xLCBtaXNjX2V4YW1wbGVfYWRkKElOVF9NQVgs IElOVF9NSU4pKTsKKwl9CisKKwlzdGF0aWMgdm9pZCBtaXNjX2V4YW1wbGVfdGVzdF9mYWlsdXJl KHN0cnVjdCBrdW5pdCAqdGVzdCkKKwl7CisJCUtVTklUX0ZBSUwodGVzdCwgIlRoaXMgdGVzdCBu ZXZlciBwYXNzZXMuIik7CisJfQorCisJc3RhdGljIHN0cnVjdCBrdW5pdF9jYXNlIG1pc2NfZXhh bXBsZV90ZXN0X2Nhc2VzW10gPSB7CisJCUtVTklUX0NBU0UobWlzY19leGFtcGxlX2FkZF90ZXN0 X2Jhc2ljKSwKKwkJS1VOSVRfQ0FTRShtaXNjX2V4YW1wbGVfdGVzdF9mYWlsdXJlKSwKKwkJe30s CisJfTsKKworCXN0YXRpYyBzdHJ1Y3Qga3VuaXRfbW9kdWxlIG1pc2NfZXhhbXBsZV90ZXN0X21v ZHVsZSA9IHsKKwkJLm5hbWUgPSAibWlzYy1leGFtcGxlIiwKKwkJLnRlc3RfY2FzZXMgPSBtaXNj X2V4YW1wbGVfdGVzdF9jYXNlcywKKwl9OworCW1vZHVsZV90ZXN0KG1pc2NfZXhhbXBsZV90ZXN0 X21vZHVsZSk7CisKK05vdyBhZGQgdGhlIGZvbGxvd2luZyB0byBgYGRyaXZlcnMvbWlzYy9LY29u ZmlnYGA6CisKKy4uIGNvZGUtYmxvY2s6OiBrY29uZmlnCisKKwljb25maWcgTUlTQ19FWEFNUExF X1RFU1QKKwkJYm9vbCAiVGVzdCBmb3IgbXkgZXhhbXBsZSIKKwkJZGVwZW5kcyBvbiBNSVNDX0VY QU1QTEUgJiYgS1VOSVQKKworYW5kIHRoZSBmb2xsb3dpbmcgdG8gYGBkcml2ZXJzL21pc2MvTWFr ZWZpbGVgYDoKKworLi4gY29kZS1ibG9jazo6IG1ha2UKKworCW9iai0kKENPTkZJR19NSVNDX0VY 
QU1QTEVfVEVTVCkgKz0gZXhhbXBsZS10ZXN0Lm8KKworTm93IGFkZCBpdCB0byB5b3VyIGBga3Vu aXRjb25maWdgYDoKKworLi4gY29kZS1ibG9jazo6IG5vbmUKKworCUNPTkZJR19NSVNDX0VYQU1Q TEU9eQorCUNPTkZJR19NSVNDX0VYQU1QTEVfVEVTVD15CisKK05vdyB5b3UgY2FuIHJ1biB0aGUg dGVzdDoKKworLi4gY29kZS1ibG9jazo6IGJhc2gKKworCS4vdG9vbHMvdGVzdGluZy9rdW5pdC9r dW5pdC5weQorCitZb3Ugc2hvdWxkIHNlZSB0aGUgZm9sbG93aW5nIGZhaWx1cmU6CisKKy4uIGNv ZGUtYmxvY2s6OiBub25lCisKKwkuLi4KKwlbMTY6MDg6NTddIFtQQVNTRURdIG1pc2MtZXhhbXBs ZTptaXNjX2V4YW1wbGVfYWRkX3Rlc3RfYmFzaWMKKwlbMTY6MDg6NTddIFtGQUlMRURdIG1pc2Mt ZXhhbXBsZTptaXNjX2V4YW1wbGVfdGVzdF9mYWlsdXJlCisJWzE2OjA4OjU3XSBFWFBFQ1RBVElP TiBGQUlMRUQgYXQgZHJpdmVycy9taXNjL2V4YW1wbGUtdGVzdC5jOjE3CisJWzE2OjA4OjU3XSAJ VGhpcyB0ZXN0IG5ldmVyIHBhc3Nlcy4KKwkuLi4KKworQ29uZ3JhdHMhIFlvdSBqdXN0IHdyb3Rl IHlvdXIgZmlyc3QgS1VuaXQgdGVzdCEKKworTmV4dCBTdGVwcworPT09PT09PT09PT09PQorKiAg IENoZWNrIG91dCB0aGUgOmRvYzpgdXNhZ2VgIHBhZ2UgZm9yIGEgbW9yZQorICAgIGluLWRlcHRo IGV4cGxhbmF0aW9uIG9mIEtVbml0LgpkaWZmIC0tZ2l0IGEvRG9jdW1lbnRhdGlvbi9rdW5pdC91 c2FnZS5yc3QgYi9Eb2N1bWVudGF0aW9uL2t1bml0L3VzYWdlLnJzdApuZXcgZmlsZSBtb2RlIDEw MDY0NAppbmRleCAwMDAwMDAwMDAwMDAwLi41YzgzZWE5ZTIxYmM1Ci0tLSAvZGV2L251bGwKKysr IGIvRG9jdW1lbnRhdGlvbi9rdW5pdC91c2FnZS5yc3QKQEAgLTAsMCArMSw0NDcgQEAKKy4uIFNQ RFgtTGljZW5zZS1JZGVudGlmaWVyOiBHUEwtMi4wCisKKz09PT09PT09PT09PT0KK1VzaW5nIEtV bml0Cis9PT09PT09PT09PT09CisKK1RoZSBwdXJwb3NlIG9mIHRoaXMgZG9jdW1lbnQgaXMgdG8g ZGVzY3JpYmUgd2hhdCBLVW5pdCBpcywgaG93IGl0IHdvcmtzLCBob3cgaXQKK2lzIGludGVuZGVk IHRvIGJlIHVzZWQsIGFuZCBhbGwgdGhlIGNvbmNlcHRzIGFuZCB0ZXJtaW5vbG9neSB0aGF0IGFy ZSBuZWVkZWQgdG8KK3VuZGVyc3RhbmQgaXQuIFRoaXMgZ3VpZGUgYXNzdW1lcyBhIHdvcmtpbmcg a25vd2xlZGdlIG9mIHRoZSBMaW51eCBrZXJuZWwgYW5kCitzb21lIGJhc2ljIGtub3dsZWRnZSBv ZiB0ZXN0aW5nLgorCitGb3IgYSBoaWdoIGxldmVsIGludHJvZHVjdGlvbiB0byBLVW5pdCwgaW5j bHVkaW5nIHNldHRpbmcgdXAgS1VuaXQgZm9yIHlvdXIKK3Byb2plY3QsIHNlZSA6ZG9jOmBzdGFy dGAuCisKK09yZ2FuaXphdGlvbiBvZiB0aGlzIGRvY3VtZW50Cis9PT09PT09PT09PT09PT09PT09 
PT09PT09PT09PT09PT0KKworVGhpcyBkb2N1bWVudCBpcyBvcmdhbml6ZWQgaW50byB0d28gbWFp biBzZWN0aW9uczogVGVzdGluZyBhbmQgSXNvbGF0aW5nCitCZWhhdmlvci4gVGhlIGZpcnN0IGNv dmVycyB3aGF0IGEgdW5pdCB0ZXN0IGlzIGFuZCBob3cgdG8gdXNlIEtVbml0IHRvIHdyaXRlCit0 aGVtLiBUaGUgc2Vjb25kIGNvdmVycyBob3cgdG8gdXNlIEtVbml0IHRvIGlzb2xhdGUgY29kZSBh bmQgbWFrZSBpdCBwb3NzaWJsZQordG8gdW5pdCB0ZXN0IGNvZGUgdGhhdCB3YXMgb3RoZXJ3aXNl IHVuLXVuaXQtdGVzdGFibGUuCisKK1Rlc3RpbmcKKz09PT09PT09PT0KKworV2hhdCBpcyBLVW5p dD8KKy0tLS0tLS0tLS0tLS0tLS0tLQorCisiSyIgaXMgc2hvcnQgZm9yICJrZXJuZWwiIHNvICJL VW5pdCIgaXMgdGhlICIoTGludXgpIEtlcm5lbCBVbml0IFRlc3RpbmcKK0ZyYW1ld29yay4iIEtV bml0IGlzIGludGVuZGVkIGZpcnN0IGFuZCBmb3JlbW9zdCBmb3Igd3JpdGluZyB1bml0IHRlc3Rz OyBpdCBpcworZ2VuZXJhbCBlbm91Z2ggdGhhdCBpdCBjYW4gYmUgdXNlZCB0byB3cml0ZSBpbnRl Z3JhdGlvbiB0ZXN0czsgaG93ZXZlciwgdGhpcyBpcworYSBzZWNvbmRhcnkgZ29hbC4gS1VuaXQg aGFzIG5vIGFtYml0aW9uIG9mIGJlaW5nIHRoZSBvbmx5IHRlc3RpbmcgZnJhbWV3b3JrIGZvcgor dGhlIGtlcm5lbDsgZm9yIGV4YW1wbGUsIGl0IGRvZXMgbm90IGludGVuZCB0byBiZSBhbiBlbmQt dG8tZW5kIHRlc3RpbmcKK2ZyYW1ld29yay4KKworV2hhdCBpcyBVbml0IFRlc3Rpbmc/CistLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tCisKK0EgYHVuaXQgdGVzdCA8aHR0cHM6Ly9tYXJ0aW5mb3ds ZXIuY29tL2JsaWtpL1VuaXRUZXN0Lmh0bWw+YF8gaXMgYSB0ZXN0IHRoYXQKK3Rlc3RzIGNvZGUg YXQgdGhlIHNtYWxsZXN0IHBvc3NpYmxlIHNjb3BlLCBhICp1bml0KiBvZiBjb2RlLiBJbiB0aGUg QworcHJvZ3JhbW1pbmcgbGFuZ3VhZ2UgdGhhdCdzIGEgZnVuY3Rpb24uCisKK1VuaXQgdGVzdHMg c2hvdWxkIGJlIHdyaXR0ZW4gZm9yIGFsbCB0aGUgcHVibGljbHkgZXhwb3NlZCBmdW5jdGlvbnMg aW4gYQorY29tcGlsYXRpb24gdW5pdDsgc28gdGhhdCBpcyBhbGwgdGhlIGZ1bmN0aW9ucyB0aGF0 IGFyZSBleHBvcnRlZCBpbiBlaXRoZXIgYQorKmNsYXNzKiAoZGVmaW5lZCBiZWxvdykgb3IgYWxs IGZ1bmN0aW9ucyB3aGljaCBhcmUgKipub3QqKiBzdGF0aWMuCisKK1dyaXRpbmcgVGVzdHMKKy0t LS0tLS0tLS0tLS0KKworVGVzdCBDYXNlcworfn5+fn5+fn5+fgorCitUaGUgZnVuZGFtZW50YWwg dW5pdCBpbiBLVW5pdCBpcyB0aGUgdGVzdCBjYXNlLiBBIHRlc3QgY2FzZSBpcyBhIGZ1bmN0aW9u IHdpdGgKK3RoZSBzaWduYXR1cmUgYGB2b2lkICgqKShzdHJ1Y3Qga3VuaXQgKnRlc3QpYGAuIEl0 
IGNhbGxzIGEgZnVuY3Rpb24gdG8gYmUgdGVzdGVkCithbmQgdGhlbiBzZXRzICpleHBlY3RhdGlv bnMqIGZvciB3aGF0IHNob3VsZCBoYXBwZW4uIEZvciBleGFtcGxlOgorCisuLiBjb2RlLWJsb2Nr OjogYworCisJdm9pZCBleGFtcGxlX3Rlc3Rfc3VjY2VzcyhzdHJ1Y3Qga3VuaXQgKnRlc3QpCisJ eworCX0KKworCXZvaWQgZXhhbXBsZV90ZXN0X2ZhaWx1cmUoc3RydWN0IGt1bml0ICp0ZXN0KQor CXsKKwkJS1VOSVRfRkFJTCh0ZXN0LCAiVGhpcyB0ZXN0IG5ldmVyIHBhc3Nlcy4iKTsKKwl9CisK K0luIHRoZSBhYm92ZSBleGFtcGxlIGBgZXhhbXBsZV90ZXN0X3N1Y2Nlc3NgYCBhbHdheXMgcGFz c2VzIGJlY2F1c2UgaXQgZG9lcworbm90aGluZzsgbm8gZXhwZWN0YXRpb25zIGFyZSBzZXQsIHNv IGFsbCBleHBlY3RhdGlvbnMgcGFzcy4gT24gdGhlIG90aGVyIGhhbmQKK2BgZXhhbXBsZV90ZXN0 X2ZhaWx1cmVgYCBhbHdheXMgZmFpbHMgYmVjYXVzZSBpdCBjYWxscyBgYEtVTklUX0ZBSUxgYCwg d2hpY2ggaXMKK2Egc3BlY2lhbCBleHBlY3RhdGlvbiB0aGF0IGxvZ3MgYSBtZXNzYWdlIGFuZCBj YXVzZXMgdGhlIHRlc3QgY2FzZSB0byBmYWlsLgorCitFeHBlY3RhdGlvbnMKK35+fn5+fn5+fn5+ fgorQW4gKmV4cGVjdGF0aW9uKiBpcyBhIHdheSB0byBzcGVjaWZ5IHRoYXQgeW91IGV4cGVjdCBh IHBpZWNlIG9mIGNvZGUgdG8gZG8KK3NvbWV0aGluZyBpbiBhIHRlc3QuIEFuIGV4cGVjdGF0aW9u IGlzIGNhbGxlZCBsaWtlIGEgZnVuY3Rpb24uIEEgdGVzdCBpcyBtYWRlCitieSBzZXR0aW5nIGV4 cGVjdGF0aW9ucyBhYm91dCB0aGUgYmVoYXZpb3Igb2YgYSBwaWVjZSBvZiBjb2RlIHVuZGVyIHRl c3Q7IHdoZW4KK29uZSBvciBtb3JlIG9mIHRoZSBleHBlY3RhdGlvbnMgZmFpbCwgdGhlIHRlc3Qg Y2FzZSBmYWlscyBhbmQgaW5mb3JtYXRpb24gYWJvdXQKK3RoZSBmYWlsdXJlIGlzIGxvZ2dlZC4g Rm9yIGV4YW1wbGU6CisKKy4uIGNvZGUtYmxvY2s6OiBjCisKKwl2b2lkIGFkZF90ZXN0X2Jhc2lj KHN0cnVjdCBrdW5pdCAqdGVzdCkKKwl7CisJCUtVTklUX0VYUEVDVF9FUSh0ZXN0LCAxLCBhZGQo MSwgMCkpOworCQlLVU5JVF9FWFBFQ1RfRVEodGVzdCwgMiwgYWRkKDEsIDEpKTsKKwl9CisKK0lu IHRoZSBhYm92ZSBleGFtcGxlIGBgYWRkX3Rlc3RfYmFzaWNgYCBtYWtlcyBhIG51bWJlciBvZiBh c3NlcnRpb25zIGFib3V0IHRoZQorYmVoYXZpb3Igb2YgYSBmdW5jdGlvbiBjYWxsZWQgYGBhZGRg YDsgdGhlIGZpcnN0IHBhcmFtZXRlciBpcyBhbHdheXMgb2YgdHlwZQorYGBzdHJ1Y3Qga3VuaXQg KmBgLCB3aGljaCBjb250YWlucyBpbmZvcm1hdGlvbiBhYm91dCB0aGUgY3VycmVudCB0ZXN0IGNv bnRleHQ7Cit0aGUgc2Vjb25kIHBhcmFtZXRlciwgaW4gdGhpcyBjYXNlLCBpcyB3aGF0IHRoZSB2 
YWx1ZSBpcyBleHBlY3RlZCB0byBiZTsgdGhlCitsYXN0IHZhbHVlIGlzIHdoYXQgdGhlIHZhbHVl IGFjdHVhbGx5IGlzLiBJZiBgYGFkZGBgIHBhc3NlcyBhbGwgb2YgdGhlc2UKK2V4cGVjdGF0aW9u cywgdGhlIHRlc3QgY2FzZSwgYGBhZGRfdGVzdF9iYXNpY2BgIHdpbGwgcGFzczsgaWYgYW55IG9u ZSBvZiB0aGVzZQorZXhwZWN0YXRpb25zIGZhaWwsIHRoZSB0ZXN0IGNhc2Ugd2lsbCBmYWlsLgor CitJdCBpcyBpbXBvcnRhbnQgdG8gdW5kZXJzdGFuZCB0aGF0IGEgdGVzdCBjYXNlICpmYWlscyog d2hlbiBhbnkgZXhwZWN0YXRpb24gaXMKK3Zpb2xhdGVkOyBob3dldmVyLCB0aGUgdGVzdCB3aWxs IGNvbnRpbnVlIHJ1bm5pbmcsIHBvdGVudGlhbGx5IHRyeWluZyBvdGhlcgorZXhwZWN0YXRpb25z IHVudGlsIHRoZSB0ZXN0IGNhc2UgZW5kcyBvciBpcyBvdGhlcndpc2UgdGVybWluYXRlZC4gVGhp cyBpcyBhcworb3Bwb3NlZCB0byAqYXNzZXJ0aW9ucyogd2hpY2ggYXJlIGRpc2N1c3NlZCBsYXRl ci4KKworVG8gbGVhcm4gYWJvdXQgbW9yZSBleHBlY3RhdGlvbnMgc3VwcG9ydGVkIGJ5IEtVbml0 LCBzZWUgOmRvYzpgYXBpL3Rlc3RgLgorCisuLiBub3RlOjoKKyAgIEEgc2luZ2xlIHRlc3QgY2Fz ZSBzaG91bGQgYmUgcHJldHR5IHNob3J0LCBwcmV0dHkgZWFzeSB0byB1bmRlcnN0YW5kLAorICAg Zm9jdXNlZCBvbiBhIHNpbmdsZSBiZWhhdmlvci4KKworRm9yIGV4YW1wbGUsIGlmIHdlIHdhbnRl ZCB0byBwcm9wZXJseSB0ZXN0IHRoZSBhZGQgZnVuY3Rpb24gYWJvdmUsIHdlIHdvdWxkCitjcmVh dGUgYWRkaXRpb25hbCB0ZXN0cyBjYXNlcyB3aGljaCB3b3VsZCBlYWNoIHRlc3QgYSBkaWZmZXJl bnQgcHJvcGVydHkgdGhhdCBhbgorYWRkIGZ1bmN0aW9uIHNob3VsZCBoYXZlIGxpa2UgdGhpczoK KworLi4gY29kZS1ibG9jazo6IGMKKworCXZvaWQgYWRkX3Rlc3RfYmFzaWMoc3RydWN0IGt1bml0 ICp0ZXN0KQorCXsKKwkJS1VOSVRfRVhQRUNUX0VRKHRlc3QsIDEsIGFkZCgxLCAwKSk7CisJCUtV TklUX0VYUEVDVF9FUSh0ZXN0LCAyLCBhZGQoMSwgMSkpOworCX0KKworCXZvaWQgYWRkX3Rlc3Rf bmVnYXRpdmUoc3RydWN0IGt1bml0ICp0ZXN0KQorCXsKKwkJS1VOSVRfRVhQRUNUX0VRKHRlc3Qs IDAsIGFkZCgtMSwgMSkpOworCX0KKworCXZvaWQgYWRkX3Rlc3RfbWF4KHN0cnVjdCBrdW5pdCAq dGVzdCkKKwl7CisJCUtVTklUX0VYUEVDVF9FUSh0ZXN0LCBJTlRfTUFYLCBhZGQoMCwgSU5UX01B WCkpOworCQlLVU5JVF9FWFBFQ1RfRVEodGVzdCwgLTEsIGFkZChJTlRfTUFYLCBJTlRfTUlOKSk7 CisJfQorCisJdm9pZCBhZGRfdGVzdF9vdmVyZmxvdyhzdHJ1Y3Qga3VuaXQgKnRlc3QpCisJewor CQlLVU5JVF9FWFBFQ1RfRVEodGVzdCwgSU5UX01JTiwgYWRkKElOVF9NQVgsIDEpKTsKKwl9CisK 
K05vdGljZSBob3cgaXQgaXMgaW1tZWRpYXRlbHkgb2J2aW91cyB3aGF0IGFsbCB0aGUgcHJvcGVy dGllcyB0aGF0IHdlIGFyZSB0ZXN0aW5nCitmb3IgYXJlLgorCitBc3NlcnRpb25zCit+fn5+fn5+ fn5+CisKK0tVbml0IGFsc28gaGFzIHRoZSBjb25jZXB0IG9mIGFuICphc3NlcnRpb24qLiBBbiBh c3NlcnRpb24gaXMganVzdCBsaWtlIGFuCitleHBlY3RhdGlvbiBleGNlcHQgdGhlIGFzc2VydGlv biBpbW1lZGlhdGVseSB0ZXJtaW5hdGVzIHRoZSB0ZXN0IGNhc2UgaWYgaXQgaXMKK25vdCBzYXRp c2ZpZWQuCisKK0ZvciBleGFtcGxlOgorCisuLiBjb2RlLWJsb2NrOjogYworCisJc3RhdGljIHZv aWQgbW9ja190ZXN0X2RvX2V4cGVjdF9kZWZhdWx0X3JldHVybihzdHJ1Y3Qga3VuaXQgKnRlc3Qp CisJeworCQlzdHJ1Y3QgbW9ja190ZXN0X2NvbnRleHQgKmN0eCA9IHRlc3QtPnByaXY7CisJCXN0 cnVjdCBtb2NrICptb2NrID0gY3R4LT5tb2NrOworCQlpbnQgcGFyYW0wID0gNSwgcGFyYW0xID0g LTU7CisJCWNvbnN0IGNoYXIgKnR3b19wYXJhbV90eXBlc1tdID0geyJpbnQiLCAiaW50In07CisJ CWNvbnN0IHZvaWQgKnR3b19wYXJhbXNbXSA9IHsmcGFyYW0wLCAmcGFyYW0xfTsKKwkJY29uc3Qg dm9pZCAqcmV0OworCisJCXJldCA9IG1vY2stPmRvX2V4cGVjdChtb2NrLAorCQkJCSAgICAgICJ0 ZXN0X3ByaW50ayIsIHRlc3RfcHJpbnRrLAorCQkJCSAgICAgIHR3b19wYXJhbV90eXBlcywgdHdv X3BhcmFtcywKKwkJCQkgICAgICBBUlJBWV9TSVpFKHR3b19wYXJhbXMpKTsKKwkJS1VOSVRfQVNT RVJUX05PVF9FUlJfT1JfTlVMTCh0ZXN0LCByZXQpOworCQlLVU5JVF9FWFBFQ1RfRVEodGVzdCwg LTQsICooKGludCAqKSByZXQpKTsKKwl9CisKK0luIHRoaXMgZXhhbXBsZSwgdGhlIG1ldGhvZCB1 bmRlciB0ZXN0IHNob3VsZCByZXR1cm4gYSBwb2ludGVyIHRvIGEgdmFsdWUsIHNvCitpZiB0aGUg cG9pbnRlciByZXR1cm5lZCBieSB0aGUgbWV0aG9kIGlzIG51bGwgb3IgYW4gZXJybm8sIHdlIGRv bid0IHdhbnQgdG8KK2JvdGhlciBjb250aW51aW5nIHRoZSB0ZXN0IHNpbmNlIHRoZSBmb2xsb3dp bmcgZXhwZWN0YXRpb24gY291bGQgY3Jhc2ggdGhlIHRlc3QKK2Nhc2UuIGBBU1NFUlRfTk9UX0VS Ul9PUl9OVUxMKC4uLilgIGFsbG93cyB1cyB0byBiYWlsIG91dCBvZiB0aGUgdGVzdCBjYXNlIGlm Cit0aGUgYXBwcm9wcmlhdGUgY29uZGl0aW9ucyBoYXZlIG5vdCBiZWVuIHNhdGlzZmllZCB0byBj b21wbGV0ZSB0aGUgdGVzdC4KKworTW9kdWxlcyAvIFRlc3QgU3VpdGVzCit+fn5+fn5+fn5+fn5+ fn5+fn5+fn4KKworTm93IG9idmlvdXNseSBvbmUgdW5pdCB0ZXN0IGlzbid0IHZlcnkgaGVscGZ1 bDsgdGhlIHBvd2VyIGNvbWVzIGZyb20gaGF2aW5nCittYW55IHRlc3QgY2FzZXMgY292ZXJpbmcg 
YWxsIG9mIHlvdXIgYmVoYXZpb3JzLiBDb25zZXF1ZW50bHkgaXQgaXMgY29tbW9uIHRvCitoYXZl IG1hbnkgKnNpbWlsYXIqIHRlc3RzOyBpbiBvcmRlciB0byByZWR1Y2UgZHVwbGljYXRpb24gaW4g dGhlc2UgY2xvc2VseQorcmVsYXRlZCB0ZXN0cyBtb3N0IHVuaXQgdGVzdGluZyBmcmFtZXdvcmtz IHByb3ZpZGUgdGhlIGNvbmNlcHQgb2YgYSAqdGVzdAorc3VpdGUqLCBpbiBLVW5pdCB3ZSBjYWxs IGl0IGEgKnRlc3QgbW9kdWxlKjsgYWxsIGl0IGlzIGlzIGp1c3QgYSBjb2xsZWN0aW9uIG9mCit0 ZXN0IGNhc2VzIGZvciBhIHVuaXQgb2YgY29kZSB3aXRoIGEgc2V0IHVwIGZ1bmN0aW9uIHRoYXQg Z2V0cyBpbnZva2VkIGJlZm9yZQorZXZlcnkgdGVzdCBjYXNlcyBhbmQgdGhlbiBhIHRlYXIgZG93 biBmdW5jdGlvbiB0aGF0IGdldHMgaW52b2tlZCBhZnRlciBldmVyeQordGVzdCBjYXNlIGNvbXBs ZXRlcy4KKworRXhhbXBsZToKKworLi4gY29kZS1ibG9jazo6IGMKKworCXN0YXRpYyBzdHJ1Y3Qg a3VuaXRfY2FzZSBleGFtcGxlX3Rlc3RfY2FzZXNbXSA9IHsKKwkJS1VOSVRfQ0FTRShleGFtcGxl X3Rlc3RfZm9vKSwKKwkJS1VOSVRfQ0FTRShleGFtcGxlX3Rlc3RfYmFyKSwKKwkJS1VOSVRfQ0FT RShleGFtcGxlX3Rlc3RfYmF6KSwKKwkJe30sCisJfTsKKworCXN0YXRpYyBzdHJ1Y3Qga3VuaXRf bW9kdWxlIGV4YW1wbGVfdGVzdF9tb2R1bGUgPSB7CisJCS5uYW1lID0gImV4YW1wbGUiLAorCQku aW5pdCA9IGV4YW1wbGVfdGVzdF9pbml0LAorCQkuZXhpdCA9IGV4YW1wbGVfdGVzdF9leGl0LAor CQkudGVzdF9jYXNlcyA9IGV4YW1wbGVfdGVzdF9jYXNlcywKKwl9OworCW1vZHVsZV90ZXN0KGV4 YW1wbGVfdGVzdF9tb2R1bGUpOworCitJbiB0aGUgYWJvdmUgZXhhbXBsZSB0aGUgdGVzdCBzdWl0 ZSwgYGBleGFtcGxlX3Rlc3RfbW9kdWxlYGAsIHdvdWxkIHJ1biB0aGUgdGVzdAorY2FzZXMgYGBl eGFtcGxlX3Rlc3RfZm9vYGAsIGBgZXhhbXBsZV90ZXN0X2JhcmBgLCBhbmQgYGBleGFtcGxlX3Rl c3RfYmF6YGAsIGVhY2gKK3dvdWxkIGhhdmUgYGBleGFtcGxlX3Rlc3RfaW5pdGBgIGNhbGxlZCBp bW1lZGlhdGVseSBiZWZvcmUgaXQgYW5kIHdvdWxkIGhhdmUKK2BgZXhhbXBsZV90ZXN0X2V4aXRg YCBjYWxsZWQgaW1tZWRpYXRlbHkgYWZ0ZXIgaXQuCitgYG1vZHVsZV90ZXN0KGV4YW1wbGVfdGVz dF9tb2R1bGUpYGAgcmVnaXN0ZXJzIHRoZSB0ZXN0IHN1aXRlIHdpdGggdGhlIEtVbml0Cit0ZXN0 IGZyYW1ld29yay4KKworLi4gbm90ZTo6CisgICBBIHRlc3QgY2FzZSB3aWxsIG9ubHkgYmUgcnVu IGlmIGl0IGlzIGFzc29jaWF0ZWQgd2l0aCBhIHRlc3Qgc3VpdGUuCisKK0ZvciBhIG1vcmUgaW5m b3JtYXRpb24gb24gdGhlc2UgdHlwZXMgb2YgdGhpbmdzIHNlZSB0aGUgOmRvYzpgYXBpL3Rlc3Rg 
LgorCitJc29sYXRpbmcgQmVoYXZpb3IKKz09PT09PT09PT09PT09PT09PQorCitUaGUgbW9zdCBp bXBvcnRhbnQgYXNwZWN0IG9mIHVuaXQgdGVzdGluZyB0aGF0IG90aGVyIGZvcm1zIG9mIHRlc3Rp bmcgZG8gbm90Citwcm92aWRlIGlzIHRoZSBhYmlsaXR5IHRvIGxpbWl0IHRoZSBhbW91bnQgb2Yg Y29kZSB1bmRlciB0ZXN0IHRvIGEgc2luZ2xlIHVuaXQuCitJbiBwcmFjdGljZSwgdGhpcyBpcyBv bmx5IHBvc3NpYmxlIGJ5IGJlaW5nIGFibGUgdG8gY29udHJvbCB3aGF0IGNvZGUgZ2V0cyBydW4K K3doZW4gdGhlIHVuaXQgdW5kZXIgdGVzdCBjYWxscyBhIGZ1bmN0aW9uIGFuZCB0aGlzIGlzIHVz dWFsbHkgYWNjb21wbGlzaGVkCit0aHJvdWdoIHNvbWUgc29ydCBvZiBpbmRpcmVjdGlvbiB3aGVy ZSBhIGZ1bmN0aW9uIGlzIGV4cG9zZWQgYXMgcGFydCBvZiBhbiBBUEkKK3N1Y2ggdGhhdCB0aGUg ZGVmaW5pdGlvbiBvZiB0aGF0IGZ1bmN0aW9uIGNhbiBiZSBjaGFuZ2VkIHdpdGhvdXQgYWZmZWN0 aW5nIHRoZQorcmVzdCBvZiB0aGUgY29kZSBiYXNlLiBJbiB0aGUga2VybmVsIHRoaXMgcHJpbWFy aWx5IGNvbWVzIGZyb20gdHdvIGNvbnN0cnVjdHMsCitjbGFzc2VzLCBzdHJ1Y3RzIHRoYXQgY29u dGFpbiBmdW5jdGlvbiBwb2ludGVycyB0aGF0IGFyZSBwcm92aWRlZCBieSB0aGUKK2ltcGxlbWVu dGVyLCBhbmQgYXJjaGl0ZWN0dXJlIHNwZWNpZmljIGZ1bmN0aW9ucyB3aGljaCBoYXZlIGRlZmlu aXRpb25zIHNlbGVjdGVkCithdCBjb21waWxlIHRpbWUuCisKK0NsYXNzZXMKKy0tLS0tLS0KKwor Q2xhc3NlcyBhcmUgbm90IGEgY29uc3RydWN0IHRoYXQgaXMgYnVpbHQgaW50byB0aGUgQyBwcm9n cmFtbWluZyBsYW5ndWFnZTsKK2hvd2V2ZXIsIGl0IGlzIGFuIGVhc2lseSBkZXJpdmVkIGNvbmNl cHQuIEFjY29yZGluZ2x5LCBwcmV0dHkgbXVjaCBldmVyeSBwcm9qZWN0Cit0aGF0IGRvZXMgbm90 IHVzZSBhIHN0YW5kYXJkaXplZCBvYmplY3Qgb3JpZW50ZWQgbGlicmFyeSAobGlrZSBHTk9NRSdz IEdPYmplY3QpCitoYXMgdGhlaXIgb3duIHNsaWdodGx5IGRpZmZlcmVudCB3YXkgb2YgZG9pbmcg b2JqZWN0IG9yaWVudGVkIHByb2dyYW1taW5nOyB0aGUKK0xpbnV4IGtlcm5lbCBpcyBubyBleGNl cHRpb24uCisKK1RoZSBjZW50cmFsIGNvbmNlcHQgaW4ga2VybmVsIG9iamVjdCBvcmllbnRlZCBw cm9ncmFtbWluZyBpcyB0aGUgY2xhc3MuIEluIHRoZQora2VybmVsLCBhICpjbGFzcyogaXMgYSBz dHJ1Y3QgdGhhdCBjb250YWlucyBmdW5jdGlvbiBwb2ludGVycy4gVGhpcyBjcmVhdGVzIGEKK2Nv bnRyYWN0IGJldHdlZW4gKmltcGxlbWVudGVycyogYW5kICp1c2Vycyogc2luY2UgaXQgZm9yY2Vz IHRoZW0gdG8gdXNlIHRoZQorc2FtZSBmdW5jdGlvbiBzaWduYXR1cmUgd2l0aG91dCBoYXZpbmcg 
+for it to truly be a class, the function pointers must specify that a pointer
+to the class, known as a *class handle*, be one of the parameters; this makes
+it possible for the member functions (also known as *methods*) to have access
+to member variables (more commonly known as *fields*) allowing the same
+implementation to have multiple *instances*.
+
+Typically a class can be *overridden* by *child classes* by embedding the
+*parent class* in the child class. Then when a method provided by the child
+class is called, the child implementation knows that the pointer passed to it is
+of a parent contained within the child; because of this, the child can compute
+the pointer to itself because the pointer to the parent is always a fixed offset
+from the pointer to the child; this offset is the offset of the parent contained
+in the child struct. For example:
+
+.. code-block:: c
+
+	struct shape {
+		int (*area)(struct shape *this);
+	};
+
+	struct rectangle {
+		struct shape parent;
+		int length;
+		int width;
+	};
+
+	int rectangle_area(struct shape *this)
+	{
+		struct rectangle *self = container_of(this, struct rectangle, parent);
+
+		return self->length * self->width;
+	}
+
+	void rectangle_new(struct rectangle *self, int length, int width)
+	{
+		self->parent.area = rectangle_area;
+		self->length = length;
+		self->width = width;
+	}
+
+In this example (as in most kernel code) the operation of computing the pointer
+to the child from the pointer to the parent is done by ``container_of``.
+
+Faking Classes
+~~~~~~~~~~~~~~
+
+In order to unit test a piece of code that calls a method in a class, the
+behavior of the method must be controllable, otherwise the test ceases to be a
+unit test and becomes an integration test.
+
+A fake just provides an implementation of a piece of code that is different from
+what runs in a production instance, but behaves identically from the standpoint
+of the callers; this is usually done to replace a dependency that is hard to
+deal with, or is slow.
+
+A good example for this might be implementing a fake EEPROM that just stores the
+"contents" in an internal buffer. For example, let's assume we have a class that
+represents an EEPROM:
+
+.. code-block:: c
+
+	struct eeprom {
+		ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
+		ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
+	};
+
+And we want to test some code that buffers writes to the EEPROM:
+
+.. code-block:: c
+
+	struct eeprom_buffer {
+		ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count);
+		int (*flush)(struct eeprom_buffer *this);
+		size_t flush_count; /* Flushes when buffer exceeds flush_count. */
+	};
+
+	struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
+	void destroy_eeprom_buffer(struct eeprom_buffer *buffer);
+
+We can easily test this code by *faking out* the underlying EEPROM:
+
+.. code-block:: c
+
+	struct fake_eeprom {
+		struct eeprom parent;
+		char contents[FAKE_EEPROM_CONTENTS_SIZE];
+	};
+
+	ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(buffer, this->contents + offset, count);
+
+		return count;
+	}
+
+	ssize_t fake_eeprom_write(struct eeprom *parent, size_t offset, const char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(this->contents + offset, buffer, count);
+
+		return count;
+	}
+
+	void fake_eeprom_init(struct fake_eeprom *this)
+	{
+		this->parent.read = fake_eeprom_read;
+		this->parent.write = fake_eeprom_write;
+		memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE);
+	}
+
+We can now use it to test ``struct eeprom_buffer``:
+
+.. code-block:: c
+
+	struct eeprom_buffer_test {
+		struct fake_eeprom *fake_eeprom;
+		struct eeprom_buffer *eeprom_buffer;
+	};
+
+	static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff};
+
+		eeprom_buffer->flush_count = SIZE_MAX;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0);
+
+		eeprom_buffer->flush(eeprom_buffer);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+	}
+
+	static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff};
+
+		eeprom_buffer->flush_count = 2;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+	}
+
+	static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff, 0xff};
+
+		eeprom_buffer->flush_count = 2;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 2);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+		/* Should have only flushed the first two bytes. */
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0);
+	}
+
+	static int eeprom_buffer_test_init(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx;
+
+		ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+
+		ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom);
+
+		ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer);
+
+		test->priv = ctx;
+
+		return 0;
+	}
+
+	static void eeprom_buffer_test_exit(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+
+		destroy_eeprom_buffer(ctx->eeprom_buffer);
+	}
+
-- 
2.21.0.392.gf8f6787159e-goog

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

From mboxrd@z Thu Jan 1 00:00:00 1970
Add documentation for KUnit, the Linux kernel unit testing framework.
- Add intro and usage guide for KUnit
- Add API reference

Signed-off-by: Felix Guo <felixguoxiuping@gmail.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 Documentation/index.rst           |   1 +
 Documentation/kunit/api/index.rst |  16 ++
 Documentation/kunit/api/test.rst  |  15 +
 Documentation/kunit/faq.rst       |  46 +++
 Documentation/kunit/index.rst     |  80 ++++++
 Documentation/kunit/start.rst     | 180 ++++++++++++
 Documentation/kunit/usage.rst     | 447 ++++++++++++++++++++++++++++++
 7 files changed, 785 insertions(+)
 create mode 100644 Documentation/kunit/api/index.rst
 create mode 100644 Documentation/kunit/api/test.rst
 create mode 100644 Documentation/kunit/faq.rst
 create mode 100644 Documentation/kunit/index.rst
 create mode 100644 Documentation/kunit/start.rst
 create mode 100644 Documentation/kunit/usage.rst

diff --git a/Documentation/index.rst b/Documentation/index.rst
index 80a421cb935e7..264cfd613a774 100644
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -65,6 +65,7 @@ merged much easier.
    kernel-hacking/index
    trace/index
    maintainer/index
+   kunit/index
 
 Kernel API documentation
 ------------------------
diff --git a/Documentation/kunit/api/index.rst b/Documentation/kunit/api/index.rst
new file mode 100644
index 0000000000000..c31c530088153
--- /dev/null
+++ b/Documentation/kunit/api/index.rst
@@ -0,0 +1,16 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+API Reference
+=============
+.. toctree::
+
+	test
+
+This section documents the KUnit kernel testing API. It is divided into the
+following sections:
+
+================================= ==============================================
+:doc:`test`                       documents all of the standard testing API
+                                  excluding mocking or mocking related features.
+================================= ==============================================
diff --git a/Documentation/kunit/api/test.rst b/Documentation/kunit/api/test.rst
new file mode 100644
index 0000000000000..7c926014f047c
--- /dev/null
+++ b/Documentation/kunit/api/test.rst
@@ -0,0 +1,15 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+========
+Test API
+========
+
+This file documents all of the standard testing API excluding mocking or mocking
+related features.
+
+.. kernel-doc:: include/kunit/test.h
+   :internal:
+
+.. kernel-doc:: include/kunit/kunit-stream.h
+   :internal:
+
diff --git a/Documentation/kunit/faq.rst b/Documentation/kunit/faq.rst
new file mode 100644
index 0000000000000..cb8e4fb2257a0
--- /dev/null
+++ b/Documentation/kunit/faq.rst
@@ -0,0 +1,46 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+Frequently Asked Questions
+=========================================
+
+How is this different from Autotest, kselftest, etc?
+====================================================
+KUnit is a unit testing framework. Autotest, kselftest (and some others) are
+not.
+
+A `unit test `_ is supposed to
+test a single unit of code in isolation, hence the name. A unit test should be
+the finest granularity of testing and as such should allow all possible code
+paths to be tested in the code under test; this is only possible if the code
+under test is very small and does not have any external dependencies outside of
+the test's control like hardware.
+
+There are no testing frameworks currently available for the kernel that do not
+require installing the kernel on a test machine or in a VM and all require
+tests to be written in userspace and run on the kernel under test; this is true
+for Autotest, kselftest, and some others, disqualifying any of them from being
+considered unit testing frameworks.
+
+What is the difference between a unit test and these other kinds of tests?
+==========================================================================
+Most existing tests for the Linux kernel would be categorized as integration
+tests or end-to-end tests.
+
+- A unit test is supposed to test a single unit of code in isolation, hence the
+  name. A unit test should be the finest granularity of testing and as such
+  should allow all possible code paths to be tested in the code under test; this
+  is only possible if the code under test is very small and does not have any
+  external dependencies outside of the test's control like hardware.
+- An integration test tests the interaction between a minimal set of components,
+  usually just two or three. For example, someone might write an integration
+  test to test the interaction between a driver and a piece of hardware, or to
+  test the interaction between the userspace libraries the kernel provides and
+  the kernel itself; however, one of these tests would probably not test the
+  entire kernel along with hardware interactions and interactions with the
+  userspace.
+- An end-to-end test usually tests the entire system from the perspective of the
+  code under test. For example, someone might write an end-to-end test for the
+  kernel by installing a production configuration of the kernel on production
+  hardware with a production userspace and then trying to exercise some behavior
+  that depends on interactions between the hardware, the kernel, and userspace.
diff --git a/Documentation/kunit/index.rst b/Documentation/kunit/index.rst
new file mode 100644
index 0000000000000..c6710211b647f
--- /dev/null
+++ b/Documentation/kunit/index.rst
@@ -0,0 +1,80 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+KUnit - Unit Testing for the Linux Kernel
+=========================================
+
+.. toctree::
+	:maxdepth: 2
+
+	start
+	usage
+	api/index
+	faq
+
+What is KUnit?
+==============
+
+KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
+These tests can be run locally on a developer's workstation without a VM or
+special hardware.
+
+KUnit is heavily inspired by JUnit, Python's unittest.mock, and
+Googletest/Googlemock for C++. KUnit provides facilities for defining unit test
+cases, grouping related test cases into test suites, providing common
+infrastructure for running tests, and much more.
+
+Get started now: :doc:`start`
+
+Why KUnit?
+==========
+
+A unit test is supposed to test a single unit of code in isolation, hence the
+name. A unit test should be the finest granularity of testing and as such should
+allow all possible code paths to be tested in the code under test; this is only
+possible if the code under test is very small and does not have any external
+dependencies outside of the test's control like hardware.
+
+Outside of KUnit, there are no testing frameworks currently
+available for the kernel that do not require installing the kernel on a test
+machine or in a VM and all require tests to be written in userspace running on
+the kernel; this is true for Autotest and kselftest, disqualifying
+either of them from being considered unit testing frameworks.
+
+KUnit addresses the problem of being able to run tests without needing a virtual
+machine or actual hardware with User Mode Linux. User Mode Linux is a Linux
+architecture, like ARM or x86; however, unlike other architectures it compiles
+to a standalone program that can be run like any other program directly inside
+of a host operating system; to be clear, it does not require any virtualization
+support; it is just a regular program.
+
+KUnit is fast.
Excluding build time, from invocation to completion KUnit can run
+several dozen tests in only 10 to 20 seconds; this might not sound like a big
+deal to some people, but having such fast and easy to run tests fundamentally
+changes the way you go about testing and even writing code in the first place.
+Linus himself said in his `git talk at Google
+`_:
+
+	"... a lot of people seem to think that performance is about doing the
+	same thing, just doing it faster, and that is not true. That is not what
+	performance is all about. If you can do something really fast, really
+	well, people will start using it differently."
+
+In this context Linus was talking about branching and merging,
+but this point also applies to testing. If your tests are slow, unreliable,
+difficult to write, and require a special setup or special hardware to run,
+then you wait a lot longer to write tests, and you wait a lot longer to run
+tests; this means that tests are likely to break, unlikely to test a lot of
+things, and are unlikely to be rerun once they pass. If your tests are really
+fast, you run them all the time, every time you make a change, and every time
+someone sends you some code. Why trust that someone ran all their tests
+correctly on every change when you can just run them yourself in less time than
+it takes to read their test log?
+
+How do I use it?
+===================
+
+* :doc:`start` - for new users of KUnit
+* :doc:`usage` - for a more detailed explanation of KUnit features
+* :doc:`api/index` - for the list of KUnit APIs used for testing
+
diff --git a/Documentation/kunit/start.rst b/Documentation/kunit/start.rst
new file mode 100644
index 0000000000000..5cdba5091905e
--- /dev/null
+++ b/Documentation/kunit/start.rst
@@ -0,0 +1,180 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============
+Getting Started
+===============
+
+Installing dependencies
+=======================
+KUnit has the same dependencies as the Linux kernel.
As long as you can build
+the kernel, you can run KUnit.
+
+KUnit Wrapper
+=============
+Included with KUnit is a simple Python wrapper that handles building and
+running the kernel, as well as formatting the output so that it is easy to
+read.
+
+The wrapper can be run with:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+Creating a kunitconfig
+======================
+The Python script is a thin wrapper around Kbuild; as such, it needs to be
+configured with a ``kunitconfig`` file. This file essentially contains the
+regular kernel config, with the specific test targets as well.
+
+.. code-block:: bash
+
+	git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
+	cd $PATH_TO_LINUX_REPO
+	ln -s $PATH_TO_KUNITCONFIG_REPO/kunitconfig kunitconfig
+
+You may want to add ``kunitconfig`` to your local ``.gitignore``.
+
+Verifying KUnit Works
+---------------------
+
+To make sure that everything is set up correctly, simply invoke the Python
+wrapper from your kernel repo:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+.. note::
+   You may want to run ``make mrproper`` first.
+
+If everything worked correctly, you should see the following:
+
+.. code-block:: none
+
+	Generating .config ...
+	Building KUnit Kernel ...
+	Starting KUnit Kernel ...
+
+followed by a list of tests that are run. All of them should be passing.
+
+.. note::
+   Because it is building a lot of sources for the first time, the
+   ``Building KUnit Kernel`` step may take a while.
+
+Writing your first test
+==========================
+
+In your kernel repo let's add some code that we can test. Create a file
+``drivers/misc/example.h`` with the contents:
+
+.. code-block:: c
+
+	int misc_example_add(int left, int right);
+
+create a file ``drivers/misc/example.c``:
+
+.. code-block:: c
+
+	#include
+
+	#include "example.h"
+
+	int misc_example_add(int left, int right)
+	{
+		return left + right;
+	}
+
+Now add the following lines to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE
+		bool "My example"
+
+and the following lines to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE) += example.o
+
+Now we are ready to write the test. The test will be in
+``drivers/misc/example-test.c``:
+
+.. code-block:: c
+
+	#include <kunit/test.h>
+	#include "example.h"
+
+	/* Define the test cases. */
+
+	static void misc_example_add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1));
+		KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1));
+		KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX));
+		KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN));
+	}
+
+	static void misc_example_test_failure(struct kunit *test)
+	{
+		KUNIT_FAIL(test, "This test never passes.");
+	}
+
+	static struct kunit_case misc_example_test_cases[] = {
+		KUNIT_CASE(misc_example_add_test_basic),
+		KUNIT_CASE(misc_example_test_failure),
+		{},
+	};
+
+	static struct kunit_module misc_example_test_module = {
+		.name = "misc-example",
+		.test_cases = misc_example_test_cases,
+	};
+	module_test(misc_example_test_module);
+
+Now add the following to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE_TEST
+		bool "Test for my example"
+		depends on MISC_EXAMPLE && KUNIT
+
+and the following to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o
+
+Now add it to your ``kunitconfig``:
+
+.. code-block:: none
+
+	CONFIG_MISC_EXAMPLE=y
+	CONFIG_MISC_EXAMPLE_TEST=y
+
+Now you can run the test:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+You should see the following failure:
+
+.. code-block:: none
+
+	...
+	[16:08:57] [PASSED] misc-example:misc_example_add_test_basic
+	[16:08:57] [FAILED] misc-example:misc_example_test_failure
+	[16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17
+	[16:08:57] 	This test never passes.
+	...
+
+Congrats! You just wrote your first KUnit test!
+
+Next Steps
+=============
+* Check out the :doc:`usage` page for a more
+  in-depth explanation of KUnit.
diff --git a/Documentation/kunit/usage.rst b/Documentation/kunit/usage.rst
new file mode 100644
index 0000000000000..5c83ea9e21bc5
--- /dev/null
+++ b/Documentation/kunit/usage.rst
@@ -0,0 +1,447 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+Using KUnit
+=============
+
+The purpose of this document is to describe what KUnit is, how it works, how it
+is intended to be used, and all the concepts and terminology that are needed to
+understand it. This guide assumes a working knowledge of the Linux kernel and
+some basic knowledge of testing.
+
+For a high level introduction to KUnit, including setting up KUnit for your
+project, see :doc:`start`.
+
+Organization of this document
+=================================
+
+This document is organized into two main sections: Testing and Isolating
+Behavior. The first covers what unit tests are and how to use KUnit to write
+them. The second covers how to use KUnit to isolate code and make it possible
+to unit test code that was otherwise un-unit-testable.
+
+Testing
+==========
+
+What is KUnit?
+------------------
+
+"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
+Framework." KUnit is intended first and foremost for writing unit tests; it is
+general enough that it can be used to write integration tests; however, this is
+a secondary goal. KUnit has no ambition of being the only testing framework for
+the kernel; for example, it does not intend to be an end-to-end testing
+framework.
+
+What is Unit Testing?
+-------------------------
+
+A `unit test `_ is a test that
+tests code at the smallest possible scope, a *unit* of code. In the C
+programming language that's a function.
+
+Unit tests should be written for all the publicly exposed functions in a
+compilation unit; that is, all the functions that are exported in a
+*class* (defined below) and all functions which are **not** static.
+
+Writing Tests
+-------------
+
+Test Cases
+~~~~~~~~~~
+
+The fundamental unit in KUnit is the test case. A test case is a function with
+the signature ``void (*)(struct kunit *test)``. It calls a function to be tested
+and then sets *expectations* for what should happen. For example:
+
+.. code-block:: c
+
+	void example_test_success(struct kunit *test)
+	{
+	}
+
+	void example_test_failure(struct kunit *test)
+	{
+		KUNIT_FAIL(test, "This test never passes.");
+	}
+
+In the above example ``example_test_success`` always passes because it does
+nothing; no expectations are set, so all expectations pass. On the other hand
+``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
+a special expectation that logs a message and causes the test case to fail.
+
+Expectations
+~~~~~~~~~~~~
+An *expectation* is a way to specify that you expect a piece of code to do
+something in a test. An expectation is called like a function. A test is made
+by setting expectations about the behavior of a piece of code under test; when
+one or more of the expectations fail, the test case fails and information about
+the failure is logged. For example:
+
+.. code-block:: c
+
+	void add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+
+In the above example ``add_test_basic`` makes a number of assertions about the
+behavior of a function called ``add``; the first parameter is always of type
+``struct kunit *``, which contains information about the current test context;
+the second parameter, in this case, is what the value is expected to be; the
+last value is what the value actually is. If ``add`` passes all of these
+expectations, the test case, ``add_test_basic``, will pass; if any one of these
+expectations fails, the test case will fail.
+
+It is important to understand that a test case *fails* when any expectation is
+violated; however, the test will continue running, potentially trying other
+expectations until the test case ends or is otherwise terminated. This is as
+opposed to *assertions* which are discussed later.
+
+To learn about more expectations supported by KUnit, see :doc:`api/test`.
+
+.. note::
+   A single test case should be pretty short, pretty easy to understand,
+   focused on a single behavior.
+
+For example, if we wanted to properly test the add function above, we would
+create additional test cases which would each test a different property that an
+add function should have, like this:
+
+.. code-block:: c
+
+	void add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+
+	void add_test_negative(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
+	}
+
+	void add_test_max(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
+		KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
+	}
+
+	void add_test_overflow(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
+	}
+
+Notice how it is immediately obvious what all the properties that we are testing
+for are.
+
+Assertions
+~~~~~~~~~~
+
+KUnit also has the concept of an *assertion*. An assertion is just like an
+expectation, except that the assertion immediately terminates the test case if
+it is not satisfied.
+
+For example:
+
+.. code-block:: c
+
+	static void mock_test_do_expect_default_return(struct kunit *test)
+	{
+		struct mock_test_context *ctx = test->priv;
+		struct mock *mock = ctx->mock;
+		int param0 = 5, param1 = -5;
+		const char *two_param_types[] = {"int", "int"};
+		const void *two_params[] = {&param0, &param1};
+		const void *ret;
+
+		ret = mock->do_expect(mock,
+				      "test_printk", test_printk,
+				      two_param_types, two_params,
+				      ARRAY_SIZE(two_params));
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
+		KUNIT_EXPECT_EQ(test, -4, *((int *) ret));
+	}
+
+In this example, the method under test should return a pointer to a value, so
+if the pointer returned by the method is null or an errno, we don't want to
+bother continuing the test since the following expectation could crash the test
+case. ``KUNIT_ASSERT_NOT_ERR_OR_NULL(...)`` allows us to bail out of the test
+case if the appropriate conditions have not been satisfied to complete the
+test.
+
+Modules / Test Suites
+~~~~~~~~~~~~~~~~~~~~~
+
+Now obviously one unit test isn't very helpful; the power comes from having
+many test cases covering all of your behaviors. Consequently it is common to
+have many *similar* tests; in order to reduce duplication in these closely
+related tests, most unit testing frameworks provide the concept of a *test
+suite*; in KUnit we call it a *test module*. A test module is just a collection
+of test cases for a unit of code, with a set up function that gets invoked
+before every test case and a tear down function that gets invoked after every
+test case completes.
+
+Example:
+
+.. code-block:: c
+
+	static struct kunit_case example_test_cases[] = {
+		KUNIT_CASE(example_test_foo),
+		KUNIT_CASE(example_test_bar),
+		KUNIT_CASE(example_test_baz),
+		{},
+	};
+
+	static struct kunit_module example_test_module = {
+		.name = "example",
+		.init = example_test_init,
+		.exit = example_test_exit,
+		.test_cases = example_test_cases,
+	};
+	module_test(example_test_module);
+
+In the above example, the test suite, ``example_test_module``, would run the
+test cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``;
+each would have ``example_test_init`` called immediately before it and would
+have ``example_test_exit`` called immediately after it.
+``module_test(example_test_module)`` registers the test suite with the KUnit
+test framework.
+
+.. note::
+   A test case will only be run if it is associated with a test suite.
+
+For more information on these types of things, see :doc:`api/test`.
+
+Isolating Behavior
+==================
+
+The most important aspect of unit testing that other forms of testing do not
+provide is the ability to limit the amount of code under test to a single unit.
+In practice, this is only possible by being able to control what code gets run
+when the unit under test calls a function; this is usually accomplished
+through some sort of indirection where a function is exposed as part of an API
+such that the definition of that function can be changed without affecting the
+rest of the code base. In the kernel this primarily comes from two constructs:
+*classes*, structs that contain function pointers provided by the implementer,
+and architecture-specific functions, which have definitions selected at compile
+time.
+
+Classes
+-------
+
+Classes are not a construct that is built into the C programming language;
+however, they are an easily derived concept.
Accordingly, pretty much every project
+that does not use a standardized object oriented library (like GNOME's GObject)
+has its own slightly different way of doing object oriented programming; the
+Linux kernel is no exception.
+
+The central concept in kernel object oriented programming is the class. In the
+kernel, a *class* is a struct that contains function pointers. This creates a
+contract between *implementers* and *users* since it forces them to use the
+same function signature without having to call the function directly. In order
+for it to truly be a class, the function pointers must specify that a pointer
+to the class, known as a *class handle*, be one of the parameters; this makes
+it possible for the member functions (also known as *methods*) to have access
+to member variables (more commonly known as *fields*), allowing the same
+implementation to have multiple *instances*.
+
+Typically a class can be *overridden* by *child classes* by embedding the
+*parent class* in the child class. Then when a method provided by the child
+class is called, the child implementation knows that the pointer passed to it is
+of a parent contained within the child; because of this, the child can compute
+the pointer to itself, because the pointer to the parent is always a fixed
+offset from the pointer to the child; this offset is the offset of the parent
+contained in the child struct. For example:
+
+.. code-block:: c
+
+	struct shape {
+		int (*area)(struct shape *this);
+	};
+
+	struct rectangle {
+		struct shape parent;
+		int length;
+		int width;
+	};
+
+	int rectangle_area(struct shape *this)
+	{
+		struct rectangle *self = container_of(this, struct rectangle, parent);
+
+		return self->length * self->width;
+	}
+
+	void rectangle_new(struct rectangle *self, int length, int width)
+	{
+		self->parent.area = rectangle_area;
+		self->length = length;
+		self->width = width;
+	}
+
+In this example (as in most kernel code) the operation of computing the pointer
+to the child from the pointer to the parent is done by ``container_of``.
+
+Faking Classes
+~~~~~~~~~~~~~~
+
+In order to unit test a piece of code that calls a method in a class, the
+behavior of the method must be controllable; otherwise, the test ceases to be a
+unit test and becomes an integration test.
+
+A fake just provides an implementation of a piece of code that is different from
+what runs in a production instance, but behaves identically from the standpoint
+of the callers; this is usually done to replace a dependency that is hard to
+deal with, or is slow.
+
+A good example for this might be implementing a fake EEPROM that just stores the
+"contents" in an internal buffer. For example, let's assume we have a class that
+represents an EEPROM:
+
+.. code-block:: c
+
+	struct eeprom {
+		ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
+		ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
+	};
+
+And we want to test some code that buffers writes to the EEPROM:
+
+.. code-block:: c
+
+	struct eeprom_buffer {
+		ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count);
+		int (*flush)(struct eeprom_buffer *this);
+		size_t flush_count; /* Flushes when buffer exceeds flush_count. */
+	};
+
+	struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
+	void destroy_eeprom_buffer(struct eeprom_buffer *eeprom_buffer);
+
+We can easily test this code by *faking out* the underlying EEPROM:
+
+.. code-block:: c
+
+	struct fake_eeprom {
+		struct eeprom parent;
+		char contents[FAKE_EEPROM_CONTENTS_SIZE];
+	};
+
+	ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(buffer, this->contents + offset, count);
+
+		return count;
+	}
+
+	ssize_t fake_eeprom_write(struct eeprom *parent, size_t offset, const char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(this->contents + offset, buffer, count);
+
+		return count;
+	}
+
+	void fake_eeprom_init(struct fake_eeprom *this)
+	{
+		this->parent.read = fake_eeprom_read;
+		this->parent.write = fake_eeprom_write;
+		memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE);
+	}
+
+We can now use it to test ``struct eeprom_buffer``:
+
+.. 
code-block:: c + + struct eeprom_buffer_test { + struct fake_eeprom *fake_eeprom; + struct eeprom_buffer *eeprom_buffer; + }; + + static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test) + { + struct eeprom_buffer_test *ctx = test->priv; + struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer; + struct fake_eeprom *fake_eeprom = ctx->fake_eeprom; + char buffer[] = {0xff}; + + eeprom_buffer->flush_count = SIZE_MAX; + + eeprom_buffer->write(eeprom_buffer, buffer, 1); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0); + + eeprom_buffer->write(eeprom_buffer, buffer, 1); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0); + + eeprom_buffer->flush(eeprom_buffer); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff); + } + + static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test) + { + struct eeprom_buffer_test *ctx = test->priv; + struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer; + struct fake_eeprom *fake_eeprom = ctx->fake_eeprom; + char buffer[] = {0xff}; + + eeprom_buffer->flush_count = 2; + + eeprom_buffer->write(eeprom_buffer, buffer, 1); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0); + + eeprom_buffer->write(eeprom_buffer, buffer, 1); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff); + } + + static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test) + { + struct eeprom_buffer_test *ctx = test->priv; + struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer; + struct fake_eeprom *fake_eeprom = ctx->fake_eeprom; + char buffer[] = {0xff, 0xff}; + + eeprom_buffer->flush_count = 2; + + eeprom_buffer->write(eeprom_buffer, buffer, 1); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0); + + eeprom_buffer->write(eeprom_buffer, buffer, 2); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff); + /* 
Should have only flushed the first two bytes. */
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0);
+	}
+
+	static int eeprom_buffer_test_init(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx;
+
+		ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+
+		ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom);
+
+		ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer);
+
+		test->priv = ctx;
+
+		return 0;
+	}
+
+	static void eeprom_buffer_test_exit(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+
+		destroy_eeprom_buffer(ctx->eeprom_buffer);
+	}
+
-- 
2.21.0.392.gf8f6787159e-goog

From mboxrd@z Thu Jan 1 00:00:00 1970
From: brendanhiggins at google.com (Brendan Higgins)
Date: Thu, 4 Apr 2019 15:06:49 -0700
Subject: [PATCH v1 14/17] Documentation: kunit: add documentation for KUnit
In-Reply-To: <20190404220652.19765-1-brendanhiggins@google.com>
References: <20190404220652.19765-1-brendanhiggins@google.com>
Message-ID: <20190404220652.19765-15-brendanhiggins@google.com>

Add documentation for KUnit, the Linux kernel unit testing framework.
- Add intro and usage guide for KUnit
- Add API reference

Signed-off-by: Felix Guo
Signed-off-by: Brendan Higgins
---
 Documentation/index.rst           |   1 +
 Documentation/kunit/api/index.rst |  16 ++
 Documentation/kunit/api/test.rst  |  15 +
 Documentation/kunit/faq.rst       |  46 +++
 Documentation/kunit/index.rst     |  80 ++++++
 Documentation/kunit/start.rst     | 180 ++++++++++++
 Documentation/kunit/usage.rst     | 447 ++++++++++++++++++++++++++++++
 7 files changed, 785 insertions(+)
 create mode 100644 Documentation/kunit/api/index.rst
 create mode 100644 Documentation/kunit/api/test.rst
 create mode 100644 Documentation/kunit/faq.rst
 create mode 100644 Documentation/kunit/index.rst
 create mode 100644 Documentation/kunit/start.rst
 create mode 100644 Documentation/kunit/usage.rst

diff --git a/Documentation/index.rst b/Documentation/index.rst
index 80a421cb935e7..264cfd613a774 100644
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -65,6 +65,7 @@ merged much easier.
    kernel-hacking/index
    trace/index
    maintainer/index
+   kunit/index
 
 Kernel API documentation
 ------------------------
diff --git a/Documentation/kunit/api/index.rst b/Documentation/kunit/api/index.rst
new file mode 100644
index 0000000000000..c31c530088153
--- /dev/null
+++ b/Documentation/kunit/api/index.rst
@@ -0,0 +1,16 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+API Reference
+=============
+.. toctree::
+
+	test
+
+This section documents the KUnit kernel testing API. It is divided into the
+following sections:
+
+================================= ==============================================
+:doc:`test`                       documents all of the standard testing API
+                                  excluding mocking or mocking-related features.
+================================= ==============================================
diff --git a/Documentation/kunit/api/test.rst b/Documentation/kunit/api/test.rst
new file mode 100644
index 0000000000000..7c926014f047c
--- /dev/null
+++ b/Documentation/kunit/api/test.rst
@@ -0,0 +1,15 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+========
+Test API
+========
+
+This file documents all of the standard testing API excluding mocking or
+mocking-related features.
+
+.. kernel-doc:: include/kunit/test.h
+   :internal:
+
+.. kernel-doc:: include/kunit/kunit-stream.h
+   :internal:
+
diff --git a/Documentation/kunit/faq.rst b/Documentation/kunit/faq.rst
new file mode 100644
index 0000000000000..cb8e4fb2257a0
--- /dev/null
+++ b/Documentation/kunit/faq.rst
@@ -0,0 +1,46 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==========================
+Frequently Asked Questions
+==========================
+
+How is this different from Autotest, kselftest, etc.?
+=====================================================
+KUnit is a unit testing framework. Autotest, kselftest (and some others) are
+not.
+
+A `unit test `_ is supposed to
+test a single unit of code in isolation, hence the name. A unit test should be
+the finest granularity of testing and as such should allow all possible code
+paths to be tested in the code under test; this is only possible if the code
+under test is very small and does not have any external dependencies outside of
+the test's control, like hardware.
+
+There are no testing frameworks currently available for the kernel that do not
+require installing the kernel on a test machine or in a VM, and all require
+tests to be written in userspace and run on the kernel under test; this is true
+for Autotest, kselftest, and some others, disqualifying any of them from being
+considered unit testing frameworks.
+
+What is the difference between a unit test and these other kinds of tests?
+==========================================================================
+Most existing tests for the Linux kernel would be categorized as an integration
+test or an end-to-end test.
+
+- A unit test is supposed to test a single unit of code in isolation, hence the
+  name. 
A unit test should be the finest granularity of testing and as such + should allow all possible code paths to be tested in the code under test; this + is only possible if the code under test is very small and does not have any + external dependencies outside of the test's control like hardware. +- An integration test tests the interaction between a minimal set of components, + usually just two or three. For example, someone might write an integration + test to test the interaction between a driver and a piece of hardware, or to + test the interaction between the userspace libraries the kernel provides and + the kernel itself; however, one of these tests would probably not test the + entire kernel along with hardware interactions and interactions with the + userspace. +- An end-to-end test usually tests the entire system from the perspective of the + code under test. For example, someone might write an end-to-end test for the + kernel by installing a production configuration of the kernel on production + hardware with a production userspace and then trying to exercise some behavior + that depends on interactions between the hardware, the kernel, and userspace. diff --git a/Documentation/kunit/index.rst b/Documentation/kunit/index.rst new file mode 100644 index 0000000000000..c6710211b647f --- /dev/null +++ b/Documentation/kunit/index.rst @@ -0,0 +1,80 @@ +.. SPDX-License-Identifier: GPL-2.0 + +========================================= +KUnit - Unit Testing for the Linux Kernel +========================================= + +.. toctree:: + :maxdepth: 2 + + start + usage + api/index + faq + +What is KUnit? +============== + +KUnit is a lightweight unit testing and mocking framework for the Linux kernel. +These tests are able to be run locally on a developer's workstation without a VM +or special hardware. + +KUnit is heavily inspired by JUnit, Python's unittest.mock, and +Googletest/Googlemock for C++. 
KUnit provides facilities for defining unit test
+cases, grouping related test cases into test suites, providing common
+infrastructure for running tests, and much more.
+
+Get started now: :doc:`start`
+
+Why KUnit?
+==========
+
+A unit test is supposed to test a single unit of code in isolation, hence the
+name. A unit test should be the finest granularity of testing and as such should
+allow all possible code paths to be tested in the code under test; this is only
+possible if the code under test is very small and does not have any external
+dependencies outside of the test's control, like hardware.
+
+Outside of KUnit, there are no testing frameworks currently
+available for the kernel that do not require installing the kernel on a test
+machine or in a VM, and all require tests to be written in userspace and run on
+the kernel; this is true for Autotest and kselftest, disqualifying
+them from being considered unit testing frameworks.
+
+KUnit addresses the problem of being able to run tests without needing a virtual
+machine or actual hardware by using User Mode Linux. User Mode Linux is a Linux
+architecture, like ARM or x86; however, unlike other architectures, it compiles
+to a standalone program that can be run like any other program directly inside
+of a host operating system; to be clear, it does not require any virtualization
+support; it is just a regular program.
+
+KUnit is fast. Excluding build time, from invocation to completion KUnit can run
+several dozen tests in only 10 to 20 seconds; this might not sound like a big
+deal to some people, but having such fast and easy-to-run tests fundamentally
+changes the way you go about testing and even writing code in the first place.
+Linus himself said in his `git talk at Google
+`_:
+
+  "... a lot of people seem to think that performance is about doing the
+  same thing, just doing it faster, and that is not true. That is not what
+  performance is all about. If you can do something really fast, really
+  well, people will start using it differently."
+
+In this context Linus was talking about branching and merging,
+but this point also applies to testing. If your tests are slow, unreliable,
+difficult to write, and require a special setup or special hardware to run,
+then you wait a lot longer to write tests, and you wait a lot longer to run
+tests; this means that tests are likely to break, unlikely to test a lot of
+things, and unlikely to be rerun once they pass. If your tests are really
+fast, you run them all the time, every time you make a change, and every time
+someone sends you some code. Why trust that someone ran all their tests
+correctly on every change when you can just run them yourself in less time than
+it takes to read their test log?
+
+How do I use it?
+================
+
+* :doc:`start` - for new users of KUnit
+* :doc:`usage` - for a more detailed explanation of KUnit features
+* :doc:`api/index` - for the list of KUnit APIs used for testing
+
diff --git a/Documentation/kunit/start.rst b/Documentation/kunit/start.rst
new file mode 100644
index 0000000000000..5cdba5091905e
--- /dev/null
+++ b/Documentation/kunit/start.rst
@@ -0,0 +1,180 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============
+Getting Started
+===============
+
+Installing dependencies
+=======================
+KUnit has the same dependencies as the Linux kernel. As long as you can build
+the kernel, you can run KUnit.
+
+KUnit Wrapper
+=============
+Included with KUnit is a simple Python wrapper that handles building and
+running the kernel, as well as formatting the output so that it is easy to
+read.
+
+The wrapper can be run with:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+Creating a kunitconfig
+======================
+The Python script is a thin wrapper around Kbuild; as such, it needs to be
+configured with a ``kunitconfig`` file. This file essentially contains the
+regular Kernel config, with the specific test targets as well.
+
+.. code-block:: bash
+
+	git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
+	cd $PATH_TO_LINUX_REPO
+	ln -s $PATH_TO_KUNITCONFIG_REPO/kunitconfig kunitconfig
+
+You may want to add kunitconfig to your local ``.gitignore``.
+
+Verifying KUnit Works
+---------------------
+
+To make sure that everything is set up correctly, simply invoke the Python
+wrapper from your kernel repo:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+.. note::
+   You may want to run ``make mrproper`` first.
+
+If everything worked correctly, you should see the following:
+
+.. code-block:: bash
+
+	Generating .config ...
+	Building KUnit Kernel ...
+	Starting KUnit Kernel ...
+
+followed by a list of tests that are run. All of them should be passing.
+
+.. note::
+   Because it is building a lot of sources for the first time, the ``Building
+   KUnit Kernel`` step may take a while.
+
+Writing your first test
+=======================
+
+In your kernel repo, let's add some code that we can test. Create a file
+``drivers/misc/example.h`` with the contents:
+
+.. code-block:: c
+
+	int misc_example_add(int left, int right);
+
+create a file ``drivers/misc/example.c``:
+
+.. code-block:: c
+
+	#include
+
+	#include "example.h"
+
+	int misc_example_add(int left, int right)
+	{
+		return left + right;
+	}
+
+Now add the following lines to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE
+		bool "My example"
+
+and the following lines to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE) += example.o
+
+Now we are ready to write the test. The test will be in
+``drivers/misc/example-test.c``:
+
+.. code-block:: c
+
+	#include <kunit/test.h>
+	#include "example.h"
+
+	/* Define the test cases. 
*/ + + static void misc_example_add_test_basic(struct kunit *test) + { + KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0)); + KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1)); + KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1)); + KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX)); + KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN)); + } + + static void misc_example_test_failure(struct kunit *test) + { + KUNIT_FAIL(test, "This test never passes."); + } + + static struct kunit_case misc_example_test_cases[] = { + KUNIT_CASE(misc_example_add_test_basic), + KUNIT_CASE(misc_example_test_failure), + {}, + }; + + static struct kunit_module misc_example_test_module = { + .name = "misc-example", + .test_cases = misc_example_test_cases, + }; + module_test(misc_example_test_module); + +Now add the following to ``drivers/misc/Kconfig``: + +.. code-block:: kconfig + + config MISC_EXAMPLE_TEST + bool "Test for my example" + depends on MISC_EXAMPLE && KUNIT + +and the following to ``drivers/misc/Makefile``: + +.. code-block:: make + + obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o + +Now add it to your ``kunitconfig``: + +.. code-block:: none + + CONFIG_MISC_EXAMPLE=y + CONFIG_MISC_EXAMPLE_TEST=y + +Now you can run the test: + +.. code-block:: bash + + ./tools/testing/kunit/kunit.py + +You should see the following failure: + +.. code-block:: none + + ... + [16:08:57] [PASSED] misc-example:misc_example_add_test_basic + [16:08:57] [FAILED] misc-example:misc_example_test_failure + [16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17 + [16:08:57] This test never passes. + ... + +Congrats! You just wrote your first KUnit test! + +Next Steps +============= +* Check out the :doc:`usage` page for a more + in-depth explanation of KUnit. diff --git a/Documentation/kunit/usage.rst b/Documentation/kunit/usage.rst new file mode 100644 index 0000000000000..5c83ea9e21bc5 --- /dev/null +++ b/Documentation/kunit/usage.rst @@ -0,0 +1,447 @@ +.. 
SPDX-License-Identifier: GPL-2.0 + +============= +Using KUnit +============= + +The purpose of this document is to describe what KUnit is, how it works, how it +is intended to be used, and all the concepts and terminology that are needed to +understand it. This guide assumes a working knowledge of the Linux kernel and +some basic knowledge of testing. + +For a high level introduction to KUnit, including setting up KUnit for your +project, see :doc:`start`. + +Organization of this document +================================= + +This document is organized into two main sections: Testing and Isolating +Behavior. The first covers what a unit test is and how to use KUnit to write +them. The second covers how to use KUnit to isolate code and make it possible +to unit test code that was otherwise un-unit-testable. + +Testing +========== + +What is KUnit? +------------------ + +"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing +Framework." KUnit is intended first and foremost for writing unit tests; it is +general enough that it can be used to write integration tests; however, this is +a secondary goal. KUnit has no ambition of being the only testing framework for +the kernel; for example, it does not intend to be an end-to-end testing +framework. + +What is Unit Testing? +------------------------- + +A `unit test `_ is a test that +tests code at the smallest possible scope, a *unit* of code. In the C +programming language that's a function. + +Unit tests should be written for all the publicly exposed functions in a +compilation unit; so that is all the functions that are exported in either a +*class* (defined below) or all functions which are **not** static. + +Writing Tests +------------- + +Test Cases +~~~~~~~~~~ + +The fundamental unit in KUnit is the test case. A test case is a function with +the signature ``void (*)(struct kunit *test)``. It calls a function to be tested +and then sets *expectations* for what should happen. For example: + +.. 
code-block:: c + + void example_test_success(struct kunit *test) + { + } + + void example_test_failure(struct kunit *test) + { + KUNIT_FAIL(test, "This test never passes."); + } + +In the above example ``example_test_success`` always passes because it does +nothing; no expectations are set, so all expectations pass. On the other hand +``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is +a special expectation that logs a message and causes the test case to fail. + +Expectations +~~~~~~~~~~~~ +An *expectation* is a way to specify that you expect a piece of code to do +something in a test. An expectation is called like a function. A test is made +by setting expectations about the behavior of a piece of code under test; when +one or more of the expectations fail, the test case fails and information about +the failure is logged. For example: + +.. code-block:: c + + void add_test_basic(struct kunit *test) + { + KUNIT_EXPECT_EQ(test, 1, add(1, 0)); + KUNIT_EXPECT_EQ(test, 2, add(1, 1)); + } + +In the above example ``add_test_basic`` makes a number of assertions about the +behavior of a function called ``add``; the first parameter is always of type +``struct kunit *``, which contains information about the current test context; +the second parameter, in this case, is what the value is expected to be; the +last value is what the value actually is. If ``add`` passes all of these +expectations, the test case, ``add_test_basic`` will pass; if any one of these +expectations fail, the test case will fail. + +It is important to understand that a test case *fails* when any expectation is +violated; however, the test will continue running, potentially trying other +expectations until the test case ends or is otherwise terminated. This is as +opposed to *assertions* which are discussed later. + +To learn about more expectations supported by KUnit, see :doc:`api/test`. + +.. 
note:: + A single test case should be pretty short, pretty easy to understand, + focused on a single behavior. + +For example, if we wanted to properly test the add function above, we would +create additional tests cases which would each test a different property that an +add function should have like this: + +.. code-block:: c + + void add_test_basic(struct kunit *test) + { + KUNIT_EXPECT_EQ(test, 1, add(1, 0)); + KUNIT_EXPECT_EQ(test, 2, add(1, 1)); + } + + void add_test_negative(struct kunit *test) + { + KUNIT_EXPECT_EQ(test, 0, add(-1, 1)); + } + + void add_test_max(struct kunit *test) + { + KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX)); + KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN)); + } + + void add_test_overflow(struct kunit *test) + { + KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1)); + } + +Notice how it is immediately obvious what all the properties that we are testing +for are. + +Assertions +~~~~~~~~~~ + +KUnit also has the concept of an *assertion*. An assertion is just like an +expectation except the assertion immediately terminates the test case if it is +not satisfied. + +For example: + +.. code-block:: c + + static void mock_test_do_expect_default_return(struct kunit *test) + { + struct mock_test_context *ctx = test->priv; + struct mock *mock = ctx->mock; + int param0 = 5, param1 = -5; + const char *two_param_types[] = {"int", "int"}; + const void *two_params[] = {¶m0, ¶m1}; + const void *ret; + + ret = mock->do_expect(mock, + "test_printk", test_printk, + two_param_types, two_params, + ARRAY_SIZE(two_params)); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret); + KUNIT_EXPECT_EQ(test, -4, *((int *) ret)); + } + +In this example, the method under test should return a pointer to a value, so +if the pointer returned by the method is null or an errno, we don't want to +bother continuing the test since the following expectation could crash the test +case. 
`ASSERT_NOT_ERR_OR_NULL(...)` allows us to bail out of the test case if +the appropriate conditions have not been satisfied to complete the test. + +Modules / Test Suites +~~~~~~~~~~~~~~~~~~~~~ + +Now obviously one unit test isn't very helpful; the power comes from having +many test cases covering all of your behaviors. Consequently it is common to +have many *similar* tests; in order to reduce duplication in these closely +related tests most unit testing frameworks provide the concept of a *test +suite*, in KUnit we call it a *test module*; all it is is just a collection of +test cases for a unit of code with a set up function that gets invoked before +every test cases and then a tear down function that gets invoked after every +test case completes. + +Example: + +.. code-block:: c + + static struct kunit_case example_test_cases[] = { + KUNIT_CASE(example_test_foo), + KUNIT_CASE(example_test_bar), + KUNIT_CASE(example_test_baz), + {}, + }; + + static struct kunit_module example_test_module = { + .name = "example", + .init = example_test_init, + .exit = example_test_exit, + .test_cases = example_test_cases, + }; + module_test(example_test_module); + +In the above example the test suite, ``example_test_module``, would run the test +cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``, each +would have ``example_test_init`` called immediately before it and would have +``example_test_exit`` called immediately after it. +``module_test(example_test_module)`` registers the test suite with the KUnit +test framework. + +.. note:: + A test case will only be run if it is associated with a test suite. + +For a more information on these types of things see the :doc:`api/test`. + +Isolating Behavior +================== + +The most important aspect of unit testing that other forms of testing do not +provide is the ability to limit the amount of code under test to a single unit. 
+In practice, this is only possible by being able to control what code gets run
+when the unit under test calls a function, and this is usually accomplished
+through some sort of indirection where a function is exposed as part of an API
+such that the definition of that function can be changed without affecting the
+rest of the code base. In the kernel this primarily comes from two constructs:
+classes, which are structs that contain function pointers provided by the
+implementer, and architecture-specific functions, which have definitions
+selected at compile time.
+
+Classes
+-------
+
+Classes are not a construct that is built into the C programming language;
+however, the concept is easily derived. Accordingly, pretty much every project
+that does not use a standardized object oriented library (like GNOME's GObject)
+has its own slightly different way of doing object oriented programming; the
+Linux kernel is no exception.
+
+The central concept in kernel object oriented programming is the class. In the
+kernel, a *class* is a struct that contains function pointers. This creates a
+contract between *implementers* and *users* since it forces them to use the
+same function signature without having to call the function directly. In order
+for it to truly be a class, the function pointers must specify that a pointer
+to the class, known as a *class handle*, be one of the parameters; this makes
+it possible for the member functions (also known as *methods*) to have access
+to member variables (more commonly known as *fields*), allowing the same
+implementation to have multiple *instances*.
+
+Typically a class can be *overridden* by *child classes* by embedding the
+*parent class* in the child class.
Then when a method provided by the child
+class is called, the child implementation knows that the pointer passed to it
+is of a parent contained within the child; because of this, the child can
+compute the pointer to itself, since the pointer to the parent is always a
+fixed offset from the pointer to the child; this offset is the offset of the
+parent contained in the child struct. For example:
+
+.. code-block:: c
+
+	struct shape {
+		int (*area)(struct shape *this);
+	};
+
+	struct rectangle {
+		struct shape parent;
+		int length;
+		int width;
+	};
+
+	int rectangle_area(struct shape *this)
+	{
+		struct rectangle *self = container_of(this, struct rectangle, parent);
+
+		return self->length * self->width;
+	}
+
+	void rectangle_new(struct rectangle *self, int length, int width)
+	{
+		self->parent.area = rectangle_area;
+		self->length = length;
+		self->width = width;
+	}
+
+In this example (as in most kernel code) the operation of computing the pointer
+to the child from the pointer to the parent is done by ``container_of``.
+
+Faking Classes
+~~~~~~~~~~~~~~
+
+In order to unit test a piece of code that calls a method in a class, the
+behavior of the method must be controllable, otherwise the test ceases to be a
+unit test and becomes an integration test.
+
+A fake just provides an implementation of a piece of code that is different
+from what runs in a production instance, but behaves identically from the
+standpoint of the callers; this is usually done to replace a dependency that is
+hard to deal with, or is slow.
+
+A good example for this might be implementing a fake EEPROM that just stores
+the "contents" in an internal buffer. For example, let's assume we have a class
+that represents an EEPROM:
+
+.. code-block:: c
+
+	struct eeprom {
+		ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
+		ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
+	};
+
+And we want to test some code that buffers writes to the EEPROM:
+
+.. code-block:: c
+
+	struct eeprom_buffer {
+		ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count);
+		int (*flush)(struct eeprom_buffer *this);
+		size_t flush_count; /* Flushes when buffer exceeds flush_count. */
+	};
+
+	struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
+	void destroy_eeprom_buffer(struct eeprom_buffer *eeprom_buffer);
+
+We can easily test this code by *faking out* the underlying EEPROM:
+
+.. code-block:: c
+
+	struct fake_eeprom {
+		struct eeprom parent;
+		char contents[FAKE_EEPROM_CONTENTS_SIZE];
+	};
+
+	ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(buffer, this->contents + offset, count);
+
+		return count;
+	}
+
+	ssize_t fake_eeprom_write(struct eeprom *parent, size_t offset, const char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(this->contents + offset, buffer, count);
+
+		return count;
+	}
+
+	void fake_eeprom_init(struct fake_eeprom *this)
+	{
+		this->parent.read = fake_eeprom_read;
+		this->parent.write = fake_eeprom_write;
+		memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE);
+	}
+
+We can now use it to test ``struct eeprom_buffer``:
+
+.. 
code-block:: c + + struct eeprom_buffer_test { + struct fake_eeprom *fake_eeprom; + struct eeprom_buffer *eeprom_buffer; + }; + + static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test) + { + struct eeprom_buffer_test *ctx = test->priv; + struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer; + struct fake_eeprom *fake_eeprom = ctx->fake_eeprom; + char buffer[] = {0xff}; + + eeprom_buffer->flush_count = SIZE_MAX; + + eeprom_buffer->write(eeprom_buffer, buffer, 1); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0); + + eeprom_buffer->write(eeprom_buffer, buffer, 1); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0); + + eeprom_buffer->flush(eeprom_buffer); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff); + } + + static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test) + { + struct eeprom_buffer_test *ctx = test->priv; + struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer; + struct fake_eeprom *fake_eeprom = ctx->fake_eeprom; + char buffer[] = {0xff}; + + eeprom_buffer->flush_count = 2; + + eeprom_buffer->write(eeprom_buffer, buffer, 1); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0); + + eeprom_buffer->write(eeprom_buffer, buffer, 1); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff); + } + + static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test) + { + struct eeprom_buffer_test *ctx = test->priv; + struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer; + struct fake_eeprom *fake_eeprom = ctx->fake_eeprom; + char buffer[] = {0xff, 0xff}; + + eeprom_buffer->flush_count = 2; + + eeprom_buffer->write(eeprom_buffer, buffer, 1); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0); + + eeprom_buffer->write(eeprom_buffer, buffer, 2); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff); + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff); + /* 
Should have only flushed the first two bytes. */
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0);
+	}
+
+	static int eeprom_buffer_test_init(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx;
+
+		ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+
+		ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom);
+
+		ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer);
+
+		test->priv = ctx;
+
+		return 0;
+	}
+
+	static void eeprom_buffer_test_exit(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+
+		destroy_eeprom_buffer(ctx->eeprom_buffer);
+	}
+
-- 
2.21.0.392.gf8f6787159e-goog

Add documentation for KUnit.
- Add intro and usage guide for KUnit - Add API reference Signed-off-by: Felix Guo Signed-off-by: Brendan Higgins --- Documentation/index.rst | 1 + Documentation/kunit/api/index.rst | 16 ++ Documentation/kunit/api/test.rst | 15 + Documentation/kunit/faq.rst | 46 +++ Documentation/kunit/index.rst | 80 ++++++ Documentation/kunit/start.rst | 180 ++++++++++++ Documentation/kunit/usage.rst | 447 ++++++++++++++++++++++++++++++ 7 files changed, 785 insertions(+) create mode 100644 Documentation/kunit/api/index.rst create mode 100644 Documentation/kunit/api/test.rst create mode 100644 Documentation/kunit/faq.rst create mode 100644 Documentation/kunit/index.rst create mode 100644 Documentation/kunit/start.rst create mode 100644 Documentation/kunit/usage.rst diff --git a/Documentation/index.rst b/Documentation/index.rst index 80a421cb935e7..264cfd613a774 100644 --- a/Documentation/index.rst +++ b/Documentation/index.rst @@ -65,6 +65,7 @@ merged much easier. kernel-hacking/index trace/index maintainer/index + kunit/index Kernel API documentation ------------------------ diff --git a/Documentation/kunit/api/index.rst b/Documentation/kunit/api/index.rst new file mode 100644 index 0000000000000..c31c530088153 --- /dev/null +++ b/Documentation/kunit/api/index.rst @@ -0,0 +1,16 @@ +.. SPDX-License-Identifier: GPL-2.0 + +============= +API Reference +============= +.. toctree:: + + test + +This section documents the KUnit kernel testing API. It is divided into 3 +sections: + +================================= ============================================== +:doc:`test` documents all of the standard testing API + excluding mocking or mocking related features. +================================= ============================================== diff --git a/Documentation/kunit/api/test.rst b/Documentation/kunit/api/test.rst new file mode 100644 index 0000000000000..7c926014f047c --- /dev/null +++ b/Documentation/kunit/api/test.rst @@ -0,0 +1,15 @@ +.. 
SPDX-License-Identifier: GPL-2.0 + +======== +Test API +======== + +This file documents all of the standard testing API excluding mocking or mocking +related features. + +.. kernel-doc:: include/kunit/test.h + :internal: + +.. kernel-doc:: include/kunit/kunit-stream.h + :internal: + diff --git a/Documentation/kunit/faq.rst b/Documentation/kunit/faq.rst new file mode 100644 index 0000000000000..cb8e4fb2257a0 --- /dev/null +++ b/Documentation/kunit/faq.rst @@ -0,0 +1,46 @@ +.. SPDX-License-Identifier: GPL-2.0 + +========================================= +Frequently Asked Questions +========================================= + +How is this different from Autotest, kselftest, etc? +==================================================== +KUnit is a unit testing framework. Autotest, kselftest (and some others) are +not. + +A `unit test `_ is supposed to +test a single unit of code in isolation, hence the name. A unit test should be +the finest granularity of testing and as such should allow all possible code +paths to be tested in the code under test; this is only possible if the code +under test is very small and does not have any external dependencies outside of +the test's control like hardware. + +There are no testing frameworks currently available for the kernel that do not +require installing the kernel on a test machine or in a VM and all require +tests to be written in userspace and run on the kernel under test; this is true +for Autotest, kselftest, and some others, disqualifying any of them from being +considered unit testing frameworks. + +What is the difference between a unit test and these other kinds of tests? +========================================================================== +Most existing tests for the Linux kernel would be categorized as an integration +test, or an end-to-end test. + +- A unit test is supposed to test a single unit of code in isolation, hence the + name. 
A unit test should be the finest granularity of testing and as such + should allow all possible code paths to be tested in the code under test; this + is only possible if the code under test is very small and does not have any + external dependencies outside of the test's control like hardware. +- An integration test tests the interaction between a minimal set of components, + usually just two or three. For example, someone might write an integration + test to test the interaction between a driver and a piece of hardware, or to + test the interaction between the userspace libraries the kernel provides and + the kernel itself; however, one of these tests would probably not test the + entire kernel along with hardware interactions and interactions with the + userspace. +- An end-to-end test usually tests the entire system from the perspective of the + code under test. For example, someone might write an end-to-end test for the + kernel by installing a production configuration of the kernel on production + hardware with a production userspace and then trying to exercise some behavior + that depends on interactions between the hardware, the kernel, and userspace. diff --git a/Documentation/kunit/index.rst b/Documentation/kunit/index.rst new file mode 100644 index 0000000000000..c6710211b647f --- /dev/null +++ b/Documentation/kunit/index.rst @@ -0,0 +1,80 @@ +.. SPDX-License-Identifier: GPL-2.0 + +========================================= +KUnit - Unit Testing for the Linux Kernel +========================================= + +.. toctree:: + :maxdepth: 2 + + start + usage + api/index + faq + +What is KUnit? +============== + +KUnit is a lightweight unit testing and mocking framework for the Linux kernel. +These tests are able to be run locally on a developer's workstation without a VM +or special hardware. + +KUnit is heavily inspired by JUnit, Python's unittest.mock, and +Googletest/Googlemock for C++. 
KUnit provides facilities for defining unit test +cases, grouping related test cases into test suites, providing common +infrastructure for running tests, and much more. + +Get started now: :doc:`start` + +Why KUnit? +========== + +A unit test is supposed to test a single unit of code in isolation, hence the +name. A unit test should be the finest granularity of testing and as such should +allow all possible code paths to be tested in the code under test; this is only +possible if the code under test is very small and does not have any external +dependencies outside of the test's control like hardware. + +Outside of KUnit, there are no testing frameworks currently +available for the kernel that do not require installing the kernel on a test +machine or in a VM and all require tests to be written in userspace running on +the kernel; this is true for Autotest, and kselftest, disqualifying +any of them from being considered unit testing frameworks. + +KUnit addresses the problem of being able to run tests without needing a virtual +machine or actual hardware with User Mode Linux. User Mode Linux is a Linux +architecture, like ARM or x86; however, unlike other architectures it compiles +to a standalone program that can be run like any other program directly inside +of a host operating system; to be clear, it does not require any virtualization +support; it is just a regular program. + +KUnit is fast. Excluding build time, from invocation to completion KUnit can run +several dozen tests in only 10 to 20 seconds; this might not sound like a big +deal to some people, but having such fast and easy to run tests fundamentally +changes the way you go about testing and even writing code in the first place. +Linus himself said in his `git talk at Google +`_: + + "... a lot of people seem to think that performance is about doing the + same thing, just doing it faster, and that is not true. That is not what + performance is all about. 
If you can do something really fast, really + well, people will start using it differently." + +In this context Linus was talking about branching and merging, +but this point also applies to testing. If your tests are slow, unreliable, are +difficult to write, and require a special setup or special hardware to run, +then you wait a lot longer to write tests, and you wait a lot longer to run +tests; this means that tests are likely to break, unlikely to test a lot of +things, and are unlikely to be rerun once they pass. If your tests are really +fast, you run them all the time, every time you make a change, and every time +someone sends you some code. Why trust that someone ran all their tests +correctly on every change when you can just run them yourself in less time than +it takes to read his / her test log? + +How do I use it? +=================== + +* :doc:`start` - for new users of KUnit +* :doc:`usage` - for a more detailed explanation of KUnit features +* :doc:`api/index` - for the list of KUnit APIs used for testing + diff --git a/Documentation/kunit/start.rst b/Documentation/kunit/start.rst new file mode 100644 index 0000000000000..5cdba5091905e --- /dev/null +++ b/Documentation/kunit/start.rst @@ -0,0 +1,180 @@ +.. SPDX-License-Identifier: GPL-2.0 + +=============== +Getting Started +=============== + +Installing dependencies +======================= +KUnit has the same dependencies as the Linux kernel. As long as you can build +the kernel, you can run KUnit. + +KUnit Wrapper +============= +Included with KUnit is a simple Python wrapper that helps format the output to +easily use and read KUnit output. It handles building and running the kernel, as +well as formatting the output. + +The wrapper can be run with: + +.. code-block:: bash + + ./tools/testing/kunit/kunit.py + +Creating a kunitconfig +====================== +The Python script is a thin wrapper around Kbuild as such, it needs to be +configured with a ``kunitconfig`` file. 
This file essentially contains the
+regular kernel config, with the specific test targets as well.
+
+.. code-block:: bash
+
+	git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
+	cd $PATH_TO_LINUX_REPO
+	ln -s $PATH_TO_KUNITCONFIG_REPO/kunitconfig kunitconfig
+
+You may want to add kunitconfig to your local gitignore.
+
+Verifying KUnit Works
+---------------------
+
+To make sure that everything is set up correctly, simply invoke the Python
+wrapper from your kernel repo:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+.. note::
+   You may want to run ``make mrproper`` first.
+
+If everything worked correctly, you should see the following:
+
+.. code-block:: bash
+
+	Generating .config ...
+	Building KUnit Kernel ...
+	Starting KUnit Kernel ...
+
+followed by a list of tests that are run. All of them should be passing.
+
+.. note::
+   Because it is building a lot of sources for the first time, the
+   ``Building KUnit Kernel`` step may take a while.
+
+Writing your first test
+=======================
+
+In your kernel repo let's add some code that we can test. Create a file
+``drivers/misc/example.h`` with the contents:
+
+.. code-block:: c
+
+	int misc_example_add(int left, int right);
+
+create a file ``drivers/misc/example.c``:
+
+.. code-block:: c
+
+	#include <linux/errno.h>
+
+	#include "example.h"
+
+	int misc_example_add(int left, int right)
+	{
+		return left + right;
+	}
+
+Now add the following lines to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE
+		bool "My example"
+
+and the following lines to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE) += example.o
+
+Now we are ready to write the test. The test will be in
+``drivers/misc/example-test.c``:
+
+.. code-block:: c
+
+	#include <kunit/test.h>
+	#include "example.h"
+
+	/* Define the test cases. 
*/ + + static void misc_example_add_test_basic(struct kunit *test) + { + KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0)); + KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1)); + KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1)); + KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX)); + KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN)); + } + + static void misc_example_test_failure(struct kunit *test) + { + KUNIT_FAIL(test, "This test never passes."); + } + + static struct kunit_case misc_example_test_cases[] = { + KUNIT_CASE(misc_example_add_test_basic), + KUNIT_CASE(misc_example_test_failure), + {}, + }; + + static struct kunit_module misc_example_test_module = { + .name = "misc-example", + .test_cases = misc_example_test_cases, + }; + module_test(misc_example_test_module); + +Now add the following to ``drivers/misc/Kconfig``: + +.. code-block:: kconfig + + config MISC_EXAMPLE_TEST + bool "Test for my example" + depends on MISC_EXAMPLE && KUNIT + +and the following to ``drivers/misc/Makefile``: + +.. code-block:: make + + obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o + +Now add it to your ``kunitconfig``: + +.. code-block:: none + + CONFIG_MISC_EXAMPLE=y + CONFIG_MISC_EXAMPLE_TEST=y + +Now you can run the test: + +.. code-block:: bash + + ./tools/testing/kunit/kunit.py + +You should see the following failure: + +.. code-block:: none + + ... + [16:08:57] [PASSED] misc-example:misc_example_add_test_basic + [16:08:57] [FAILED] misc-example:misc_example_test_failure + [16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17 + [16:08:57] This test never passes. + ... + +Congrats! You just wrote your first KUnit test! + +Next Steps +============= +* Check out the :doc:`usage` page for a more + in-depth explanation of KUnit. diff --git a/Documentation/kunit/usage.rst b/Documentation/kunit/usage.rst new file mode 100644 index 0000000000000..5c83ea9e21bc5 --- /dev/null +++ b/Documentation/kunit/usage.rst @@ -0,0 +1,447 @@ +.. 
SPDX-License-Identifier: GPL-2.0 + +============= +Using KUnit +============= + +The purpose of this document is to describe what KUnit is, how it works, how it +is intended to be used, and all the concepts and terminology that are needed to +understand it. This guide assumes a working knowledge of the Linux kernel and +some basic knowledge of testing. + +For a high level introduction to KUnit, including setting up KUnit for your +project, see :doc:`start`. + +Organization of this document +================================= + +This document is organized into two main sections: Testing and Isolating +Behavior. The first covers what a unit test is and how to use KUnit to write +them. The second covers how to use KUnit to isolate code and make it possible +to unit test code that was otherwise un-unit-testable. + +Testing +========== + +What is KUnit? +------------------ + +"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing +Framework." KUnit is intended first and foremost for writing unit tests; it is +general enough that it can be used to write integration tests; however, this is +a secondary goal. KUnit has no ambition of being the only testing framework for +the kernel; for example, it does not intend to be an end-to-end testing +framework. + +What is Unit Testing? +------------------------- + +A `unit test `_ is a test that +tests code at the smallest possible scope, a *unit* of code. In the C +programming language that's a function. + +Unit tests should be written for all the publicly exposed functions in a +compilation unit; so that is all the functions that are exported in either a +*class* (defined below) or all functions which are **not** static. + +Writing Tests +------------- + +Test Cases +~~~~~~~~~~ + +The fundamental unit in KUnit is the test case. A test case is a function with +the signature ``void (*)(struct kunit *test)``. It calls a function to be tested +and then sets *expectations* for what should happen. For example: + +.. 
code-block:: c + + void example_test_success(struct kunit *test) + { + } + + void example_test_failure(struct kunit *test) + { + KUNIT_FAIL(test, "This test never passes."); + } + +In the above example ``example_test_success`` always passes because it does +nothing; no expectations are set, so all expectations pass. On the other hand +``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is +a special expectation that logs a message and causes the test case to fail. + +Expectations +~~~~~~~~~~~~ +An *expectation* is a way to specify that you expect a piece of code to do +something in a test. An expectation is called like a function. A test is made +by setting expectations about the behavior of a piece of code under test; when +one or more of the expectations fail, the test case fails and information about +the failure is logged. For example: + +.. code-block:: c + + void add_test_basic(struct kunit *test) + { + KUNIT_EXPECT_EQ(test, 1, add(1, 0)); + KUNIT_EXPECT_EQ(test, 2, add(1, 1)); + } + +In the above example ``add_test_basic`` makes a number of assertions about the +behavior of a function called ``add``; the first parameter is always of type +``struct kunit *``, which contains information about the current test context; +the second parameter, in this case, is what the value is expected to be; the +last value is what the value actually is. If ``add`` passes all of these +expectations, the test case, ``add_test_basic`` will pass; if any one of these +expectations fail, the test case will fail. + +It is important to understand that a test case *fails* when any expectation is +violated; however, the test will continue running, potentially trying other +expectations until the test case ends or is otherwise terminated. This is as +opposed to *assertions* which are discussed later. + +To learn about more expectations supported by KUnit, see :doc:`api/test`. + +.. 
Should have only flushed the first two bytes. */ + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0); + } + + static int eeprom_buffer_test_init(struct kunit *test) + { + struct eeprom_buffer_test *ctx; + + ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL); + ASSERT_NOT_ERR_OR_NULL(test, ctx); + + ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL); + ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom); + + ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent); + ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer); + + test->priv = ctx; + + return 0; + } + + static void eeprom_buffer_test_exit(struct kunit *test) + { + struct eeprom_buffer_test *ctx = test->priv; + + destroy_eeprom_buffer(ctx->eeprom_buffer); + } + -- 2.21.0.392.gf8f6787159e-goog From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-qt1-x84a.google.com ([2607:f8b0:4864:20::84a]) by bombadil.infradead.org with esmtps (Exim 4.90_1 #2 (Red Hat Linux)) id 1hCAZB-0000BI-9Q for linux-um@lists.infradead.org; Thu, 04 Apr 2019 22:10:40 +0000 Received: by mail-qt1-x84a.google.com with SMTP id g48so3641548qtk.19 for ; Thu, 04 Apr 2019 15:10:36 -0700 (PDT) Date: Thu, 4 Apr 2019 15:06:49 -0700 In-Reply-To: <20190404220652.19765-1-brendanhiggins@google.com> Message-Id: <20190404220652.19765-15-brendanhiggins@google.com> Mime-Version: 1.0 References: <20190404220652.19765-1-brendanhiggins@google.com> Subject: [PATCH v1 14/17] Documentation: kunit: add documentation for KUnit From: Brendan Higgins List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Sender: "linux-um" Errors-To: linux-um-bounces+geert=linux-m68k.org@lists.infradead.org To: corbet@lwn.net, frowand.list@gmail.com, keescook@google.com, kieran.bingham@ideasonboard.com, mcgrof@kernel.org, robh@kernel.org, shuah@kernel.org, yamada.masahiro@socionext.com Cc: pmladek@suse.com, 
Should have only flushed the first two bytes. */ + KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0); + } + + static int eeprom_buffer_test_init(struct kunit *test) + { + struct eeprom_buffer_test *ctx; + + ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL); + ASSERT_NOT_ERR_OR_NULL(test, ctx); + + ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL); + ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom); + + ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent); + ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer); + + test->priv = ctx; + + return 0; + } + + static void eeprom_buffer_test_exit(struct kunit *test) + { + struct eeprom_buffer_test *ctx = test->priv; + + destroy_eeprom_buffer(ctx->eeprom_buffer); + } + -- 2.21.0.392.gf8f6787159e-goog _______________________________________________ linux-um mailing list linux-um@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-um