From: Matthew Wilcox
To: Konstantin Khlebnikov
Cc: Ross Zwisler, linux-kernel@vger.kernel.org
Subject: RE: [PATCH RFC 1/4] lib/radix: add universal radix_tree_fill_range
Date: Mon, 29 Aug 2016 15:21:52 +0000
In-Reply-To: <20160827180927.GA22919@linux.intel.com>
References: <147230727479.9957.1087787722571077339.stgit@zurg> <20160827180927.GA22919@linux.intel.com>

Thanks, Ross.

Konstantin, I think there are problems with the concept behind this series. You have multiple entries in the tree with the same value. That works out fine when the entry is a pointer (e.g. to a struct page), but not so well when it's an exceptional entry (e.g. a swap cache entry or a DAX radix tree entry). If you look at the recent DAX work, you'll see there's a lock bit, and having multiple lock bits is a recipe for disaster.

But I did notice that we have a missing test in the test-suite: one that checks whether replace_slot actually replaces all of the parts of a multiorder entry. See attachment.
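To make the aliasing problem concrete, here is a small standalone sketch. The names RADIX_DAX_ENTRY_LOCK and lock_slot() are only loosely modelled on fs/dax.c, and the bit value is assumed, not the real definition:

        /*
         * Illustration only: per-entry state (a lock bit) kept in the
         * low bits of an exceptional entry breaks down when the "one"
         * entry is stored as many identical copies.
         */
        #include <assert.h>

        #define RADIX_DAX_ENTRY_LOCK    0x4UL   /* assumed lock bit */

        static void *lock_slot(void **slot)
        {
                unsigned long v = (unsigned long)*slot;

                /* kernel code would use radix_tree_replace_slot() */
                *slot = (void *)(v | RADIX_DAX_ENTRY_LOCK);
                return *slot;
        }

        int main(void)
        {
                void *shared = (void *)0xA2;    /* some exceptional entry */
                void *slots[4] = { shared, shared, shared, shared };

                lock_slot(&slots[0]);   /* "lock" the range via slot 0 */

                /* The other copies of the "same" entry remain unlocked: */
                assert(((unsigned long)slots[1] & RADIX_DAX_ENTRY_LOCK) == 0);
                return 0;
        }

With a true multiorder entry there is one canonical slot, hence one lock bit; with N sibling slots that merely hold the same value, each copy carries its own lock bit and they can get out of sync.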
-----Original Message-----
From: Ross Zwisler [mailto:ross.zwisler@linux.intel.com]
Sent: Saturday, August 27, 2016 2:09 PM
To: Matthew Wilcox
Subject: Re: [PATCH RFC 1/4] lib/radix: add universal radix_tree_fill_range

Hey Matthew,

Just wanted to make sure that you saw this series.

- Ross

On Sat, Aug 27, 2016 at 05:14:34PM +0300, Konstantin Khlebnikov wrote:
> This patch adds a function for filling and truncating ranges of slots:
>
> radix_tree_node *radix_tree_fill_range(root, start, end, item, flags)
>
> It fills the slots in the range "start".."end" with "item" and returns a
> pointer to the last filled node. Filling with NULL truncates the range.
>
> This is intended for managing transparent huge pages in the page cache,
> where all entries are aligned, but the function can handle arbitrary
> unaligned ranges. It might be useful for PAT or VMA-like extent trees.
>
> By default, filling a range constructs a shallow tree: entries are
> assigned directly to inner slots where possible. In the worst case any
> range requires only 2 * RADIX_TREE_MAX_PATH nodes. If the length is a
> power of two and the start index is aligned to it, then all slots are
> always in a single node and at most RADIX_TREE_MAX_PATH nodes are
> required.
>
> The function accepts several flags:
>
> RADIX_TREE_FILL_LEAVES - build a deep tree, insert the entry into leaves.
>
> RADIX_TREE_FILL_OVERWRITE - overwrite instead of failing with -EEXIST.
>
> RADIX_TREE_FILL_ATOMIC - play well with concurrent RCU-protected lookups:
> fill new nodes with RADIX_TREE_RETRY before inserting them into the tree.
> On following iterations these slots are filled with @item or sub-nodes.
>
> RADIX_TREE_FILL_CLEAR_TAGS - also clear all tags.
>
> radix_tree_fill_range() returns a pointer to the node which holds the
> last slot in the range, NULL if that is the root slot, or an ERR_PTR in
> case of error.
>
> Thus, radix_tree_fill_range() can handle all operations required for THP:
>
> * Insert
> Fill the range with a pointer to the head page.
>
> radix_tree_fill_range(root, index, index + nr_pages - 1, head_page,
>			RADIX_TREE_FILL_ATOMIC)
>
> * Remove
> Fill the range with NULL or a shadow entry; the returned value will be
> used for linking completely-shadow nodes into the slab shrinker.
>
> radix_tree_fill_range(root, index, index + nr_pages - 1, NULL,
>			RADIX_TREE_FILL_OVERWRITE)
>
> * Merge
> Fill the range with overwrite to replace 0-order pages with a THP.
>
> radix_tree_fill_range(root, index, index + nr_pages - 1, head_page,
>			RADIX_TREE_FILL_OVERWRITE | RADIX_TREE_FILL_ATOMIC)
>
> * Split
> Two passes: first fill the leaves with the head_page entry, then replace
> each slot with a pointer to the individual tail page. This could be done
> in a single pass, but that would make radix_tree_fill_range much more
> complicated.
>
> radix_tree_fill_range(root, index, index + nr_pages - 1, head_page,
>			RADIX_TREE_FILL_LEAVES | RADIX_TREE_FILL_OVERWRITE |
>			RADIX_TREE_FILL_ATOMIC);
> radix_tree_for_each_slot(...)
>	radix_tree_replace_slot(slot, head + iter.index - head->index);
>
> Page lookup and the iterator will return a pointer to the head page for
> any index in the range.
>
> Code inside the iterator loop can detect a huge entry, handle all
> sub-pages and jump to the next index using the new helper
> radix_tree_iter_jump():
>
> slot = radix_tree_iter_jump(&iter, page->index + hpage_nr_pages(page));
>
> This helper has built-in protection against overflows: a jump to index 0
> stops the iterator. This uses existing logic in radix_tree_next_chunk():
> if iter.next_index is zero then iter.index must be zero too.
>
> Tags should be set only for the last index of a THP range: this way the
> iterator will find them regardless of the starting index.
>
> radix_tree_preload_range() pre-allocates nodes for filling a range.
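For reference, a minimal sketch of how the insert path described above might be called, assuming the usual mapping->tree_lock discipline; thp_insert_sketch() and its error handling are illustrative, not part of the patch:

        /*
         * Hypothetical caller of the proposed API: insert a huge page
         * covering nr_pages slots.  radix_tree_preload_range() and
         * radix_tree_fill_range() are the functions introduced by this
         * patch; the locking follows the usual page-cache pattern.
         */
        static int thp_insert_sketch(struct address_space *mapping,
                                     struct page *head_page,
                                     unsigned long index,
                                     unsigned long nr_pages)
        {
                struct radix_tree_node *node;
                int err;

                /* Pre-allocate nodes; leaves preemption disabled on success. */
                err = radix_tree_preload_range(GFP_KERNEL, index,
                                               index + nr_pages - 1, 0);
                if (err)
                        return err;

                spin_lock_irq(&mapping->tree_lock);
                node = radix_tree_fill_range(&mapping->page_tree, index,
                                             index + nr_pages - 1, head_page,
                                             RADIX_TREE_FILL_ATOMIC);
                spin_unlock_irq(&mapping->tree_lock);

                radix_tree_preload_end();       /* re-enables preemption */

                return IS_ERR(node) ? PTR_ERR(node) : 0;
        }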
> Signed-off-by: Konstantin Khlebnikov
> ---
>  include/linux/radix-tree.h |  46 ++++++++
>  lib/radix-tree.c           | 245 ++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 291 insertions(+)
>
> diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
> index 4613bf35c311..af33e8d93ec3 100644
> --- a/include/linux/radix-tree.h
> +++ b/include/linux/radix-tree.h
> @@ -319,6 +319,35 @@ static inline void radix_tree_preload_end(void)
>  	preempt_enable();
>  }
>
> +#define RADIX_TREE_FILL_LEAVES		1 /* build full depth tree */
> +#define RADIX_TREE_FILL_OVERWRITE	2 /* overwrite non-empty slots */
> +#define RADIX_TREE_FILL_CLEAR_TAGS	4 /* clear all tags */
> +#define RADIX_TREE_FILL_ATOMIC		8 /* play well with rcu lookup */
> +
> +struct radix_tree_node *
> +radix_tree_fill_range(struct radix_tree_root *root, unsigned long start,
> +		      unsigned long end, void *item, unsigned int flags);
> +
> +int radix_tree_preload_range(gfp_t gfp_mask, unsigned long start,
> +			     unsigned long end, unsigned int flags);
> +
> +/**
> + * radix_tree_truncate_range - remove everything in range
> + * @root:	radix tree root
> + * @start:	first index
> + * @end:	last index
> + *
> + * This function removes all items and tags within the given range.
> + */
> +static inline void
> +radix_tree_truncate_range(struct radix_tree_root *root,
> +			  unsigned long start, unsigned long end)
> +{
> +	radix_tree_fill_range(root, start, end, NULL,
> +			      RADIX_TREE_FILL_OVERWRITE |
> +			      RADIX_TREE_FILL_CLEAR_TAGS);
> +}
> +
>  /**
>   * struct radix_tree_iter - radix tree iterator state
>   *
> @@ -435,6 +464,23 @@ void **radix_tree_iter_next(struct radix_tree_iter *iter)
>  }
>
>  /**
> + * radix_tree_iter_jump - restart iteration from the given index if it is non-zero
> + * @iter:	iterator state
> + * @index:	next index
> + *
> + * If index is zero the iterator will stop. This protects against an
> + * endless loop when the index overflows after visiting the last entry.
> + */
> +static inline __must_check
> +void **radix_tree_iter_jump(struct radix_tree_iter *iter, unsigned long index)
> +{
> +	iter->index = index - 1;
> +	iter->next_index = index;
> +	iter->tags = 0;
> +	return NULL;
> +}
> +
> +/**
>   * radix_tree_chunk_size - get current chunk size
>   *
>   * @iter:	pointer to radix tree iterator
> diff --git a/lib/radix-tree.c b/lib/radix-tree.c
> index 1b7bf7314141..c46a60065a77 100644
> --- a/lib/radix-tree.c
> +++ b/lib/radix-tree.c
> @@ -36,6 +36,7 @@
>  #include
>  #include
>  #include		/* in_interrupt() */
> +#include
>
>
>  /* Number of nodes in fully populated tree of given height */
> @@ -1014,6 +1015,250 @@ void **radix_tree_next_chunk(struct radix_tree_root *root,
>  EXPORT_SYMBOL(radix_tree_next_chunk);
>
>  /**
> + * radix_tree_preload_range - preload nodes for filling a range.
> + * @gfp_mask:
> + * @start:	first index
> + * @end:	last index
> + * @flags:	RADIX_TREE_FILL_* flags
> + */
> +int radix_tree_preload_range(gfp_t gfp_mask, unsigned long start,
> +			     unsigned long end, unsigned int flags)
> +{
> +	unsigned long length = end - start + 1;
> +	int nr_nodes, shift;
> +
> +	/* Preloading doesn't help anything with this gfp mask, skip it */
> +	if (!gfpflags_allow_blocking(gfp_mask)) {
> +		preempt_disable();
> +		return 0;
> +	}
> +
> +	/*
> +	 * For filling leaves the tree must cover all indexes in the range
> +	 * at all levels, plus RADIX_TREE_MAX_PATH required for growing the
> +	 * tree depth, and only the root node is shared for sure.
> +	 *
> +	 * For an aligned range we need RADIX_TREE_MAX_PATH for growing the
> +	 * depth and RADIX_TREE_MAX_PATH for the path where all the slots
> +	 * will be.
> +	 *
> +	 * For an arbitrary range we again need RADIX_TREE_MAX_PATH for
> +	 * growing the depth and two RADIX_TREE_MAX_PATH chains for
> +	 * constructing the arc of slots from leaf to root and back. Only
> +	 * the root node is shared.
> +	 */
> +	if (flags & RADIX_TREE_FILL_LEAVES) {
> +		if (start > end)
> +			return -EINVAL;
> +		shift = 0;
> +		nr_nodes = RADIX_TREE_MAX_PATH - 1;
> +		do {
> +			shift += RADIX_TREE_MAP_SHIFT;
> +			nr_nodes += (end >> shift) - (start >> shift) + 1;
> +		} while (shift < RADIX_TREE_INDEX_BITS);
> +	} else if (is_power_of_2(length) && IS_ALIGNED(start, length))
> +		nr_nodes = RADIX_TREE_MAX_PATH * 2 - 1;
> +	else
> +		nr_nodes = RADIX_TREE_MAX_PATH * 3 - 2;
> +	return __radix_tree_preload(gfp_mask, nr_nodes);
> +}
> +EXPORT_SYMBOL(radix_tree_preload_range);
> +
> +/**
> + * radix_tree_fill_range - fill a range of slots
> + * @root:	radix tree root
> + * @start:	first index
> + * @end:	last index
> + * @item:	value for filling, NULL for removing
> + * @flags:	RADIX_TREE_FILL_* flags
> + * Returns:	pointer to the last node, NULL, or an ERR_PTR on error
> + *
> + * By default this builds a shallow tree: the entry is assigned to inner
> + * slots if possible. In the worst case a range requires up to
> + * 2 * RADIX_TREE_MAX_PATH nodes plus RADIX_TREE_MAX_PATH for extending
> + * the tree depth.
> + *
> + * If the length is 2^n and the start is aligned to it then all the slots
> + * are in one node.
> + *
> + * This function cannot fill or cut part of a bigger range if that would
> + * require splitting inner slots and inserting new nodes: it fails with
> + * -ERANGE.
> + *
> + * With the flag RADIX_TREE_FILL_LEAVES it builds a deep tree and inserts
> + * @item into leaf slots. This requires many more nodes.
> + *
> + * With the flag RADIX_TREE_FILL_OVERWRITE it removes everything in the
> + * range and cuts the sub-tree if @item is NULL. Without that flag the
> + * function undoes all changes and fails with -EEXIST if it finds any
> + * populated slot.
> + *
> + * With the flag RADIX_TREE_FILL_ATOMIC the function plays well with
> + * RCU-protected lookups: it fills new nodes with RADIX_TREE_RETRY before
> + * inserting them into the tree, so a lookup will see either the old
> + * entry, @item, or the retry entry. On following iterations these slots
> + * are filled with @item or sub-nodes.
> + *
> + * With the flag RADIX_TREE_FILL_CLEAR_TAGS it also clears all tags.
> + *
> + * The function returns a pointer to the node which holds the last slot
> + * in the range, NULL if that was the root slot, or an ERR_PTR:
> + * -ENOMEM, -EEXIST or -ERANGE.
> + */
> +struct radix_tree_node *
> +radix_tree_fill_range(struct radix_tree_root *root, unsigned long start,
> +		      unsigned long end, void *item, unsigned int flags)
> +{
> +	unsigned long index = start, maxindex;
> +	struct radix_tree_node *node, *child;
> +	int error, root_shift, shift, tag, offset;
> +	void *entry;
> +
> +	/* Sanity check */
> +	if (start > end)
> +		return ERR_PTR(-EINVAL);
> +
> +	/* Make sure the tree is high enough. */
> +	root_shift = radix_tree_load_root(root, &node, &maxindex);
> +	if (end > maxindex) {
> +		error = radix_tree_extend(root, end, root_shift);
> +		if (error < 0)
> +			return ERR_PTR(error);
> +		root_shift = error;
> +	}
> +
> +	/* Special case: single slot tree */
> +	if (!root_shift) {
> +		if (node && !(flags & RADIX_TREE_FILL_OVERWRITE))
> +			return ERR_PTR(-EEXIST);
> +		if (flags & RADIX_TREE_FILL_CLEAR_TAGS)
> +			root_tag_clear_all(root);
> +		rcu_assign_pointer(root->rnode, item);
> +		return NULL;
> +	}
> +
> +next_node:
> +	node = NULL;
> +	offset = 0;
> +	entry = rcu_dereference_raw(root->rnode);
> +	shift = root_shift;
> +
> +	/* Descend to the index. Do at least one step. */
> +	do {
> +		child = entry_to_node(entry);
> +		shift -= RADIX_TREE_MAP_SHIFT;
> +		if (!child || !radix_tree_is_internal_node(entry)) {
> +			/* Entry wider than range */
> +			if (child) {
> +				error = -ERANGE;
> +				goto undo;
> +			}
> +			/* Hole wider than the truncated range */
> +			if (!item)
> +				goto skip_node;
> +			child = radix_tree_node_alloc(root);
> +			if (!child) {
> +				error = -ENOMEM;
> +				goto undo;
> +			}
> +			child->shift = shift;
> +			child->offset = offset;
> +			child->parent = node;
> +			/* Populate the range with retry entries. */
> +			if (flags & RADIX_TREE_FILL_ATOMIC) {
> +				int idx = (index >> shift) &
> +						RADIX_TREE_MAP_MASK;
> +				int last = RADIX_TREE_MAP_SIZE;
> +
> +				if (end < (index | shift_maxindex(shift)))
> +					last = (end >> shift) &
> +						RADIX_TREE_MAP_MASK;
> +				for (; idx <= last; idx++)
> +					child->slots[idx] = RADIX_TREE_RETRY;
> +			}
> +			entry = node_to_entry(child);
> +			if (node) {
> +				rcu_assign_pointer(node->slots[offset], entry);
> +				node->count++;
> +			} else
> +				rcu_assign_pointer(root->rnode, entry);
> +		}
> +		node = child;
> +		offset = (index >> shift) & RADIX_TREE_MAP_MASK;
> +		entry = rcu_dereference_raw(node->slots[offset]);
> +
> +	/* Stop if we find a leaf or a slot inside the range */
> +	} while ((flags & RADIX_TREE_FILL_LEAVES) ? shift :
> +			((index & ((1ul << shift) - 1)) ||
> +			 (index | ((1ul << shift) - 1)) > end));
> +
> +next_slot:
> +	/* NULL or retry entry */
> +	if (entry <= RADIX_TREE_RETRY)
> +		goto fill;
> +
> +	if (!(flags & RADIX_TREE_FILL_OVERWRITE)) {
> +		error = -EEXIST;
> +		goto undo;
> +	}
> +
> +	/* Cut the sub-tree */
> +	if (unlikely(radix_tree_is_internal_node(entry))) {
> +		rcu_assign_pointer(node->slots[offset], item);
> +		child = entry_to_node(entry);
> +		offset = 0;
> +		do {
> +			entry = rcu_dereference_raw(child->slots[offset]);
> +			if (entry)
> +				child->count--;
> +			if (radix_tree_is_internal_node(entry)) {
> +				child = entry_to_node(entry);
> +				offset = 0;
> +			} else if (++offset == RADIX_TREE_MAP_SIZE) {
> +				offset = child->offset;
> +				entry = child->parent;
> +				WARN_ON_ONCE(child->count);
> +				radix_tree_node_free(child);
> +				child = entry;
> +			}
> +		} while (child != node);
> +	}
> +
> +	if (flags & RADIX_TREE_FILL_CLEAR_TAGS) {
> +		for (tag = 0; tag < RADIX_TREE_MAX_TAGS; tag++)
> +			node_tag_clear(root, node, tag, offset);
> +	}
> +
> +	/* Skip the rest if we cleared the last slot in the node */
> +	if (!--node->count && !item && __radix_tree_delete_node(root, node))
> +		goto skip_node;
> +
> +fill:
> +	rcu_assign_pointer(node->slots[offset], item);
> +	if (item)
> +		node->count++;
> +
> +	index += 1ul << shift;
> +	if (index - 1 == end)
> +		return node;
> +
> +	/* Next slot in this node and still in range */
> +	if (index + (1ul << shift) - 1 <= end &&
> +	    ++offset < RADIX_TREE_MAP_SIZE) {
> +		entry = rcu_dereference_raw(node->slots[offset]);
> +		goto next_slot;
> +	}
> +
> +	goto next_node;
> +
> +skip_node:
> +	index |= shift_maxindex(shift);
> +	if (index++ >= end)
> +		return node;
> +	goto next_node;
> +
> +undo:
> +	if (index > start)
> +		radix_tree_fill_range(root, start, index - 1, NULL,
> +				      RADIX_TREE_FILL_OVERWRITE);
> +	return ERR_PTR(error);
> +}
> +EXPORT_SYMBOL(radix_tree_fill_range);
> +
> +/**
>   * radix_tree_range_tag_if_tagged - for each item in given range set given
>   *					tag if item has another tag set
>   * @root:	radix tree root
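To put numbers on the preload comment above: assuming RADIX_TREE_MAP_SHIFT == 6 (64 slots per node) and 64-bit indexes, RADIX_TREE_MAX_PATH == DIV_ROUND_UP(64, 6) == 11, so the three cases work out to:

        aligned power-of-two range:  nr_nodes = 11 * 2 - 1 = 21
        arbitrary unaligned range:   nr_nodes = 11 * 3 - 2 = 31
        FILL_LEAVES over 0..511:     nr_nodes = (11 - 1) + 8 + 10 = 28
                                     (8 level-1 nodes for 512 leaf slots,
                                      one node per upper level, plus the
                                      depth-growth reserve)

In other words, the leaf mode is the only one whose preload cost grows with the length of the range.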
[Attachment: rtts.replace-slot.diff]

diff --git a/tools/testing/radix-tree/linux/types.h b/tools/testing/radix-tree/linux/types.h
index faa0b6f..8491d89 100644
--- a/tools/testing/radix-tree/linux/types.h
+++ b/tools/testing/radix-tree/linux/types.h
@@ -6,8 +6,6 @@
 #define __rcu
 #define __read_mostly
 
-#define BITS_PER_LONG (sizeof(long) * 8)
-
 static inline void INIT_LIST_HEAD(struct list_head *list)
 {
 	list->next = list;
diff --git a/tools/testing/radix-tree/multiorder.c b/tools/testing/radix-tree/multiorder.c
index f14252d..2dea001 100644
--- a/tools/testing/radix-tree/multiorder.c
+++ b/tools/testing/radix-tree/multiorder.c
@@ -123,6 +123,9 @@ static void multiorder_check(unsigned long index, int order)
 	unsigned long i;
 	unsigned long min = index & ~((1UL << order) - 1);
 	unsigned long max = min + (1UL << order);
+	void **slot;
+	static void *entry = (void *)(0xA0 | RADIX_TREE_EXCEPTIONAL_ENTRY);
+	struct item *item;
 	RADIX_TREE(tree, GFP_KERNEL);
 
 	printf("Multiorder index %ld, order %d\n", index, order);
@@ -130,7 +133,7 @@ static void multiorder_check(unsigned long index, int order)
 	assert(item_insert_order(&tree, index, order) == 0);
 
 	for (i = min; i < max; i++) {
-		struct item *item = item_lookup(&tree, i);
+		item = item_lookup(&tree, i);
 		assert(item != 0);
 		assert(item->index == index);
 	}
@@ -138,11 +141,16 @@ static void multiorder_check(unsigned long index, int order)
 		item_check_absent(&tree, i);
 	for (i = max; i < 2*max; i++)
 		item_check_absent(&tree, i);
-	for (i = min; i < max; i++) {
-		static void *entry = (void *)
-					(0xA0 | RADIX_TREE_EXCEPTIONAL_ENTRY);
+	for (i = min; i < max; i++)
 		assert(radix_tree_insert(&tree, i, entry) == -EEXIST);
+
+	slot = radix_tree_lookup_slot(&tree, index);
+	radix_tree_replace_slot(slot, entry);
+	for (i = min; i < max; i++) {
+		struct item *item2 = item_lookup(&tree, i);
+		assert(item2 == entry);
 	}
+	radix_tree_replace_slot(slot, item);
 
 	assert(item_delete(&tree, index) != 0);
 
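The test applies to the userspace harness under tools/testing/radix-tree/; assuming the usual workflow there, running make in that directory builds the test binary, and multiorder_check() is exercised as part of the multiorder tests.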