From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755525AbcETOYL (ORCPT ); Fri, 20 May 2016 10:24:11 -0400
Received: from LGEAMRELO13.lge.com ([156.147.23.53]:47046 "EHLO lgeamrelo13.lge.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1755167AbcETOYH
	(ORCPT ); Fri, 20 May 2016 10:24:07 -0400
X-Original-SENDERIP: 156.147.1.126
X-Original-MAILFROM: minchan@kernel.org
X-Original-SENDERIP: 10.177.223.161
X-Original-MAILFROM: minchan@kernel.org
From: Minchan Kim <minchan@kernel.org>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Minchan Kim,
	Rik van Riel, Vlastimil Babka, Joonsoo Kim, Mel Gorman,
	Hugh Dickins, Rafael Aquini, virtualization@lists.linux-foundation.org,
	Jonathan Corbet, John Einar Reitan, dri-devel@lists.freedesktop.org,
	Sergey Senozhatsky, Gioh Kim
Subject: [PATCH v6 02/12] mm: migrate: support non-lru movable page migration
Date: Fri, 20 May 2016 23:23:35 +0900
Message-Id: <1463754225-31311-3-git-send-email-minchan@kernel.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1463754225-31311-1-git-send-email-minchan@kernel.org>
References: <1463754225-31311-1-git-send-email-minchan@kernel.org>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

We have allowed migration for only LRU pages until now, and that was
enough to make high-order pages. But recently, embedded systems (e.g.,
webOS, Android) use lots of non-movable pages (e.g., zram, GPU memory),
so we have seen several reports about trouble with small high-order
allocations. To fix the problem, there were several efforts (e.g.,
enhancing the compaction algorithm, SLUB fallback to 0-order pages,
reserved memory, vmalloc and so on), but if there are lots of
non-movable pages in the system, those solutions are futile in the
long run.

So, this patch supports a facility to turn non-movable pages into
movable ones. For the feature, this patch introduces migration-related
functions in address_space_operations as well as some page flags.

If a driver wants to make its own pages movable, it should define
three functions, which are function pointers of struct
address_space_operations:

1. bool (*isolate_page) (struct page *page, isolate_mode_t mode);

What VM expects of the driver's isolate_page function is to return
*true* if the driver isolates the page successfully. On returning
true, VM marks the page as PG_isolated so that concurrent isolation
on several CPUs skips the page. If a driver cannot isolate the page,
it should return *false*.

Once a page is successfully isolated, VM uses the page.lru fields, so
the driver shouldn't expect the values in those fields to be preserved.

2. int (*migratepage) (struct address_space *mapping,
		struct page *newpage, struct page *oldpage, enum migrate_mode);

After isolation, VM calls the driver's migratepage with the isolated
page. The role of migratepage is to move the contents of the old page
to the new page and to set up the fields of struct page of newpage.
Keep in mind that you should clear PG_movable of the old page via
__ClearPageMovable under page_lock if you migrated the old page
successfully, and return 0. If the driver cannot migrate the page at
the moment, it can return -EAGAIN. On -EAGAIN, VM will retry page
migration in a short time because VM interprets -EAGAIN as "temporary
migration failure". On returning any error other than -EAGAIN, VM will
give up on migrating the page without retrying this time.

The driver shouldn't touch the page.lru field while VM is using it in
these functions. A sketch of how a driver might implement these first
two callbacks follows.
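For illustration only, here is a minimal sketch of a hypothetical
driver's first two callbacks. It is not part of this patch: dummy_pool
and the dummy_* helpers are assumptions, and a real driver would add
its own bookkeeping and error handling.

	/*
	 * Hypothetical driver callbacks; a sketch only.
	 * dummy_pool, dummy_page_pool(), dummy_page_busy() and
	 * dummy_replace_page() are assumed helpers, not in this patch.
	 */
	static bool dummy_isolate_page(struct page *page, isolate_mode_t mode)
	{
		struct dummy_pool *pool = dummy_page_pool(page);
		bool ret = false;

		/* Keep the driver from using the page while it migrates. */
		spin_lock(&pool->lock);
		if (!dummy_page_busy(page)) {
			/* Unhook from the driver's list; VM owns page.lru now. */
			list_del_init(&page->lru);
			ret = true;	/* VM will mark the page PG_isolated */
		}
		spin_unlock(&pool->lock);
		return ret;
	}

	static int dummy_migratepage(struct address_space *mapping,
			struct page *newpage, struct page *oldpage,
			enum migrate_mode mode)
	{
		if (dummy_page_busy(oldpage))
			return -EAGAIN;	/* temporary failure; VM retries */

		/* Move the contents and fix up the driver's references. */
		copy_highpage(newpage, oldpage);
		dummy_replace_page(oldpage, newpage);

		/* Old page stops being movable; both pages are locked here. */
		__ClearPageMovable(oldpage);
		return MIGRATEPAGE_SUCCESS;	/* == 0 */
	}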
3. void (*putback_page)(struct page *);

If migration fails on an isolated page, VM should return the isolated
page to the driver, so VM calls the driver's putback_page with the
page whose migration failed. In this function, the driver should put
the isolated page back into its own data structure (see the sketch
after this description).

4. non-lru movable page flags

There are two page flags for supporting non-lru movable pages.

* PG_movable

The driver should use the function below to make a page movable,
under page_lock:

	void __SetPageMovable(struct page *page, struct address_space *mapping)

It needs the address_space argument to register the migration family
of functions which will be called by VM. Strictly speaking, PG_movable
is not a real flag of struct page. Rather, VM reuses the lower bits of
page->mapping to represent it:

	#define PAGE_MAPPING_MOVABLE 0x2
	page->mapping = page->mapping | PAGE_MAPPING_MOVABLE;

so the driver shouldn't access page->mapping directly. Instead, the
driver should use page_mapping, which masks off the low two bits of
page->mapping, so it can get the right struct address_space.

For testing a non-lru movable page, VM provides the __PageMovable
function. However, it doesn't guarantee identifying a non-lru movable
page because the page->mapping field is unified with other variables
in struct page. As well, if the driver releases the page after
isolation by VM, page->mapping doesn't have a stable value although it
has PAGE_MAPPING_MOVABLE set (look at __ClearPageMovable). But
__PageMovable is a cheap way to tell whether a page is LRU or non-lru
movable once the page has been isolated, because LRU pages can never
have PAGE_MAPPING_MOVABLE in page->mapping. It is also good for just
peeking at non-lru movable pages before the more expensive check with
lock_page in pfn scanning to select a victim.

For identifying a non-lru movable page with a guarantee, VM provides
the PageMovable function. Unlike __PageMovable, PageMovable validates
page->mapping and mapping->a_ops->isolate_page under lock_page. The
lock_page prevents page->mapping from suddenly being destroyed.

A driver using __SetPageMovable should clear the flag via
__ClearPageMovable under page_lock before releasing the page.

* PG_isolated

To prevent concurrent isolation among several CPUs, VM marks an
isolated page as PG_isolated under lock_page. So if a CPU encounters a
PG_isolated non-lru movable page, it can skip it. The driver doesn't
need to manipulate the flag because VM will set/clear it automatically.
Keep in mind that if the driver sees a PG_isolated page, it means the
page has been isolated by VM, so it shouldn't touch the page.lru field.
PG_isolated is an alias of the PG_reclaim flag, so the driver shouldn't
use that flag for its own purpose.
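Again as a hypothetical sketch, not part of this patch (dummy_aops,
the pool's backing inode, whose i_mapping->a_ops is assumed to have
been set to &dummy_aops, and the helpers are assumptions): a driver
would wire the callbacks into address_space_operations and flip
PG_movable across the page's lifetime like this:

	static void dummy_putback_page(struct page *page)
	{
		struct dummy_pool *pool = dummy_page_pool(page);

		/* Migration failed: take the isolated page back. */
		spin_lock(&pool->lock);
		list_add(&page->lru, &pool->pages);
		spin_unlock(&pool->lock);
	}

	static const struct address_space_operations dummy_aops = {
		.isolate_page	= dummy_isolate_page,
		.migratepage	= dummy_migratepage,
		.putback_page	= dummy_putback_page,
	};

	static struct page *dummy_alloc_page(struct dummy_pool *pool)
	{
		struct page *page = alloc_page(GFP_KERNEL);

		if (!page)
			return NULL;
		lock_page(page);	/* PG_movable is set under page_lock */
		__SetPageMovable(page, pool->inode->i_mapping);
		unlock_page(page);
		return page;
	}

	static void dummy_free_page(struct page *page)
	{
		lock_page(page);	/* ... and cleared under page_lock */
		if (PageMovable(page))
			__ClearPageMovable(page);
		unlock_page(page);
		__free_page(page);
	}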
Cc: Rik van Riel
Cc: Vlastimil Babka
Cc: Joonsoo Kim
Cc: Mel Gorman
Cc: Hugh Dickins
Cc: Rafael Aquini
Cc: virtualization@lists.linux-foundation.org
Cc: Jonathan Corbet
Cc: John Einar Reitan
Cc: dri-devel@lists.freedesktop.org
Cc: Sergey Senozhatsky
Signed-off-by: Gioh Kim
Signed-off-by: Minchan Kim
---
 Documentation/filesystems/Locking |   4 +
 Documentation/filesystems/vfs.txt |  11 +++
 Documentation/vm/page_migration   | 107 ++++++++++++++++++++-
 include/linux/compaction.h        |  17 ++++
 include/linux/fs.h                |   2 +
 include/linux/ksm.h               |   3 +-
 include/linux/migrate.h           |   2 +
 include/linux/mm.h                |   1 +
 include/linux/page-flags.h        |  33 +++++--
 mm/compaction.c                   |  82 ++++++++++++----
 mm/ksm.c                          |   4 +-
 mm/migrate.c                      | 191 ++++++++++++++++++++++++++++++++++----
 mm/page_alloc.c                   |   2 +-
 mm/util.c                         |   6 +-
 14 files changed, 411 insertions(+), 54 deletions(-)

diff --git a/Documentation/filesystems/Locking b/Documentation/filesystems/Locking
index 75eea7ce3d7c..dda6e3f8e203 100644
--- a/Documentation/filesystems/Locking
+++ b/Documentation/filesystems/Locking
@@ -195,7 +195,9 @@ unlocks and drops the reference.
 	int (*releasepage) (struct page *, int);
 	void (*freepage)(struct page *);
 	int (*direct_IO)(struct kiocb *, struct iov_iter *iter);
+	bool (*isolate_page) (struct page *, isolate_mode_t);
 	int (*migratepage)(struct address_space *, struct page *, struct page *);
+	void (*putback_page) (struct page *);
 	int (*launder_page)(struct page *);
 	int (*is_partially_uptodate)(struct page *, unsigned long, unsigned long);
 	int (*error_remove_page)(struct address_space *, struct page *);
@@ -219,7 +221,9 @@ invalidatepage:		yes
 releasepage:		yes
 freepage:		yes
 direct_IO:
+isolate_page:		yes
 migratepage:		yes (both)
+putback_page:		yes
 launder_page:		yes
 is_partially_uptodate:	yes
 error_remove_page:	yes
diff --git a/Documentation/filesystems/vfs.txt b/Documentation/filesystems/vfs.txt
index c61a223ef3ff..900360cbcdae 100644
--- a/Documentation/filesystems/vfs.txt
+++ b/Documentation/filesystems/vfs.txt
@@ -592,9 +592,14 @@ struct address_space_operations {
 	int (*releasepage) (struct page *, int);
 	void (*freepage)(struct page *);
 	ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter);
+	/* isolate a page for migration */
+	bool (*isolate_page) (struct page *, isolate_mode_t);
 	/* migrate the contents of a page to the specified target */
 	int (*migratepage) (struct page *, struct page *);
+	/* put migration-failed page back to right list */
+	void (*putback_page) (struct page *);
 	int (*launder_page) (struct page *);
+
 	int (*is_partially_uptodate) (struct page *, unsigned long,
 					unsigned long);
 	void (*is_dirty_writeback) (struct page *, bool *, bool *);
@@ -747,6 +752,10 @@ struct address_space_operations {
 	and transfer data directly between the storage and the
 	application's address space.

+  isolate_page: Called by the VM when isolating a movable non-lru page.
+	If page is successfully isolated, VM marks the page as PG_isolated
+	via __SetPageIsolated.
+
   migrate_page:  This is used to compact the physical memory usage.
 	If the VM wants to relocate a page (maybe off a memory card
 	that is signalling imminent failure) it will pass a new page
@@ -754,6 +763,8 @@ struct address_space_operations {
 	transfer any private data across and update any references
 	that it has to the page.

+  putback_page: Called by the VM when isolated page's migration fails.
+
   launder_page: Called before freeing a page - it writes back the dirty
 	page. To prevent redirtying the page, it is kept locked during
 	the whole operation.
diff --git a/Documentation/vm/page_migration b/Documentation/vm/page_migration
index fea5c0864170..80e98af46e95 100644
--- a/Documentation/vm/page_migration
+++ b/Documentation/vm/page_migration
@@ -142,5 +142,110 @@ is increased so that the page cannot be freed while page migration occurs.
 20. The new page is moved to the LRU and can be scanned by the swapper
     etc again.

-Christoph Lameter, May 8, 2006.
+C. Non-LRU page migration
+-------------------------
+
+Although migration originally aimed at reducing the latency of memory
+accesses for NUMA, compaction, which wants to create high-order pages, is
+also a main customer.
+
+The current problem with the implementation is that it is designed to
+migrate only *LRU* pages. However, there are potential non-lru pages which
+can be migrated in drivers, for example, zsmalloc and virtio-balloon pages.
+
+For virtio-balloon pages, some parts of the migration code path have been
+hooked up, and virtio-balloon specific functions were added to intercept
+the migration logic. It's too specific to one driver, so other drivers who
+want to make their pages movable would have to add their own specific
+hooks in the migration path.
+
+To overcome the problem, VM supports non-LRU page migration, which provides
+generic functions for non-LRU movable pages without driver-specific hooks
+in the migration path.
+
+If a driver wants to make its own pages movable, it should define three
+functions, which are function pointers of struct address_space_operations.
+
+1. bool (*isolate_page) (struct page *page, isolate_mode_t mode);
+
+What VM expects of the driver's isolate_page function is to return *true*
+if the driver isolates the page successfully. On returning true, VM marks
+the page as PG_isolated so that concurrent isolation on several CPUs skips
+the page. If a driver cannot isolate the page, it should return *false*.
+
+Once a page is successfully isolated, VM uses the page.lru fields, so the
+driver shouldn't expect the values in those fields to be preserved.
+
+2. int (*migratepage) (struct address_space *mapping,
+		struct page *newpage, struct page *oldpage, enum migrate_mode);
+
+After isolation, VM calls the driver's migratepage with the isolated page.
+The role of migratepage is to move the contents of the old page to the new
+page and to set up the fields of struct page of newpage. Keep in mind that
+you should clear PG_movable of the old page via __ClearPageMovable under
+page_lock if you migrated the old page successfully, and return 0.
+If the driver cannot migrate the page at the moment, it can return -EAGAIN.
+On -EAGAIN, VM will retry page migration in a short time because VM
+interprets -EAGAIN as "temporary migration failure". On returning any error
+other than -EAGAIN, VM will give up on migrating the page without retrying
+this time.
+
+The driver shouldn't touch the page.lru field while VM is using it in
+these functions.
+
+3. void (*putback_page)(struct page *);
+
+If migration fails on an isolated page, VM should return the isolated page
+to the driver, so VM calls the driver's putback_page with the page whose
+migration failed. In this function, the driver should put the isolated
+page back into its own data structure.
+
+4. non-lru movable page flags
+
+There are two page flags for supporting non-lru movable pages.
+
+* PG_movable
+
+The driver should use the function below to make a page movable, under
+page_lock.
+
+	void __SetPageMovable(struct page *page, struct address_space *mapping)
+
+It needs the address_space argument to register the migration family of
+functions which will be called by VM. Strictly speaking, PG_movable is not
+a real flag of struct page. Rather, VM reuses the lower bits of
+page->mapping to represent it.
+
+	#define PAGE_MAPPING_MOVABLE 0x2
+	page->mapping = page->mapping | PAGE_MAPPING_MOVABLE;
+
+so the driver shouldn't access page->mapping directly. Instead, the driver
+should use page_mapping, which masks off the low two bits of page->mapping,
+so it can get the right struct address_space.
+
+For testing a non-lru movable page, VM provides the __PageMovable function.
+However, it doesn't guarantee identifying a non-lru movable page because
+the page->mapping field is unified with other variables in struct page.
+As well, if the driver releases the page after isolation by VM,
+page->mapping doesn't have a stable value although it has
+PAGE_MAPPING_MOVABLE set (look at __ClearPageMovable). But __PageMovable
+is a cheap way to tell whether a page is LRU or non-lru movable once the
+page has been isolated, because LRU pages can never have
+PAGE_MAPPING_MOVABLE in page->mapping. It is also good for just peeking at
+non-lru movable pages before the more expensive check with lock_page in
+pfn scanning to select a victim.
+
+For identifying a non-lru movable page with a guarantee, VM provides the
+PageMovable function. Unlike __PageMovable, PageMovable validates
+page->mapping and mapping->a_ops->isolate_page under lock_page. The
+lock_page prevents page->mapping from suddenly being destroyed.
+
+A driver using __SetPageMovable should clear the flag via
+__ClearPageMovable under page_lock before releasing the page.
+
+* PG_isolated
+
+To prevent concurrent isolation among several CPUs, VM marks an isolated
+page as PG_isolated under lock_page. So if a CPU encounters a PG_isolated
+non-lru movable page, it can skip it. The driver doesn't need to manipulate
+the flag because VM will set/clear it automatically. Keep in mind that if
+the driver sees a PG_isolated page, it means the page has been isolated by
+VM, so it shouldn't touch the page.lru field.
+PG_isolated is an alias of the PG_reclaim flag, so the driver shouldn't
+use that flag for its own purpose.
+
+Christoph Lameter, May 8, 2006.
+Minchan Kim, Mar 28, 2016.
diff --git a/include/linux/compaction.h b/include/linux/compaction.h
index a58c852a268f..c6b47c861cea 100644
--- a/include/linux/compaction.h
+++ b/include/linux/compaction.h
@@ -54,6 +54,9 @@ enum compact_result {
 struct alloc_context; /* in mm/internal.h */

 #ifdef CONFIG_COMPACTION
+extern int PageMovable(struct page *page);
+extern void __SetPageMovable(struct page *page, struct address_space *mapping);
+extern void __ClearPageMovable(struct page *page);
 extern int sysctl_compact_memory;
 extern int sysctl_compaction_handler(struct ctl_table *table, int write,
 			void __user *buffer, size_t *length, loff_t *ppos);
@@ -151,6 +154,19 @@ extern void kcompactd_stop(int nid);
 extern void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx);

 #else
+static inline int PageMovable(struct page *page)
+{
+	return 0;
+}
+static inline void __SetPageMovable(struct page *page,
+				struct address_space *mapping)
+{
+}
+
+static inline void __ClearPageMovable(struct page *page)
+{
+}
+
 static inline enum compact_result try_to_compact_pages(gfp_t gfp_mask,
 			unsigned int order, int alloc_flags,
 			const struct alloc_context *ac,
@@ -212,6 +228,7 @@ static inline void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_i
 #endif /* CONFIG_COMPACTION */

 #if defined(CONFIG_COMPACTION) && defined(CONFIG_SYSFS) && defined(CONFIG_NUMA)
+struct node;
 extern int compaction_register_node(struct node *node);
 extern void compaction_unregister_node(struct node *node);

diff --git a/include/linux/fs.h b/include/linux/fs.h
index c9cc1f699dc1..6a2ce439ea42 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -402,6 +402,8 @@ struct address_space_operations {
 	 */
 	int (*migratepage) (struct address_space *,
 			struct page *, struct page *, enum migrate_mode);
+	bool (*isolate_page)(struct page *, isolate_mode_t);
+	void (*putback_page)(struct page *);
 	int (*launder_page) (struct page *);
 	int (*is_partially_uptodate) (struct page *, unsigned long,
 					unsigned long);
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 7ae216a39c9e..481c8c4627ca 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -43,8 +43,7 @@ static inline struct stable_node *page_stable_node(struct page *page)
 static inline void set_page_stable_node(struct page *page,
 					struct stable_node *stable_node)
 {
-	page->mapping = (void *)stable_node +
-				(PAGE_MAPPING_ANON | PAGE_MAPPING_KSM);
+	page->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM);
 }

 /*
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 9b50325e4ddf..404fbfefeb33 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -37,6 +37,8 @@ extern int migrate_page(struct address_space *,
 			struct page *, struct page *, enum migrate_mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
 		unsigned long private, enum migrate_mode mode, int reason);
+extern bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+extern void putback_movable_page(struct page *page);

 extern int migrate_prep(void);
 extern int migrate_prep_local(void);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a00ec816233a..33eaec57e997 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1035,6 +1035,7 @@ static inline pgoff_t page_file_index(struct page *page)
 }

 bool page_mapped(struct page *page);
+struct address_space *page_mapping(struct page *page);

 /*
  * Return true only if the page has been allocated with
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index e5a32445f930..f8a2c4881608 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -129,6 +129,9 @@ enum pageflags {

 	/* Compound pages. Stored in first tail page's flags */
 	PG_double_map = PG_private_2,
+
+	/* non-lru isolated movable page */
+	PG_isolated = PG_reclaim,
 };

 #ifndef __GENERATING_BOUNDS_H
@@ -357,29 +360,37 @@ PAGEFLAG(Idle, idle, PF_ANY)
  * with the PAGE_MAPPING_ANON bit set to distinguish it. See rmap.h.
  *
  * On an anonymous page in a VM_MERGEABLE area, if CONFIG_KSM is enabled,
- * the PAGE_MAPPING_KSM bit may be set along with the PAGE_MAPPING_ANON bit;
- * and then page->mapping points, not to an anon_vma, but to a private
+ * the PAGE_MAPPING_MOVABLE bit may be set along with the PAGE_MAPPING_ANON
+ * bit; and then page->mapping points, not to an anon_vma, but to a private
  * structure which KSM associates with that merged page. See ksm.h.
  *
- * PAGE_MAPPING_KSM without PAGE_MAPPING_ANON is currently never used.
+ * PAGE_MAPPING_KSM without PAGE_MAPPING_ANON is used for a non-lru movable
+ * page, and then page->mapping points to a struct address_space.
  *
  * Please note that, confusingly, "page_mapping" refers to the inode
  * address_space which maps the page from disk; whereas "page_mapped"
  * refers to user virtual address space into which the page is mapped.
  */
-#define PAGE_MAPPING_ANON	1
-#define PAGE_MAPPING_KSM	2
-#define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_KSM)
+#define PAGE_MAPPING_ANON	0x1
+#define PAGE_MAPPING_MOVABLE	0x2
+#define PAGE_MAPPING_KSM	(PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)
+#define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)

-static __always_inline int PageAnonHead(struct page *page)
+static __always_inline int PageMappingFlag(struct page *page)
 {
-	return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
+	return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) != 0;
 }

 static __always_inline int PageAnon(struct page *page)
 {
 	page = compound_head(page);
-	return PageAnonHead(page);
+	return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
+}
+
+static __always_inline int __PageMovable(struct page *page)
+{
+	return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
+				PAGE_MAPPING_MOVABLE;
 }

 #ifdef CONFIG_KSM
@@ -393,7 +404,7 @@ static __always_inline int PageKsm(struct page *page)
 {
 	page = compound_head(page);
 	return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
-				(PAGE_MAPPING_ANON | PAGE_MAPPING_KSM);
+				PAGE_MAPPING_KSM;
 }
 #else
 TESTPAGEFLAG_FALSE(Ksm)
@@ -641,6 +652,8 @@ static inline void __ClearPageBalloon(struct page *page)
 	atomic_set(&page->_mapcount, -1);
 }

+__PAGEFLAG(Isolated, isolated, PF_ANY);
+
 /*
  * If network-based swap is enabled, sl*b must keep track of whether pages
  * were allocated from pfmemalloc reserves.
diff --git a/mm/compaction.c b/mm/compaction.c
index 1427366ad673..2d6862d0df60 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -81,6 +81,41 @@ static inline bool migrate_async_suitable(int migratetype)

 #ifdef CONFIG_COMPACTION

+int PageMovable(struct page *page)
+{
+	struct address_space *mapping;
+
+	WARN_ON(!PageLocked(page));
+	if (!__PageMovable(page))
+		goto out;
+
+	mapping = page_mapping(page);
+	if (mapping && mapping->a_ops && mapping->a_ops->isolate_page)
+		return 1;
+out:
+	return 0;
+}
+EXPORT_SYMBOL(PageMovable);
+
+void __SetPageMovable(struct page *page, struct address_space *mapping)
+{
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG_ON_PAGE((unsigned long)mapping & PAGE_MAPPING_MOVABLE, page);
+	page->mapping = (void *)((unsigned long)mapping | PAGE_MAPPING_MOVABLE);
+}
+EXPORT_SYMBOL(__SetPageMovable);
+
+void __ClearPageMovable(struct page *page)
+{
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG_ON_PAGE(!PageMovable(page), page);
+	VM_BUG_ON_PAGE(!((unsigned long)page->mapping & PAGE_MAPPING_MOVABLE),
+				page);
+	page->mapping = (void *)((unsigned long)page->mapping &
+				PAGE_MAPPING_MOVABLE);
+}
+EXPORT_SYMBOL(__ClearPageMovable);
+
 /* Do not skip compaction more than 64 times */
 #define COMPACT_MAX_DEFER_SHIFT 6

@@ -735,21 +770,6 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		}

 		/*
-		 * Check may be lockless but that's ok as we recheck later.
-		 * It's possible to migrate LRU pages and balloon pages
-		 * Skip any other type of page
-		 */
-		is_lru = PageLRU(page);
-		if (!is_lru) {
-			if (unlikely(balloon_page_movable(page))) {
-				if (balloon_page_isolate(page)) {
-					/* Successfully isolated */
-					goto isolate_success;
-				}
-			}
-		}
-
-		/*
 		 * Regardless of being on LRU, compound pages such as THP and
 		 * hugetlbfs are not to be compacted. We can potentially save
 		 * a lot of iterations if we skip them at once. The check is
@@ -765,8 +785,38 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			goto isolate_fail;
 		}

-		if (!is_lru)
+		/*
+		 * Check may be lockless but that's ok as we recheck later.
+		 * It's possible to migrate LRU and non-lru movable pages.
+		 * Skip any other type of page
+		 */
+		is_lru = PageLRU(page);
+		if (!is_lru) {
+			if (unlikely(balloon_page_movable(page))) {
+				if (balloon_page_isolate(page)) {
+					/* Successfully isolated */
+					goto isolate_success;
+				}
+			}
+
+			/*
+			 * __PageMovable can return false positive so we need
+			 * to verify it under page_lock.
+			 */
+			if (unlikely(__PageMovable(page)) &&
+					!PageIsolated(page)) {
+				if (locked) {
+					spin_unlock_irqrestore(&zone->lru_lock,
+									flags);
+					locked = false;
+				}
+
+				if (isolate_movable_page(page, isolate_mode))
+					goto isolate_success;
+			}
+
+			goto isolate_fail;
+		}

 		/*
 		 * Migration will fail if an anonymous page is pinned in memory,
diff --git a/mm/ksm.c b/mm/ksm.c
index 4786b4150f62..35b8aef867a9 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -532,8 +532,8 @@ static struct page *get_ksm_page(struct stable_node *stable_node, bool lock_it)
 	void *expected_mapping;
 	unsigned long kpfn;

-	expected_mapping = (void *)stable_node +
-				(PAGE_MAPPING_ANON | PAGE_MAPPING_KSM);
+	expected_mapping = (void *)((unsigned long)stable_node |
+					PAGE_MAPPING_KSM);
again:
 	kpfn = READ_ONCE(stable_node->kpfn);
 	page = pfn_to_page(kpfn);
diff --git a/mm/migrate.c b/mm/migrate.c
index 2666f28b5236..57559ca7c904 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -73,6 +74,79 @@ int migrate_prep_local(void)
 	return 0;
 }

+bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+{
+	struct address_space *mapping;
+
+	/*
+	 * Avoid burning cycles with pages that are yet under __free_pages(),
+	 * or just got freed under us.
+	 *
+	 * In case we 'win' a race for a movable page being freed under us and
+	 * raise its refcount preventing __free_pages() from doing its job,
+	 * the put_page() at the end of this block will take care of
+	 * releasing this page, thus avoiding a nasty leakage.
+	 */
+	if (unlikely(!get_page_unless_zero(page)))
+		goto out;
+
+	/*
+	 * Check PageMovable before holding a PG_lock because page's owner
+	 * assumes anybody doesn't touch PG_lock of newly allocated page
+	 * so unconditionally grabbing the lock ruins page's owner side.
+	 */
+	if (unlikely(!__PageMovable(page)))
+		goto out_putpage;
+	/*
+	 * As movable pages are not isolated from LRU lists, concurrent
+	 * compaction threads can race against page migration functions
+	 * as well as race against the releasing of a page.
+	 *
+	 * In order to avoid having an already isolated movable page
+	 * being (wrongly) re-isolated while it is under migration,
+	 * or to avoid attempting to isolate pages being released,
+	 * let's be sure we have the page lock
+	 * before proceeding with the movable page isolation steps.
+	 */
+	if (unlikely(!trylock_page(page)))
+		goto out_putpage;
+
+	if (!PageMovable(page) || PageIsolated(page))
+		goto out_no_isolated;
+
+	mapping = page_mapping(page);
+	if (!mapping->a_ops->isolate_page(page, mode))
+		goto out_no_isolated;
+
+	/* Driver shouldn't use PG_isolated bit of page->flags */
+	WARN_ON_ONCE(PageIsolated(page));
+	__SetPageIsolated(page);
+	unlock_page(page);
+
+	return true;
+
+out_no_isolated:
+	unlock_page(page);
+out_putpage:
+	put_page(page);
+out:
+	return false;
+}
+
+/* It should be called on page which is PG_movable */
+void putback_movable_page(struct page *page)
+{
+	struct address_space *mapping;
+
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG_ON_PAGE(!PageMovable(page), page);
+	VM_BUG_ON_PAGE(!PageIsolated(page), page);
+
+	mapping = page_mapping(page);
+	mapping->a_ops->putback_page(page);
+	__ClearPageIsolated(page);
+}
+
 /*
  * Put previously isolated pages back onto the appropriate lists
  * from where they were once taken off for compaction/migration.
@@ -94,10 +168,25 @@ void putback_movable_pages(struct list_head *l)
 		list_del(&page->lru);
 		dec_zone_page_state(page, NR_ISOLATED_ANON +
 				page_is_file_cache(page));
-		if (unlikely(isolated_balloon_page(page)))
+		if (unlikely(isolated_balloon_page(page))) {
 			balloon_page_putback(page);
-		else
+		/*
+		 * We isolated non-lru movable page so here we can use
+		 * __PageMovable because LRU page's mapping cannot have
+		 * PAGE_MAPPING_MOVABLE.
+		 */
+		} else if (unlikely(__PageMovable(page))) {
+			VM_BUG_ON_PAGE(!PageIsolated(page), page);
+			lock_page(page);
+			if (PageMovable(page))
+				putback_movable_page(page);
+			else
+				__ClearPageIsolated(page);
+			unlock_page(page);
+			put_page(page);
+		} else {
 			putback_lru_page(page);
+		}
 	}
 }

@@ -592,7 +681,7 @@ void migrate_page_copy(struct page *newpage, struct page *page)
 ***********************************************************/

 /*
- * Common logic to directly migrate a single page suitable for
+ * Common logic to directly migrate a single LRU page suitable for
  * pages that do not use PagePrivate/PagePrivate2.
  *
  * Pages are locked upon entry and exit.
@@ -755,33 +844,69 @@ static int move_to_new_page(struct page *newpage, struct page *page,
 				enum migrate_mode mode)
 {
 	struct address_space *mapping;
-	int rc;
+	int rc = -EAGAIN;
+	bool is_lru = !__PageMovable(page);

 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);

 	mapping = page_mapping(page);
-	if (!mapping)
-		rc = migrate_page(mapping, newpage, page, mode);
-	else if (mapping->a_ops->migratepage)
-		/*
-		 * Most pages have a mapping and most filesystems provide a
-		 * migratepage callback. Anonymous pages are part of swap
-		 * space which also has its own migratepage callback. This
-		 * is the most common path for page migration.
-		 */
-		rc = mapping->a_ops->migratepage(mapping, newpage, page, mode);
-	else
-		rc = fallback_migrate_page(mapping, newpage, page, mode);
+
+	/*
+	 * In case of non-lru page, it could be released after
+	 * isolation step. In that case, we shouldn't try
+	 * fallback migration which is designed for LRU pages.
+	 */
+	if (unlikely(!is_lru)) {
+		VM_BUG_ON_PAGE(!PageIsolated(page), page);
+		if (!PageMovable(page)) {
+			rc = MIGRATEPAGE_SUCCESS;
+			__ClearPageIsolated(page);
+			goto out;
+		}
+	}
+
+	if (likely(is_lru)) {
+		if (!mapping)
+			rc = migrate_page(mapping, newpage, page, mode);
+		else if (mapping->a_ops->migratepage)
+			/*
+			 * Most pages have a mapping and most filesystems
+			 * provide a migratepage callback. Anonymous pages
+			 * are part of swap space which also has its own
+			 * migratepage callback. This is the most common path
+			 * for page migration.
+			 */
+			rc = mapping->a_ops->migratepage(mapping, newpage,
+							page, mode);
+		else
+			rc = fallback_migrate_page(mapping, newpage,
							page, mode);
+	} else {
+		rc = mapping->a_ops->migratepage(mapping, newpage,
						page, mode);
+		WARN_ON_ONCE(rc == MIGRATEPAGE_SUCCESS &&
+			!PageIsolated(page));
+	}

 	/*
 	 * When successful, old pagecache page->mapping must be cleared before
 	 * page is freed; but stats require that PageAnon be left as PageAnon.
 	 */
 	if (rc == MIGRATEPAGE_SUCCESS) {
-		if (!PageAnon(page))
+		if (__PageMovable(page)) {
+			VM_BUG_ON_PAGE(!PageIsolated(page), page);
+
+			/*
+			 * We clear PG_movable under page_lock so any compactor
+			 * cannot try to migrate this page.
+			 */
+			__ClearPageIsolated(page);
+		}
+
+		if (!((unsigned long)page->mapping & PAGE_MAPPING_FLAGS))
 			page->mapping = NULL;
 	}
+out:
 	return rc;
 }

@@ -791,6 +916,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	int rc = -EAGAIN;
 	int page_was_mapped = 0;
 	struct anon_vma *anon_vma = NULL;
+	bool is_lru = !__PageMovable(page);

 	if (!trylock_page(page)) {
 		if (!force || mode == MIGRATE_ASYNC)
@@ -871,6 +997,11 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		goto out_unlock_both;
 	}

+	if (unlikely(!is_lru)) {
+		rc = move_to_new_page(newpage, page, mode);
+		goto out_unlock_both;
+	}
+
 	/*
 	 * Corner case handling:
 	 * 1. When a new swap-cache page is read into, it is added to the LRU
@@ -920,7 +1051,8 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	 * list in here.
 	 */
 	if (rc == MIGRATEPAGE_SUCCESS) {
-		if (unlikely(__is_movable_balloon_page(newpage)))
+		if (unlikely(__is_movable_balloon_page(newpage) ||
+				__PageMovable(newpage)))
 			put_page(newpage);
 		else
 			putback_lru_page(newpage);
@@ -961,6 +1093,12 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 		/* page was freed from under us. So we are done. */
 		ClearPageActive(page);
 		ClearPageUnevictable(page);
+		if (unlikely(__PageMovable(page))) {
+			lock_page(page);
+			if (!PageMovable(page))
+				__ClearPageIsolated(page);
+			unlock_page(page);
+		}
 		if (put_new_page)
 			put_new_page(newpage, private);
 		else
@@ -1010,8 +1148,21 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 			num_poisoned_pages_inc();
 		}
 	} else {
-		if (rc != -EAGAIN)
-			putback_lru_page(page);
+		if (rc != -EAGAIN) {
+			if (likely(!__PageMovable(page))) {
+				putback_lru_page(page);
+				goto put_new;
+			}
+
+			lock_page(page);
+			if (PageMovable(page))
+				putback_movable_page(page);
+			else
+				__ClearPageIsolated(page);
+			unlock_page(page);
+			put_page(page);
+		}
+put_new:
 		if (put_new_page)
 			put_new_page(newpage, private);
 		else
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f8f3bfc435ee..26868bbaecce 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1008,7 +1008,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 			(page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 		}
 	}
-	if (PageAnonHead(page))
+	if (PageMappingFlag(page))
 		page->mapping = NULL;
 	if (check_free)
 		bad += free_pages_check(page);
diff --git a/mm/util.c b/mm/util.c
index 224d36e43a94..a04ccff7cc17 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -399,10 +399,12 @@ struct address_space *page_mapping(struct page *page)
 	}

 	mapping = page->mapping;
-	if ((unsigned long)mapping & PAGE_MAPPING_FLAGS)
+	if ((unsigned long)mapping & PAGE_MAPPING_ANON)
 		return NULL;
-	return mapping;
+
+	return (void *)((unsigned long)mapping & ~PAGE_MAPPING_FLAGS);
 }
+EXPORT_SYMBOL(page_mapping);

 /* Slow path of page_mapcount() for compound pages */
 int __page_mapcount(struct page *page)
--
1.9.1
[156.147.23.53]) by mx.google.com with ESMTP id ft8si5366289igb.58.2016.05.20.07.24.05 for ; Fri, 20 May 2016 07:24:06 -0700 (PDT) From: Minchan Kim Subject: [PATCH v6 02/12] mm: migrate: support non-lru movable page migration Date: Fri, 20 May 2016 23:23:35 +0900 Message-Id: <1463754225-31311-3-git-send-email-minchan@kernel.org> In-Reply-To: <1463754225-31311-1-git-send-email-minchan@kernel.org> References: <1463754225-31311-1-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Andrew Morton Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Minchan Kim , Rik van Riel , Vlastimil Babka , Joonsoo Kim , Mel Gorman , Hugh Dickins , Rafael Aquini , virtualization@lists.linux-foundation.org, Jonathan Corbet , John Einar Reitan , dri-devel@lists.freedesktop.org, Sergey Senozhatsky , Gioh Kim We have allowed migration for only LRU pages until now and it was enough to make high-order pages. But recently, embedded system(e.g., webOS, android) uses lots of non-movable pages(e.g., zram, GPU memory) so we have seen several reports about troubles of small high-order allocation. For fixing the problem, there were several efforts (e,g,. enhance compaction algorithm, SLUB fallback to 0-order page, reserved memory, vmalloc and so on) but if there are lots of non-movable pages in system, their solutions are void in the long run. So, this patch is to support facility to change non-movable pages with movable. For the feature, this patch introduces functions related to migration to address_space_operations as well as some page flags. If a driver want to make own pages movable, it should define three functions which are function pointers of struct address_space_operations. 1. bool (*isolate_page) (struct page *page, isolate_mode_t mode); What VM expects on isolate_page function of driver is to return *true* if driver isolates page successfully. On returing true, VM marks the page as PG_isolated so concurrent isolation in several CPUs skip the page for isolation. If a driver cannot isolate the page, it should return *false*. Once page is successfully isolated, VM uses page.lru fields so driver shouldn't expect to preserve values in that fields. 2. int (*migratepage) (struct address_space *mapping, struct page *newpage, struct page *oldpage, enum migrate_mode); After isolation, VM calls migratepage of driver with isolated page. The function of migratepage is to move content of the old page to new page and set up fields of struct page newpage. Keep in mind that you should clear PG_movable of oldpage via __ClearPageMovable under page_lock if you migrated the oldpage successfully and returns 0. If driver cannot migrate the page at the moment, driver can return -EAGAIN. On -EAGAIN, VM will retry page migration in a short time because VM interprets -EAGAIN as "temporal migration failure". On returning any error except -EAGAIN, VM will give up the page migration without retrying in this time. Driver shouldn't touch page.lru field VM using in the functions. 3. void (*putback_page)(struct page *); If migration fails on isolated page, VM should return the isolated page to the driver so VM calls driver's putback_page with migration failed page. In this function, driver should put the isolated page back to the own data structure. 4. non-lru movable page flags There are two page flags for supporting non-lru movable page. * PG_movable Driver should use the below function to make page movable under page_lock. 
void __SetPageMovable(struct page *page, struct address_space *mapping) It needs argument of address_space for registering migration family functions which will be called by VM. Exactly speaking, PG_movable is not a real flag of struct page. Rather than, VM reuses page->mapping's lower bits to represent it. #define PAGE_MAPPING_MOVABLE 0x2 page->mapping = page->mapping | PAGE_MAPPING_MOVABLE; so driver shouldn't access page->mapping directly. Instead, driver should use page_mapping which mask off the low two bits of page->mapping so it can get right struct address_space. For testing of non-lru movable page, VM supports __PageMovable function. However, it doesn't guarantee to identify non-lru movable page because page->mapping field is unified with other variables in struct page. As well, if driver releases the page after isolation by VM, page->mapping doesn't have stable value although it has PAGE_MAPPING_MOVABLE (Look at __ClearPageMovable). But __PageMovable is cheap to catch whether page is LRU or non-lru movable once the page has been isolated. Because LRU pages never can have PAGE_MAPPING_MOVABLE in page->mapping. It is also good for just peeking to test non-lru movable pages before more expensive checking with lock_page in pfn scanning to select victim. For guaranteeing non-lru movable page, VM provides PageMovable function. Unlike __PageMovable, PageMovable functions validates page->mapping and mapping->a_ops->isolate_page under lock_page. The lock_page prevents sudden destroying of page->mapping. Driver using __SetPageMovable should clear the flag via __ClearMovablePage under page_lock before the releasing the page. * PG_isolated To prevent concurrent isolation among several CPUs, VM marks isolated page as PG_isolated under lock_page. So if a CPU encounters PG_isolated non-lru movable page, it can skip it. Driver doesn't need to manipulate the flag because VM will set/clear it automatically. Keep in mind that if driver sees PG_isolated page, it means the page have been isolated by VM so it shouldn't touch page.lru field. PG_isolated is alias with PG_reclaim flag so driver shouldn't use the flag for own purpose. Cc: Rik van Riel Cc: Vlastimil Babka Cc: Joonsoo Kim Cc: Mel Gorman Cc: Hugh Dickins Cc: Rafael Aquini Cc: virtualization@lists.linux-foundation.org Cc: Jonathan Corbet Cc: John Einar Reitan Cc: dri-devel@lists.freedesktop.org Cc: Sergey Senozhatsky Signed-off-by: Gioh Kim Signed-off-by: Minchan Kim --- Documentation/filesystems/Locking | 4 + Documentation/filesystems/vfs.txt | 11 +++ Documentation/vm/page_migration | 107 ++++++++++++++++++++- include/linux/compaction.h | 17 ++++ include/linux/fs.h | 2 + include/linux/ksm.h | 3 +- include/linux/migrate.h | 2 + include/linux/mm.h | 1 + include/linux/page-flags.h | 33 +++++-- mm/compaction.c | 82 ++++++++++++---- mm/ksm.c | 4 +- mm/migrate.c | 191 ++++++++++++++++++++++++++++++++++---- mm/page_alloc.c | 2 +- mm/util.c | 6 +- 14 files changed, 411 insertions(+), 54 deletions(-) diff --git a/Documentation/filesystems/Locking b/Documentation/filesystems/Locking index 75eea7ce3d7c..dda6e3f8e203 100644 --- a/Documentation/filesystems/Locking +++ b/Documentation/filesystems/Locking @@ -195,7 +195,9 @@ unlocks and drops the reference. 
int (*releasepage) (struct page *, int); void (*freepage)(struct page *); int (*direct_IO)(struct kiocb *, struct iov_iter *iter); + bool (*isolate_page) (struct page *, isolate_mode_t); int (*migratepage)(struct address_space *, struct page *, struct page *); + void (*putback_page) (struct page *); int (*launder_page)(struct page *); int (*is_partially_uptodate)(struct page *, unsigned long, unsigned long); int (*error_remove_page)(struct address_space *, struct page *); @@ -219,7 +221,9 @@ invalidatepage: yes releasepage: yes freepage: yes direct_IO: +isolate_page: yes migratepage: yes (both) +putback_page: yes launder_page: yes is_partially_uptodate: yes error_remove_page: yes diff --git a/Documentation/filesystems/vfs.txt b/Documentation/filesystems/vfs.txt index c61a223ef3ff..900360cbcdae 100644 --- a/Documentation/filesystems/vfs.txt +++ b/Documentation/filesystems/vfs.txt @@ -592,9 +592,14 @@ struct address_space_operations { int (*releasepage) (struct page *, int); void (*freepage)(struct page *); ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter); + /* isolate a page for migration */ + bool (*isolate_page) (struct page *, isolate_mode_t); /* migrate the contents of a page to the specified target */ int (*migratepage) (struct page *, struct page *); + /* put migration-failed page back to right list */ + void (*putback_page) (struct page *); int (*launder_page) (struct page *); + int (*is_partially_uptodate) (struct page *, unsigned long, unsigned long); void (*is_dirty_writeback) (struct page *, bool *, bool *); @@ -747,6 +752,10 @@ struct address_space_operations { and transfer data directly between the storage and the application's address space. + isolate_page: Called by the VM when isolating a movable non-lru page. + If page is successfully isolated, VM marks the page as PG_isolated + via __SetPageIsolated. + migrate_page: This is used to compact the physical memory usage. If the VM wants to relocate a page (maybe off a memory card that is signalling imminent failure) it will pass a new page @@ -754,6 +763,8 @@ struct address_space_operations { transfer any private data across and update any references that it has to the page. + putback_page: Called by the VM when isolated page's migration fails. + launder_page: Called before freeing a page - it writes back the dirty page. To prevent redirtying the page, it is kept locked during the whole operation. diff --git a/Documentation/vm/page_migration b/Documentation/vm/page_migration index fea5c0864170..80e98af46e95 100644 --- a/Documentation/vm/page_migration +++ b/Documentation/vm/page_migration @@ -142,5 +142,110 @@ is increased so that the page cannot be freed while page migration occurs. 20. The new page is moved to the LRU and can be scanned by the swapper etc again. -Christoph Lameter, May 8, 2006. +C. Non-LRU page migration +------------------------- + +Although original migration aimed for reducing the latency of memory access +for NUMA, compaction who want to create high-order page is also main customer. + +Current problem of the implementation is that it is designed to migrate only +*LRU* pages. However, there are potential non-lru pages which can be migrated +in drivers, for example, zsmalloc, virtio-balloon pages. + +For virtio-balloon pages, some parts of migration code path have been hooked +up and added virtio-balloon specific functions to intercept migration logics. +It's too specific to a driver so other drivers who want to make their pages +movable would have to add own specific hooks in migration path. 
+ +To overclome the problem, VM supports non-LRU page migration which provides +generic functions for non-LRU movable pages without driver specific hooks +migration path. + +If a driver want to make own pages movable, it should define three functions +which are function pointers of struct address_space_operations. + +1. bool (*isolate_page) (struct page *page, isolate_mode_t mode); + +What VM expects on isolate_page function of driver is to return *true* +if driver isolates page successfully. On returing true, VM marks the page +as PG_isolated so concurrent isolation in several CPUs skip the page +for isolation. If a driver cannot isolate the page, it should return *false*. + +Once page is successfully isolated, VM uses page.lru fields so driver +shouldn't expect to preserve values in that fields. + +2. int (*migratepage) (struct address_space *mapping, + struct page *newpage, struct page *oldpage, enum migrate_mode); + +After isolation, VM calls migratepage of driver with isolated page. +The function of migratepage is to move content of the old page to new page +and set up fields of struct page newpage. Keep in mind that you should +clear PG_movable of oldpage via __ClearPageMovable under page_lock if you +migrated the oldpage successfully and returns 0. +If driver cannot migrate the page at the moment, driver can return -EAGAIN. +On -EAGAIN, VM will retry page migration in a short time because VM interprets +-EAGAIN as "temporal migration failure". On returning any error except -EAGAIN, +VM will give up the page migration without retrying in this time. + +Driver shouldn't touch page.lru field VM using in the functions. + +3. void (*putback_page)(struct page *); + +If migration fails on isolated page, VM should return the isolated page +to the driver so VM calls driver's putback_page with migration failed page. +In this function, driver should put the isolated page back to the own data +structure. +4. non-lru movable page flags + +There are two page flags for supporting non-lru movable page. + +* PG_movable + +Driver should use the below function to make page movable under page_lock. + + void __SetPageMovable(struct page *page, struct address_space *mapping) + +It needs argument of address_space for registering migration family functions +which will be called by VM. Exactly speaking, PG_movable is not a real flag of +struct page. Rather than, VM reuses page->mapping's lower bits to represent it. + + #define PAGE_MAPPING_MOVABLE 0x2 + page->mapping = page->mapping | PAGE_MAPPING_MOVABLE; + +so driver shouldn't access page->mapping directly. Instead, driver should +use page_mapping which mask off the low two bits of page->mapping so it can get +right struct address_space. + +For testing of non-lru movable page, VM supports __PageMovable function. +However, it doesn't guarantee to identify non-lru movable page because +page->mapping field is unified with other variables in struct page. +As well, if driver releases the page after isolation by VM, page->mapping +doesn't have stable value although it has PAGE_MAPPING_MOVABLE +(Look at __ClearPageMovable). But __PageMovable is cheap to catch whether +page is LRU or non-lru movable once the page has been isolated. Because +LRU pages never can have PAGE_MAPPING_MOVABLE in page->mapping. It is also +good for just peeking to test non-lru movable pages before more expensive +checking with lock_page in pfn scanning to select victim. + +For guaranteeing non-lru movable page, VM provides PageMovable function. 
+Unlike __PageMovable, PageMovable functions validates page->mapping and +mapping->a_ops->isolate_page under lock_page. The lock_page prevents sudden +destroying of page->mapping. + +Driver using __SetPageMovable should clear the flag via __ClearMovablePage +under page_lock before the releasing the page. + +* PG_isolated + +To prevent concurrent isolation among several CPUs, VM marks isolated page +as PG_isolated under lock_page. So if a CPU encounters PG_isolated non-lru +movable page, it can skip it. Driver doesn't need to manipulate the flag +because VM will set/clear it automatically. Keep in mind that if driver +sees PG_isolated page, it means the page have been isolated by VM so it +shouldn't touch page.lru field. +PG_isolated is alias with PG_reclaim flag so driver shouldn't use the flag +for own purpose. + +Christoph Lameter, May 8, 2006. +Minchan Kim, Mar 28, 2016. diff --git a/include/linux/compaction.h b/include/linux/compaction.h index a58c852a268f..c6b47c861cea 100644 --- a/include/linux/compaction.h +++ b/include/linux/compaction.h @@ -54,6 +54,9 @@ enum compact_result { struct alloc_context; /* in mm/internal.h */ #ifdef CONFIG_COMPACTION +extern int PageMovable(struct page *page); +extern void __SetPageMovable(struct page *page, struct address_space *mapping); +extern void __ClearPageMovable(struct page *page); extern int sysctl_compact_memory; extern int sysctl_compaction_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos); @@ -151,6 +154,19 @@ extern void kcompactd_stop(int nid); extern void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx); #else +static inline int PageMovable(struct page *page) +{ + return 0; +} +static inline void __SetPageMovable(struct page *page, + struct address_space *mapping) +{ +} + +static inline void __ClearPageMovable(struct page *page) +{ +} + static inline enum compact_result try_to_compact_pages(gfp_t gfp_mask, unsigned int order, int alloc_flags, const struct alloc_context *ac, @@ -212,6 +228,7 @@ static inline void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_i #endif /* CONFIG_COMPACTION */ #if defined(CONFIG_COMPACTION) && defined(CONFIG_SYSFS) && defined(CONFIG_NUMA) +struct node; extern int compaction_register_node(struct node *node); extern void compaction_unregister_node(struct node *node); diff --git a/include/linux/fs.h b/include/linux/fs.h index c9cc1f699dc1..6a2ce439ea42 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -402,6 +402,8 @@ struct address_space_operations { */ int (*migratepage) (struct address_space *, struct page *, struct page *, enum migrate_mode); + bool (*isolate_page)(struct page *, isolate_mode_t); + void (*putback_page)(struct page *); int (*launder_page) (struct page *); int (*is_partially_uptodate) (struct page *, unsigned long, unsigned long); diff --git a/include/linux/ksm.h b/include/linux/ksm.h index 7ae216a39c9e..481c8c4627ca 100644 --- a/include/linux/ksm.h +++ b/include/linux/ksm.h @@ -43,8 +43,7 @@ static inline struct stable_node *page_stable_node(struct page *page) static inline void set_page_stable_node(struct page *page, struct stable_node *stable_node) { - page->mapping = (void *)stable_node + - (PAGE_MAPPING_ANON | PAGE_MAPPING_KSM); + page->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM); } /* diff --git a/include/linux/migrate.h b/include/linux/migrate.h index 9b50325e4ddf..404fbfefeb33 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -37,6 +37,8 @@ extern int 
migrate_page(struct address_space *, struct page *, struct page *, enum migrate_mode); extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free, unsigned long private, enum migrate_mode mode, int reason); +extern bool isolate_movable_page(struct page *page, isolate_mode_t mode); +extern void putback_movable_page(struct page *page); extern int migrate_prep(void); extern int migrate_prep_local(void); diff --git a/include/linux/mm.h b/include/linux/mm.h index a00ec816233a..33eaec57e997 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1035,6 +1035,7 @@ static inline pgoff_t page_file_index(struct page *page) } bool page_mapped(struct page *page); +struct address_space *page_mapping(struct page *page); /* * Return true only if the page has been allocated with diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index e5a32445f930..f8a2c4881608 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -129,6 +129,9 @@ enum pageflags { /* Compound pages. Stored in first tail page's flags */ PG_double_map = PG_private_2, + + /* non-lru isolated movable page */ + PG_isolated = PG_reclaim, }; #ifndef __GENERATING_BOUNDS_H @@ -357,29 +360,37 @@ PAGEFLAG(Idle, idle, PF_ANY) * with the PAGE_MAPPING_ANON bit set to distinguish it. See rmap.h. * * On an anonymous page in a VM_MERGEABLE area, if CONFIG_KSM is enabled, - * the PAGE_MAPPING_KSM bit may be set along with the PAGE_MAPPING_ANON bit; - * and then page->mapping points, not to an anon_vma, but to a private + * the PAGE_MAPPING_MOVABLE bit may be set along with the PAGE_MAPPING_ANON + * bit; and then page->mapping points, not to an anon_vma, but to a private * structure which KSM associates with that merged page. See ksm.h. * - * PAGE_MAPPING_KSM without PAGE_MAPPING_ANON is currently never used. + * PAGE_MAPPING_KSM without PAGE_MAPPING_ANON is used for non-lru movable + * page and then page->mapping points a struct address_space. * * Please note that, confusingly, "page_mapping" refers to the inode * address_space which maps the page from disk; whereas "page_mapped" * refers to user virtual address space into which the page is mapped. 
*/ -#define PAGE_MAPPING_ANON 1 -#define PAGE_MAPPING_KSM 2 -#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_KSM) +#define PAGE_MAPPING_ANON 0x1 +#define PAGE_MAPPING_MOVABLE 0x2 +#define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE) +#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE) -static __always_inline int PageAnonHead(struct page *page) +static __always_inline int PageMappingFlag(struct page *page) { - return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0; + return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) != 0; } static __always_inline int PageAnon(struct page *page) { page = compound_head(page); - return PageAnonHead(page); + return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0; +} + +static __always_inline int __PageMovable(struct page *page) +{ + return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) == + PAGE_MAPPING_MOVABLE; } #ifdef CONFIG_KSM @@ -393,7 +404,7 @@ static __always_inline int PageKsm(struct page *page) { page = compound_head(page); return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) == - (PAGE_MAPPING_ANON | PAGE_MAPPING_KSM); + PAGE_MAPPING_KSM; } #else TESTPAGEFLAG_FALSE(Ksm) @@ -641,6 +652,8 @@ static inline void __ClearPageBalloon(struct page *page) atomic_set(&page->_mapcount, -1); } +__PAGEFLAG(Isolated, isolated, PF_ANY); + /* * If network-based swap is enabled, sl*b must keep track of whether pages * were allocated from pfmemalloc reserves. diff --git a/mm/compaction.c b/mm/compaction.c index 1427366ad673..2d6862d0df60 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -81,6 +81,41 @@ static inline bool migrate_async_suitable(int migratetype) #ifdef CONFIG_COMPACTION +int PageMovable(struct page *page) +{ + struct address_space *mapping; + + WARN_ON(!PageLocked(page)); + if (!__PageMovable(page)) + goto out; + + mapping = page_mapping(page); + if (mapping && mapping->a_ops && mapping->a_ops->isolate_page) + return 1; +out: + return 0; +} +EXPORT_SYMBOL(PageMovable); + +void __SetPageMovable(struct page *page, struct address_space *mapping) +{ + VM_BUG_ON_PAGE(!PageLocked(page), page); + VM_BUG_ON_PAGE((unsigned long)mapping & PAGE_MAPPING_MOVABLE, page); + page->mapping = (void *)((unsigned long)mapping | PAGE_MAPPING_MOVABLE); +} +EXPORT_SYMBOL(__SetPageMovable); + +void __ClearPageMovable(struct page *page) +{ + VM_BUG_ON_PAGE(!PageLocked(page), page); + VM_BUG_ON_PAGE(!PageMovable(page), page); + VM_BUG_ON_PAGE(!((unsigned long)page->mapping & PAGE_MAPPING_MOVABLE), + page); + page->mapping = (void *)((unsigned long)page->mapping & + PAGE_MAPPING_MOVABLE); +} +EXPORT_SYMBOL(__ClearPageMovable); + /* Do not skip compaction more than 64 times */ #define COMPACT_MAX_DEFER_SHIFT 6 @@ -735,21 +770,6 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, } /* - * Check may be lockless but that's ok as we recheck later. - * It's possible to migrate LRU pages and balloon pages - * Skip any other type of page - */ - is_lru = PageLRU(page); - if (!is_lru) { - if (unlikely(balloon_page_movable(page))) { - if (balloon_page_isolate(page)) { - /* Successfully isolated */ - goto isolate_success; - } - } - } - - /* * Regardless of being on LRU, compound pages such as THP and * hugetlbfs are not to be compacted. We can potentially save * a lot of iterations if we skip them at once. 
The check is @@ -765,8 +785,38 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, goto isolate_fail; } - if (!is_lru) + /* + * Check may be lockless but that's ok as we recheck later. + * It's possible to migrate LRU and non-lru movable pages. + * Skip any other type of page + */ + is_lru = PageLRU(page); + if (!is_lru) { + if (unlikely(balloon_page_movable(page))) { + if (balloon_page_isolate(page)) { + /* Successfully isolated */ + goto isolate_success; + } + } + + /* + * __PageMovable can return false positive so we need + * to verify it under page_lock. + */ + if (unlikely(__PageMovable(page)) && + !PageIsolated(page)) { + if (locked) { + spin_unlock_irqrestore(&zone->lru_lock, + flags); + locked = false; + } + + if (isolate_movable_page(page, isolate_mode)) + goto isolate_success; + } + goto isolate_fail; + } /* * Migration will fail if an anonymous page is pinned in memory, diff --git a/mm/ksm.c b/mm/ksm.c index 4786b4150f62..35b8aef867a9 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -532,8 +532,8 @@ static struct page *get_ksm_page(struct stable_node *stable_node, bool lock_it) void *expected_mapping; unsigned long kpfn; - expected_mapping = (void *)stable_node + - (PAGE_MAPPING_ANON | PAGE_MAPPING_KSM); + expected_mapping = (void *)((unsigned long)stable_node | + PAGE_MAPPING_KSM); again: kpfn = READ_ONCE(stable_node->kpfn); page = pfn_to_page(kpfn); diff --git a/mm/migrate.c b/mm/migrate.c index 2666f28b5236..57559ca7c904 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include #include @@ -73,6 +74,79 @@ int migrate_prep_local(void) return 0; } +bool isolate_movable_page(struct page *page, isolate_mode_t mode) +{ + struct address_space *mapping; + + /* + * Avoid burning cycles with pages that are yet under __free_pages(), + * or just got freed under us. + * + * In case we 'win' a race for a movable page being freed under us and + * raise its refcount preventing __free_pages() from doing its job + * the put_page() at the end of this block will take care of + * release this page, thus avoiding a nasty leakage. + */ + if (unlikely(!get_page_unless_zero(page))) + goto out; + + /* + * Check PageMovable before holding a PG_lock because page's owner + * assumes anybody doesn't touch PG_lock of newly allocated page + * so unconditionally grapping the lock ruins page's owner side. + */ + if (unlikely(!__PageMovable(page))) + goto out_putpage; + /* + * As movable pages are not isolated from LRU lists, concurrent + * compaction threads can race against page migration functions + * as well as race against the releasing a page. + * + * In order to avoid having an already isolated movable page + * being (wrongly) re-isolated while it is under migration, + * or to avoid attempting to isolate pages being released, + * lets be sure we have the page lock + * before proceeding with the movable page isolation steps. 
+ */ + if (unlikely(!trylock_page(page))) + goto out_putpage; + + if (!PageMovable(page) || PageIsolated(page)) + goto out_no_isolated; + + mapping = page_mapping(page); + if (!mapping->a_ops->isolate_page(page, mode)) + goto out_no_isolated; + + /* Driver shouldn't use PG_isolated bit of page->flags */ + WARN_ON_ONCE(PageIsolated(page)); + __SetPageIsolated(page); + unlock_page(page); + + return true; + +out_no_isolated: + unlock_page(page); +out_putpage: + put_page(page); +out: + return false; +} + +/* It should be called on a page which is PG_movable */ +void putback_movable_page(struct page *page) +{ + struct address_space *mapping; + + VM_BUG_ON_PAGE(!PageLocked(page), page); + VM_BUG_ON_PAGE(!PageMovable(page), page); + VM_BUG_ON_PAGE(!PageIsolated(page), page); + + mapping = page_mapping(page); + mapping->a_ops->putback_page(page); + __ClearPageIsolated(page); +} + /* * Put previously isolated pages back onto the appropriate lists * from where they were once taken off for compaction/migration. @@ -94,10 +168,25 @@ void putback_movable_pages(struct list_head *l) list_del(&page->lru); dec_zone_page_state(page, NR_ISOLATED_ANON + page_is_file_cache(page)); - if (unlikely(isolated_balloon_page(page))) + if (unlikely(isolated_balloon_page(page))) { balloon_page_putback(page); - else + /* + * We isolated a non-lru movable page so here we can use + * __PageMovable because an LRU page's mapping cannot have + * PAGE_MAPPING_MOVABLE. + */ + } else if (unlikely(__PageMovable(page))) { + VM_BUG_ON_PAGE(!PageIsolated(page), page); + lock_page(page); + if (PageMovable(page)) + putback_movable_page(page); + else + __ClearPageIsolated(page); + unlock_page(page); + put_page(page); + } else { putback_lru_page(page); + } } } @@ -592,7 +681,7 @@ void migrate_page_copy(struct page *newpage, struct page *page) ***********************************************************/ /* - * Common logic to directly migrate a single page suitable for + * Common logic to directly migrate a single LRU page suitable for * pages that do not use PagePrivate/PagePrivate2. * * Pages are locked upon entry and exit. @@ -755,33 +844,69 @@ static int move_to_new_page(struct page *newpage, struct page *page, enum migrate_mode mode) { struct address_space *mapping; - int rc; + int rc = -EAGAIN; + bool is_lru = !__PageMovable(page); VM_BUG_ON_PAGE(!PageLocked(page), page); VM_BUG_ON_PAGE(!PageLocked(newpage), newpage); mapping = page_mapping(page); - if (!mapping) - rc = migrate_page(mapping, newpage, page, mode); - else if (mapping->a_ops->migratepage) - /* - * Most pages have a mapping and most filesystems provide a - * migratepage callback. Anonymous pages are part of swap - * space which also has its own migratepage callback. This - * is the most common path for page migration. - */ - rc = mapping->a_ops->migratepage(mapping, newpage, page, mode); - else - rc = fallback_migrate_page(mapping, newpage, page, mode); + /* + * In the case of a non-lru page, it could have been released after + * the isolation step. In that case, we shouldn't try + * fallback migration, which is designed for LRU pages. + */ + if (unlikely(!is_lru)) { + VM_BUG_ON_PAGE(!PageIsolated(page), page); + if (!PageMovable(page)) { + rc = MIGRATEPAGE_SUCCESS; + __ClearPageIsolated(page); + goto out; + } + } + + if (likely(is_lru)) { + if (!mapping) + rc = migrate_page(mapping, newpage, page, mode); + else if (mapping->a_ops->migratepage) + /* + * Most pages have a mapping and most filesystems + * provide a migratepage callback.
Anonymous pages + are part of swap space which also has its own + migratepage callback. This is the most common path + for page migration. + */ + rc = mapping->a_ops->migratepage(mapping, newpage, + page, mode); + else + rc = fallback_migrate_page(mapping, newpage, + page, mode); + } else { + rc = mapping->a_ops->migratepage(mapping, newpage, + page, mode); + WARN_ON_ONCE(rc == MIGRATEPAGE_SUCCESS && + !PageIsolated(page)); + } /* * When successful, old pagecache page->mapping must be cleared before * page is freed; but stats require that PageAnon be left as PageAnon. */ if (rc == MIGRATEPAGE_SUCCESS) { - if (!PageAnon(page)) + if (__PageMovable(page)) { + VM_BUG_ON_PAGE(!PageIsolated(page), page); + + /* + * We clear PG_movable under page_lock so that a compactor + * cannot try to migrate this page. + */ + __ClearPageIsolated(page); + } + + if (!((unsigned long)page->mapping & PAGE_MAPPING_FLAGS)) page->mapping = NULL; } +out: return rc; } @@ -791,6 +916,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage, int rc = -EAGAIN; int page_was_mapped = 0; struct anon_vma *anon_vma = NULL; + bool is_lru = !__PageMovable(page); if (!trylock_page(page)) { if (!force || mode == MIGRATE_ASYNC) @@ -871,6 +997,11 @@ static int __unmap_and_move(struct page *page, struct page *newpage, goto out_unlock_both; } + if (unlikely(!is_lru)) { + rc = move_to_new_page(newpage, page, mode); + goto out_unlock_both; + } + /* * Corner case handling: * 1. When a new swap-cache page is read into, it is added to the LRU @@ -920,7 +1051,8 @@ static int __unmap_and_move(struct page *page, struct page *newpage, * list in here. */ if (rc == MIGRATEPAGE_SUCCESS) { - if (unlikely(__is_movable_balloon_page(newpage))) + if (unlikely(__is_movable_balloon_page(newpage) || + __PageMovable(newpage))) put_page(newpage); else putback_lru_page(newpage); @@ -961,6 +1093,12 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page, /* page was freed from under us. So we are done.
*/ ClearPageActive(page); ClearPageUnevictable(page); + if (unlikely(__PageMovable(page))) { + lock_page(page); + if (!PageMovable(page)) + __ClearPageIsolated(page); + unlock_page(page); + } if (put_new_page) put_new_page(newpage, private); else @@ -1010,8 +1148,21 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page, num_poisoned_pages_inc(); } } else { - if (rc != -EAGAIN) - putback_lru_page(page); + if (rc != -EAGAIN) { + if (likely(!__PageMovable(page))) { + putback_lru_page(page); + goto put_new; + } + + lock_page(page); + if (PageMovable(page)) + putback_movable_page(page); + else + __ClearPageIsolated(page); + unlock_page(page); + put_page(page); + } +put_new: if (put_new_page) put_new_page(newpage, private); else diff --git a/mm/page_alloc.c b/mm/page_alloc.c index f8f3bfc435ee..26868bbaecce 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -1008,7 +1008,7 @@ static __always_inline bool free_pages_prepare(struct page *page, (page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP; } } - if (PageAnonHead(page)) + if (PageMappingFlag(page)) page->mapping = NULL; if (check_free) bad += free_pages_check(page); diff --git a/mm/util.c b/mm/util.c index 224d36e43a94..a04ccff7cc17 100644 --- a/mm/util.c +++ b/mm/util.c @@ -399,10 +399,12 @@ struct address_space *page_mapping(struct page *page) } mapping = page->mapping; - if ((unsigned long)mapping & PAGE_MAPPING_FLAGS) + if ((unsigned long)mapping & PAGE_MAPPING_ANON) return NULL; - return mapping; + + return (void *)((unsigned long)mapping & ~PAGE_MAPPING_FLAGS); } +EXPORT_SYMBOL(page_mapping); /* Slow path of page_mapcount() for compound pages */ int __page_mapcount(struct page *page) -- 1.9.1
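
To make the driver-side contract concrete, a minimal sketch follows of how the three address_space_operations callbacks introduced above might be implemented. All demo_* identifiers, the list-based page pool, and the locking scheme are hypothetical, invented purely for illustration; error handling and the driver's real bookkeeping are omitted.

/*
 * Hypothetical example only: a driver keeping its movable pages on a
 * private list, using the callbacks and helpers added by this patch.
 */
#include <linux/fs.h>
#include <linux/highmem.h>
#include <linux/list.h>
#include <linux/migrate.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/spinlock.h>

static LIST_HEAD(demo_pool);		/* pages the driver currently owns */
static DEFINE_SPINLOCK(demo_lock);

static bool demo_isolate_page(struct page *page, isolate_mode_t mode)
{
	/*
	 * Detach the page from the driver's pool; on returning true the
	 * VM marks it PG_isolated and owns page->lru until putback.
	 */
	spin_lock(&demo_lock);
	list_del_init(&page->lru);
	spin_unlock(&demo_lock);
	return true;
}

static int demo_migratepage(struct address_space *mapping,
			    struct page *newpage, struct page *oldpage,
			    enum migrate_mode mode)
{
	/* Both pages are locked on entry; move the payload across. */
	copy_highpage(newpage, oldpage);

	/*
	 * Keep a driver-held reference on newpage: on success the VM
	 * put_page()s a __PageMovable newpage (see the __unmap_and_move
	 * hunk above), and the pool must not lose the page.
	 */
	get_page(newpage);
	__SetPageMovable(newpage, mapping);
	spin_lock(&demo_lock);
	list_add(&newpage->lru, &demo_pool);
	spin_unlock(&demo_lock);

	/* Required: clear PG_movable of oldpage under the page lock. */
	__ClearPageMovable(oldpage);
	return MIGRATEPAGE_SUCCESS;
}

static void demo_putback_page(struct page *page)
{
	/* Migration failed: take the isolated page back from the VM. */
	spin_lock(&demo_lock);
	list_add(&page->lru, &demo_pool);
	spin_unlock(&demo_lock);
}

static const struct address_space_operations demo_aops = {
	.migratepage	= demo_migratepage,
	.isolate_page	= demo_isolate_page,
	.putback_page	= demo_putback_page,
};

/* At allocation time, under the page lock, the driver tags its page: */
static void demo_make_movable(struct page *page,
			      struct address_space *demo_mapping)
{
	/* demo_mapping->a_ops is assumed to point at demo_aops. */
	lock_page(page);
	__SetPageMovable(page, demo_mapping);
	unlock_page(page);

	spin_lock(&demo_lock);
	list_add(&page->lru, &demo_pool);
	spin_unlock(&demo_lock);
}

A real driver would additionally return false from isolate_page or -EAGAIN from migratepage when its internal state forbids migration at that moment, as described in the commit message.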