From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim <minchan@kernel.org>
To: Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, jlayton@poochiereds.net, bfields@fieldses.org, Vlastimil Babka, Joonsoo Kim, koct9i@gmail.com, aquini@redhat.com, virtualization@lists.linux-foundation.org, Mel Gorman, Hugh Dickins, Sergey Senozhatsky, Rik van Riel, rknize@motorola.com, Gioh Kim, Sangseok Lee, Chan Gyun Jeong, Al Viro, YiPing Xu, Minchan Kim, dri-devel@lists.freedesktop.org, Gioh Kim
Subject: [PATCH v3 02/16] mm/compaction: support non-lru movable page migration
Date: Wed, 30 Mar 2016 16:12:01 +0900
Message-Id: <1459321935-3655-3-git-send-email-minchan@kernel.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1459321935-3655-1-git-send-email-minchan@kernel.org>
References: <1459321935-3655-1-git-send-email-minchan@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Until now we have allowed migration only for LRU pages, and that was
enough to make high-order pages. But recently, embedded systems (e.g.,
webOS, Android) use lots of non-movable pages (e.g., zram, GPU memory),
so we have seen several reports of trouble with small high-order
allocations. There have been several efforts to fix the problem (e.g.,
enhancing the compaction algorithm, SLUB fallback to 0-order pages,
reserved memory, vmalloc and so on), but if there are lots of
non-movable pages in the system, those solutions are void in the long
run.

So, this patch supports a facility to turn non-movable pages into
movable ones. For the feature, this patch adds migration-related
functions to address_space_operations as well as some page flags.

Basically, this patch supports two page flags and two functions related
to page migration. The flags and page->mapping stability are protected
by PG_lock.

	PG_movable
	PG_isolated

	bool (*isolate_page) (struct page *, isolate_mode_t);
	void (*putback_page) (struct page *);

The duties of a subsystem that wants to make its pages migratable are
as follows (a sketch follows the list):

1. It should register an address_space to page->mapping, then mark the
page PG_movable via __SetPageMovable.

2. It should mark the page PG_isolated via SetPageIsolated if isolation
is successful, and return true.

3. If migration is successful, it should clear PG_isolated and
PG_movable of the page in preparation for freeing, then release the
page's reference so it can be freed.

4. If migration fails, the subsystem's putback function should clear
PG_isolated via ClearPageIsolated.

5. If a subsystem wants to release an isolated page, it should clear
PG_isolated but not PG_movable. Instead, the VM will do it.
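To make the duties concrete, here is a minimal illustrative sketch of a
hypothetical driver that follows the five rules above (illustration
only, not part of this patch). The demo_* names, the two lists and the
spinlock are invented for the example; __SetPageMovable,
SetPageIsolated/ClearPageIsolated, MIGRATEPAGE_SUCCESS and the three
address_space_operations callbacks are the interfaces this patch
introduces or builds on.

	#include <linux/fs.h>
	#include <linux/highmem.h>
	#include <linux/migrate.h>
	#include <linux/mm.h>
	#include <linux/page-flags.h>
	#include <linux/pagemap.h>
	#include <linux/spinlock.h>

	static LIST_HEAD(demo_pages);		/* hypothetical driver page list */
	static LIST_HEAD(demo_isolated_pages);	/* hypothetical isolated list */
	static DEFINE_SPINLOCK(demo_lock);	/* protects the two lists above */

	/* Rule 2: called with PG_lock held via isolate_movable_page(). */
	static bool demo_isolate_page(struct page *page, isolate_mode_t mode)
	{
		spin_lock(&demo_lock);
		list_move(&page->lru, &demo_isolated_pages);
		spin_unlock(&demo_lock);

		SetPageIsolated(page);
		return true;
	}

	/* Rule 4: migration failed; put the page back, clear PG_isolated only. */
	static void demo_putback_page(struct page *page)
	{
		spin_lock(&demo_lock);
		list_move(&page->lru, &demo_pages);
		spin_unlock(&demo_lock);

		ClearPageIsolated(page);
	}

	/* Rule 3: move the contents, then prepare the old page for freeing. */
	static int demo_migratepage(struct address_space *mapping,
			struct page *newpage, struct page *page,
			enum migrate_mode mode)
	{
		void *src = kmap_atomic(page);
		void *dst = kmap_atomic(newpage);

		copy_page(dst, src);
		kunmap_atomic(dst);
		kunmap_atomic(src);

		/*
		 * A real driver would also take over newpage here
		 * (re-register its mapping, update its own bookkeeping);
		 * omitted for brevity.
		 */
		spin_lock(&demo_lock);
		list_del_init(&page->lru);
		spin_unlock(&demo_lock);

		ClearPageIsolated(page);
		__ClearPageMovable(page);
		put_page(page);		/* drop the driver's own reference */
		return MIGRATEPAGE_SUCCESS;
	}

	static const struct address_space_operations demo_aops = {
		.isolate_page	= demo_isolate_page,
		.migratepage	= demo_migratepage,
		.putback_page	= demo_putback_page,
	};

	/*
	 * Only a_ops matters for this sketch; a real driver would fully
	 * initialise its address_space.
	 */
	static struct address_space demo_mapping = {
		.a_ops = &demo_aops,
	};

	/* Rule 1: register the mapping and mark a newly allocated page movable. */
	static void demo_make_movable(struct page *page)
	{
		lock_page(page);	/* PG_lock protects flag and page->mapping */
		__SetPageMovable(page, &demo_mapping);
		unlock_page(page);

		spin_lock(&demo_lock);
		list_add(&page->lru, &demo_pages);
		spin_unlock(&demo_lock);
	}

Note the asymmetry the rules encode: on failure the putback path clears
only PG_isolated and leaves PG_movable for the VM to clear (rules 4 and
5), while on success the driver clears both flags itself before
dropping its reference (rule 3).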
Cc: Vlastimil Babka
Cc: Mel Gorman
Cc: Hugh Dickins
Cc: dri-devel@lists.freedesktop.org
Cc: virtualization@lists.linux-foundation.org
Signed-off-by: Gioh Kim
Signed-off-by: Minchan Kim
---
 Documentation/filesystems/Locking      |   4 +
 Documentation/filesystems/vfs.txt      |   5 +
 fs/proc/page.c                         |   3 +
 include/linux/fs.h                     |   2 +
 include/linux/migrate.h                |   2 +
 include/linux/page-flags.h             |  31 ++++++
 include/uapi/linux/kernel-page-flags.h |   1 +
 mm/compaction.c                        |  14 ++-
 mm/migrate.c                           | 174 +++++++++++++++++++++++++++++----
 9 files changed, 217 insertions(+), 19 deletions(-)

diff --git a/Documentation/filesystems/Locking b/Documentation/filesystems/Locking
index 619af9bfdcb3..0bb79560abb3 100644
--- a/Documentation/filesystems/Locking
+++ b/Documentation/filesystems/Locking
@@ -195,7 +195,9 @@ unlocks and drops the reference.
 	int (*releasepage) (struct page *, int);
 	void (*freepage)(struct page *);
 	int (*direct_IO)(struct kiocb *, struct iov_iter *iter, loff_t offset);
+	bool (*isolate_page) (struct page *, isolate_mode_t);
 	int (*migratepage)(struct address_space *, struct page *, struct page *);
+	void (*putback_page) (struct page *);
 	int (*launder_page)(struct page *);
 	int (*is_partially_uptodate)(struct page *, unsigned long, unsigned long);
 	int (*error_remove_page)(struct address_space *, struct page *);
@@ -219,7 +221,9 @@ invalidatepage:		yes
 releasepage:		yes
 freepage:		yes
 direct_IO:
+isolate_page:		yes
 migratepage:		yes (both)
+putback_page:		yes
 launder_page:		yes
 is_partially_uptodate:	yes
 error_remove_page:	yes
diff --git a/Documentation/filesystems/vfs.txt b/Documentation/filesystems/vfs.txt
index b02a7d598258..4c1b6c3b4bc8 100644
--- a/Documentation/filesystems/vfs.txt
+++ b/Documentation/filesystems/vfs.txt
@@ -592,9 +592,14 @@ struct address_space_operations {
 	int (*releasepage) (struct page *, int);
 	void (*freepage)(struct page *);
 	ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter, loff_t offset);
+	/* isolate a page for migration */
+	bool (*isolate_page) (struct page *, isolate_mode_t);
 	/* migrate the contents of a page to the specified target */
 	int (*migratepage) (struct page *, struct page *);
+	/* put the page back to right list */
+	void (*putback_page) (struct page *);
 	int (*launder_page) (struct page *);
+
 	int (*is_partially_uptodate) (struct page *, unsigned long,
 					unsigned long);
 	void (*is_dirty_writeback) (struct page *, bool *, bool *);
diff --git a/fs/proc/page.c b/fs/proc/page.c
index 3ecd445e830d..ce3d08a4ad8d 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -157,6 +157,9 @@ u64 stable_page_flags(struct page *page)
 	if (page_is_idle(page))
 		u |= 1 << KPF_IDLE;
 
+	if (PageMovable(page))
+		u |= 1 << KPF_MOVABLE;
+
 	u |= kpf_copy_bit(k, KPF_LOCKED,	PG_locked);
 
 	u |= kpf_copy_bit(k, KPF_SLAB,		PG_slab);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index da9e67d937e5..36f2d610e7a8 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -401,6 +401,8 @@ struct address_space_operations {
 	 */
 	int (*migratepage) (struct address_space *,
			struct page *, struct page *, enum migrate_mode);
+	bool (*isolate_page)(struct page *, isolate_mode_t);
+	void (*putback_page)(struct page *);
 	int (*launder_page) (struct page *);
 	int (*is_partially_uptodate) (struct page *, unsigned long,
					unsigned long);
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 9b50325e4ddf..404fbfefeb33 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -37,6 +37,8 @@ extern int migrate_page(struct address_space *,
 			struct page *, struct page *, enum migrate_mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
 		unsigned long private, enum migrate_mode mode, int reason);
+extern bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+extern void putback_movable_page(struct page *page);
 
 extern int migrate_prep(void);
 extern int migrate_prep_local(void);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index f4ed4f1b0c77..77ebf8fdbc6e 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -129,6 +129,10 @@ enum pageflags {
 
 	/* Compound pages. Stored in first tail page's flags */
 	PG_double_map = PG_private_2,
+
+	/* non-lru movable pages */
+	PG_movable = PG_reclaim,
+	PG_isolated = PG_owner_priv_1,
 };
 
 #ifndef __GENERATING_BOUNDS_H
@@ -614,6 +618,33 @@ static inline void __ClearPageBalloon(struct page *page)
 	atomic_set(&page->_mapcount, -1);
 }
 
+#define PAGE_MOVABLE_MAPCOUNT_VALUE (-255)
+
+static inline int PageMovable(struct page *page)
+{
+	return ((test_bit(PG_movable, &(page)->flags) &&
+		atomic_read(&page->_mapcount) == PAGE_MOVABLE_MAPCOUNT_VALUE)
+		|| PageBalloon(page));
+}
+
+/* Caller should hold a PG_lock */
+static inline void __SetPageMovable(struct page *page,
+				struct address_space *mapping)
+{
+	page->mapping = mapping;
+	__set_bit(PG_movable, &page->flags);
+	atomic_set(&page->_mapcount, PAGE_MOVABLE_MAPCOUNT_VALUE);
+}
+
+static inline void __ClearPageMovable(struct page *page)
+{
+	atomic_set(&page->_mapcount, -1);
+	__clear_bit(PG_movable, &(page)->flags);
+	page->mapping = NULL;
+}
+
+PAGEFLAG(Isolated, isolated, PF_ANY);
+
 /*
  * If network-based swap is enabled, sl*b must keep track of whether pages
  * were allocated from pfmemalloc reserves.
diff --git a/include/uapi/linux/kernel-page-flags.h b/include/uapi/linux/kernel-page-flags.h
index 5da5f8751ce7..a184fd2434fa 100644
--- a/include/uapi/linux/kernel-page-flags.h
+++ b/include/uapi/linux/kernel-page-flags.h
@@ -34,6 +34,7 @@
 #define KPF_BALLOON		23
 #define KPF_ZERO_PAGE		24
 #define KPF_IDLE		25
+#define KPF_MOVABLE		26
 
 
 #endif /* _UAPILINUX_KERNEL_PAGE_FLAGS_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index ccf97b02b85f..7557aedddaee 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -703,7 +703,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 		/*
 		 * Check may be lockless but that's ok as we recheck later.
-		 * It's possible to migrate LRU pages and balloon pages
+		 * It's possible to migrate LRU and movable kernel pages.
 		 * Skip any other type of page
 		 */
 		is_lru = PageLRU(page);
@@ -714,6 +714,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 					goto isolate_success;
 				}
 			}
+
+			if (unlikely(PageMovable(page)) &&
+					!PageIsolated(page)) {
+				if (locked) {
+					spin_unlock_irqrestore(&zone->lru_lock,
+									flags);
+					locked = false;
+				}
+
+				if (isolate_movable_page(page, isolate_mode))
+					goto isolate_success;
+			}
 		}
 
 		/*
diff --git a/mm/migrate.c b/mm/migrate.c
index 53529c805752..b56bf2b3fe8c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -73,6 +73,85 @@ int migrate_prep_local(void)
 	return 0;
 }
 
+bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+{
+	bool ret = false;
+
+	/*
+	 * Avoid burning cycles with pages that are yet under __free_pages(),
+	 * or just got freed under us.
+	 *
+	 * In case we 'win' a race for a movable page being freed under us and
+	 * raise its refcount preventing __free_pages() from doing its job
+	 * the put_page() at the end of this block will take care of
+	 * release this page, thus avoiding a nasty leakage.
+	 */
+	if (unlikely(!get_page_unless_zero(page)))
+		goto out;
+
+	/*
+	 * Check PG_movable before holding a PG_lock because page's owner
+	 * assumes anybody doesn't touch PG_lock of newly allocated page.
+	 */
+	if (unlikely(!PageMovable(page)))
+		goto out_putpage;
+	/*
+	 * As movable pages are not isolated from LRU lists, concurrent
+	 * compaction threads can race against page migration functions
+	 * as well as race against the releasing a page.
+	 *
+	 * In order to avoid having an already isolated movable page
+	 * being (wrongly) re-isolated while it is under migration,
+	 * or to avoid attempting to isolate pages being released,
+	 * lets be sure we have the page lock
+	 * before proceeding with the movable page isolation steps.
+	 */
+	if (unlikely(!trylock_page(page)))
+		goto out_putpage;
+
+	if (!PageMovable(page) || PageIsolated(page))
+		goto out_no_isolated;
+
+	ret = page->mapping->a_ops->isolate_page(page, mode);
+	if (!ret)
+		goto out_no_isolated;
+
+	WARN_ON_ONCE(!PageIsolated(page));
+	unlock_page(page);
+	return ret;
+
+out_no_isolated:
+	unlock_page(page);
+out_putpage:
+	put_page(page);
+out:
+	return ret;
+}
+
+/* It should be called on page which is PG_movable */
+void putback_movable_page(struct page *page)
+{
+	/*
+	 * 'lock_page()' stabilizes the page and prevents races against
+	 * concurrent isolation threads attempting to re-isolate it.
+	 */
+	VM_BUG_ON_PAGE(!PageMovable(page), page);
+
+	lock_page(page);
+	if (PageIsolated(page)) {
+		struct address_space *mapping;
+
+		mapping = page_mapping(page);
+		mapping->a_ops->putback_page(page);
+		WARN_ON_ONCE(PageIsolated(page));
+	} else {
+		__ClearPageMovable(page);
+	}
+	unlock_page(page);
+	/* drop the extra ref count taken for movable page isolation */
+	put_page(page);
+}
+
 /*
  * Put previously isolated pages back onto the appropriate lists
  * from where they were once taken off for compaction/migration.
@@ -94,10 +173,18 @@ void putback_movable_pages(struct list_head *l)
 		list_del(&page->lru);
 		dec_zone_page_state(page, NR_ISOLATED_ANON +
 				page_is_file_cache(page));
-		if (unlikely(isolated_balloon_page(page)))
+		if (unlikely(isolated_balloon_page(page))) {
 			balloon_page_putback(page);
-		else
+		} else if (unlikely(PageMovable(page))) {
+			if (PageIsolated(page)) {
+				putback_movable_page(page);
+			} else {
+				__ClearPageMovable(page);
+				put_page(page);
+			}
+		} else {
 			putback_lru_page(page);
+		}
 	}
 }
 
@@ -592,7 +679,7 @@ void migrate_page_copy(struct page *newpage, struct page *page)
 ***********************************************************/
 
 /*
- * Common logic to directly migrate a single page suitable for
+ * Common logic to directly migrate a single LRU page suitable for
  * pages that do not use PagePrivate/PagePrivate2.
  *
  * Pages are locked upon entry and exit.
@@ -755,24 +842,54 @@ static int move_to_new_page(struct page *newpage, struct page *page,
 				enum migrate_mode mode)
 {
 	struct address_space *mapping;
-	int rc;
+	int rc = -EAGAIN;
+	bool lru_movable = true;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
 
 	mapping = page_mapping(page);
-	if (!mapping)
-		rc = migrate_page(mapping, newpage, page, mode);
-	else if (mapping->a_ops->migratepage)
-		/*
-		 * Most pages have a mapping and most filesystems provide a
-		 * migratepage callback. Anonymous pages are part of swap
-		 * space which also has its own migratepage callback. This
-		 * is the most common path for page migration.
-		 */
-		rc = mapping->a_ops->migratepage(mapping, newpage, page, mode);
-	else
-		rc = fallback_migrate_page(mapping, newpage, page, mode);
+	/*
+	 * In case of non-lru page, it could be released after
+	 * isolation step. In that case, we shouldn't try
+	 * fallback migration which was designed for LRU pages.
+	 *
+	 * The rule for such case is that subsystem should clear
+	 * PG_isolated but remains PG_movable so VM should catch
+	 * it and clear PG_movable for it.
+	 */
+	if (unlikely(PageMovable(page))) {
+		lru_movable = false;
+		VM_BUG_ON_PAGE(!mapping, page);
+		if (!PageIsolated(page)) {
+			rc = MIGRATEPAGE_SUCCESS;
+			__ClearPageMovable(page);
+			goto out;
+		}
+	}
+
+	if (likely(lru_movable)) {
+		if (!mapping)
+			rc = migrate_page(mapping, newpage, page, mode);
+		else if (mapping->a_ops->migratepage)
+			/*
+			 * Most pages have a mapping and most filesystems
+			 * provide a migratepage callback. Anonymous pages
+			 * are part of swap space which also has its own
+			 * migratepage callback. This is the most common path
+			 * for page migration.
+			 */
+			rc = mapping->a_ops->migratepage(mapping, newpage,
							page, mode);
+		else
+			rc = fallback_migrate_page(mapping, newpage,
							page, mode);
+	} else {
+		rc = mapping->a_ops->migratepage(mapping, newpage,
						page, mode);
+		WARN_ON_ONCE(rc == MIGRATEPAGE_SUCCESS &&
			PageIsolated(page));
+	}
 
 	/*
 	 * When successful, old pagecache page->mapping must be cleared before
@@ -782,6 +899,7 @@ static int move_to_new_page(struct page *newpage, struct page *page,
 		if (!PageAnon(page))
 			page->mapping = NULL;
 	}
+out:
 	return rc;
 }
 
@@ -960,6 +1078,8 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 			put_new_page(newpage, private);
 		else
 			put_page(newpage);
+		if (PageMovable(page))
+			__ClearPageMovable(page);
 		goto out;
 	}
 
@@ -1000,8 +1120,26 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 			num_poisoned_pages_inc();
 		}
 	} else {
-		if (rc != -EAGAIN)
-			putback_lru_page(page);
+		if (rc != -EAGAIN) {
+			/*
+			 * subsystem couldn't remove PG_movable since page is
+			 * isolated so PageMovable check is not racy in here.
+			 * But PageIsolated check can be racy but it's okay
+			 * because putback_movable_page checks it under PG_lock
+			 * again.
+			 */
+			if (unlikely(PageMovable(page))) {
+				if (PageIsolated(page))
+					putback_movable_page(page);
+				else {
+					__ClearPageMovable(page);
+					put_page(page);
+				}
+			} else {
+				putback_lru_page(page);
+			}
+		}
+
 		if (put_new_page)
 			put_new_page(newpage, private);
 		else
-- 
1.9.1