Hi all,

On Thu, 11 Feb 2021 21:24:37 +1100 Stephen Rothwell wrote:
>
> Today's linux-next merge of the akpm-current tree got a conflict in:
>
>   include/linux/pagemap.h
>
> between commits:
>
>   13aecd8259dc ("mm: Implement readahead_control pageset expansion")
>   9a28f7e68602 ("netfs: Rename unlock_page_fscache() and wait_on_page_fscache()")
>
> from the fscache tree and commits:
>
>   cd669a9cbd89 ("mm/filemap: add mapping_seek_hole_data")
>   34c37da5f411 ("mm/filemap: pass a sleep state to put_and_wait_on_page_locked")
>
> from the akpm-current tree.
>
> I fixed it up (see below) and can carry the fix as necessary. This
> is now fixed as far as linux-next is concerned, but any non trivial
> conflicts should be mentioned to your upstream maintainer when your tree
> is submitted for merging.  You may also want to consider cooperating
> with the maintainer of the conflicting tree to minimise any particularly
> complex conflicts.
>
> diff --cc include/linux/pagemap.h
> index a88ccc9ab0b1,20225b067583..000000000000
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@@ -682,22 -681,7 +682,21 @@@ static inline int wait_on_page_locked_k
>   	return wait_on_page_bit_killable(compound_head(page), PG_locked);
>   }
>   
>  +/**
>  + * wait_on_page_private_2 - Wait for PG_private_2 to be cleared on a page
>  + * @page: The page
>  + *
>  + * Wait for the PG_private_2 page bit to be removed from a page.  This is, for
>  + * example, used to handle a netfs page being written to a local disk cache,
>  + * thereby allowing writes to the cache for the same page to be serialised.
>  + */
>  +static inline void wait_on_page_private_2(struct page *page)
>  +{
>  +	if (PagePrivate2(page))
>  +		wait_on_page_bit(compound_head(page), PG_private_2);
>  +}
>  +
> - extern void put_and_wait_on_page_locked(struct page *page);
> - 
> + int put_and_wait_on_page_locked(struct page *page, int state);
>   void wait_on_page_writeback(struct page *page);
>   extern void end_page_writeback(struct page *page);
>   void wait_for_stable_page(struct page *page);
> @@@ -772,11 -756,11 +771,13 @@@ int add_to_page_cache_lru(struct page *
>   				pgoff_t index, gfp_t gfp_mask);
>   extern void delete_from_page_cache(struct page *page);
>   extern void __delete_from_page_cache(struct page *page, void *shadow);
> - int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask);
> + void replace_page_cache_page(struct page *old, struct page *new);
>   void delete_from_page_cache_batch(struct address_space *mapping,
>   				  struct pagevec *pvec);
>  +void readahead_expand(struct readahead_control *ractl,
>  +		      loff_t new_start, size_t new_len);
> + loff_t mapping_seek_hole_data(struct address_space *, loff_t start, loff_t end,
> + 		int whence);
>   
>   /*
>    * Like add_to_page_cache_locked, but used to add newly allocated pages:

This is now a conflict between the fscache tree and Linus' tree.

-- 
Cheers,
Stephen Rothwell
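
As an aside on the readahead_expand() declaration kept by the resolution
above: the sketch below is purely illustrative and is not part of the email
or of either tree.  Only the readahead_expand() prototype comes from the
diff; the readahead_pos()/readahead_length() helpers are existing
<linux/pagemap.h> accessors, and the example_ names and the 256 KiB granule
size are invented for the example.  It shows how a netfs-style ->readahead()
implementation might round the readahead window out to a cache-block
boundary before issuing I/O:

/* Illustrative only: example_ names and EXAMPLE_CACHE_GRANULE are invented;
 * only the readahead_expand() prototype comes from the resolved header. */
#include <linux/pagemap.h>
#include <linux/sizes.h>

#define EXAMPLE_CACHE_GRANULE	SZ_256K		/* assumed cache block size */

static void example_expand_to_granule(struct readahead_control *ractl)
{
	loff_t start = readahead_pos(ractl);
	size_t len = readahead_length(ractl);
	loff_t new_start = round_down(start, EXAMPLE_CACHE_GRANULE);
	size_t new_len = round_up(start + len, EXAMPLE_CACHE_GRANULE) - new_start;

	/* Ask the core to widen the window; the caller should re-read
	 * readahead_pos()/readahead_length() afterwards to see how much
	 * of the request was actually honoured. */
	readahead_expand(ractl, new_start, new_len);
}

In the resolved header this declaration simply coexists with the
akpm-current additions (mapping_seek_hole_data() and the reworked
put_and_wait_on_page_locked()), which is all the fixup above does.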