Date: Mon, 5 Oct 2015 09:53:18 +0200
From: Peter Zijlstra
To: Vlastimil Babka
Cc: linux-mm@kvack.org, Jerome Marchand, Andrew Morton, Hugh Dickins,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, Michal Hocko,
	"Kirill A. Shutemov", Cyrill Gorcunov, Randy Dunlap,
	linux-s390@vger.kernel.org, Martin Schwidefsky, Heiko Carstens,
	Paul Mackerras, Arnaldo Carvalho de Melo, Oleg Nesterov, Linux API,
	Konstantin Khlebnikov
Subject: Re: [PATCH v4 2/4] mm, proc: account for shmem swap in /proc/pid/smaps
Message-ID: <20151005075318.GE2903@worktop.programming.kicks-ass.net>
References: <1443792951-13944-1-git-send-email-vbabka@suse.cz>
	<1443792951-13944-3-git-send-email-vbabka@suse.cz>
In-Reply-To: <1443792951-13944-3-git-send-email-vbabka@suse.cz>

On Fri, Oct 02, 2015 at 03:35:49PM +0200, Vlastimil Babka wrote:
> +static unsigned long smaps_shmem_swap(struct vm_area_struct *vma)
> +{
> +	struct inode *inode;
> +	unsigned long swapped;
> +	pgoff_t start, end;
> +
> +	if (!vma->vm_file)
> +		return 0;
> +
> +	inode = file_inode(vma->vm_file);
> +
> +	if (!shmem_mapping(inode->i_mapping))
> +		return 0;
> +
> +	/*
> +	 * The easier cases are when the shmem object has nothing in swap, or
> +	 * we have the whole object mapped. Then we can simply use the stats
> +	 * that are already tracked by shmem.
> +	 */
> +	swapped = shmem_swap_usage(inode);
> +
> +	if (swapped == 0)
> +		return 0;
> +
> +	if (vma->vm_end - vma->vm_start >= inode->i_size)
> +		return swapped;
> +
> +	/*
> +	 * Here we have to inspect individual pages in our mapped range to
> +	 * determine how much of them are swapped out. Thanks to RCU, we don't
> +	 * need i_mutex to protect against truncating or hole punching.
> +	 */

At the very least put in an assertion that we hold the RCU read lock,
otherwise RCU doesn't guarantee anything and it's not obvious it is held
here.

> +	start = linear_page_index(vma, vma->vm_start);
> +	end = linear_page_index(vma, vma->vm_end);
> +
> +	return shmem_partial_swap_usage(inode->i_mapping, start, end);
> +}

> + * Determine (in bytes) how much of the whole shmem object is swapped out.
> + */
> +unsigned long shmem_swap_usage(struct inode *inode)
> +{
> +	struct shmem_inode_info *info = SHMEM_I(inode);
> +	unsigned long swapped;
> +
> +	/* Mostly an overkill, but it's not atomic64_t */

Yeah, that doesn't make any kind of sense.

> +	spin_lock(&info->lock);
> +	swapped = info->swapped;
> +	spin_unlock(&info->lock);
> +
> +	return swapped << PAGE_SHIFT;
> +}
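
A minimal sketch of the assertion asked for above, assuming it sits just
before the partial-range walk in smaps_shmem_swap(); RCU_LOCKDEP_WARN() and
rcu_read_lock_held() are the stock lockdep helpers, and the placement and
warning text are illustrative, not part of the patch:

	/*
	 * Sketch: make the RCU dependency explicit. With CONFIG_PROVE_RCU
	 * this fires when the caller does not actually hold the RCU read
	 * lock; without it the check compiles away.
	 */
	RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
			 "shmem partial swap walk needs rcu_read_lock()");

	start = linear_page_index(vma, vma->vm_start);
	end = linear_page_index(vma, vma->vm_end);

	return shmem_partial_swap_usage(inode->i_mapping, start, end);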
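
On the last hunk: info->swapped is a single unsigned long, so one reading of
the objection is that the lock buys nothing over a plain snapshot. A
hypothetical lock-free variant, shown only as an illustration and not as
something proposed in this thread:

unsigned long shmem_swap_usage(struct inode *inode)
{
	struct shmem_inode_info *info = SHMEM_I(inode);

	/*
	 * info->swapped is updated under info->lock, but it is a single
	 * word, so READ_ONCE() yields the same snapshot the lock/unlock
	 * pair does, without taking the lock.
	 */
	return READ_ONCE(info->swapped) << PAGE_SHIFT;
}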