Subject: Re: [HELP-NEEDED, PATCH 0/3] Do not loose dirty bit on THP pages
From: "Aneesh Kumar K.V"
To: "Kirill A. Shutemov", Andrew Morton, Vlastimil Babka, Vineet Gupta,
    Russell King, Will Deacon, Catalin Marinas, Ralf Baechle,
    "David S. Miller", Heiko Carstens
Cc: Martin Schwidefsky, Andrea Arcangeli, linux-arch@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 14 Jun 2017 20:55:26 +0530
References: <20170614135143.25068-1-kirill.shutemov@linux.intel.com>
In-Reply-To: <20170614135143.25068-1-kirill.shutemov@linux.intel.com>
On Wednesday 14 June 2017 07:21 PM, Kirill A. Shutemov wrote:
> Hi,
>
> Vlastimil noted that pmdp_invalidate() is not atomic and we can lose
> dirty and access bits if the CPU sets them after the pmdp dereference,
> but before set_pmd_at().
>
> The bug doesn't lead to user-visible misbehaviour in the current
> kernel, but fixing it is critical for future work on THP: both
> huge-ext4 and THP swap-out rely on proper dirty tracking.
>
> Unfortunately, there's no way to address the issue generically. We
> need to fix all architectures that support THP one-by-one.
>
> All architectures that support THP have to provide an atomic
> pmdp_invalidate(). If the generic implementation of pmdp_invalidate()
> is used, the architecture needs to provide an atomic
> pmdp_mknotpresent().
>
> I've fixed the issue for x86, but I need help with the rest.
>
> So far THP is supported on 8 architectures. Power and S390 already
> provide an atomic pmdp_invalidate(). x86 is fixed by these patches,
> so 5 architectures are left:
>
>  - arc;
>  - arm;
>  - arm64;
>  - mips;
>  - sparc -- it has a custom pmdp_invalidate(), but it's racy too;
>
> Please, help me with them.
>
> Kirill A. Shutemov (3):
>   x86/mm: Provide pmdp_mknotpresent() helper
>   mm: Do not loose dirty and access bits in pmdp_invalidate()
>   mm, thp: Do not loose dirty bit in __split_huge_pmd_locked()
>

But in __split_huge_pmd_locked() we collect the dirty bit early. So
even if we make pmdp_invalidate() atomic, if the pmd entry is marked
dirty after we have collected the dirty bit, we still lose it, right?
Maybe we should relook at the pmd PTE update interface. We really need
an interface that can update pmd entries such that we don't clear the
entry in between.
IMHO, we can avoid pmdp_invalidate() completely if we can switch
directly from a pmd PTE entry to a pointer to the PTE page
(pgtable_t). We also need this interface to avoid the madvise race
fixed by
https://lkml.kernel.org/r/20170302151034.27829-1-kirill.shutemov@linux.intel.com

The usage of pmdp_invalidate() while splitting the pmd also needs
updated documentation. In the earlier THP design, we were required to
keep the pmd present and marked as splitting, so that other code paths
could wait until the split was done. With the current design, can we
ideally mark the pmdp not present early on? As long as we hold the pmd
lock, a parallel fault that tries to mark the pmd accessed will wait
on the pmd lock. On taking the lock it will find the pmd modified and
should retry the access, right?

-aneesh