Message-ID: <519F7159.5010009@meduna.org>
Date: Fri, 24 May 2013 15:55:37 +0200
From: Stanislav Meduna <stano@meduna.org>
To: Rik van Riel
CC: "H. Peter Anvin", Steven Rostedt, Linus Torvalds,
    "linux-rt-users@vger.kernel.org", "linux-kernel@vger.kernel.org",
    Thomas Gleixner, Ingo Molnar, the arch/x86 maintainers, Hai Huang
Subject: Re: [PATCH] mm: fix up a spurious page fault whenever it happens
In-Reply-To: <519F65DB.2020305@redhat.com>

On 24.05.2013 15:06, Rik van Riel wrote:
> Just to rule something out, are you using
> transparent huge pages on those systems?

On my present test system they are configured in, but I am not
using them.

# cat /proc/meminfo | grep Huge
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       4096 kB

However, during my (many) previous experiments the problem also
happened with kernels that did not have them configured.

Thanks
-- 
                                Stano
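
P.S. The HugePages_* counters above only cover hugetlbfs pages, not THP
itself, so if it helps I can also report the THP state directly. The
commands below are only a sketch, assuming the standard sysfs knob and
meminfo counter that this kernel should expose when THP is built in:

# cat /sys/kernel/mm/transparent_hugepage/enabled
# grep AnonHugePages /proc/meminfo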