From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755924AbcAITjQ (ORCPT ); Sat, 9 Jan 2016 14:39:16 -0500
Received: from mail-wm0-f65.google.com ([74.125.82.65]:35152 "EHLO
	mail-wm0-f65.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1755194AbcAITjP (ORCPT );
	Sat, 9 Jan 2016 14:39:15 -0500
MIME-Version: 1.0
In-Reply-To:
References: <19f6403f2b04d3448ed2ac958e656645d8b6e70c.1452297867.git.tony.luck@intel.com>
Date: Sat, 9 Jan 2016 11:39:14 -0800
Message-ID:
Subject: Re: [PATCH v8 3/3] x86, mce: Add __mcsafe_copy()
From: Tony Luck
To: Andy Lutomirski
Cc: linux-nvdimm, Dan Williams, Borislav Petkov,
	"linux-kernel@vger.kernel.org", Andrew Morton, Robert, Ingo Molnar,
	"linux-mm@kvack.org", X86 ML
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, Jan 9, 2016 at 9:57 AM, Andy Lutomirski wrote:
> On Sat, Jan 9, 2016 at 9:48 AM, Tony Luck wrote:
>> ERMS?
>
> It's the fast string extension, aka Enhanced REP MOV STOS. On CPUs
> with that feature (and not disabled via MSR), plain ol' rep movs is
> the fastest way to copy bytes. I think this includes all Intel CPUs
> from SNB onwards.

Ah ... very fast at copying ... but currently not machine check recoverable.

-Tony