From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1161726AbeCAUx0 (ORCPT );
	Thu, 1 Mar 2018 15:53:26 -0500
Received: from mail-wr0-f195.google.com ([209.85.128.195]:38478 "EHLO
	mail-wr0-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1161674AbeCAUxX (ORCPT );
	Thu, 1 Mar 2018 15:53:23 -0500
X-Google-Smtp-Source: AG47ELvvCTCdIRRfYnukG5Jm5ldQ42NFbtuocNGkdBJenKJ2Hf18afBSxrFYAY17PuM2Q104z7g9ew==
Date: Thu, 1 Mar 2018 13:53:15 -0700
From: Jason Gunthorpe
To: Benjamin Herrenschmidt
Cc: Dan Williams, Logan Gunthorpe, Linux Kernel Mailing List,
	linux-pci@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-rdma, linux-nvdimm, linux-block@vger.kernel.org,
	Stephen Bates, Christoph Hellwig, Jens Axboe, Keith Busch,
	Sagi Grimberg, Bjorn Helgaas, Max Gurtovoy, Jérôme Glisse,
	Alex Williamson, Oliver OHalloran
Subject: Re: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory
Message-ID: <20180301205315.GJ19007@ziepe.ca>
References: <20180228234006.21093-1-logang@deltatee.com>
	<1519876489.4592.3.camel@kernel.crashing.org>
	<1519876569.4592.4.camel@au1.ibm.com>
	<1519936477.4592.23.camel@au1.ibm.com>
	<1519936815.4592.25.camel@au1.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1519936815.4592.25.camel@au1.ibm.com>
User-Agent: Mutt/1.5.24 (2015-08-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Mar 02, 2018 at 07:40:15AM +1100, Benjamin Herrenschmidt wrote:
> Also we need to be able to hard block MEMREMAP_WB mappings of non-RAM
> on ppc64 (maybe via an arch hook as it might depend on the processor
> family). Server powerpc cannot do cachable accesses on IO memory
> (unless it's special OpenCAPI or nVlink, but not on PCIe).
I think you are right on this - even on x86 we must not create
cacheable mappings of PCI BARs - there is no way that works the way
anyone would expect.

I think this series doesn't have a problem here only because it never
touches the BAR pages with the CPU.

BAR memory should be mapped into the CPU as WC at best, on all arches.

Jason
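For illustration, a minimal kernel-style sketch of what "WC at best" means for a driver mapping a BAR; the helper name `map_bar_wc` is hypothetical, but `pci_resource_start()`, `pci_resource_len()`, and `ioremap_wc()` are the standard kernel interfaces involved:

```c
/*
 * Hypothetical sketch (Linux kernel driver context): mapping a PCI BAR
 * for CPU access. The function name map_bar_wc is illustrative only.
 */
#include <linux/pci.h>
#include <linux/io.h>

static void __iomem *map_bar_wc(struct pci_dev *pdev, int bar)
{
	resource_size_t start = pci_resource_start(pdev, bar);
	resource_size_t len = pci_resource_len(pdev, bar);

	/*
	 * Write-combining (or a plain uncached ioremap) is the strongest
	 * attribute that is safe for PCI BARs across architectures. A
	 * cacheable mapping of the same range, e.g. via
	 * memremap(start, len, MEMREMAP_WB), is what the thread argues
	 * must be blocked: it cannot work on ppc64 server CPUs and is
	 * not reliable even on x86.
	 */
	return ioremap_wc(start, len);
}
```

The point of the sketch is the attribute choice, not the helper itself: nothing in the P2P series needs a cacheable CPU view of the BAR, so restricting it to WC (or uncached) loses nothing.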