From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1034855AbcIZMd0 (ORCPT );
	Mon, 26 Sep 2016 08:33:26 -0400
Received: from mail-pf0-f180.google.com ([209.85.192.180]:36368 "EHLO
	mail-pf0-f180.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1034604AbcIZMcu (ORCPT );
	Mon, 26 Sep 2016 08:32:50 -0400
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-kernel@vger.kernel.org, bskeggs@redhat.com, airlied@linux.ie,
	dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v4 2/3] drm/nouveau/fb/gf100: defer DMA mapping of scratch
	page to init() hook
Date: Mon, 26 Sep 2016 05:32:39 -0700
Message-Id: <1474893160-12321-3-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1474893160-12321-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1474893160-12321-1-git-send-email-ard.biesheuvel@linaro.org>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

The 100c10 scratch page is mapped using dma_map_page() before the TTM
layer has had a chance to set the DMA mask. This means we are still
running with the default DMA mask of 32 bits when this code executes,
and this causes problems for platforms with no memory below 4 GB (such
as AMD Seattle).

So move the dma_map_page() to the .init hook, which executes after the
DMA mask has been set.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c | 26 ++++++++++++++------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c
index 76433cc66fff..5c8132873e60 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c
@@ -93,7 +93,18 @@ gf100_fb_init(struct nvkm_fb *base)
 	struct gf100_fb *fb = gf100_fb(base);
 	struct nvkm_device *device = fb->base.subdev.device;
 
-	if (fb->r100c10_page)
+	if (!fb->r100c10) {
+		dma_addr_t addr = dma_map_page(device->dev, fb->r100c10_page, 0,
+					       PAGE_SIZE, DMA_BIDIRECTIONAL);
+		if (!dma_mapping_error(device->dev, addr)) {
+			fb->r100c10 = addr;
+		} else {
+			nvkm_warn(&fb->base.subdev,
+				  "dma_map_page() failed on 100c10 page\n");
+		}
+	}
+
+	if (fb->r100c10)
 		nvkm_wr32(device, 0x100c10, fb->r100c10 >> 8);
 }
 
@@ -103,12 +114,13 @@ gf100_fb_dtor(struct nvkm_fb *base)
 	struct gf100_fb *fb = gf100_fb(base);
 	struct nvkm_device *device = fb->base.subdev.device;
 
-	if (fb->r100c10_page) {
+	if (fb->r100c10) {
 		dma_unmap_page(device->dev, fb->r100c10, PAGE_SIZE,
 			       DMA_BIDIRECTIONAL);
-		__free_page(fb->r100c10_page);
 	}
 
+	__free_page(fb->r100c10_page);
+
 	return fb;
 }
 
@@ -124,11 +136,9 @@ gf100_fb_new_(const struct nvkm_fb_func *func, struct nvkm_device *device,
 	*pfb = &fb->base;
 
 	fb->r100c10_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
-	if (fb->r100c10_page) {
-		fb->r100c10 = dma_map_page(device->dev, fb->r100c10_page, 0,
-					   PAGE_SIZE, DMA_BIDIRECTIONAL);
-		if (dma_mapping_error(device->dev, fb->r100c10))
-			return -EFAULT;
+	if (!fb->r100c10_page) {
+		nvkm_error(&fb->base.subdev, "failed 100c10 page alloc\n");
+		return -ENOMEM;
 	}
 
 	return 0;
-- 
2.7.4
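
The pattern the patch applies is "allocate the page at construction
time, defer the DMA mapping until the DMA mask is final". Below is a
minimal sketch of that pattern in isolation, not part of the patch:
struct my_dev, my_ctor(), my_init() and my_dtor() are hypothetical
names invented for illustration, while alloc_page(), dma_map_page(),
dma_mapping_error(), dma_unmap_page() and __free_page() are the real
kernel APIs the patch uses. The mapping is guarded so it happens only
once, mirroring the patch's "if (!fb->r100c10)" check, since an init
hook of this kind may run more than once (e.g. across suspend/resume).

/*
 * Illustrative sketch only -- hypothetical driver, real DMA API calls.
 */
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

struct my_dev {
	struct device *dev;		/* underlying struct device */
	struct page *scratch_page;	/* allocated early */
	dma_addr_t scratch_dma;		/* mapped late; 0 = not mapped yet */
};

/* Constructor: runs before the DMA mask is widened, so only allocate. */
static int my_ctor(struct my_dev *md)
{
	md->scratch_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	if (!md->scratch_page)
		return -ENOMEM;
	return 0;
}

/* Init hook: runs after the DMA mask is final, so mapping is safe now. */
static int my_init(struct my_dev *md)
{
	dma_addr_t addr;

	if (md->scratch_dma)	/* already mapped by an earlier init */
		return 0;

	addr = dma_map_page(md->dev, md->scratch_page, 0, PAGE_SIZE,
			    DMA_BIDIRECTIONAL);
	if (dma_mapping_error(md->dev, addr))
		return -EFAULT;

	md->scratch_dma = addr;
	return 0;
}

/* Destructor: unmap only if mapping succeeded; free unconditionally. */
static void my_dtor(struct my_dev *md)
{
	if (md->scratch_dma)
		dma_unmap_page(md->dev, md->scratch_dma, PAGE_SIZE,
			       DMA_BIDIRECTIONAL);
	__free_page(md->scratch_page);
}

Splitting allocation from mapping also explains the reshuffled dtor in
the patch: dma_unmap_page() stays conditional on the mapping having
been made, while __free_page() moves out of the conditional because
the page itself is guaranteed to exist once construction succeeded.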