* [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
@ 2014-07-17 10:01 Michel Dänzer
  2014-07-17 10:01 ` [PATCH 1/5] drm/radeon: Remove radeon_gart_restore() Michel Dänzer
                   ` (12 more replies)
  0 siblings, 13 replies; 38+ messages in thread
From: Michel Dänzer @ 2014-07-17 10:01 UTC (permalink / raw)
  To: dri-devel, mesa-dev

In order to try and improve X(Shm)PutImage performance with glamor, I
implemented support for write-combined CPU mappings of BOs in GTT.

This did provide a nice speedup, but to my surprise, using VRAM instead
of write-combined GTT turned out to be even faster in general on my
Kaveri machine, both for the internal GPU and for discrete GPUs.

However, I've kept the changes from GTT to VRAM separate, in case this
turns out to be a loss on other setups.

Kernel patches:

[PATCH 1/5] drm/radeon: Remove radeon_gart_restore()
[PATCH 2/5] drm/radeon: Pass GART page flags to radeon_gart_set_page() explicitly
[PATCH 3/5] drm/radeon: Allow write-combined CPU mappings of BOs in GTT
[PATCH 4/5] drm/radeon: Use write-combined CPU mappings of rings and IBs on >= SI
[PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI

Mesa patches:

[PATCH 1/5] winsys/radeon: Use separate caching buffer managers for
[PATCH 2/5] r600g/radeonsi: Use write-combined CPU mappings of some
[PATCH 3/5] r600g/radeonsi: Prefer VRAM for CPU -> GPU streaming
[PATCH 4/5] r600g,radeonsi: Use write-combined persistent GTT
[PATCH 5/5] r600g,radeonsi: Prefer VRAM for persistent mappings


* [PATCH 1/5] drm/radeon: Remove radeon_gart_restore()
  2014-07-17 10:01 [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Michel Dänzer
@ 2014-07-17 10:01 ` Michel Dänzer
  2014-07-17 10:01 ` [PATCH 2/5] drm/radeon: Pass GART page flags to radeon_gart_set_page() explicitly Michel Dänzer
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 38+ messages in thread
From: Michel Dänzer @ 2014-07-17 10:01 UTC (permalink / raw)
  To: dri-devel, mesa-dev

From: Michel Dänzer <michel.daenzer@amd.com>

Doesn't seem necessary, the GART table memory should be persistent.

Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
---
 drivers/gpu/drm/radeon/cik.c         |  1 -
 drivers/gpu/drm/radeon/evergreen.c   |  1 -
 drivers/gpu/drm/radeon/ni.c          |  1 -
 drivers/gpu/drm/radeon/r100.c        |  1 -
 drivers/gpu/drm/radeon/r300.c        |  1 -
 drivers/gpu/drm/radeon/r600.c        |  1 -
 drivers/gpu/drm/radeon/radeon.h      |  1 -
 drivers/gpu/drm/radeon/radeon_gart.c | 27 ---------------------------
 drivers/gpu/drm/radeon/rs400.c       |  1 -
 drivers/gpu/drm/radeon/rs600.c       |  1 -
 drivers/gpu/drm/radeon/rv770.c       |  1 -
 drivers/gpu/drm/radeon/si.c          |  1 -
 12 files changed, 38 deletions(-)

diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
index 0b24711..1b0da66 100644
--- a/drivers/gpu/drm/radeon/cik.c
+++ b/drivers/gpu/drm/radeon/cik.c
@@ -5401,7 +5401,6 @@ static int cik_pcie_gart_enable(struct radeon_device *rdev)
 	r = radeon_gart_table_vram_pin(rdev);
 	if (r)
 		return r;
-	radeon_gart_restore(rdev);
 	/* Setup TLB control */
 	WREG32(MC_VM_MX_L1_TLB_CNTL,
 	       (0xA << 7) |
diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
index 250bac3..39ada71 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -2424,7 +2424,6 @@ static int evergreen_pcie_gart_enable(struct radeon_device *rdev)
 	r = radeon_gart_table_vram_pin(rdev);
 	if (r)
 		return r;
-	radeon_gart_restore(rdev);
 	/* Setup L2 cache */
 	WREG32(VM_L2_CNTL, ENABLE_L2_CACHE | ENABLE_L2_FRAGMENT_PROCESSING |
 				ENABLE_L2_PTE_CACHE_LRU_UPDATE_BY_WRITE |
diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index 5a33ca6..327b85f 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -1229,7 +1229,6 @@ static int cayman_pcie_gart_enable(struct radeon_device *rdev)
 	r = radeon_gart_table_vram_pin(rdev);
 	if (r)
 		return r;
-	radeon_gart_restore(rdev);
 	/* Setup TLB control */
 	WREG32(MC_VM_MX_L1_TLB_CNTL,
 	       (0xA << 7) |
diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
index 1544efc..ed1c53e 100644
--- a/drivers/gpu/drm/radeon/r100.c
+++ b/drivers/gpu/drm/radeon/r100.c
@@ -652,7 +652,6 @@ int r100_pci_gart_enable(struct radeon_device *rdev)
 {
 	uint32_t tmp;
 
-	radeon_gart_restore(rdev);
 	/* discard memory request outside of configured range */
 	tmp = RREG32(RADEON_AIC_CNTL) | RADEON_DIS_OUT_OF_PCI_GART_ACCESS;
 	WREG32(RADEON_AIC_CNTL, tmp);
diff --git a/drivers/gpu/drm/radeon/r300.c b/drivers/gpu/drm/radeon/r300.c
index 3c21d77..8d14e66 100644
--- a/drivers/gpu/drm/radeon/r300.c
+++ b/drivers/gpu/drm/radeon/r300.c
@@ -120,7 +120,6 @@ int rv370_pcie_gart_enable(struct radeon_device *rdev)
 	r = radeon_gart_table_vram_pin(rdev);
 	if (r)
 		return r;
-	radeon_gart_restore(rdev);
 	/* discard memory request outside of configured range */
 	tmp = RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_DISCARD;
 	WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp);
diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
index c66952d..e1be5ce 100644
--- a/drivers/gpu/drm/radeon/r600.c
+++ b/drivers/gpu/drm/radeon/r600.c
@@ -968,7 +968,6 @@ static int r600_pcie_gart_enable(struct radeon_device *rdev)
 	r = radeon_gart_table_vram_pin(rdev);
 	if (r)
 		return r;
-	radeon_gart_restore(rdev);
 
 	/* Setup L2 cache */
 	WREG32(VM_L2_CNTL, ENABLE_L2_CACHE | ENABLE_L2_FRAGMENT_PROCESSING |
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 079eac7..f4869b4 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -614,7 +614,6 @@ void radeon_gart_unbind(struct radeon_device *rdev, unsigned offset,
 int radeon_gart_bind(struct radeon_device *rdev, unsigned offset,
 		     int pages, struct page **pagelist,
 		     dma_addr_t *dma_addr);
-void radeon_gart_restore(struct radeon_device *rdev);
 
 
 /*
diff --git a/drivers/gpu/drm/radeon/radeon_gart.c b/drivers/gpu/drm/radeon/radeon_gart.c
index 2e72365..b7d3e84 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -298,33 +298,6 @@ int radeon_gart_bind(struct radeon_device *rdev, unsigned offset,
 }
 
 /**
- * radeon_gart_restore - bind all pages in the gart page table
- *
- * @rdev: radeon_device pointer
- *
- * Binds all pages in the gart page table (all asics).
- * Used to rebuild the gart table on device startup or resume.
- */
-void radeon_gart_restore(struct radeon_device *rdev)
-{
-	int i, j, t;
-	u64 page_base;
-
-	if (!rdev->gart.ptr) {
-		return;
-	}
-	for (i = 0, t = 0; i < rdev->gart.num_cpu_pages; i++) {
-		page_base = rdev->gart.pages_addr[i];
-		for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) {
-			radeon_gart_set_page(rdev, t, page_base);
-			page_base += RADEON_GPU_PAGE_SIZE;
-		}
-	}
-	mb();
-	radeon_gart_tlb_flush(rdev);
-}
-
-/**
  * radeon_gart_init - init the driver info for managing the gart
  *
  * @rdev: radeon_device pointer
diff --git a/drivers/gpu/drm/radeon/rs400.c b/drivers/gpu/drm/radeon/rs400.c
index a0f96de..4519f9c 100644
--- a/drivers/gpu/drm/radeon/rs400.c
+++ b/drivers/gpu/drm/radeon/rs400.c
@@ -109,7 +109,6 @@ int rs400_gart_enable(struct radeon_device *rdev)
 	uint32_t size_reg;
 	uint32_t tmp;
 
-	radeon_gart_restore(rdev);
 	tmp = RREG32_MC(RS690_AIC_CTRL_SCRATCH);
 	tmp |= RS690_DIS_OUT_OF_PCI_GART_ACCESS;
 	WREG32_MC(RS690_AIC_CTRL_SCRATCH, tmp);
diff --git a/drivers/gpu/drm/radeon/rs600.c b/drivers/gpu/drm/radeon/rs600.c
index d1a35cb..27a56ad 100644
--- a/drivers/gpu/drm/radeon/rs600.c
+++ b/drivers/gpu/drm/radeon/rs600.c
@@ -555,7 +555,6 @@ static int rs600_gart_enable(struct radeon_device *rdev)
 	r = radeon_gart_table_vram_pin(rdev);
 	if (r)
 		return r;
-	radeon_gart_restore(rdev);
 	/* Enable bus master */
 	tmp = RREG32(RADEON_BUS_CNTL) & ~RS600_BUS_MASTER_DIS;
 	WREG32(RADEON_BUS_CNTL, tmp);
diff --git a/drivers/gpu/drm/radeon/rv770.c b/drivers/gpu/drm/radeon/rv770.c
index da8703d..2983f17 100644
--- a/drivers/gpu/drm/radeon/rv770.c
+++ b/drivers/gpu/drm/radeon/rv770.c
@@ -900,7 +900,6 @@ static int rv770_pcie_gart_enable(struct radeon_device *rdev)
 	r = radeon_gart_table_vram_pin(rdev);
 	if (r)
 		return r;
-	radeon_gart_restore(rdev);
 	/* Setup L2 cache */
 	WREG32(VM_L2_CNTL, ENABLE_L2_CACHE | ENABLE_L2_FRAGMENT_PROCESSING |
 				ENABLE_L2_PTE_CACHE_LRU_UPDATE_BY_WRITE |
diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
index eba0225..ba2b453 100644
--- a/drivers/gpu/drm/radeon/si.c
+++ b/drivers/gpu/drm/radeon/si.c
@@ -4048,7 +4048,6 @@ static int si_pcie_gart_enable(struct radeon_device *rdev)
 	r = radeon_gart_table_vram_pin(rdev);
 	if (r)
 		return r;
-	radeon_gart_restore(rdev);
 	/* Setup TLB control */
 	WREG32(MC_VM_MX_L1_TLB_CNTL,
 	       (0xA << 7) |
-- 
2.0.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


* [PATCH 2/5] drm/radeon: Pass GART page flags to radeon_gart_set_page() explicitly
  2014-07-17 10:01 [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Michel Dänzer
  2014-07-17 10:01 ` [PATCH 1/5] drm/radeon: Remove radeon_gart_restore() Michel Dänzer
@ 2014-07-17 10:01 ` Michel Dänzer
  2014-07-17 10:01 ` [PATCH 3/5] drm/radeon: Allow write-combined CPU mappings of BOs in GTT Michel Dänzer
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 38+ messages in thread
From: Michel Dänzer @ 2014-07-17 10:01 UTC (permalink / raw)
  To: dri-devel, mesa-dev

From: Michel Dänzer <michel.daenzer@amd.com>

Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
---
 drivers/gpu/drm/radeon/r100.c        |  2 +-
 drivers/gpu/drm/radeon/r300.c        | 12 +++++++++---
 drivers/gpu/drm/radeon/radeon.h      | 12 +++++++++---
 drivers/gpu/drm/radeon/radeon_asic.h |  8 ++++----
 drivers/gpu/drm/radeon/radeon_gart.c |  9 ++++++---
 drivers/gpu/drm/radeon/radeon_ttm.c  |  8 ++++++--
 drivers/gpu/drm/radeon/rs400.c       | 13 ++++++++++---
 drivers/gpu/drm/radeon/rs600.c       | 16 +++++++++++-----
 include/uapi/drm/radeon_drm.h        |  4 +++-
 9 files changed, 59 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
index ed1c53e..9241b89 100644
--- a/drivers/gpu/drm/radeon/r100.c
+++ b/drivers/gpu/drm/radeon/r100.c
@@ -682,7 +682,7 @@ void r100_pci_gart_disable(struct radeon_device *rdev)
 }
 
 void r100_pci_gart_set_page(struct radeon_device *rdev, unsigned i,
-			    uint64_t addr)
+			    uint64_t addr, uint32_t flags)
 {
 	u32 *gtt = rdev->gart.ptr;
 	gtt[i] = cpu_to_le32(lower_32_bits(addr));
diff --git a/drivers/gpu/drm/radeon/r300.c b/drivers/gpu/drm/radeon/r300.c
index 8d14e66..75b3033 100644
--- a/drivers/gpu/drm/radeon/r300.c
+++ b/drivers/gpu/drm/radeon/r300.c
@@ -69,17 +69,23 @@ void rv370_pcie_gart_tlb_flush(struct radeon_device *rdev)
 	mb();
 }
 
+#define R300_PTE_UNSNOOPED (1 << 0)
 #define R300_PTE_WRITEABLE (1 << 2)
 #define R300_PTE_READABLE  (1 << 3)
 
 void rv370_pcie_gart_set_page(struct radeon_device *rdev, unsigned i,
-			      uint64_t addr)
+			      uint64_t addr, uint32_t flags)
 {
 	void __iomem *ptr = rdev->gart.ptr;
 
 	addr = (lower_32_bits(addr) >> 8) |
-	       ((upper_32_bits(addr) & 0xff) << 24) |
-	       R300_PTE_WRITEABLE | R300_PTE_READABLE;
+		((upper_32_bits(addr) & 0xff) << 24);
+	if (flags & RADEON_GART_PAGE_READ)
+		addr |= R300_PTE_READABLE;
+	if (flags & RADEON_GART_PAGE_WRITE)
+		addr |= R300_PTE_WRITEABLE;
+	if (!(flags & RADEON_GART_PAGE_SNOOP))
+		addr |= R300_PTE_UNSNOOPED;
 	/* on x86 we want this to be CPU endian, on powerpc without
 	 * HW swappers it'll get swapped on the way into VRAM - so no
 	 * need for cpu_to_le32 on VRAM tables */
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index f4869b4..4dd092e 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -589,6 +589,12 @@ struct radeon_mc;
 #define RADEON_GPU_PAGE_SHIFT 12
 #define RADEON_GPU_PAGE_ALIGN(a) (((a) + RADEON_GPU_PAGE_MASK) & ~RADEON_GPU_PAGE_MASK)
 
+#define RADEON_GART_PAGE_DUMMY  0
+#define RADEON_GART_PAGE_VALID	(1 << 0)
+#define RADEON_GART_PAGE_READ	(1 << 1)
+#define RADEON_GART_PAGE_WRITE	(1 << 2)
+#define RADEON_GART_PAGE_SNOOP	(1 << 3)
+
 struct radeon_gart {
 	dma_addr_t			table_addr;
 	struct radeon_bo		*robj;
@@ -613,7 +619,7 @@ void radeon_gart_unbind(struct radeon_device *rdev, unsigned offset,
 			int pages);
 int radeon_gart_bind(struct radeon_device *rdev, unsigned offset,
 		     int pages, struct page **pagelist,
-		     dma_addr_t *dma_addr);
+		     dma_addr_t *dma_addr, uint32_t flags);
 
 
 /*
@@ -1775,7 +1781,7 @@ struct radeon_asic {
 	struct {
 		void (*tlb_flush)(struct radeon_device *rdev);
 		void (*set_page)(struct radeon_device *rdev, unsigned i,
-				 uint64_t addr);
+				 uint64_t addr, uint32_t flags);
 	} gart;
 	struct {
 		int (*init)(struct radeon_device *rdev);
@@ -2702,7 +2708,7 @@ void radeon_ring_write(struct radeon_ring *ring, uint32_t v);
 #define radeon_vga_set_state(rdev, state) (rdev)->asic->vga_set_state((rdev), (state))
 #define radeon_asic_reset(rdev) (rdev)->asic->asic_reset((rdev))
 #define radeon_gart_tlb_flush(rdev) (rdev)->asic->gart.tlb_flush((rdev))
-#define radeon_gart_set_page(rdev, i, p) (rdev)->asic->gart.set_page((rdev), (i), (p))
+#define radeon_gart_set_page(rdev, i, p, f) (rdev)->asic->gart.set_page((rdev), (i), (p), (f))
 #define radeon_asic_vm_init(rdev) (rdev)->asic->vm.init((rdev))
 #define radeon_asic_vm_fini(rdev) (rdev)->asic->vm.fini((rdev))
 #define radeon_asic_vm_set_page(rdev, ib, pe, addr, count, incr, flags) ((rdev)->asic->vm.set_page((rdev), (ib), (pe), (addr), (count), (incr), (flags)))
diff --git a/drivers/gpu/drm/radeon/radeon_asic.h b/drivers/gpu/drm/radeon/radeon_asic.h
index 01e7c0a..f632e31 100644
--- a/drivers/gpu/drm/radeon/radeon_asic.h
+++ b/drivers/gpu/drm/radeon/radeon_asic.h
@@ -68,7 +68,7 @@ int r100_asic_reset(struct radeon_device *rdev);
 u32 r100_get_vblank_counter(struct radeon_device *rdev, int crtc);
 void r100_pci_gart_tlb_flush(struct radeon_device *rdev);
 void r100_pci_gart_set_page(struct radeon_device *rdev, unsigned i,
-			    uint64_t addr);
+			    uint64_t addr, uint32_t flags);
 void r100_ring_start(struct radeon_device *rdev, struct radeon_ring *ring);
 int r100_irq_set(struct radeon_device *rdev);
 int r100_irq_process(struct radeon_device *rdev);
@@ -173,7 +173,7 @@ extern void r300_fence_ring_emit(struct radeon_device *rdev,
 extern int r300_cs_parse(struct radeon_cs_parser *p);
 extern void rv370_pcie_gart_tlb_flush(struct radeon_device *rdev);
 extern void rv370_pcie_gart_set_page(struct radeon_device *rdev, unsigned i,
-				     uint64_t addr);
+				     uint64_t addr, uint32_t flags);
 extern void rv370_set_pcie_lanes(struct radeon_device *rdev, int lanes);
 extern int rv370_get_pcie_lanes(struct radeon_device *rdev);
 extern void r300_set_reg_safe(struct radeon_device *rdev);
@@ -209,7 +209,7 @@ extern int rs400_suspend(struct radeon_device *rdev);
 extern int rs400_resume(struct radeon_device *rdev);
 void rs400_gart_tlb_flush(struct radeon_device *rdev);
 void rs400_gart_set_page(struct radeon_device *rdev, unsigned i,
-			 uint64_t addr);
+			 uint64_t addr, uint32_t flags);
 uint32_t rs400_mc_rreg(struct radeon_device *rdev, uint32_t reg);
 void rs400_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v);
 int rs400_gart_init(struct radeon_device *rdev);
@@ -233,7 +233,7 @@ void rs600_irq_disable(struct radeon_device *rdev);
 u32 rs600_get_vblank_counter(struct radeon_device *rdev, int crtc);
 void rs600_gart_tlb_flush(struct radeon_device *rdev);
 void rs600_gart_set_page(struct radeon_device *rdev, unsigned i,
-			 uint64_t addr);
+			 uint64_t addr, uint32_t flags);
 uint32_t rs600_mc_rreg(struct radeon_device *rdev, uint32_t reg);
 void rs600_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v);
 void rs600_bandwidth_update(struct radeon_device *rdev);
diff --git a/drivers/gpu/drm/radeon/radeon_gart.c b/drivers/gpu/drm/radeon/radeon_gart.c
index b7d3e84..d684642 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -243,7 +243,8 @@ void radeon_gart_unbind(struct radeon_device *rdev, unsigned offset,
 			page_base = rdev->gart.pages_addr[p];
 			for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) {
 				if (rdev->gart.ptr) {
-					radeon_gart_set_page(rdev, t, page_base);
+					radeon_gart_set_page(rdev, t, page_base,
+							     RADEON_GART_PAGE_DUMMY);
 				}
 				page_base += RADEON_GPU_PAGE_SIZE;
 			}
@@ -261,13 +262,15 @@ void radeon_gart_unbind(struct radeon_device *rdev, unsigned offset,
  * @pages: number of pages to bind
  * @pagelist: pages to bind
  * @dma_addr: DMA addresses of pages
+ * @flags: RADEON_GART_PAGE_* flags
  *
  * Binds the requested pages to the gart page table
  * (all asics).
  * Returns 0 for success, -EINVAL for failure.
  */
 int radeon_gart_bind(struct radeon_device *rdev, unsigned offset,
-		     int pages, struct page **pagelist, dma_addr_t *dma_addr)
+		     int pages, struct page **pagelist, dma_addr_t *dma_addr,
+		     uint32_t flags)
 {
 	unsigned t;
 	unsigned p;
@@ -287,7 +290,7 @@ int radeon_gart_bind(struct radeon_device *rdev, unsigned offset,
 		if (rdev->gart.ptr) {
 			page_base = rdev->gart.pages_addr[p];
 			for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) {
-				radeon_gart_set_page(rdev, t, page_base);
+				radeon_gart_set_page(rdev, t, page_base, flags);
 				page_base += RADEON_GPU_PAGE_SIZE;
 			}
 		}
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index c8a8a51..7fb7c1c 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -521,6 +521,8 @@ static int radeon_ttm_backend_bind(struct ttm_tt *ttm,
 				   struct ttm_mem_reg *bo_mem)
 {
 	struct radeon_ttm_tt *gtt = (void*)ttm;
+	uint32_t flags = RADEON_GART_PAGE_VALID | RADEON_GART_PAGE_READ |
+		RADEON_GART_PAGE_WRITE;
 	int r;
 
 	gtt->offset = (unsigned long)(bo_mem->start << PAGE_SHIFT);
@@ -528,8 +530,10 @@ static int radeon_ttm_backend_bind(struct ttm_tt *ttm,
 		WARN(1, "nothing to bind %lu pages for mreg %p back %p!\n",
 		     ttm->num_pages, bo_mem, ttm);
 	}
-	r = radeon_gart_bind(gtt->rdev, gtt->offset,
-			     ttm->num_pages, ttm->pages, gtt->ttm.dma_address);
+	if (ttm->caching_state == tt_cached)
+		flags |= RADEON_GART_PAGE_SNOOP;
+	r = radeon_gart_bind(gtt->rdev, gtt->offset, ttm->num_pages,
+			     ttm->pages, gtt->ttm.dma_address, flags);
 	if (r) {
 		DRM_ERROR("failed to bind %lu pages at 0x%08X\n",
 			  ttm->num_pages, (unsigned)gtt->offset);
diff --git a/drivers/gpu/drm/radeon/rs400.c b/drivers/gpu/drm/radeon/rs400.c
index 4519f9c..6c1fc33 100644
--- a/drivers/gpu/drm/radeon/rs400.c
+++ b/drivers/gpu/drm/radeon/rs400.c
@@ -208,17 +208,24 @@ void rs400_gart_fini(struct radeon_device *rdev)
 	radeon_gart_table_ram_free(rdev);
 }
 
+#define RS400_PTE_UNSNOOPED (1 << 0)
 #define RS400_PTE_WRITEABLE (1 << 2)
 #define RS400_PTE_READABLE  (1 << 3)
 
-void rs400_gart_set_page(struct radeon_device *rdev, unsigned i, uint64_t addr)
+void rs400_gart_set_page(struct radeon_device *rdev, unsigned i,
+			 uint64_t addr, uint32_t flags)
 {
 	uint32_t entry;
 	u32 *gtt = rdev->gart.ptr;
 
 	entry = (lower_32_bits(addr) & PAGE_MASK) |
-		((upper_32_bits(addr) & 0xff) << 4) |
-		RS400_PTE_WRITEABLE | RS400_PTE_READABLE;
+		((upper_32_bits(addr) & 0xff) << 4);
+	if (flags & RADEON_GART_PAGE_READ)
+		entry |= RS400_PTE_READABLE;
+	if (flags & RADEON_GART_PAGE_WRITE)
+		entry |= RS400_PTE_WRITEABLE;
+	if (!(flags & RADEON_GART_PAGE_SNOOP))
+		entry |= RS400_PTE_UNSNOOPED;
 	entry = cpu_to_le32(entry);
 	gtt[i] = entry;
 }
diff --git a/drivers/gpu/drm/radeon/rs600.c b/drivers/gpu/drm/radeon/rs600.c
index 27a56ad..5f6db46 100644
--- a/drivers/gpu/drm/radeon/rs600.c
+++ b/drivers/gpu/drm/radeon/rs600.c
@@ -625,15 +625,21 @@ static void rs600_gart_fini(struct radeon_device *rdev)
 	radeon_gart_table_vram_free(rdev);
 }
 
-void rs600_gart_set_page(struct radeon_device *rdev, unsigned i, uint64_t addr)
+void rs600_gart_set_page(struct radeon_device *rdev, unsigned i,
+			 uint64_t addr, uint32_t flags)
 {
 	void __iomem *ptr = (void *)rdev->gart.ptr;
 
 	addr = addr & 0xFFFFFFFFFFFFF000ULL;
-	if (addr == rdev->dummy_page.addr)
-		addr |= R600_PTE_SYSTEM | R600_PTE_SNOOPED;
-	else
-		addr |= R600_PTE_GART;
+	addr |= R600_PTE_SYSTEM;
+	if (flags & RADEON_GART_PAGE_VALID)
+		addr |= R600_PTE_VALID;
+	if (flags & RADEON_GART_PAGE_READ)
+		addr |= R600_PTE_READABLE;
+	if (flags & RADEON_GART_PAGE_WRITE)
+		addr |= R600_PTE_WRITEABLE;
+	if (flags & RADEON_GART_PAGE_SNOOP)
+		addr |= R600_PTE_SNOOPED;
 	writeq(addr, ptr + (i * 8));
 }
 
diff --git a/include/uapi/drm/radeon_drm.h b/include/uapi/drm/radeon_drm.h
index 1cc0b61..509b2d7 100644
--- a/include/uapi/drm/radeon_drm.h
+++ b/include/uapi/drm/radeon_drm.h
@@ -796,7 +796,9 @@ struct drm_radeon_gem_info {
 	uint64_t	vram_visible;
 };
 
-#define RADEON_GEM_NO_BACKING_STORE 1
+#define RADEON_GEM_NO_BACKING_STORE	(1 << 0)
+#define RADEON_GEM_GTT_UC		(1 << 1)
+#define RADEON_GEM_GTT_WC		(1 << 2)
 
 struct drm_radeon_gem_create {
 	uint64_t	size;
-- 
2.0.0



* [PATCH 3/5] drm/radeon: Allow write-combined CPU mappings of BOs in GTT
  2014-07-17 10:01 [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Michel Dänzer
  2014-07-17 10:01 ` [PATCH 1/5] drm/radeon: Remove radeon_gart_restore() Michel Dänzer
  2014-07-17 10:01 ` [PATCH 2/5] drm/radeon: Pass GART page flags to radeon_gart_set_page() explicitly Michel Dänzer
@ 2014-07-17 10:01 ` Michel Dänzer
  2014-07-17 10:01 ` [PATCH 4/5] drm/radeon: Use write-combined CPU mappings of rings and IBs on >= SI Michel Dänzer
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 38+ messages in thread
From: Michel Dänzer @ 2014-07-17 10:01 UTC (permalink / raw)
  To: dri-devel, mesa-dev

From: Michel Dänzer <michel.daenzer@amd.com>

Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
---
 drivers/gpu/drm/radeon/cik.c              |  4 ++--
 drivers/gpu/drm/radeon/cik_sdma.c         |  3 ++-
 drivers/gpu/drm/radeon/evergreen.c        | 12 ++++++++----
 drivers/gpu/drm/radeon/r600.c             |  4 ++--
 drivers/gpu/drm/radeon/radeon.h           |  3 ++-
 drivers/gpu/drm/radeon/radeon_benchmark.c |  4 ++--
 drivers/gpu/drm/radeon/radeon_device.c    |  3 ++-
 drivers/gpu/drm/radeon/radeon_fb.c        |  2 +-
 drivers/gpu/drm/radeon/radeon_gart.c      |  2 +-
 drivers/gpu/drm/radeon/radeon_gem.c       | 16 ++++++----------
 drivers/gpu/drm/radeon/radeon_object.c    | 24 +++++++++++++++++++-----
 drivers/gpu/drm/radeon/radeon_object.h    |  5 +++--
 drivers/gpu/drm/radeon/radeon_prime.c     |  2 +-
 drivers/gpu/drm/radeon/radeon_ring.c      |  4 ++--
 drivers/gpu/drm/radeon/radeon_sa.c        |  4 ++--
 drivers/gpu/drm/radeon/radeon_test.c      |  4 ++--
 drivers/gpu/drm/radeon/radeon_ttm.c       |  2 +-
 drivers/gpu/drm/radeon/radeon_uvd.c       |  6 +++---
 drivers/gpu/drm/radeon/radeon_vce.c       |  2 +-
 drivers/gpu/drm/radeon/radeon_vm.c        |  8 ++++++--
 drivers/gpu/drm/radeon/si_dma.c           |  3 ++-
 21 files changed, 70 insertions(+), 47 deletions(-)

diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
index 1b0da66..a9fd3e7 100644
--- a/drivers/gpu/drm/radeon/cik.c
+++ b/drivers/gpu/drm/radeon/cik.c
@@ -4374,7 +4374,7 @@ static int cik_mec_init(struct radeon_device *rdev)
 		r = radeon_bo_create(rdev,
 				     rdev->mec.num_mec *rdev->mec.num_pipe * MEC_HPD_SIZE * 2,
 				     PAGE_SIZE, true,
-				     RADEON_GEM_DOMAIN_GTT, NULL,
+				     RADEON_GEM_DOMAIN_GTT, 0, NULL,
 				     &rdev->mec.hpd_eop_obj);
 		if (r) {
 			dev_warn(rdev->dev, "(%d) create HDP EOP bo failed\n", r);
@@ -4544,7 +4544,7 @@ static int cik_cp_compute_resume(struct radeon_device *rdev)
 			r = radeon_bo_create(rdev,
 					     sizeof(struct bonaire_mqd),
 					     PAGE_SIZE, true,
-					     RADEON_GEM_DOMAIN_GTT, NULL,
+					     RADEON_GEM_DOMAIN_GTT, 0, NULL,
 					     &rdev->ring[idx].mqd_obj);
 			if (r) {
 				dev_warn(rdev->dev, "(%d) create MQD bo failed\n", r);
diff --git a/drivers/gpu/drm/radeon/cik_sdma.c b/drivers/gpu/drm/radeon/cik_sdma.c
index 8e9d0f1..a7f66c8 100644
--- a/drivers/gpu/drm/radeon/cik_sdma.c
+++ b/drivers/gpu/drm/radeon/cik_sdma.c
@@ -742,7 +742,8 @@ void cik_sdma_vm_set_page(struct radeon_device *rdev,
 
 	trace_radeon_vm_set_page(pe, addr, count, incr, flags);
 
-	if (flags == R600_PTE_GART) {
+	/* XXX: How to distinguish between GART and other system memory pages? */
+	if (flags & R600_PTE_SYSTEM) {
 		uint64_t src = rdev->gart.table_addr + (addr >> 12) * 8;
 		while (count) {
 			unsigned bytes = count * 8;
diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
index 39ada71..902334f 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -4022,7 +4022,8 @@ int sumo_rlc_init(struct radeon_device *rdev)
 		/* save restore block */
 		if (rdev->rlc.save_restore_obj == NULL) {
 			r = radeon_bo_create(rdev, dws * 4, PAGE_SIZE, true,
-					     RADEON_GEM_DOMAIN_VRAM, NULL, &rdev->rlc.save_restore_obj);
+					     RADEON_GEM_DOMAIN_VRAM, 0, NULL,
+					     &rdev->rlc.save_restore_obj);
 			if (r) {
 				dev_warn(rdev->dev, "(%d) create RLC sr bo failed\n", r);
 				return r;
@@ -4100,7 +4101,8 @@ int sumo_rlc_init(struct radeon_device *rdev)
 
 		if (rdev->rlc.clear_state_obj == NULL) {
 			r = radeon_bo_create(rdev, dws * 4, PAGE_SIZE, true,
-					     RADEON_GEM_DOMAIN_VRAM, NULL, &rdev->rlc.clear_state_obj);
+					     RADEON_GEM_DOMAIN_VRAM, 0, NULL,
+					     &rdev->rlc.clear_state_obj);
 			if (r) {
 				dev_warn(rdev->dev, "(%d) create RLC c bo failed\n", r);
 				sumo_rlc_fini(rdev);
@@ -4174,8 +4176,10 @@ int sumo_rlc_init(struct radeon_device *rdev)
 
 	if (rdev->rlc.cp_table_size) {
 		if (rdev->rlc.cp_table_obj == NULL) {
-			r = radeon_bo_create(rdev, rdev->rlc.cp_table_size, PAGE_SIZE, true,
-					     RADEON_GEM_DOMAIN_VRAM, NULL, &rdev->rlc.cp_table_obj);
+			r = radeon_bo_create(rdev, rdev->rlc.cp_table_size,
+					     PAGE_SIZE, true,
+					     RADEON_GEM_DOMAIN_VRAM, 0, NULL,
+					     &rdev->rlc.cp_table_obj);
 			if (r) {
 				dev_warn(rdev->dev, "(%d) create RLC cp table bo failed\n", r);
 				sumo_rlc_fini(rdev);
diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
index e1be5ce..9446ff7 100644
--- a/drivers/gpu/drm/radeon/r600.c
+++ b/drivers/gpu/drm/radeon/r600.c
@@ -1338,7 +1338,7 @@ int r600_vram_scratch_init(struct radeon_device *rdev)
 	if (rdev->vram_scratch.robj == NULL) {
 		r = radeon_bo_create(rdev, RADEON_GPU_PAGE_SIZE,
 				     PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM,
-				     NULL, &rdev->vram_scratch.robj);
+				     0, NULL, &rdev->vram_scratch.robj);
 		if (r) {
 			return r;
 		}
@@ -3226,7 +3226,7 @@ int r600_ih_ring_alloc(struct radeon_device *rdev)
 	if (rdev->ih.ring_obj == NULL) {
 		r = radeon_bo_create(rdev, rdev->ih.ring_size,
 				     PAGE_SIZE, true,
-				     RADEON_GEM_DOMAIN_GTT,
+				     RADEON_GEM_DOMAIN_GTT, 0,
 				     NULL, &rdev->ih.ring_obj);
 		if (r) {
 			DRM_ERROR("radeon: failed to create ih ring buffer (%d).\n", r);
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 4dd092e..d9d4c73 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -464,6 +464,7 @@ struct radeon_bo {
 	struct ttm_placement		placement;
 	struct ttm_buffer_object	tbo;
 	struct ttm_bo_kmap_obj		kmap;
+	u32				flags;
 	unsigned			pin_count;
 	void				*kptr;
 	u32				tiling_flags;
@@ -544,7 +545,7 @@ int radeon_gem_init(struct radeon_device *rdev);
 void radeon_gem_fini(struct radeon_device *rdev);
 int radeon_gem_object_create(struct radeon_device *rdev, int size,
 				int alignment, int initial_domain,
-				bool discardable, bool kernel,
+				u32 flags, bool discardable, bool kernel,
 				struct drm_gem_object **obj);
 
 int radeon_mode_dumb_create(struct drm_file *file_priv,
diff --git a/drivers/gpu/drm/radeon/radeon_benchmark.c b/drivers/gpu/drm/radeon/radeon_benchmark.c
index 6e05a2e..69f5695 100644
--- a/drivers/gpu/drm/radeon/radeon_benchmark.c
+++ b/drivers/gpu/drm/radeon/radeon_benchmark.c
@@ -97,7 +97,7 @@ static void radeon_benchmark_move(struct radeon_device *rdev, unsigned size,
 	int time;
 
 	n = RADEON_BENCHMARK_ITERATIONS;
-	r = radeon_bo_create(rdev, size, PAGE_SIZE, true, sdomain, NULL, &sobj);
+	r = radeon_bo_create(rdev, size, PAGE_SIZE, true, sdomain, 0, NULL, &sobj);
 	if (r) {
 		goto out_cleanup;
 	}
@@ -109,7 +109,7 @@ static void radeon_benchmark_move(struct radeon_device *rdev, unsigned size,
 	if (r) {
 		goto out_cleanup;
 	}
-	r = radeon_bo_create(rdev, size, PAGE_SIZE, true, ddomain, NULL, &dobj);
+	r = radeon_bo_create(rdev, size, PAGE_SIZE, true, ddomain, 0, NULL, &dobj);
 	if (r) {
 		goto out_cleanup;
 	}
diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
index 03686fa..2315785e 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -385,7 +385,8 @@ int radeon_wb_init(struct radeon_device *rdev)
 
 	if (rdev->wb.wb_obj == NULL) {
 		r = radeon_bo_create(rdev, RADEON_GPU_PAGE_SIZE, PAGE_SIZE, true,
-				     RADEON_GEM_DOMAIN_GTT, NULL, &rdev->wb.wb_obj);
+				     RADEON_GEM_DOMAIN_GTT, 0, NULL,
+				     &rdev->wb.wb_obj);
 		if (r) {
 			dev_warn(rdev->dev, "(%d) create WB bo failed\n", r);
 			return r;
diff --git a/drivers/gpu/drm/radeon/radeon_fb.c b/drivers/gpu/drm/radeon/radeon_fb.c
index 665ced3..5dec038 100644
--- a/drivers/gpu/drm/radeon/radeon_fb.c
+++ b/drivers/gpu/drm/radeon/radeon_fb.c
@@ -127,7 +127,7 @@ static int radeonfb_create_pinned_object(struct radeon_fbdev *rfbdev,
 	aligned_size = ALIGN(size, PAGE_SIZE);
 	ret = radeon_gem_object_create(rdev, aligned_size, 0,
 				       RADEON_GEM_DOMAIN_VRAM,
-				       false, true,
+				       0, false, true,
 				       &gobj);
 	if (ret) {
 		printk(KERN_ERR "failed to allocate framebuffer (%d)\n",
diff --git a/drivers/gpu/drm/radeon/radeon_gart.c b/drivers/gpu/drm/radeon/radeon_gart.c
index d684642..a053a07 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -128,7 +128,7 @@ int radeon_gart_table_vram_alloc(struct radeon_device *rdev)
 	if (rdev->gart.robj == NULL) {
 		r = radeon_bo_create(rdev, rdev->gart.table_size,
 				     PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM,
-				     NULL, &rdev->gart.robj);
+				     0, NULL, &rdev->gart.robj);
 		if (r) {
 			return r;
 		}
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index a2d6d3a..a0ba697 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -42,7 +42,7 @@ void radeon_gem_object_free(struct drm_gem_object *gobj)
 
 int radeon_gem_object_create(struct radeon_device *rdev, int size,
 				int alignment, int initial_domain,
-				bool discardable, bool kernel,
+				u32 flags, bool discardable, bool kernel,
 				struct drm_gem_object **obj)
 {
 	struct radeon_bo *robj;
@@ -64,7 +64,8 @@ int radeon_gem_object_create(struct radeon_device *rdev, int size,
 	}
 
 retry:
-	r = radeon_bo_create(rdev, size, alignment, kernel, initial_domain, NULL, &robj);
+	r = radeon_bo_create(rdev, size, alignment, kernel, initial_domain,
+			     flags, NULL, &robj);
 	if (r) {
 		if (r != -ERESTARTSYS) {
 			if (initial_domain == RADEON_GEM_DOMAIN_VRAM) {
@@ -238,8 +239,8 @@ int radeon_gem_create_ioctl(struct drm_device *dev, void *data,
 	/* create a gem object to contain this object in */
 	args->size = roundup(args->size, PAGE_SIZE);
 	r = radeon_gem_object_create(rdev, args->size, args->alignment,
-					args->initial_domain, false,
-					false, &gobj);
+				     args->initial_domain, args->flags,
+				     false, false, &gobj);
 	if (r) {
 		up_read(&rdev->exclusive_lock);
 		r = radeon_gem_handle_lockup(rdev, r);
@@ -447,11 +448,6 @@ int radeon_gem_va_ioctl(struct drm_device *dev, void *data,
 		args->operation = RADEON_VA_RESULT_ERROR;
 		return -EINVAL;
 	}
-	if (!(args->flags & RADEON_VM_PAGE_SNOOPED)) {
-		dev_err(&dev->pdev->dev, "only supported snooped mapping for now\n");
-		args->operation = RADEON_VA_RESULT_ERROR;
-		return -EINVAL;
-	}
 
 	switch (args->operation) {
 	case RADEON_VA_MAP:
@@ -558,7 +554,7 @@ int radeon_mode_dumb_create(struct drm_file *file_priv,
 	args->size = ALIGN(args->size, PAGE_SIZE);
 
 	r = radeon_gem_object_create(rdev, args->size, 0,
-				     RADEON_GEM_DOMAIN_VRAM,
+				     RADEON_GEM_DOMAIN_VRAM, 0,
 				     false, ttm_bo_type_device,
 				     &gobj);
 	if (r)
diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
index 3309d47..d250be4 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -114,15 +114,23 @@ void radeon_ttm_placement_from_domain(struct radeon_bo *rbo, u32 domain)
 		rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED |
 					TTM_PL_FLAG_VRAM;
 	if (domain & RADEON_GEM_DOMAIN_GTT) {
-		if (rbo->rdev->flags & RADEON_IS_AGP) {
-			rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_TT;
+		if (rbo->flags & RADEON_GEM_GTT_UC) {
+			rbo->placements[c++] = TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_TT;
+		} else if ((rbo->flags & RADEON_GEM_GTT_WC) ||
+			   (rbo->rdev->flags & RADEON_IS_AGP)) {
+			rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED |
+				TTM_PL_FLAG_TT;
 		} else {
 			rbo->placements[c++] = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_TT;
 		}
 	}
 	if (domain & RADEON_GEM_DOMAIN_CPU) {
-		if (rbo->rdev->flags & RADEON_IS_AGP) {
-			rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_SYSTEM;
+		if (rbo->flags & RADEON_GEM_GTT_UC) {
+			rbo->placements[c++] = TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_SYSTEM;
+		} else if ((rbo->flags & RADEON_GEM_GTT_WC) ||
+		    rbo->rdev->flags & RADEON_IS_AGP) {
+			rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED |
+				TTM_PL_FLAG_SYSTEM;
 		} else {
 			rbo->placements[c++] = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_SYSTEM;
 		}
@@ -146,7 +154,7 @@ void radeon_ttm_placement_from_domain(struct radeon_bo *rbo, u32 domain)
 
 int radeon_bo_create(struct radeon_device *rdev,
 		     unsigned long size, int byte_align, bool kernel, u32 domain,
-		     struct sg_table *sg, struct radeon_bo **bo_ptr)
+		     u32 flags, struct sg_table *sg, struct radeon_bo **bo_ptr)
 {
 	struct radeon_bo *bo;
 	enum ttm_bo_type type;
@@ -183,6 +191,12 @@ int radeon_bo_create(struct radeon_device *rdev,
 	bo->initial_domain = domain & (RADEON_GEM_DOMAIN_VRAM |
 	                               RADEON_GEM_DOMAIN_GTT |
 	                               RADEON_GEM_DOMAIN_CPU);
+
+	bo->flags = flags;
+	/* PCI GART is always snooped */
+	if (!(rdev->flags & RADEON_IS_PCIE))
+		bo->flags &= ~(RADEON_GEM_GTT_WC | RADEON_GEM_GTT_UC);
+
 	radeon_ttm_placement_from_domain(bo, domain);
 	/* Kernel allocation are uninterruptible */
 	down_read(&rdev->pm.mclk_lock);
diff --git a/drivers/gpu/drm/radeon/radeon_object.h b/drivers/gpu/drm/radeon/radeon_object.h
index 5a873f3..972ec0f 100644
--- a/drivers/gpu/drm/radeon/radeon_object.h
+++ b/drivers/gpu/drm/radeon/radeon_object.h
@@ -124,7 +124,7 @@ extern int radeon_bo_wait(struct radeon_bo *bo, u32 *mem_type,
 
 extern int radeon_bo_create(struct radeon_device *rdev,
 			    unsigned long size, int byte_align,
-			    bool kernel, u32 domain,
+			    bool kernel, u32 domain, u32 flags,
 			    struct sg_table *sg,
 			    struct radeon_bo **bo_ptr);
 extern int radeon_bo_kmap(struct radeon_bo *bo, void **ptr);
@@ -170,7 +170,8 @@ static inline void * radeon_sa_bo_cpu_addr(struct radeon_sa_bo *sa_bo)
 
 extern int radeon_sa_bo_manager_init(struct radeon_device *rdev,
 				     struct radeon_sa_manager *sa_manager,
-				     unsigned size, u32 align, u32 domain);
+				     unsigned size, u32 align, u32 domain,
+				     u32 flags);
 extern void radeon_sa_bo_manager_fini(struct radeon_device *rdev,
 				      struct radeon_sa_manager *sa_manager);
 extern int radeon_sa_bo_manager_start(struct radeon_device *rdev,
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index 2007456..f7e48d3 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -65,7 +65,7 @@ struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
 	int ret;
 
 	ret = radeon_bo_create(rdev, size, PAGE_SIZE, false,
-			       RADEON_GEM_DOMAIN_GTT, sg, &bo);
+			       RADEON_GEM_DOMAIN_GTT, 0, sg, &bo);
 	if (ret)
 		return ERR_PTR(ret);
 
diff --git a/drivers/gpu/drm/radeon/radeon_ring.c b/drivers/gpu/drm/radeon/radeon_ring.c
index f8050f5..71439f0 100644
--- a/drivers/gpu/drm/radeon/radeon_ring.c
+++ b/drivers/gpu/drm/radeon/radeon_ring.c
@@ -204,7 +204,7 @@ int radeon_ib_pool_init(struct radeon_device *rdev)
 	r = radeon_sa_bo_manager_init(rdev, &rdev->ring_tmp_bo,
 				      RADEON_IB_POOL_SIZE*64*1024,
 				      RADEON_GPU_PAGE_SIZE,
-				      RADEON_GEM_DOMAIN_GTT);
+				      RADEON_GEM_DOMAIN_GTT, 0);
 	if (r) {
 		return r;
 	}
@@ -640,7 +640,7 @@ int radeon_ring_init(struct radeon_device *rdev, struct radeon_ring *ring, unsig
 	/* Allocate ring buffer */
 	if (ring->ring_obj == NULL) {
 		r = radeon_bo_create(rdev, ring->ring_size, PAGE_SIZE, true,
-				     RADEON_GEM_DOMAIN_GTT,
+				     RADEON_GEM_DOMAIN_GTT, 0,
 				     NULL, &ring->ring_obj);
 		if (r) {
 			dev_err(rdev->dev, "(%d) ring create failed\n", r);
diff --git a/drivers/gpu/drm/radeon/radeon_sa.c b/drivers/gpu/drm/radeon/radeon_sa.c
index adcf3e2..b84f97c 100644
--- a/drivers/gpu/drm/radeon/radeon_sa.c
+++ b/drivers/gpu/drm/radeon/radeon_sa.c
@@ -49,7 +49,7 @@ static void radeon_sa_bo_try_free(struct radeon_sa_manager *sa_manager);
 
 int radeon_sa_bo_manager_init(struct radeon_device *rdev,
 			      struct radeon_sa_manager *sa_manager,
-			      unsigned size, u32 align, u32 domain)
+			      unsigned size, u32 align, u32 domain, u32 flags)
 {
 	int i, r;
 
@@ -65,7 +65,7 @@ int radeon_sa_bo_manager_init(struct radeon_device *rdev,
 	}
 
 	r = radeon_bo_create(rdev, size, align, true,
-			     domain, NULL, &sa_manager->bo);
+			     domain, flags, NULL, &sa_manager->bo);
 	if (r) {
 		dev_err(rdev->dev, "(%d) failed to allocate bo for manager\n", r);
 		return r;
diff --git a/drivers/gpu/drm/radeon/radeon_test.c b/drivers/gpu/drm/radeon/radeon_test.c
index 3a13e0d..9c5b66c 100644
--- a/drivers/gpu/drm/radeon/radeon_test.c
+++ b/drivers/gpu/drm/radeon/radeon_test.c
@@ -73,7 +73,7 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
 	}
 
 	r = radeon_bo_create(rdev, size, PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM,
-			     NULL, &vram_obj);
+			     0, NULL, &vram_obj);
 	if (r) {
 		DRM_ERROR("Failed to create VRAM object\n");
 		goto out_cleanup;
@@ -93,7 +93,7 @@ static void radeon_do_test_moves(struct radeon_device *rdev, int flag)
 		struct radeon_fence *fence = NULL;
 
 		r = radeon_bo_create(rdev, size, PAGE_SIZE, true,
-				     RADEON_GEM_DOMAIN_GTT, NULL, gtt_obj + i);
+				     RADEON_GEM_DOMAIN_GTT, 0, NULL, gtt_obj + i);
 		if (r) {
 			DRM_ERROR("Failed to create GTT object %d\n", i);
 			goto out_lclean;
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 7fb7c1c..72afe82 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -730,7 +730,7 @@ int radeon_ttm_init(struct radeon_device *rdev)
 	radeon_ttm_set_active_vram_size(rdev, rdev->mc.visible_vram_size);
 
 	r = radeon_bo_create(rdev, 256 * 1024, PAGE_SIZE, true,
-			     RADEON_GEM_DOMAIN_VRAM,
+			     RADEON_GEM_DOMAIN_VRAM, 0,
 			     NULL, &rdev->stollen_vga_memory);
 	if (r) {
 		return r;
diff --git a/drivers/gpu/drm/radeon/radeon_uvd.c b/drivers/gpu/drm/radeon/radeon_uvd.c
index a4ad270..6bf55ec 100644
--- a/drivers/gpu/drm/radeon/radeon_uvd.c
+++ b/drivers/gpu/drm/radeon/radeon_uvd.c
@@ -117,7 +117,7 @@ int radeon_uvd_init(struct radeon_device *rdev)
 	bo_size = RADEON_GPU_PAGE_ALIGN(rdev->uvd_fw->size + 8) +
 		  RADEON_UVD_STACK_SIZE + RADEON_UVD_HEAP_SIZE;
 	r = radeon_bo_create(rdev, bo_size, PAGE_SIZE, true,
-			     RADEON_GEM_DOMAIN_VRAM, NULL, &rdev->uvd.vcpu_bo);
+			     RADEON_GEM_DOMAIN_VRAM, 0, NULL, &rdev->uvd.vcpu_bo);
 	if (r) {
 		dev_err(rdev->dev, "(%d) failed to allocate UVD bo\n", r);
 		return r;
@@ -674,7 +674,7 @@ int radeon_uvd_get_create_msg(struct radeon_device *rdev, int ring,
 	int r, i;
 
 	r = radeon_bo_create(rdev, 1024, PAGE_SIZE, true,
-			     RADEON_GEM_DOMAIN_VRAM, NULL, &bo);
+			     RADEON_GEM_DOMAIN_VRAM, 0, NULL, &bo);
 	if (r)
 		return r;
 
@@ -720,7 +720,7 @@ int radeon_uvd_get_destroy_msg(struct radeon_device *rdev, int ring,
 	int r, i;
 
 	r = radeon_bo_create(rdev, 1024, PAGE_SIZE, true,
-			     RADEON_GEM_DOMAIN_VRAM, NULL, &bo);
+			     RADEON_GEM_DOMAIN_VRAM, 0, NULL, &bo);
 	if (r)
 		return r;
 
diff --git a/drivers/gpu/drm/radeon/radeon_vce.c b/drivers/gpu/drm/radeon/radeon_vce.c
index aa21c31..f9b70a4 100644
--- a/drivers/gpu/drm/radeon/radeon_vce.c
+++ b/drivers/gpu/drm/radeon/radeon_vce.c
@@ -126,7 +126,7 @@ int radeon_vce_init(struct radeon_device *rdev)
 	size = RADEON_GPU_PAGE_ALIGN(rdev->vce_fw->size) +
 	       RADEON_VCE_STACK_SIZE + RADEON_VCE_HEAP_SIZE;
 	r = radeon_bo_create(rdev, size, PAGE_SIZE, true,
-			     RADEON_GEM_DOMAIN_VRAM, NULL, &rdev->vce.vcpu_bo);
+			     RADEON_GEM_DOMAIN_VRAM, 0, NULL, &rdev->vce.vcpu_bo);
 	if (r) {
 		dev_err(rdev->dev, "(%d) failed to allocate VCE bo\n", r);
 		return r;
diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c
index 888494f..acc4366 100644
--- a/drivers/gpu/drm/radeon/radeon_vm.c
+++ b/drivers/gpu/drm/radeon/radeon_vm.c
@@ -496,7 +496,7 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
 
 		r = radeon_bo_create(rdev, RADEON_VM_PTE_COUNT * 8,
 				     RADEON_GPU_PAGE_SIZE, true,
-				     RADEON_GEM_DOMAIN_VRAM, NULL, &pt);
+				     RADEON_GEM_DOMAIN_VRAM, 0, NULL, &pt);
 		if (r)
 			return r;
 
@@ -850,6 +850,7 @@ int radeon_vm_bo_update(struct radeon_device *rdev,
 
 	bo_va->flags &= ~RADEON_VM_PAGE_VALID;
 	bo_va->flags &= ~RADEON_VM_PAGE_SYSTEM;
+	bo_va->flags &= ~RADEON_VM_PAGE_SNOOPED;
 	if (mem) {
 		addr = mem->start << PAGE_SHIFT;
 		if (mem->mem_type != TTM_PL_SYSTEM) {
@@ -858,6 +859,9 @@ int radeon_vm_bo_update(struct radeon_device *rdev,
 		}
 		if (mem->mem_type == TTM_PL_TT) {
 			bo_va->flags |= RADEON_VM_PAGE_SYSTEM;
+			if (!(bo->flags & (RADEON_GEM_GTT_WC | RADEON_GEM_GTT_UC)))
+				bo_va->flags |= RADEON_VM_PAGE_SNOOPED;
+
 		} else {
 			addr += rdev->vm_manager.vram_base_offset;
 		}
@@ -1041,7 +1045,7 @@ int radeon_vm_init(struct radeon_device *rdev, struct radeon_vm *vm)
 	}
 
 	r = radeon_bo_create(rdev, pd_size, align, true,
-			     RADEON_GEM_DOMAIN_VRAM, NULL,
+			     RADEON_GEM_DOMAIN_VRAM, 0, NULL,
 			     &vm->page_directory);
 	if (r)
 		return r;
diff --git a/drivers/gpu/drm/radeon/si_dma.c b/drivers/gpu/drm/radeon/si_dma.c
index e24c94b..c9da341 100644
--- a/drivers/gpu/drm/radeon/si_dma.c
+++ b/drivers/gpu/drm/radeon/si_dma.c
@@ -79,7 +79,8 @@ void si_dma_vm_set_page(struct radeon_device *rdev,
 
 	trace_radeon_vm_set_page(pe, addr, count, incr, flags);
 
-	if (flags == R600_PTE_GART) {
+	/* XXX: How to distinguish between GART and other system memory pages? */
+	if (flags & R600_PTE_SYSTEM) {
 		uint64_t src = rdev->gart.table_addr + (addr >> 12) * 8;
 		while (count) {
 			unsigned bytes = count * 8;
-- 
2.0.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 4/5] drm/radeon: Use write-combined CPU mappings of rings and IBs on >= SI
  2014-07-17 10:01 [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Michel Dänzer
                   ` (2 preceding siblings ...)
  2014-07-17 10:01 ` [PATCH 3/5] drm/radeon: Allow write-combined CPU mappings of BOs in GTT Michel Dänzer
@ 2014-07-17 10:01 ` Michel Dänzer
  2014-07-17 10:01 ` [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers " Michel Dänzer
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 38+ messages in thread
From: Michel Dänzer @ 2014-07-17 10:01 UTC (permalink / raw)
  To: dri-devel, mesa-dev

From: Michel Dänzer <michel.daenzer@amd.com>

Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
---
 drivers/gpu/drm/radeon/cik.c         |  3 +++
 drivers/gpu/drm/radeon/cik_sdma.c    |  4 ++++
 drivers/gpu/drm/radeon/ni.c          |  3 +++
 drivers/gpu/drm/radeon/ni_dma.c      |  4 ++++
 drivers/gpu/drm/radeon/radeon_ring.c | 22 +++++++++++++++++-----
 5 files changed, 31 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
index a9fd3e7..df39095 100644
--- a/drivers/gpu/drm/radeon/cik.c
+++ b/drivers/gpu/drm/radeon/cik.c
@@ -4181,6 +4181,9 @@ u32 cik_gfx_get_wptr(struct radeon_device *rdev,
 void cik_gfx_set_wptr(struct radeon_device *rdev,
 		      struct radeon_ring *ring)
 {
+	/* Make IB/ring buffer writes land before the WPTR register write */
+	wmb();
+
 	WREG32(CP_RB0_WPTR, ring->wptr);
 	(void)RREG32(CP_RB0_WPTR);
 }
diff --git a/drivers/gpu/drm/radeon/cik_sdma.c b/drivers/gpu/drm/radeon/cik_sdma.c
index a7f66c8..3396b28 100644
--- a/drivers/gpu/drm/radeon/cik_sdma.c
+++ b/drivers/gpu/drm/radeon/cik_sdma.c
@@ -112,12 +112,16 @@ void cik_sdma_set_wptr(struct radeon_device *rdev,
 {
 	u32 reg;
 
+	/* Make IB/ring buffer writes land before the WPTR register write */
+	wmb();
+
 	if (ring->idx == R600_RING_TYPE_DMA_INDEX)
 		reg = SDMA0_GFX_RB_WPTR + SDMA0_REGISTER_OFFSET;
 	else
 		reg = SDMA0_GFX_RB_WPTR + SDMA1_REGISTER_OFFSET;
 
 	WREG32(reg, (ring->wptr << 2) & 0x3fffc);
+	(void)RREG32(reg);
 }
 
 /**
diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index 327b85f..b589fe7 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -1449,6 +1449,9 @@ u32 cayman_gfx_get_wptr(struct radeon_device *rdev,
 void cayman_gfx_set_wptr(struct radeon_device *rdev,
 			 struct radeon_ring *ring)
 {
+	/* Make IB/ring buffer writes land before the WPTR register write */
+	wmb();
+
 	if (ring->idx == RADEON_RING_TYPE_GFX_INDEX) {
 		WREG32(CP_RB0_WPTR, ring->wptr);
 		(void)RREG32(CP_RB0_WPTR);
diff --git a/drivers/gpu/drm/radeon/ni_dma.c b/drivers/gpu/drm/radeon/ni_dma.c
index 6378e02..119fc69 100644
--- a/drivers/gpu/drm/radeon/ni_dma.c
+++ b/drivers/gpu/drm/radeon/ni_dma.c
@@ -103,12 +103,16 @@ void cayman_dma_set_wptr(struct radeon_device *rdev,
 {
 	u32 reg;
 
+	/* Make IB/ring buffer writes land before the WPTR register write */
+	wmb();
+
 	if (ring->idx == R600_RING_TYPE_DMA_INDEX)
 		reg = DMA_RB_WPTR + DMA0_REGISTER_OFFSET;
 	else
 		reg = DMA_RB_WPTR + DMA1_REGISTER_OFFSET;
 
 	WREG32(reg, (ring->wptr << 2) & 0x3fffc);
+	(void)RREG32(reg);
 }
 
 /**
diff --git a/drivers/gpu/drm/radeon/radeon_ring.c b/drivers/gpu/drm/radeon/radeon_ring.c
index 71439f0..62e9e57 100644
--- a/drivers/gpu/drm/radeon/radeon_ring.c
+++ b/drivers/gpu/drm/radeon/radeon_ring.c
@@ -201,10 +201,22 @@ int radeon_ib_pool_init(struct radeon_device *rdev)
 	if (rdev->ib_pool_ready) {
 		return 0;
 	}
-	r = radeon_sa_bo_manager_init(rdev, &rdev->ring_tmp_bo,
-				      RADEON_IB_POOL_SIZE*64*1024,
-				      RADEON_GPU_PAGE_SIZE,
-				      RADEON_GEM_DOMAIN_GTT, 0);
+
+	if (rdev->family >= CHIP_TAHITI) {
+		r = radeon_sa_bo_manager_init(rdev, &rdev->ring_tmp_bo,
+					      RADEON_IB_POOL_SIZE*64*1024,
+					      RADEON_GPU_PAGE_SIZE,
+					      RADEON_GEM_DOMAIN_GTT,
+					      RADEON_GEM_GTT_WC);
+	} else {
+		/* Without GPUVM, it's better to stick to cacheable GTT due
+		 * to the command stream patching
+		 */
+		r = radeon_sa_bo_manager_init(rdev, &rdev->ring_tmp_bo,
+					      RADEON_IB_POOL_SIZE*64*1024,
+					      RADEON_GPU_PAGE_SIZE,
+					      RADEON_GEM_DOMAIN_GTT, 0);
+	}
 	if (r) {
 		return r;
 	}
@@ -640,7 +652,7 @@ int radeon_ring_init(struct radeon_device *rdev, struct radeon_ring *ring, unsig
 	/* Allocate ring buffer */
 	if (ring->ring_obj == NULL) {
 		r = radeon_bo_create(rdev, ring->ring_size, PAGE_SIZE, true,
-				     RADEON_GEM_DOMAIN_GTT, 0,
+				     RADEON_GEM_DOMAIN_GTT, RADEON_GEM_GTT_WC,
 				     NULL, &ring->ring_obj);
 		if (r) {
 			dev_err(rdev->dev, "(%d) ring create failed\n", r);
-- 
2.0.0


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI
  2014-07-17 10:01 [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Michel Dänzer
                   ` (3 preceding siblings ...)
  2014-07-17 10:01 ` [PATCH 4/5] drm/radeon: Use write-combined CPU mappings of rings and IBs on >= SI Michel Dänzer
@ 2014-07-17 10:01 ` Michel Dänzer
  2014-07-17 10:01 ` [PATCH 1/5] winsys/radeon: Use separate caching buffer managers for VRAM and GTT Michel Dänzer
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 38+ messages in thread
From: Michel Dänzer @ 2014-07-17 10:01 UTC (permalink / raw)
  To: dri-devel, mesa-dev

From: Michel Dänzer <michel.daenzer@amd.com>

Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
---
 drivers/gpu/drm/radeon/cik.c         | 3 +++
 drivers/gpu/drm/radeon/cik_sdma.c    | 2 ++
 drivers/gpu/drm/radeon/ni.c          | 3 +++
 drivers/gpu/drm/radeon/ni_dma.c      | 2 ++
 drivers/gpu/drm/radeon/radeon_ring.c | 2 +-
 5 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
index df39095..8af5c9a 100644
--- a/drivers/gpu/drm/radeon/cik.c
+++ b/drivers/gpu/drm/radeon/cik.c
@@ -3846,6 +3846,9 @@ void cik_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib)
 			  (ib->gpu_addr & 0xFFFFFFFC));
 	radeon_ring_write(ring, upper_32_bits(ib->gpu_addr) & 0xFFFF);
 	radeon_ring_write(ring, control);
+
+	/* Flush HDP cache */
+	WREG32(HDP_MEM_COHERENCY_FLUSH_CNTL, 0);
 }
 
 /**
diff --git a/drivers/gpu/drm/radeon/cik_sdma.c b/drivers/gpu/drm/radeon/cik_sdma.c
index 3396b28..2ab873d 100644
--- a/drivers/gpu/drm/radeon/cik_sdma.c
+++ b/drivers/gpu/drm/radeon/cik_sdma.c
@@ -158,6 +158,8 @@ void cik_sdma_ring_ib_execute(struct radeon_device *rdev,
 	radeon_ring_write(ring, upper_32_bits(ib->gpu_addr));
 	radeon_ring_write(ring, ib->length_dw);
 
+	/* Flush HDP cache */
+	WREG32(HDP_MEM_COHERENCY_FLUSH_CNTL, 0);
 }
 
 /**
diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index b589fe7..ea58e5b 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -1397,6 +1397,9 @@ void cayman_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib)
 	radeon_ring_write(ring, 0xFFFFFFFF);
 	radeon_ring_write(ring, 0);
 	radeon_ring_write(ring, ((ib->vm ? ib->vm->id : 0) << 24) | 10); /* poll interval */
+
+	/* Flush HDP cache (for SI) */
+	WREG32(HDP_MEM_COHERENCY_FLUSH_CNTL, 0x1);
 }
 
 static void cayman_cp_enable(struct radeon_device *rdev, bool enable)
diff --git a/drivers/gpu/drm/radeon/ni_dma.c b/drivers/gpu/drm/radeon/ni_dma.c
index 119fc69..0e575ea 100644
--- a/drivers/gpu/drm/radeon/ni_dma.c
+++ b/drivers/gpu/drm/radeon/ni_dma.c
@@ -148,6 +148,8 @@ void cayman_dma_ring_ib_execute(struct radeon_device *rdev,
 	radeon_ring_write(ring, (ib->gpu_addr & 0xFFFFFFE0));
 	radeon_ring_write(ring, (ib->length_dw << 12) | (upper_32_bits(ib->gpu_addr) & 0xFF));
 
+	/* Flush HDP cache (for SI) */
+	WREG32(HDP_MEM_COHERENCY_FLUSH_CNTL, 0x1);
 }
 
 /**
diff --git a/drivers/gpu/drm/radeon/radeon_ring.c b/drivers/gpu/drm/radeon/radeon_ring.c
index 62e9e57..31ac4fd 100644
--- a/drivers/gpu/drm/radeon/radeon_ring.c
+++ b/drivers/gpu/drm/radeon/radeon_ring.c
@@ -206,7 +206,7 @@ int radeon_ib_pool_init(struct radeon_device *rdev)
 		r = radeon_sa_bo_manager_init(rdev, &rdev->ring_tmp_bo,
 					      RADEON_IB_POOL_SIZE*64*1024,
 					      RADEON_GPU_PAGE_SIZE,
-					      RADEON_GEM_DOMAIN_GTT,
+					      RADEON_GEM_DOMAIN_VRAM,
 					      RADEON_GEM_GTT_WC);
 	} else {
 		/* Without GPUVM, it's better to stick to cacheable GTT due
-- 
2.0.0

_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 1/5] winsys/radeon: Use separate caching buffer managers for VRAM and GTT
  2014-07-17 10:01 [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Michel Dänzer
                   ` (4 preceding siblings ...)
  2014-07-17 10:01 ` [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers " Michel Dänzer
@ 2014-07-17 10:01 ` Michel Dänzer
  2014-07-17 10:01 ` [PATCH 2/5] r600g/radeonsi: Use write-combined CPU mappings of some BOs in GTT Michel Dänzer
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 38+ messages in thread
From: Michel Dänzer @ 2014-07-17 10:01 UTC (permalink / raw)
  To: dri-devel, mesa-dev

From: Michel Dänzer <michel.daenzer@amd.com>

Should reduce overhead because the caching buffer manager doesn't need to
consider buffers of the wrong type.

Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
---
 src/gallium/winsys/radeon/drm/radeon_drm_bo.c     | 10 +++++++---
 src/gallium/winsys/radeon/drm/radeon_drm_winsys.c | 16 +++++++++++-----
 src/gallium/winsys/radeon/drm/radeon_drm_winsys.h |  3 ++-
 3 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/src/gallium/winsys/radeon/drm/radeon_drm_bo.c b/src/gallium/winsys/radeon/drm/radeon_drm_bo.c
index 0ebe196..d06bb34 100644
--- a/src/gallium/winsys/radeon/drm/radeon_drm_bo.c
+++ b/src/gallium/winsys/radeon/drm/radeon_drm_bo.c
@@ -800,10 +800,14 @@ radeon_winsys_bo_create(struct radeon_winsys *rws,
     desc.initial_domains = domain;
 
     /* Assign a buffer manager. */
-    if (use_reusable_pool)
-        provider = ws->cman;
-    else
+    if (use_reusable_pool) {
+        if (domain == RADEON_DOMAIN_VRAM)
+            provider = ws->cman_vram;
+        else
+            provider = ws->cman_gtt;
+    } else {
         provider = ws->kman;
+    }
 
     buffer = provider->create_buffer(provider, size, &desc.base);
     if (!buffer)
diff --git a/src/gallium/winsys/radeon/drm/radeon_drm_winsys.c b/src/gallium/winsys/radeon/drm/radeon_drm_winsys.c
index 576fea5..0834cbd 100644
--- a/src/gallium/winsys/radeon/drm/radeon_drm_winsys.c
+++ b/src/gallium/winsys/radeon/drm/radeon_drm_winsys.c
@@ -417,7 +417,8 @@ static void radeon_winsys_destroy(struct radeon_winsys *rws)
     pipe_mutex_destroy(ws->cmask_owner_mutex);
     pipe_mutex_destroy(ws->cs_stack_lock);
 
-    ws->cman->destroy(ws->cman);
+    ws->cman_vram->destroy(ws->cman_vram);
+    ws->cman_gtt->destroy(ws->cman_gtt);
     ws->kman->destroy(ws->kman);
     if (ws->gen >= DRV_R600) {
         radeon_surface_manager_free(ws->surf_man);
@@ -632,8 +633,11 @@ radeon_drm_winsys_create(int fd, radeon_screen_create_t screen_create)
     ws->kman = radeon_bomgr_create(ws);
     if (!ws->kman)
         goto fail;
-    ws->cman = pb_cache_manager_create(ws->kman, 1000000, 2.0f, 0);
-    if (!ws->cman)
+    ws->cman_vram = pb_cache_manager_create(ws->kman, 1000000, 2.0f, 0);
+    if (!ws->cman_vram)
+        goto fail;
+    ws->cman_gtt = pb_cache_manager_create(ws->kman, 1000000, 2.0f, 0);
+    if (!ws->cman_gtt)
         goto fail;
 
     if (ws->gen >= DRV_R600) {
@@ -689,8 +693,10 @@ radeon_drm_winsys_create(int fd, radeon_screen_create_t screen_create)
 
 fail:
     pipe_mutex_unlock(fd_tab_mutex);
-    if (ws->cman)
-        ws->cman->destroy(ws->cman);
+    if (ws->cman_gtt)
+        ws->cman_gtt->destroy(ws->cman_gtt);
+    if (ws->cman_vram)
+        ws->cman_vram->destroy(ws->cman_vram);
     if (ws->kman)
         ws->kman->destroy(ws->kman);
     if (ws->surf_man)
diff --git a/src/gallium/winsys/radeon/drm/radeon_drm_winsys.h b/src/gallium/winsys/radeon/drm/radeon_drm_winsys.h
index 18fe0ae..fc6f53b 100644
--- a/src/gallium/winsys/radeon/drm/radeon_drm_winsys.h
+++ b/src/gallium/winsys/radeon/drm/radeon_drm_winsys.h
@@ -57,7 +57,8 @@ struct radeon_drm_winsys {
     uint32_t va_start;
 
     struct pb_manager *kman;
-    struct pb_manager *cman;
+    struct pb_manager *cman_vram;
+    struct pb_manager *cman_gtt;
     struct radeon_surface_manager *surf_man;
 
     uint32_t num_cpus;      /* Number of CPUs. */
-- 
2.0.0


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 2/5] r600g/radeonsi: Use write-combined CPU mappings of some BOs in GTT
  2014-07-17 10:01 [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Michel Dänzer
                   ` (5 preceding siblings ...)
  2014-07-17 10:01 ` [PATCH 1/5] winsys/radeon: Use separate caching buffer managers for VRAM and GTT Michel Dänzer
@ 2014-07-17 10:01 ` Michel Dänzer
  2014-07-17 10:01 ` [PATCH 3/5] r600g/radeonsi: Prefer VRAM for CPU -> GPU streaming buffers Michel Dänzer
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 38+ messages in thread
From: Michel Dänzer @ 2014-07-17 10:01 UTC (permalink / raw)
  To: dri-devel, mesa-dev

From: Michel Dänzer <michel.daenzer@amd.com>

Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
---
 src/gallium/drivers/r300/r300_query.c             |  2 +-
 src/gallium/drivers/r300/r300_render.c            |  2 +-
 src/gallium/drivers/r300/r300_screen_buffer.c     |  4 ++--
 src/gallium/drivers/r300/r300_texture.c           |  2 +-
 src/gallium/drivers/radeon/r600_buffer_common.c   |  9 ++++++--
 src/gallium/drivers/radeon/r600_texture.c         |  2 ++
 src/gallium/drivers/radeon/radeon_uvd.c           |  8 +++++---
 src/gallium/drivers/radeon/radeon_vce.c           |  8 ++++----
 src/gallium/drivers/radeon/radeon_video.c         | 11 ++++++----
 src/gallium/drivers/radeon/radeon_video.h         |  4 +++-
 src/gallium/drivers/radeonsi/si_state.c           |  2 +-
 src/gallium/winsys/radeon/drm/radeon_drm_bo.c     | 25 +++++++++++++++++++----
 src/gallium/winsys/radeon/drm/radeon_drm_bo.h     |  1 +
 src/gallium/winsys/radeon/drm/radeon_drm_cs.c     |  2 +-
 src/gallium/winsys/radeon/drm/radeon_drm_winsys.c | 12 +++++++++++
 src/gallium/winsys/radeon/drm/radeon_drm_winsys.h |  2 ++
 src/gallium/winsys/radeon/drm/radeon_winsys.h     |  7 ++++++-
 17 files changed, 77 insertions(+), 26 deletions(-)

diff --git a/src/gallium/drivers/r300/r300_query.c b/src/gallium/drivers/r300/r300_query.c
index 5305ebd..1679433 100644
--- a/src/gallium/drivers/r300/r300_query.c
+++ b/src/gallium/drivers/r300/r300_query.c
@@ -59,7 +59,7 @@ static struct pipe_query *r300_create_query(struct pipe_context *pipe,
         q->num_pipes = r300screen->info.r300_num_gb_pipes;
 
     q->buf = r300->rws->buffer_create(r300->rws, 4096, 4096, TRUE,
-                                      RADEON_DOMAIN_GTT);
+                                      RADEON_DOMAIN_GTT, 0);
     if (!q->buf) {
         FREE(q);
         return NULL;
diff --git a/src/gallium/drivers/r300/r300_render.c b/src/gallium/drivers/r300/r300_render.c
index 175b83a..6e5b381 100644
--- a/src/gallium/drivers/r300/r300_render.c
+++ b/src/gallium/drivers/r300/r300_render.c
@@ -907,7 +907,7 @@ static boolean r300_render_allocate_vertices(struct vbuf_render* render,
         r300->vbo = rws->buffer_create(rws,
                                        MAX2(R300_MAX_DRAW_VBO_SIZE, size),
                                        R300_BUFFER_ALIGNMENT, TRUE,
-                                       RADEON_DOMAIN_GTT);
+                                       RADEON_DOMAIN_GTT, 0);
         if (!r300->vbo) {
             return FALSE;
         }
diff --git a/src/gallium/drivers/r300/r300_screen_buffer.c b/src/gallium/drivers/r300/r300_screen_buffer.c
index 86e4478..de557b5 100644
--- a/src/gallium/drivers/r300/r300_screen_buffer.c
+++ b/src/gallium/drivers/r300/r300_screen_buffer.c
@@ -103,7 +103,7 @@ r300_buffer_transfer_map( struct pipe_context *context,
             /* Create a new one in the same pipe_resource. */
             new_buf = r300->rws->buffer_create(r300->rws, rbuf->b.b.width0,
                                                R300_BUFFER_ALIGNMENT, TRUE,
-                                               rbuf->domain);
+                                               rbuf->domain, 0);
             if (new_buf) {
                 /* Discard the old buffer. */
                 pb_reference(&rbuf->buf, NULL);
@@ -185,7 +185,7 @@ struct pipe_resource *r300_buffer_create(struct pipe_screen *screen,
     rbuf->buf =
         r300screen->rws->buffer_create(r300screen->rws, rbuf->b.b.width0,
                                        R300_BUFFER_ALIGNMENT, TRUE,
-                                       rbuf->domain);
+                                       rbuf->domain, 0);
     if (!rbuf->buf) {
         FREE(rbuf);
         return NULL;
diff --git a/src/gallium/drivers/r300/r300_texture.c b/src/gallium/drivers/r300/r300_texture.c
index 4ea69dc..ffe8c00 100644
--- a/src/gallium/drivers/r300/r300_texture.c
+++ b/src/gallium/drivers/r300/r300_texture.c
@@ -1042,7 +1042,7 @@ r300_texture_create_object(struct r300_screen *rscreen,
     /* Create the backing buffer if needed. */
     if (!tex->buf) {
         tex->buf = rws->buffer_create(rws, tex->tex.size_in_bytes, 2048, TRUE,
-                                      tex->domain);
+                                      tex->domain, 0);
 
         if (!tex->buf) {
             goto fail;
diff --git a/src/gallium/drivers/radeon/r600_buffer_common.c b/src/gallium/drivers/radeon/r600_buffer_common.c
index 0eaa817..4e6b897 100644
--- a/src/gallium/drivers/radeon/r600_buffer_common.c
+++ b/src/gallium/drivers/radeon/r600_buffer_common.c
@@ -107,11 +107,14 @@ bool r600_init_resource(struct r600_common_screen *rscreen,
 {
 	struct r600_texture *rtex = (struct r600_texture*)res;
 	struct pb_buffer *old_buf, *new_buf;
+	enum radeon_bo_flag flags = 0;
 
 	switch (res->b.b.usage) {
-	case PIPE_USAGE_STAGING:
 	case PIPE_USAGE_DYNAMIC:
 	case PIPE_USAGE_STREAM:
+		flags = RADEON_FLAG_GTT_WC;
+		/* fall through */
+	case PIPE_USAGE_STAGING:
 		/* Transfers are likely to occur more often with these resources. */
 		res->domains = RADEON_DOMAIN_GTT;
 		break;
@@ -120,6 +123,7 @@ bool r600_init_resource(struct r600_common_screen *rscreen,
 	default:
 		/* Not listing GTT here improves performance in some apps. */
 		res->domains = RADEON_DOMAIN_VRAM;
+		flags = RADEON_FLAG_GTT_WC;
 		break;
 	}
 
@@ -129,6 +133,7 @@ bool r600_init_resource(struct r600_common_screen *rscreen,
 	    res->b.b.flags & (PIPE_RESOURCE_FLAG_MAP_PERSISTENT |
 			      PIPE_RESOURCE_FLAG_MAP_COHERENT)) {
 		res->domains = RADEON_DOMAIN_GTT;
+		flags = 0;
 	}
 
 	/* Tiled textures are unmappable. Always put them in VRAM. */
@@ -140,7 +145,7 @@ bool r600_init_resource(struct r600_common_screen *rscreen,
 	/* Allocate a new resource. */
 	new_buf = rscreen->ws->buffer_create(rscreen->ws, size, alignment,
 					     use_reusable_pool,
-					     res->domains);
+					     res->domains, flags);
 	if (!new_buf) {
 		return false;
 	}
diff --git a/src/gallium/drivers/radeon/r600_texture.c b/src/gallium/drivers/radeon/r600_texture.c
index bfda69e..6dd84a4 100644
--- a/src/gallium/drivers/radeon/r600_texture.c
+++ b/src/gallium/drivers/radeon/r600_texture.c
@@ -1027,6 +1027,8 @@ static void *r600_texture_transfer_map(struct pipe_context *ctx,
 
 		r600_init_temp_resource_from_box(&resource, texture, box, level,
 						 R600_RESOURCE_FLAG_TRANSFER);
+		resource.usage = (usage & PIPE_TRANSFER_READ) ?
+			PIPE_USAGE_STAGING : PIPE_USAGE_STREAM;
 
 		/* Create the temporary texture. */
 		staging = (struct r600_texture*)ctx->screen->resource_create(ctx->screen, &resource);
diff --git a/src/gallium/drivers/radeon/radeon_uvd.c b/src/gallium/drivers/radeon/radeon_uvd.c
index 137c69c..d77217c 100644
--- a/src/gallium/drivers/radeon/radeon_uvd.c
+++ b/src/gallium/drivers/radeon/radeon_uvd.c
@@ -816,12 +816,14 @@ struct pipe_video_codec *ruvd_create_decoder(struct pipe_context *context,
 	for (i = 0; i < NUM_BUFFERS; ++i) {
 		unsigned msg_fb_size = FB_BUFFER_OFFSET + FB_BUFFER_SIZE;
 		STATIC_ASSERT(sizeof(struct ruvd_msg) <= FB_BUFFER_OFFSET);
-		if (!rvid_create_buffer(dec->ws, &dec->msg_fb_buffers[i], msg_fb_size, RADEON_DOMAIN_VRAM)) {
+		if (!rvid_create_buffer(dec->ws, &dec->msg_fb_buffers[i], msg_fb_size,
+                                        RADEON_DOMAIN_VRAM, 0)) {
 			RVID_ERR("Can't allocated message buffers.\n");
 			goto error;
 		}
 
-		if (!rvid_create_buffer(dec->ws, &dec->bs_buffers[i], bs_buf_size, RADEON_DOMAIN_GTT)) {
+		if (!rvid_create_buffer(dec->ws, &dec->bs_buffers[i], bs_buf_size,
+                                        RADEON_DOMAIN_GTT, 0)) {
 			RVID_ERR("Can't allocated bitstream buffers.\n");
 			goto error;
 		}
@@ -830,7 +832,7 @@ struct pipe_video_codec *ruvd_create_decoder(struct pipe_context *context,
 		rvid_clear_buffer(dec->ws, dec->cs, &dec->bs_buffers[i]);
 	}
 
-	if (!rvid_create_buffer(dec->ws, &dec->dpb, dpb_size, RADEON_DOMAIN_VRAM)) {
+	if (!rvid_create_buffer(dec->ws, &dec->dpb, dpb_size, RADEON_DOMAIN_VRAM, 0)) {
 		RVID_ERR("Can't allocated dpb.\n");
 		goto error;
 	}
diff --git a/src/gallium/drivers/radeon/radeon_vce.c b/src/gallium/drivers/radeon/radeon_vce.c
index f5395b3..9174c97 100644
--- a/src/gallium/drivers/radeon/radeon_vce.c
+++ b/src/gallium/drivers/radeon/radeon_vce.c
@@ -191,7 +191,7 @@ static void rvce_destroy(struct pipe_video_codec *encoder)
 	struct rvce_encoder *enc = (struct rvce_encoder*)encoder;
 	if (enc->stream_handle) {
 		struct rvid_buffer fb;
-		rvid_create_buffer(enc->ws, &fb, 512, RADEON_DOMAIN_GTT);
+		rvid_create_buffer(enc->ws, &fb, 512, RADEON_DOMAIN_GTT, 0);
 		enc->fb = &fb;
 		enc->session(enc);
 		enc->feedback(enc);
@@ -233,7 +233,7 @@ static void rvce_begin_frame(struct pipe_video_codec *encoder,
 	if (!enc->stream_handle) {
 		struct rvid_buffer fb;
 		enc->stream_handle = rvid_alloc_stream_handle();
-		rvid_create_buffer(enc->ws, &fb, 512, RADEON_DOMAIN_GTT);
+		rvid_create_buffer(enc->ws, &fb, 512, RADEON_DOMAIN_GTT, 0);
 		enc->fb = &fb;
 		enc->session(enc);
 		enc->create(enc);
@@ -265,7 +265,7 @@ static void rvce_encode_bitstream(struct pipe_video_codec *encoder,
 	enc->bs_size = destination->width0;
 
 	*fb = enc->fb = CALLOC_STRUCT(rvid_buffer);
-	if (!rvid_create_buffer(enc->ws, enc->fb, 512, RADEON_DOMAIN_GTT)) {
+	if (!rvid_create_buffer(enc->ws, enc->fb, 512, RADEON_DOMAIN_GTT, 0)) {
 		RVID_ERR("Can't create feedback buffer.\n");
 		return;
 	}
@@ -390,7 +390,7 @@ struct pipe_video_codec *rvce_create_encoder(struct pipe_context *context,
 	cpb_size = cpb_size * 3 / 2;
 	cpb_size = cpb_size * enc->cpb_num;
 	tmp_buf->destroy(tmp_buf);
-	if (!rvid_create_buffer(enc->ws, &enc->cpb, cpb_size, RADEON_DOMAIN_VRAM)) {
+	if (!rvid_create_buffer(enc->ws, &enc->cpb, cpb_size, RADEON_DOMAIN_VRAM, 0)) {
 		RVID_ERR("Can't create CPB buffer.\n");
 		goto error;
 	}
diff --git a/src/gallium/drivers/radeon/radeon_video.c b/src/gallium/drivers/radeon/radeon_video.c
index eae533e..17e9a59 100644
--- a/src/gallium/drivers/radeon/radeon_video.c
+++ b/src/gallium/drivers/radeon/radeon_video.c
@@ -61,11 +61,13 @@ unsigned rvid_alloc_stream_handle()
 
 /* create a buffer in the winsys */
 bool rvid_create_buffer(struct radeon_winsys *ws, struct rvid_buffer *buffer,
-			unsigned size, enum radeon_bo_domain domain)
+			unsigned size, enum radeon_bo_domain domain,
+			enum radeon_bo_flag flags)
 {
 	buffer->domain = domain;
+	buffer->flags = flags;
 
-	buffer->buf = ws->buffer_create(ws, size, 4096, false, domain);
+	buffer->buf = ws->buffer_create(ws, size, 4096, false, domain, flags);
 	if (!buffer->buf)
 		return false;
 
@@ -91,7 +93,8 @@ bool rvid_resize_buffer(struct radeon_winsys *ws, struct radeon_winsys_cs *cs,
 	struct rvid_buffer old_buf = *new_buf;
 	void *src = NULL, *dst = NULL;
 
-	if (!rvid_create_buffer(ws, new_buf, new_size, new_buf->domain))
+	if (!rvid_create_buffer(ws, new_buf, new_size, new_buf->domain,
+                                new_buf->flags))
 		goto error;
 
 	src = ws->buffer_map(old_buf.cs_handle, cs, PIPE_TRANSFER_READ);
@@ -191,7 +194,7 @@ void rvid_join_surfaces(struct radeon_winsys* ws, unsigned bind,
 	/* TODO: 2D tiling workaround */
 	alignment *= 2;
 
-	pb = ws->buffer_create(ws, size, alignment, bind, RADEON_DOMAIN_VRAM);
+	pb = ws->buffer_create(ws, size, alignment, bind, RADEON_DOMAIN_VRAM, 0);
 	if (!pb)
 		return;
 
diff --git a/src/gallium/drivers/radeon/radeon_video.h b/src/gallium/drivers/radeon/radeon_video.h
index 55d2ca4..42de5a9 100644
--- a/src/gallium/drivers/radeon/radeon_video.h
+++ b/src/gallium/drivers/radeon/radeon_video.h
@@ -44,6 +44,7 @@
 struct rvid_buffer
 {
 	enum radeon_bo_domain		domain;
+	enum radeon_bo_flag		flags;
 	struct pb_buffer*		buf;
 	struct radeon_winsys_cs_handle*	cs_handle;
 };
@@ -53,7 +54,8 @@ unsigned rvid_alloc_stream_handle(void);
 
 /* create a buffer in the winsys */
 bool rvid_create_buffer(struct radeon_winsys *ws, struct rvid_buffer *buffer,
-			unsigned size, enum radeon_bo_domain domain);
+			unsigned size, enum radeon_bo_domain domain,
+			enum radeon_bo_flag flags);
 
 /* destroy a buffer */
 void rvid_destroy_buffer(struct rvid_buffer *buffer);
diff --git a/src/gallium/drivers/radeonsi/si_state.c b/src/gallium/drivers/radeonsi/si_state.c
index c64958a..1388b50 100644
--- a/src/gallium/drivers/radeonsi/si_state.c
+++ b/src/gallium/drivers/radeonsi/si_state.c
@@ -2781,7 +2781,7 @@ static void si_set_sampler_states(struct si_context *sctx,
 
 				sctx->border_color_table =
 					si_resource_create_custom(&sctx->screen->b.b,
-								  PIPE_USAGE_STAGING,
+								  PIPE_USAGE_DYNAMIC,
 								  4096 * 4 * 4);
 			}
 
diff --git a/src/gallium/winsys/radeon/drm/radeon_drm_bo.c b/src/gallium/winsys/radeon/drm/radeon_drm_bo.c
index d06bb34..73f8d38 100644
--- a/src/gallium/winsys/radeon/drm/radeon_drm_bo.c
+++ b/src/gallium/winsys/radeon/drm/radeon_drm_bo.c
@@ -477,6 +477,10 @@ const struct pb_vtbl radeon_bo_vtbl = {
     radeon_bo_get_base_buffer,
 };
 
+#ifndef RADEON_GEM_GTT_WC
+#define RADEON_GEM_GTT_WC (1 << 2)
+#endif
+
 static struct pb_buffer *radeon_bomgr_create_bo(struct pb_manager *_mgr,
                                                 pb_size size,
                                                 const struct pb_desc *desc)
@@ -497,6 +501,10 @@ static struct pb_buffer *radeon_bomgr_create_bo(struct pb_manager *_mgr,
     args.size = size;
     args.alignment = desc->alignment;
     args.initial_domain = rdesc->initial_domains;
+    args.flags = 0;
+
+    if (rdesc->flags & RADEON_FLAG_GTT_WC)
+        args.flags |= RADEON_GEM_GTT_WC;
 
     if (drmCommandWriteRead(rws->fd, DRM_RADEON_GEM_CREATE,
                             &args, sizeof(args))) {
@@ -504,6 +512,7 @@ static struct pb_buffer *radeon_bomgr_create_bo(struct pb_manager *_mgr,
         fprintf(stderr, "radeon:    size      : %d bytes\n", size);
         fprintf(stderr, "radeon:    alignment : %d bytes\n", desc->alignment);
         fprintf(stderr, "radeon:    domains   : %d\n", args.initial_domain);
+        fprintf(stderr, "radeon:    flags     : %d\n", args.flags);
         return NULL;
     }
 
@@ -784,7 +793,8 @@ radeon_winsys_bo_create(struct radeon_winsys *rws,
                         unsigned size,
                         unsigned alignment,
                         boolean use_reusable_pool,
-                        enum radeon_bo_domain domain)
+                        enum radeon_bo_domain domain,
+                        enum radeon_bo_flag flags)
 {
     struct radeon_drm_winsys *ws = radeon_drm_winsys(rws);
     struct radeon_bomgr *mgr = radeon_bomgr(ws->kman);
@@ -798,13 +808,20 @@ radeon_winsys_bo_create(struct radeon_winsys *rws,
     /* Additional criteria for the cache manager. */
     desc.base.usage = domain;
     desc.initial_domains = domain;
+    desc.flags = flags;
 
     /* Assign a buffer manager. */
     if (use_reusable_pool) {
-        if (domain == RADEON_DOMAIN_VRAM)
-            provider = ws->cman_vram;
-        else
+        if (domain == RADEON_DOMAIN_VRAM) {
+            if (flags & RADEON_FLAG_GTT_WC)
+                provider = ws->cman_vram_gtt_wc;
+            else
+                provider = ws->cman_vram;
+        } else if (flags & RADEON_FLAG_GTT_WC) {
+            provider = ws->cman_gtt_wc;
+        } else {
             provider = ws->cman_gtt;
+        }
     } else {
         provider = ws->kman;
     }
diff --git a/src/gallium/winsys/radeon/drm/radeon_drm_bo.h b/src/gallium/winsys/radeon/drm/radeon_drm_bo.h
index f5b122f..1c00a13 100644
--- a/src/gallium/winsys/radeon/drm/radeon_drm_bo.h
+++ b/src/gallium/winsys/radeon/drm/radeon_drm_bo.h
@@ -42,6 +42,7 @@ struct radeon_bo_desc {
     struct pb_desc base;
 
     unsigned initial_domains;
+    unsigned flags;
 };
 
 struct radeon_bo {
diff --git a/src/gallium/winsys/radeon/drm/radeon_drm_cs.c b/src/gallium/winsys/radeon/drm/radeon_drm_cs.c
index 67375dc..3596f8d 100644
--- a/src/gallium/winsys/radeon/drm/radeon_drm_cs.c
+++ b/src/gallium/winsys/radeon/drm/radeon_drm_cs.c
@@ -606,7 +606,7 @@ radeon_cs_create_fence(struct radeon_winsys_cs *rcs)
 
     /* Create a fence, which is a dummy BO. */
     fence = cs->ws->base.buffer_create(&cs->ws->base, 1, 1, TRUE,
-                                       RADEON_DOMAIN_GTT);
+                                       RADEON_DOMAIN_GTT, 0);
     /* Add the fence as a dummy relocation. */
     cs->ws->base.cs_add_reloc(rcs, cs->ws->base.buffer_get_cs_handle(fence),
                               RADEON_USAGE_READWRITE, RADEON_DOMAIN_GTT,
diff --git a/src/gallium/winsys/radeon/drm/radeon_drm_winsys.c b/src/gallium/winsys/radeon/drm/radeon_drm_winsys.c
index 0834cbd..398c089 100644
--- a/src/gallium/winsys/radeon/drm/radeon_drm_winsys.c
+++ b/src/gallium/winsys/radeon/drm/radeon_drm_winsys.c
@@ -418,7 +418,9 @@ static void radeon_winsys_destroy(struct radeon_winsys *rws)
     pipe_mutex_destroy(ws->cs_stack_lock);
 
     ws->cman_vram->destroy(ws->cman_vram);
+    ws->cman_vram_gtt_wc->destroy(ws->cman_vram_gtt_wc);
     ws->cman_gtt->destroy(ws->cman_gtt);
+    ws->cman_gtt_wc->destroy(ws->cman_gtt_wc);
     ws->kman->destroy(ws->kman);
     if (ws->gen >= DRV_R600) {
         radeon_surface_manager_free(ws->surf_man);
@@ -636,9 +638,15 @@ radeon_drm_winsys_create(int fd, radeon_screen_create_t screen_create)
     ws->cman_vram = pb_cache_manager_create(ws->kman, 1000000, 2.0f, 0);
     if (!ws->cman_vram)
         goto fail;
+    ws->cman_vram_gtt_wc = pb_cache_manager_create(ws->kman, 1000000, 2.0f, 0);
+    if (!ws->cman_vram_gtt_wc)
+        goto fail;
     ws->cman_gtt = pb_cache_manager_create(ws->kman, 1000000, 2.0f, 0);
     if (!ws->cman_gtt)
         goto fail;
+    ws->cman_gtt_wc = pb_cache_manager_create(ws->kman, 1000000, 2.0f, 0);
+    if (!ws->cman_gtt_wc)
+        goto fail;
 
     if (ws->gen >= DRV_R600) {
         ws->surf_man = radeon_surface_manager_new(fd);
@@ -695,8 +703,12 @@ fail:
     pipe_mutex_unlock(fd_tab_mutex);
     if (ws->cman_gtt)
         ws->cman_gtt->destroy(ws->cman_gtt);
+    if (ws->cman_gtt_wc)
+        ws->cman_gtt_wc->destroy(ws->cman_gtt_wc);
     if (ws->cman_vram)
         ws->cman_vram->destroy(ws->cman_vram);
+    if (ws->cman_vram_gtt_wc)
+        ws->cman_vram_gtt_wc->destroy(ws->cman_vram_gtt_wc);
     if (ws->kman)
         ws->kman->destroy(ws->kman);
     if (ws->surf_man)
diff --git a/src/gallium/winsys/radeon/drm/radeon_drm_winsys.h b/src/gallium/winsys/radeon/drm/radeon_drm_winsys.h
index fc6f53b..ea6f7f0 100644
--- a/src/gallium/winsys/radeon/drm/radeon_drm_winsys.h
+++ b/src/gallium/winsys/radeon/drm/radeon_drm_winsys.h
@@ -58,7 +58,9 @@ struct radeon_drm_winsys {
 
     struct pb_manager *kman;
     struct pb_manager *cman_vram;
+    struct pb_manager *cman_vram_gtt_wc;
     struct pb_manager *cman_gtt;
+    struct pb_manager *cman_gtt_wc;
     struct radeon_surface_manager *surf_man;
 
     uint32_t num_cpus;      /* Number of CPUs. */
diff --git a/src/gallium/winsys/radeon/drm/radeon_winsys.h b/src/gallium/winsys/radeon/drm/radeon_winsys.h
index 6df1987..766071f 100644
--- a/src/gallium/winsys/radeon/drm/radeon_winsys.h
+++ b/src/gallium/winsys/radeon/drm/radeon_winsys.h
@@ -65,6 +65,10 @@ enum radeon_bo_domain { /* bitfield */
     RADEON_DOMAIN_VRAM_GTT = RADEON_DOMAIN_VRAM | RADEON_DOMAIN_GTT
 };
 
+enum radeon_bo_flag { /* bitfield */
+   RADEON_FLAG_GTT_WC = (1 << 0)
+};
+
 enum radeon_bo_usage { /* bitfield */
     RADEON_USAGE_READ = 2,
     RADEON_USAGE_WRITE = 4,
@@ -285,7 +289,8 @@ struct radeon_winsys {
                                        unsigned size,
                                        unsigned alignment,
                                        boolean use_reusable_pool,
-                                       enum radeon_bo_domain domain);
+                                       enum radeon_bo_domain domain,
+                                       enum radeon_bo_flag flags);
 
     struct radeon_winsys_cs_handle *(*buffer_get_cs_handle)(
             struct pb_buffer *buf);
-- 
2.0.0

_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev

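The driver-side decision this patch adds to r600_init_resource() can be condensed into a small sketch. This is a hypothetical simplification with made-up enum values (`pick_placement`, `USAGE_*`, `DOMAIN_*` are illustrative stand-ins, not the real Gallium/Mesa identifiers): dynamic and stream buffers get write-combined GTT, staging stays cached GTT because the CPU reads it back, and everything else defaults to VRAM with the WC flag set in case the BO ever falls back to GTT.

```c
#include <assert.h>

/* Illustrative stand-ins for the Mesa enums used in the patch above. */
enum usage  { USAGE_STAGING, USAGE_STREAM, USAGE_DYNAMIC,
              USAGE_DEFAULT, USAGE_IMMUTABLE };
enum domain { DOMAIN_GTT, DOMAIN_VRAM };
#define FLAG_GTT_WC (1u << 0)

struct placement { enum domain domain; unsigned flags; };

/* Mirrors the usage switch as of this patch (patch 3/5 later moves
 * stream/dynamic to VRAM). */
static struct placement pick_placement(enum usage u)
{
    struct placement p = { DOMAIN_VRAM, FLAG_GTT_WC };
    switch (u) {
    case USAGE_DYNAMIC:
    case USAGE_STREAM:
        p.domain = DOMAIN_GTT;   /* CPU -> GPU streaming: WC GTT */
        p.flags  = FLAG_GTT_WC;
        break;
    case USAGE_STAGING:
        p.domain = DOMAIN_GTT;   /* CPU reads back: keep it cached */
        p.flags  = 0;
        break;
    default:
        break;                   /* VRAM, WC only on GTT fallback */
    }
    return p;
}
```

The ordering matters: STAGING deliberately does not fall through into the WC cases, because uncached reads from a write-combined mapping would be very slow.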

* [PATCH 3/5] r600g/radeonsi: Prefer VRAM for CPU -> GPU streaming buffers
  2014-07-17 10:01 [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Michel Dänzer
                   ` (6 preceding siblings ...)
  2014-07-17 10:01 ` [PATCH 2/5] r600g/radeonsi: Use write-combined CPU mappings of some BOs in GTT Michel Dänzer
@ 2014-07-17 10:01 ` Michel Dänzer
  2014-07-17 10:01 ` [PATCH 4/5] r600g, radeonsi: Use write-combined persistent GTT mappings Michel Dänzer
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 38+ messages in thread
From: Michel Dänzer @ 2014-07-17 10:01 UTC (permalink / raw)
  To: dri-devel, mesa-dev

From: Michel Dänzer <michel.daenzer@amd.com>

Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
---
 src/gallium/drivers/radeon/r600_buffer_common.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/src/gallium/drivers/radeon/r600_buffer_common.c b/src/gallium/drivers/radeon/r600_buffer_common.c
index 4e6b897..40917f0 100644
--- a/src/gallium/drivers/radeon/r600_buffer_common.c
+++ b/src/gallium/drivers/radeon/r600_buffer_common.c
@@ -110,15 +110,13 @@ bool r600_init_resource(struct r600_common_screen *rscreen,
 	enum radeon_bo_flag flags = 0;
 
 	switch (res->b.b.usage) {
-	case PIPE_USAGE_DYNAMIC:
-	case PIPE_USAGE_STREAM:
-		flags = RADEON_FLAG_GTT_WC;
-		/* fall through */
 	case PIPE_USAGE_STAGING:
 		/* Transfers are likely to occur more often with these resources. */
 		res->domains = RADEON_DOMAIN_GTT;
 		break;
 	case PIPE_USAGE_DEFAULT:
+	case PIPE_USAGE_STREAM:
+	case PIPE_USAGE_DYNAMIC:
 	case PIPE_USAGE_IMMUTABLE:
 	default:
 		/* Not listing GTT here improves performance in some apps. */
-- 
2.0.0


* [PATCH 4/5] r600g, radeonsi: Use write-combined persistent GTT mappings
  2014-07-17 10:01 [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Michel Dänzer
                   ` (7 preceding siblings ...)
  2014-07-17 10:01 ` [PATCH 3/5] r600g/radeonsi: Prefer VRAM for CPU -> GPU streaming buffers Michel Dänzer
@ 2014-07-17 10:01 ` Michel Dänzer
  2014-07-17 10:56   ` Grigori Goronzy
  2014-07-17 12:00   ` Marek Olšák
  2014-07-17 10:01 ` [PATCH 5/5] r600g, radeonsi: Prefer VRAM for persistent mappings Michel Dänzer
                   ` (3 subsequent siblings)
  12 siblings, 2 replies; 38+ messages in thread
From: Michel Dänzer @ 2014-07-17 10:01 UTC (permalink / raw)
  To: dri-devel, mesa-dev

From: Michel Dänzer <michel.daenzer@amd.com>

This is hopefully safe: The kernel makes sure writes to these mappings
finish before the GPU might start reading from them, and the GPU caches
are invalidated at the start of a command stream.

Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
---
 src/gallium/drivers/radeon/r600_buffer_common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/gallium/drivers/radeon/r600_buffer_common.c b/src/gallium/drivers/radeon/r600_buffer_common.c
index 40917f0..c8a0723 100644
--- a/src/gallium/drivers/radeon/r600_buffer_common.c
+++ b/src/gallium/drivers/radeon/r600_buffer_common.c
@@ -131,7 +131,7 @@ bool r600_init_resource(struct r600_common_screen *rscreen,
 	    res->b.b.flags & (PIPE_RESOURCE_FLAG_MAP_PERSISTENT |
 			      PIPE_RESOURCE_FLAG_MAP_COHERENT)) {
 		res->domains = RADEON_DOMAIN_GTT;
-		flags = 0;
+		flags = RADEON_FLAG_GTT_WC;
 	}
 
 	/* Tiled textures are unmappable. Always put them in VRAM. */
-- 
2.0.0

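Taken together with the usage switch from patch 2/5, the placement logic after this patch runs in two steps: the usage-based default is computed first, then the persistent/coherent check overrides it. A minimal sketch of that ordering, with hypothetical names (`place`, `RES_*`, `DOMAIN_*` are illustrative, not the real identifiers):

```c
#include <assert.h>

enum domain { DOMAIN_GTT, DOMAIN_VRAM };
#define FLAG_GTT_WC    (1u << 0)
#define RES_PERSISTENT (1u << 0)
#define RES_COHERENT   (1u << 1)

/* Hypothetical condensation of r600_init_resource() with this patch
 * applied: persistent/coherent buffers still land in GTT, but the
 * mapping is now write-combined instead of cached. */
static void place(unsigned res_flags, enum domain *domain, unsigned *flags)
{
    *domain = DOMAIN_VRAM;       /* usage-based default (e.g. DEFAULT) */
    *flags  = FLAG_GTT_WC;

    if (res_flags & (RES_PERSISTENT | RES_COHERENT)) {
        *domain = DOMAIN_GTT;    /* persistent mappings stay in GTT... */
        *flags  = FLAG_GTT_WC;   /* ...previously this reset flags to 0 */
    }
}
```

Because the override runs last, it replaces whatever the usage switch chose, which is exactly why patch 5/5 only needs to touch this one branch to move persistent mappings to VRAM.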

* [PATCH 5/5] r600g, radeonsi: Prefer VRAM for persistent mappings
  2014-07-17 10:01 [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Michel Dänzer
                   ` (8 preceding siblings ...)
  2014-07-17 10:01 ` [PATCH 4/5] r600g, radeonsi: Use write-combined persistent GTT mappings Michel Dänzer
@ 2014-07-17 10:01 ` Michel Dänzer
  2014-07-17 12:17   ` Marek Olšák
  2014-07-17 10:09 ` [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Christian König
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 38+ messages in thread
From: Michel Dänzer @ 2014-07-17 10:01 UTC (permalink / raw)
  To: dri-devel, mesa-dev

From: Michel Dänzer <michel.daenzer@amd.com>

Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
---
 src/gallium/drivers/radeon/r600_buffer_common.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/src/gallium/drivers/radeon/r600_buffer_common.c b/src/gallium/drivers/radeon/r600_buffer_common.c
index c8a0723..6f7fa29 100644
--- a/src/gallium/drivers/radeon/r600_buffer_common.c
+++ b/src/gallium/drivers/radeon/r600_buffer_common.c
@@ -125,12 +125,10 @@ bool r600_init_resource(struct r600_common_screen *rscreen,
 		break;
 	}
 
-	/* Use GTT for all persistent mappings, because they are
-	 * always cached and coherent. */
 	if (res->b.b.target == PIPE_BUFFER &&
 	    res->b.b.flags & (PIPE_RESOURCE_FLAG_MAP_PERSISTENT |
 			      PIPE_RESOURCE_FLAG_MAP_COHERENT)) {
-		res->domains = RADEON_DOMAIN_GTT;
+		res->domains = RADEON_DOMAIN_VRAM;
 		flags = RADEON_FLAG_GTT_WC;
 	}
 
-- 
2.0.0


* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-17 10:01 [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Michel Dänzer
                   ` (9 preceding siblings ...)
  2014-07-17 10:01 ` [PATCH 5/5] r600g, radeonsi: Prefer VRAM for persistent mappings Michel Dänzer
@ 2014-07-17 10:09 ` Christian König
  2014-07-18  3:07   ` Michel Dänzer
  2014-07-17 12:25 ` Marek Olšák
  2014-07-17 13:35 ` Alex Deucher
  12 siblings, 1 reply; 38+ messages in thread
From: Christian König @ 2014-07-17 10:09 UTC (permalink / raw)
  To: Michel Dänzer, dri-devel, mesa-dev

Am 17.07.2014 12:01, schrieb Michel Dänzer:
> In order to try and improve X(Shm)PutImage performance with glamor, I
> implemented support for write-combined CPU mappings of BOs in GTT.
>
> This did provide a nice speedup, but to my surprise, using VRAM instead
> of write-combined GTT turned out to be even faster in general on my
> Kaveri machine, both for the internal GPU and for discrete GPUs.
>
> However, I've kept the changes from GTT to VRAM separated, in case this
> turns out to be a loss on other setups.
>
> Kernel patches:
>
> [PATCH 1/5] drm/radeon: Remove radeon_gart_restore()
> [PATCH 2/5] drm/radeon: Pass GART page flags to
> [PATCH 3/5] drm/radeon: Allow write-combined CPU mappings of BOs in
> [PATCH 4/5] drm/radeon: Use write-combined CPU mappings of rings and

Those four are Reviewed-by: Christian König <christian.koenig@amd.com>

> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI

I'm still not very keen on this change, since I don't understand why it's 
faster than GTT. It definitely needs more testing on a wider range of 
systems. Maybe limit it to APUs for now?

Regards,
Christian.

>
> Mesa patches:
>
> [PATCH 1/5] winsys/radeon: Use separate caching buffer managers for
> [PATCH 2/5] r600g/radeonsi: Use write-combined CPU mappings of some
> [PATCH 3/5] r600g/radeonsi: Prefer VRAM for CPU -> GPU streaming
> [PATCH 4/5] r600g,radeonsi: Use write-combined persistent GTT
> [PATCH 5/5] r600g,radeonsi: Prefer VRAM for persistent mappings
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel


* Re: [PATCH 4/5] r600g, radeonsi: Use write-combined persistent GTT mappings
  2014-07-17 10:01 ` [PATCH 4/5] r600g, radeonsi: Use write-combined persistent GTT mappings Michel Dänzer
@ 2014-07-17 10:56   ` Grigori Goronzy
  2014-07-17 12:00   ` Marek Olšák
  1 sibling, 0 replies; 38+ messages in thread
From: Grigori Goronzy @ 2014-07-17 10:56 UTC (permalink / raw)
  To: Michel Dänzer, dri-devel, mesa-dev


On 17.07.2014 12:01, Michel Dänzer wrote:
> From: Michel Dänzer <michel.daenzer@amd.com>
> 
> This is hopefully safe: The kernel makes sure writes to these mappings
> finish before the GPU might start reading from them, and the GPU caches
> are invalidated at the start of a command stream.
>

Aren't CPU reads from write-combined GTT memory extraordinarily slow,
because they're uncached? And don't you need the right access patterns
to make write combining perform well?

Grigori

> Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
> ---
>  src/gallium/drivers/radeon/r600_buffer_common.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/gallium/drivers/radeon/r600_buffer_common.c b/src/gallium/drivers/radeon/r600_buffer_common.c
> index 40917f0..c8a0723 100644
> --- a/src/gallium/drivers/radeon/r600_buffer_common.c
> +++ b/src/gallium/drivers/radeon/r600_buffer_common.c
> @@ -131,7 +131,7 @@ bool r600_init_resource(struct r600_common_screen *rscreen,
>  	    res->b.b.flags & (PIPE_RESOURCE_FLAG_MAP_PERSISTENT |
>  			      PIPE_RESOURCE_FLAG_MAP_COHERENT)) {
>  		res->domains = RADEON_DOMAIN_GTT;
> -		flags = 0;
> +		flags = RADEON_FLAG_GTT_WC;
>  	}
>  
>  	/* Tiled textures are unmappable. Always put them in VRAM. */
> 



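Grigori's concern above is the classic write-combining trade-off: WC mappings are uncached, so any CPU read from them stalls badly, and the CPU's write-combining buffers only merge stores into full bursts when the writes are sequential and forward-going. A hypothetical illustration of the "good" upload pattern (the function name is made up, and plain heap memory stands in for a real WC mapping):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* WC-friendly upload: write forward and sequentially, and never read
 * the destination mapping back. Scattered stores or read-modify-write
 * cycles on 'wc_map' would defeat write combining entirely. */
static void upload_wc(uint8_t *wc_map, const uint8_t *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        wc_map[i] = src[i];     /* sequential forward stores only */
}
```

In practice a plain memcpy() into the mapping achieves the same thing on most libcs; the point is that the driver-visible data flow must be one-directional, which is why the staging (read-back) path in these patches stays on cached GTT.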

* Re: [PATCH 4/5] r600g, radeonsi: Use write-combined persistent GTT mappings
  2014-07-17 10:01 ` [PATCH 4/5] r600g, radeonsi: Use write-combined persistent GTT mappings Michel Dänzer
  2014-07-17 10:56   ` Grigori Goronzy
@ 2014-07-17 12:00   ` Marek Olšák
  2014-07-18  3:19     ` [Mesa-dev] " Michel Dänzer
  1 sibling, 1 reply; 38+ messages in thread
From: Marek Olšák @ 2014-07-17 12:00 UTC (permalink / raw)
  To: Michel Dänzer; +Cc: mesa-dev, dri-devel

The resource flags actually tell you what you can do. If the COHERENT
flag is set, the mapping must be cached. If it's unset, it's up to
you.

If write-combining is faster for vertex uploads, then Glamor shouldn't
set the coherent flag.

Marek

On Thu, Jul 17, 2014 at 12:01 PM, Michel Dänzer <michel@daenzer.net> wrote:
> From: Michel Dänzer <michel.daenzer@amd.com>
>
> This is hopefully safe: The kernel makes sure writes to these mappings
> finish before the GPU might start reading from them, and the GPU caches
> are invalidated at the start of a command stream.
>
> Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
> ---
>  src/gallium/drivers/radeon/r600_buffer_common.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/src/gallium/drivers/radeon/r600_buffer_common.c b/src/gallium/drivers/radeon/r600_buffer_common.c
> index 40917f0..c8a0723 100644
> --- a/src/gallium/drivers/radeon/r600_buffer_common.c
> +++ b/src/gallium/drivers/radeon/r600_buffer_common.c
> @@ -131,7 +131,7 @@ bool r600_init_resource(struct r600_common_screen *rscreen,
>             res->b.b.flags & (PIPE_RESOURCE_FLAG_MAP_PERSISTENT |
>                               PIPE_RESOURCE_FLAG_MAP_COHERENT)) {
>                 res->domains = RADEON_DOMAIN_GTT;
> -               flags = 0;
> +               flags = RADEON_FLAG_GTT_WC;
>         }
>
>         /* Tiled textures are unmappable. Always put them in VRAM. */
> --
> 2.0.0
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel

* Re: [PATCH 5/5] r600g, radeonsi: Prefer VRAM for persistent mappings
  2014-07-17 10:01 ` [PATCH 5/5] r600g, radeonsi: Prefer VRAM for persistent mappings Michel Dänzer
@ 2014-07-17 12:17   ` Marek Olšák
  0 siblings, 0 replies; 38+ messages in thread
From: Marek Olšák @ 2014-07-17 12:17 UTC (permalink / raw)
  To: Michel Dänzer; +Cc: mesa-dev, dri-devel

As I said on patch 4, this would be okay if the COHERENT flag weren't set.

If you removed the PERSISTENT flag from the conditional, the placement
of persistent non-coherent buffers would be driven by the "usage",
meaning that you would be able to get any kind of placement you want.

Marek

On Thu, Jul 17, 2014 at 12:01 PM, Michel Dänzer <michel@daenzer.net> wrote:
> From: Michel Dänzer <michel.daenzer@amd.com>
>
> Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
> ---
>  src/gallium/drivers/radeon/r600_buffer_common.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/src/gallium/drivers/radeon/r600_buffer_common.c b/src/gallium/drivers/radeon/r600_buffer_common.c
> index c8a0723..6f7fa29 100644
> --- a/src/gallium/drivers/radeon/r600_buffer_common.c
> +++ b/src/gallium/drivers/radeon/r600_buffer_common.c
> @@ -125,12 +125,10 @@ bool r600_init_resource(struct r600_common_screen *rscreen,
>                 break;
>         }
>
> -       /* Use GTT for all persistent mappings, because they are
> -        * always cached and coherent. */
>         if (res->b.b.target == PIPE_BUFFER &&
>             res->b.b.flags & (PIPE_RESOURCE_FLAG_MAP_PERSISTENT |
>                               PIPE_RESOURCE_FLAG_MAP_COHERENT)) {
> -               res->domains = RADEON_DOMAIN_GTT;
> +               res->domains = RADEON_DOMAIN_VRAM;
>                 flags = RADEON_FLAG_GTT_WC;
>         }
>
> --
> 2.0.0
>
> _______________________________________________
> mesa-dev mailing list
> mesa-dev@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/mesa-dev

* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-17 10:01 [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Michel Dänzer
                   ` (10 preceding siblings ...)
  2014-07-17 10:09 ` [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Christian König
@ 2014-07-17 12:25 ` Marek Olšák
  2014-07-17 13:35 ` Alex Deucher
  12 siblings, 0 replies; 38+ messages in thread
From: Marek Olšák @ 2014-07-17 12:25 UTC (permalink / raw)
  To: Michel Dänzer; +Cc: mesa-dev, dri-devel

On Thu, Jul 17, 2014 at 12:01 PM, Michel Dänzer <michel@daenzer.net> wrote:
> Mesa patches:
>
> [PATCH 1/5] winsys/radeon: Use separate caching buffer managers for
> [PATCH 2/5] r600g/radeonsi: Use write-combined CPU mappings of some
> [PATCH 3/5] r600g/radeonsi: Prefer VRAM for CPU -> GPU streaming

For these 3 patches:

Reviewed-by: Marek Olšák <marek.olsak@amd.com>

Marek

* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-17 10:01 [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Michel Dänzer
                   ` (11 preceding siblings ...)
  2014-07-17 12:25 ` Marek Olšák
@ 2014-07-17 13:35 ` Alex Deucher
  12 siblings, 0 replies; 38+ messages in thread
From: Alex Deucher @ 2014-07-17 13:35 UTC (permalink / raw)
  To: Michel Dänzer; +Cc: mesa-dev, Maling list - DRI developers

On Thu, Jul 17, 2014 at 6:01 AM, Michel Dänzer <michel@daenzer.net> wrote:
> In order to try and improve X(Shm)PutImage performance with glamor, I
> implemented support for write-combined CPU mappings of BOs in GTT.
>
> This did provide a nice speedup, but to my surprise, using VRAM instead
> of write-combined GTT turned out to be even faster in general on my
> Kaveri machine, both for the internal GPU and for discrete GPUs.
>
> However, I've kept the changes from GTT to VRAM separated, in case this
> turns out to be a loss on other setups.
>
> Kernel patches:
>
> [PATCH 1/5] drm/radeon: Remove radeon_gart_restore()
> [PATCH 2/5] drm/radeon: Pass GART page flags to
> [PATCH 3/5] drm/radeon: Allow write-combined CPU mappings of BOs in
> [PATCH 4/5] drm/radeon: Use write-combined CPU mappings of rings and
> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI

Applied 1-4 to my 3.17 tree. Thanks!

Alex

>
> Mesa patches:
>
> [PATCH 1/5] winsys/radeon: Use separate caching buffer managers for
> [PATCH 2/5] r600g/radeonsi: Use write-combined CPU mappings of some
> [PATCH 3/5] r600g/radeonsi: Prefer VRAM for CPU -> GPU streaming
> [PATCH 4/5] r600g,radeonsi: Use write-combined persistent GTT
> [PATCH 5/5] r600g,radeonsi: Prefer VRAM for persistent mappings
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel

* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-17 10:09 ` [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Christian König
@ 2014-07-18  3:07   ` Michel Dänzer
  2014-07-18  3:58     ` Dieter Nützel
  2014-07-18 15:47     ` Christian König
  0 siblings, 2 replies; 38+ messages in thread
From: Michel Dänzer @ 2014-07-18  3:07 UTC (permalink / raw)
  To: Christian König; +Cc: mesa-dev, dri-devel

On 17.07.2014 19:09, Christian König wrote:
> Am 17.07.2014 12:01, schrieb Michel Dänzer:
>> In order to try and improve X(Shm)PutImage performance with glamor, I
>> implemented support for write-combined CPU mappings of BOs in GTT.
>>
>> This did provide a nice speedup, but to my surprise, using VRAM instead
>> of write-combined GTT turned out to be even faster in general on my
>> Kaveri machine, both for the internal GPU and for discrete GPUs.
>>
>> However, I've kept the changes from GTT to VRAM separated, in case this
>> turns out to be a loss on other setups.
>>
>> Kernel patches:
>>
>> [PATCH 1/5] drm/radeon: Remove radeon_gart_restore()
>> [PATCH 2/5] drm/radeon: Pass GART page flags to
>> [PATCH 3/5] drm/radeon: Allow write-combined CPU mappings of BOs in
>> [PATCH 4/5] drm/radeon: Use write-combined CPU mappings of rings and
> 
> Those four are Reviewed-by: Christian König <christian.koenig@amd.com>

Thanks!


>> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI
> 
> I'm still not very keen with this change since I still don't understand
> the reason why it's faster than with GTT. Definitely needs more testing
> on a wider range of systems.

Sure. If anyone wants to give this patch a spin and see if they can
measure any performance difference, good or bad, that would be interesting.

> Maybe limit it to APUs for now?

But IIRC, CPU writes to VRAM vs. write-combined GTT are actually an even
bigger win with dedicated GPUs than with the Kaveri built-in GPU on my
system. I suspect it may depend on the bandwidth available for PCIe vs.
system memory though.


-- 
Earthling Michel Dänzer            |                  http://www.amd.com
Libre software enthusiast          |                Mesa and X developer

* Re: [Mesa-dev] [PATCH 4/5] r600g, radeonsi: Use write-combined persistent GTT mappings
  2014-07-17 12:00   ` Marek Olšák
@ 2014-07-18  3:19     ` Michel Dänzer
  2014-07-18 11:45       ` Marek Olšák
  2014-07-18 20:55       ` Marek Olšák
  0 siblings, 2 replies; 38+ messages in thread
From: Michel Dänzer @ 2014-07-18  3:19 UTC (permalink / raw)
  To: Marek Olšák; +Cc: mesa-dev, dri-devel

On 17.07.2014 21:00, Marek Olšák wrote:
> On Thu, Jul 17, 2014 at 12:01 PM, Michel Dänzer <michel@daenzer.net> wrote:
>> From: Michel Dänzer <michel.daenzer@amd.com>
>>
>> This is hopefully safe: The kernel makes sure writes to these mappings
>> finish before the GPU might start reading from them, and the GPU caches
>> are invalidated at the start of a command stream.
>>
> The resource flags actually tell you what you can do. If the COHERENT
> flag is set, the mapping must be cached.

Why is that required? As I explain above, we should satisfy the
requirements of the ARB_buffer_storage extension AFAICT.


As pointed out by you and Grigori in other posts, I should probably just
drop the special treatment of persistent mappings though, so the
placement and flags are derived from the buffer usage.


-- 
Earthling Michel Dänzer            |                  http://www.amd.com
Libre software enthusiast          |                Mesa and X developer

* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-18  3:07   ` Michel Dänzer
@ 2014-07-18  3:58     ` Dieter Nützel
  2014-07-18  9:52       ` Michel Dänzer
  2014-07-18 15:47     ` Christian König
  1 sibling, 1 reply; 38+ messages in thread
From: Dieter Nützel @ 2014-07-18  3:58 UTC (permalink / raw)
  To: Michel Dänzer; +Cc: mesa-dev, dri-devel

Am 18.07.2014 05:07, schrieb Michel Dänzer:
> On 17.07.2014 19:09, Christian König wrote:
>> Am 17.07.2014 12:01, schrieb Michel Dänzer:
>>> In order to try and improve X(Shm)PutImage performance with glamor, I
>>> implemented support for write-combined CPU mappings of BOs in GTT.
>>> 
>>> This did provide a nice speedup, but to my surprise, using VRAM 
>>> instead
>>> of write-combined GTT turned out to be even faster in general on my
>>> Kaveri machine, both for the internal GPU and for discrete GPUs.
>>> 
>>> However, I've kept the changes from GTT to VRAM separated, in case 
>>> this
>>> turns out to be a loss on other setups.
>>> 
>>> Kernel patches:
>>> 
>>> [PATCH 1/5] drm/radeon: Remove radeon_gart_restore()
>>> [PATCH 2/5] drm/radeon: Pass GART page flags to
>>> [PATCH 3/5] drm/radeon: Allow write-combined CPU mappings of BOs in
>>> [PATCH 4/5] drm/radeon: Use write-combined CPU mappings of rings and
>> 
>> Those four are Reviewed-by: Christian König <christian.koenig@amd.com>
> 
> Thanks!
> 
> 
>>> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI
>> 
>> I'm still not very keen with this change since I still don't 
>> understand
>> the reason why it's faster than with GTT. Definitely needs more 
>> testing
>> on a wider range of systems.
> 
> Sure. If anyone wants to give this patch a spin and see if they can
> measure any performance difference, good or bad, that would be 
> interesting.
> 
>> Maybe limit it to APUs for now?
> 
> But IIRC, CPU writes to VRAM vs. write-combined GTT are actually an 
> even
> bigger win with dedicated GPUs than with the Kaveri built-in GPU on my
> system. I suspect it may depend on the bandwidth available for PCIe vs.
> system memory though.

Michel,

please, please do NOT change anything on this!;-)
You all know that I currently can only run this on my poor Duron 1800 
with RV730 (AGP), but...

With this, all 'objview' demos (mesa-demos) run at 60 fps (vsync),
even with chip set/CPU power management enabled (athcool on).

If I set vblank_mode=0
the slowest GreatLakesBiplaneHP.obj
runs at ~100 fps (~16 fps before) => 6x speedup.
(Even 5 planes run at 30 fps) - Wow!!!

'buddha' went from ~40 fps up to ~175 fps
'bunny' went from ~60 fps up to ~215 fps
'bobcat' shows not as big an improvement, 'only' 70 fps more

R600_HYPERZ=1
helps somewhat, too, but not in all cases.

Overall X/Kwin eXperience is much better.
Let me know which benchmarks you need.

Cheers,
   Dieter

BTW, does anyone know how I can override BIOS GTT settings?
I can only set 256 MB max. - BIOS patching?
With agpmode=-1 I can run with 1024 MB GTT

* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-18  3:58     ` Dieter Nützel
@ 2014-07-18  9:52       ` Michel Dänzer
  0 siblings, 0 replies; 38+ messages in thread
From: Michel Dänzer @ 2014-07-18  9:52 UTC (permalink / raw)
  To: Dieter Nützel; +Cc: mesa-dev, dri-devel

On 18.07.2014 12:58, Dieter Nützel wrote:
> Am 18.07.2014 05:07, schrieb Michel Dänzer:
>> On 17.07.2014 19:09, Christian König wrote:
>>> Am 17.07.2014 12:01, schrieb Michel Dänzer:
>>>> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI
>>>
>>> I'm still not very keen with this change since I still don't understand
>>> the reason why it's faster than with GTT. Definitely needs more testing
>>> on a wider range of systems.
>>
>> Sure. If anyone wants to give this patch a spin and see if they can
>> measure any performance difference, good or bad, that would be
>> interesting.
>>
>>> Maybe limit it to APUs for now?
>>
>> But IIRC, CPU writes to VRAM vs. write-combined GTT are actually an even
>> bigger win with dedicated GPUs than with the Kaveri built-in GPU on my
>> system. I suspect it may depend on the bandwidth available for PCIe vs.
>> system memory though.
> 
> Michel,
> 
> please, please do NOT change anything on this!;-)
> You all know that I currently can only run this on my poor Duron 1800
> with RV730 (AGP), but...
> 
> With this all 'objview' demos (mesa-demos) run at 60 fps (vsync),
> even with chip set/CPU power management enabled (athcool on).
> 
> If I set vblank_mode=0
> the slowest GreatLakesBiplaneHP.obj
> run at ~100 fps (~16 fps before) => 6x speedup.
> (Even 5 planes run at 30 fps) - Wow!!!

That's great, but note that the disputed change above only has an effect
with SI or newer GPUs, i.e. none with yours.

I suspect that speedup is because the app ends up using effectively
static vertex/index buffers, which are now in VRAM instead of in GTT due
to my Mesa changes.


> Overall X/Kwin eXperience is much better.
> Let me know which benchmarks you need.

I'm not looking for anything in particular, basically anything where you
care about performance. E.g. the usual suspects in PTS.


> BTW Do anyone know how I can override BIOS GTT settings?
> I can only set 256 MB max. - BIOS patching?

Your AGP bridge hardware might not support more.

> With agpmode=-1 I can run with 1024 MB GTT

What effect does that have on performance? I'm not sure if AGP provides
any benefit for GPUs with PCIe GART.


-- 
Earthling Michel Dänzer            |                  http://www.amd.com
Libre software enthusiast          |                Mesa and X developer

* Re: [PATCH 4/5] r600g, radeonsi: Use write-combined persistent GTT mappings
  2014-07-18  3:19     ` [Mesa-dev] " Michel Dänzer
@ 2014-07-18 11:45       ` Marek Olšák
  2014-07-18 19:50         ` [Mesa-dev] " Grigori Goronzy
  2014-07-23  6:32         ` Michel Dänzer
  2014-07-18 20:55       ` Marek Olšák
  1 sibling, 2 replies; 38+ messages in thread
From: Marek Olšák @ 2014-07-18 11:45 UTC (permalink / raw)
  To: Michel Dänzer; +Cc: mesa-dev, dri-devel

If the requirements of GL_MAP_COHERENT_BIT are satisfied, then the
patch is okay.

Marek


On Fri, Jul 18, 2014 at 5:19 AM, Michel Dänzer <michel@daenzer.net> wrote:
> On 17.07.2014 21:00, Marek Olšák wrote:
>> On Thu, Jul 17, 2014 at 12:01 PM, Michel Dänzer <michel@daenzer.net> wrote:
>>> From: Michel Dänzer <michel.daenzer@amd.com>
>>>
>>> This is hopefully safe: The kernel makes sure writes to these mappings
>>> finish before the GPU might start reading from them, and the GPU caches
>>> are invalidated at the start of a command stream.
>>>
>> The resource flags actually tell you what you can do. If the COHERENT
>> flag is set, the mapping must be cached.
>
> Why is that required? As I explain above, we should satisfy the
> requirements of the ARB_buffer_storage extension AFAICT.
>
>
> As pointed out by you and Grigori in other posts, I should probably just
> drop the special treatment of persistent mappings though, so the
> placement and flags are derived from the buffer usage.
>
>
> --
> Earthling Michel Dänzer            |                  http://www.amd.com
> Libre software enthusiast          |                Mesa and X developer

* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-18  3:07   ` Michel Dänzer
  2014-07-18  3:58     ` Dieter Nützel
@ 2014-07-18 15:47     ` Christian König
  2014-07-18 17:47       ` Marek Olšák
  2014-07-19  1:15       ` Michel Dänzer
  1 sibling, 2 replies; 38+ messages in thread
From: Christian König @ 2014-07-18 15:47 UTC (permalink / raw)
  To: Michel Dänzer; +Cc: mesa-dev, dri-devel

Am 18.07.2014 05:07, schrieb Michel Dänzer:
>>> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI
>> I'm still not very keen with this change since I still don't understand
>> the reason why it's faster than with GTT. Definitely needs more testing
>> on a wider range of systems.
> Sure. If anyone wants to give this patch a spin and see if they can
> measure any performance difference, good or bad, that would be interesting.
>
>> Maybe limit it to APUs for now?
> But IIRC, CPU writes to VRAM vs. write-combined GTT are actually an even
> bigger win with dedicated GPUs than with the Kaveri built-in GPU on my
> system. I suspect it may depend on the bandwidth available for PCIe vs.
> system memory though.

I've made a few tests today with the kernel part of the patches running 
Xonotic on Ultra in 1920 x 1080.

Without any patches I get around ~47.0fps on average with my dedicated 
HD7870.

Adding only "drm/radeon: Use write-combined CPU mappings of rings and 
IBs on >= SI" and that goes down to ~45.3fps.

Adding on top of that "drm/radeon: Use VRAM for indirect buffers on >= 
SI" and the frame rate goes down to ~27.74fps.

So enabling this unconditionally is definitely not a good idea. What I 
don't understand yet is why using USWC reduces the fps on SI as well. It 
looks like the reads from the IB buffer for command stream validation on 
SI affect that more than I thought.

Christian.

* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-18 15:47     ` Christian König
@ 2014-07-18 17:47       ` Marek Olšák
  2014-07-18 22:03         ` Marek Olšák
  2014-07-19  1:15       ` Michel Dänzer
  1 sibling, 1 reply; 38+ messages in thread
From: Marek Olšák @ 2014-07-18 17:47 UTC (permalink / raw)
  Cc: mesa-dev, Michel Dänzer, dri-devel

On Fri, Jul 18, 2014 at 5:47 PM, Christian König
<deathsimple@vodafone.de> wrote:
> Am 18.07.2014 05:07, schrieb Michel Dänzer:
>>>>
>>>> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI
>>>
>>> I'm still not very keen with this change since I still don't understand
>>> the reason why it's faster than with GTT. Definitely needs more testing
>>> on a wider range of systems.
>>
>> Sure. If anyone wants to give this patch a spin and see if they can
>> measure any performance difference, good or bad, that would be
>> interesting.
>>
>>> Maybe limit it to APUs for now?
>>
>> But IIRC, CPU writes to VRAM vs. write-combined GTT are actually an even
>> bigger win with dedicated GPUs than with the Kaveri built-in GPU on my
>> system. I suspect it may depend on the bandwidth available for PCIe vs.
>> system memory though.
>
>
> I've made a few tests today with the kernel part of the patches running
> Xonotic on Ultra in 1920 x 1080.
>
> Without any patches I get around ~47.0fps on average with my dedicated
> HD7870.
>
> Adding only "drm/radeon: Use write-combined CPU mappings of rings and IBs on
>>= SI" and that goes down to ~45.3fps.
>
> Adding on top of that "drm/radeon: Use VRAM for indirect buffers on >= SI"
> and the frame rate goes down to ~27.74fps.
>
> So enabling this unconditionally is definitely not a good idea. What I don't
> understand yet is why using USWC reduces the fps on SI as well. It looks
> like the reads from the IB buffer for command stream validation on SI affect
> that more than I thought.

Yes, there is a CS parser with SI, but shouldn't the parser read from
the CPU copy that came with the ioctl instead? Anyway, I recommend
only using VRAM for IBs which are not parsed and patched by the CPU
(which reduces it down to CIK graphics and DMA IBs, right?)
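
The recommendation above boils down to keying the IB placement off whether
the kernel CS checker still has to read (and possibly patch) the IB contents.
A minimal sketch of that decision, with hypothetical names — the enum, the
helper, and the flag are illustrative, not radeon's actual code:

```c
#include <assert.h>

/* Hypothetical chip ordering: SI predates CIK. */
enum family { FAMILY_SI = 0, FAMILY_CIK = 1 };

/* On SI the kernel CS checker reads the IB, so CPU reads matter and
 * VRAM / write-combined placement hurts badly. On >= CIK, graphics and
 * DMA IBs are consumed by the GPU without CPU parsing, so VRAM can be
 * considered there. Purely illustrative. */
static int ib_may_use_vram(enum family fam, int cpu_parses_ib)
{
    if (cpu_parses_ib)
        return 0; /* uncached CPU reads of the IB would be very slow */
    return fam >= FAMILY_CIK;
}
```

Under this sketch, SI IBs always stay out of VRAM, matching the benchmark
regression reported earlier in the thread.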

Marek

* Re: [Mesa-dev] [PATCH 4/5] r600g, radeonsi: Use write-combined persistent GTT mappings
  2014-07-18 11:45       ` Marek Olšák
@ 2014-07-18 19:50         ` Grigori Goronzy
  2014-07-18 20:52           ` Marek Olšák
  2014-07-23  6:32         ` Michel Dänzer
  1 sibling, 1 reply; 38+ messages in thread
From: Grigori Goronzy @ 2014-07-18 19:50 UTC (permalink / raw)
  To: Marek Olšák, Michel Dänzer; +Cc: mesa-dev, dri-devel


On 18.07.2014 13:45, Marek Olšák wrote:
> If the requirements of GL_MAP_COHERENT_BIT are satisfied, then the
> patch is okay.
>

Apart from correctness, I still wonder how this will affect performance,
most notably CPU reads. This change unconditionally uses write-combined,
uncached memory for MAP_COHERENT buffers. Unless I am missing something,
CPU reads will be slow, even if the buffer storage flags indicate that
the buffer will be read by the CPU. Maybe it's a good idea to avoid
write combined memory if the buffer storage flags include MAP_READ_BIT?
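
The heuristic suggested here could be sketched as follows — a minimal,
hypothetical example whose flag and enum names are invented, not the actual
gallium/winsys identifiers: any declared CPU-read intent forces cached
system memory, and write-combined placement is reserved for write-only use.

```c
#include <assert.h>

enum bo_placement { GTT_CACHED, GTT_WC };

#define BO_MAP_READ  (1u << 0) /* CPU will read the mapping */
#define BO_MAP_WRITE (1u << 1) /* CPU will write the mapping */

static enum bo_placement choose_placement(unsigned flags)
{
    /* Reads from uncached/WC memory are very slow, so read intent
     * overrides everything else. */
    if (flags & BO_MAP_READ)
        return GTT_CACHED;
    /* Write-only persistent/streaming mappings can use WC. */
    return GTT_WC;
}
```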

Grigori

> Marek
> 
> 
> On Fri, Jul 18, 2014 at 5:19 AM, Michel Dänzer <michel@daenzer.net> wrote:
>> On 17.07.2014 21:00, Marek Olšák wrote:
>>> On Thu, Jul 17, 2014 at 12:01 PM, Michel Dänzer <michel@daenzer.net> wrote:
>>>> From: Michel Dänzer <michel.daenzer@amd.com>
>>>>
>>>> This is hopefully safe: The kernel makes sure writes to these mappings
>>>> finish before the GPU might start reading from them, and the GPU caches
>>>> are invalidated at the start of a command stream.
>>>>
>>> The resource flags actually tell you what you can do. If the COHERENT
>>> flag is set, the mapping must be cached.
>>
>> Why is that required? As I explain above, we should satisfy the
>> requirements of the ARB_buffer_storage extension AFAICT.
>>
>>
>> As pointed out by you and Grigori in other posts, I should probably just
>> drop the special treatment of persistent mappings though, so the
>> placement and flags are derived from the buffer usage.
>>
>>
>> --
>> Earthling Michel Dänzer            |                  http://www.amd.com
>> Libre software enthusiast          |                Mesa and X developer
> 




* Re: [Mesa-dev] [PATCH 4/5] r600g, radeonsi: Use write-combined persistent GTT mappings
  2014-07-18 19:50         ` [Mesa-dev] " Grigori Goronzy
@ 2014-07-18 20:52           ` Marek Olšák
  0 siblings, 0 replies; 38+ messages in thread
From: Marek Olšák @ 2014-07-18 20:52 UTC (permalink / raw)
  To: Grigori Goronzy; +Cc: mesa-dev, Michel Dänzer, dri-devel

GL_MAP_READ_BIT is always unconditionally set for glBufferData, which
is the most-used function. However, st/mesa can look at the
immutability flag to distinguish between BufferData and BufferStorage
before pipe_buffer_create is called, and set the read flag if the
caller is BufferStorage and GL_MAP_READ_BIT is flagged, and not set
any flag otherwise.
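
That distinction could look roughly like this — a sketch only, with invented
names; the real st/mesa code paths and flag plumbing differ:

```c
#include <assert.h>

/* Stand-ins for the real GL and driver flags; hypothetical names. */
#define MAP_READ_BIT  0x0001u  /* stand-in for GL_MAP_READ_BIT */
#define HINT_CPU_READ (1u << 0)

/* For glBufferData (mutable), GL_MAP_READ_BIT is effectively always
 * set, so it carries no information; only for immutable storage
 * (glBufferStorage) does it reflect real CPU-read intent. */
static unsigned cpu_access_hint(int immutable, unsigned gl_flags)
{
    if (immutable && (gl_flags & MAP_READ_BIT))
        return HINT_CPU_READ; /* BufferStorage with declared reads */
    return 0;                 /* BufferData: ignore the implicit bit */
}
```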

Marek

On Fri, Jul 18, 2014 at 9:50 PM, Grigori Goronzy <greg@chown.ath.cx> wrote:
> On 18.07.2014 13:45, Marek Olšák wrote:
>> If the requirements of GL_MAP_COHERENT_BIT are satisfied, then the
>> patch is okay.
>>
>
> Apart from correctness, I still wonder how this will affect performance,
> most notably CPU reads. This change unconditionally uses write-combined,
> uncached memory for MAP_COHERENT buffers. Unless I am missing something,
> CPU reads will be slow, even if the buffer storage flags indicate that
> the buffer will be read by the CPU. Maybe it's a good idea to avoid
> write combined memory if the buffer storage flags include MAP_READ_BIT?
>
> Grigori
>
>> Marek
>>
>>
>> On Fri, Jul 18, 2014 at 5:19 AM, Michel Dänzer <michel@daenzer.net> wrote:
>>> On 17.07.2014 21:00, Marek Olšák wrote:
>>>> On Thu, Jul 17, 2014 at 12:01 PM, Michel Dänzer <michel@daenzer.net> wrote:
>>>>> From: Michel Dänzer <michel.daenzer@amd.com>
>>>>>
>>>>> This is hopefully safe: The kernel makes sure writes to these mappings
>>>>> finish before the GPU might start reading from them, and the GPU caches
>>>>> are invalidated at the start of a command stream.
>>>>>
>>>> The resource flags actually tell you what you can do. If the COHERENT
>>>> flag is set, the mapping must be cached.
>>>
>>> Why is that required? As I explain above, we should satisfy the
>>> requirements of the ARB_buffer_storage extension AFAICT.
>>>
>>>
>>> As pointed out by you and Grigori in other posts, I should probably just
>>> drop the special treatment of persistent mappings though, so the
>>> placement and flags are derived from the buffer usage.
>>>
>>>
>>> --
>>> Earthling Michel Dänzer            |                  http://www.amd.com
>>> Libre software enthusiast          |                Mesa and X developer
>>
>
>

* Re: [PATCH 4/5] r600g, radeonsi: Use write-combined persistent GTT mappings
  2014-07-18  3:19     ` [Mesa-dev] " Michel Dänzer
  2014-07-18 11:45       ` Marek Olšák
@ 2014-07-18 20:55       ` Marek Olšák
  1 sibling, 0 replies; 38+ messages in thread
From: Marek Olšák @ 2014-07-18 20:55 UTC (permalink / raw)
  To: Michel Dänzer; +Cc: mesa-dev, dri-devel

On Fri, Jul 18, 2014 at 5:19 AM, Michel Dänzer <michel@daenzer.net> wrote:
> On 17.07.2014 21:00, Marek Olšák wrote:
>> On Thu, Jul 17, 2014 at 12:01 PM, Michel Dänzer <michel@daenzer.net> wrote:
>>> From: Michel Dänzer <michel.daenzer@amd.com>
>>>
>>> This is hopefully safe: The kernel makes sure writes to these mappings
>>> finish before the GPU might start reading from them, and the GPU caches
>>> are invalidated at the start of a command stream.
>>>
>> The resource flags actually tell you what you can do. If the COHERENT
>> flag is set, the mapping must be cached.
>
> Why is that required? As I explain above, we should satisfy the
> requirements of the ARB_buffer_storage extension AFAICT.
>
>
> As pointed out by you and Grigori in other posts, I should probably just
> drop the special treatment of persistent mappings though, so the
> placement and flags are derived from the buffer usage.

Yes, please drop the special treatment of persistent mappings. Thank you.

Marek

* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-18 17:47       ` Marek Olšák
@ 2014-07-18 22:03         ` Marek Olšák
  0 siblings, 0 replies; 38+ messages in thread
From: Marek Olšák @ 2014-07-18 22:03 UTC (permalink / raw)
  To: Marek Olšák; +Cc: mesa-dev, Michel Dänzer, dri-devel

On Fri, Jul 18, 2014 at 7:47 PM, Marek Olšák <maraeo@gmail.com> wrote:
> On Fri, Jul 18, 2014 at 5:47 PM, Christian König
> <deathsimple@vodafone.de> wrote:
>> Am 18.07.2014 05:07, schrieb Michel Dänzer:
>>>>>
>>>>> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI
>>>>
>>>> I'm still not very keen with this change since I still don't understand
>>>> the reason why it's faster than with GTT. Definitely needs more testing
>>>> on a wider range of systems.
>>>
>>> Sure. If anyone wants to give this patch a spin and see if they can
>>> measure any performance difference, good or bad, that would be
>>> interesting.
>>>
>>>> Maybe limit it to APUs for now?
>>>
>>> But IIRC, CPU writes to VRAM vs. write-combined GTT are actually an even
>>> bigger win with dedicated GPUs than with the Kaveri built-in GPU on my
>>> system. I suspect it may depend on the bandwidth available for PCIe vs.
>>> system memory though.
>>
>>
>> I've made a few tests today with the kernel part of the patches running
>> Xonotic on Ultra in 1920 x 1080.
>>
>> Without any patches I get around ~47.0fps on average with my dedicated
>> HD7870.
>>
>> Adding only "drm/radeon: Use write-combined CPU mappings of rings and IBs on
>>>= SI" and that goes down to ~45.3fps.
>>
>> Adding on top of that "drm/radeon: Use VRAM for indirect buffers on >= SI"
>> and the frame rate goes down to ~27.74fps.
>>
>> So enabling this unconditionally is definitely not a good idea. What I don't
>> understand yet is why using USWC reduces the fps on SI as well. It looks
>> like the reads from the IB buffer for command stream validation on SI affect
>> that more than I thought.
>
> Yes, there is a CS parser with SI, but shouldn't the parser read from
> the CPU copy that came with the ioctl instead? Anyway, I recommend
> only using VRAM for IBs which are not parsed and patched by the CPU
> (which reduces it down to CIK graphics and DMA IBs, right?)

Oh, sorry. There is no CPU copy, just the IB. My recommendation still stands.

Marek

* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-18 15:47     ` Christian König
  2014-07-18 17:47       ` Marek Olšák
@ 2014-07-19  1:15       ` Michel Dänzer
  2014-07-21  8:07         ` Christian König
  1 sibling, 1 reply; 38+ messages in thread
From: Michel Dänzer @ 2014-07-19  1:15 UTC (permalink / raw)
  To: Christian König, Alex Deucher; +Cc: mesa-dev, dri-devel

On 19.07.2014 00:47, Christian König wrote:
> Am 18.07.2014 05:07, schrieb Michel Dänzer:
>>>> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI
>>> I'm still not very keen with this change since I still don't understand
>>> the reason why it's faster than with GTT. Definitely needs more testing
>>> on a wider range of systems.
>> Sure. If anyone wants to give this patch a spin and see if they can
>> measure any performance difference, good or bad, that would be
>> interesting.
>>
>>> Maybe limit it to APUs for now?
>> But IIRC, CPU writes to VRAM vs. write-combined GTT are actually an even
>> bigger win with dedicated GPUs than with the Kaveri built-in GPU on my
>> system. I suspect it may depend on the bandwidth available for PCIe vs.
>> system memory though.
> 
> I've made a few tests today with the kernel part of the patches running
> Xonotic on Ultra in 1920 x 1080.
> 
> Without any patches I get around ~47.0fps on average with my dedicated
> HD7870.
> 
> Adding only "drm/radeon: Use write-combined CPU mappings of rings and
> IBs on >= SI" and that goes down to ~45.3fps.
> 
> Adding on top of that "drm/radeon: Use VRAM for indirect buffers on >=
> SI" and the frame rate goes down to ~27.74fps.

Hmm, looks like I'll need to do more benchmarking of 3D workloads as well.

Alex, given those numbers, it's probably best if you remove the "Use
write-combined CPU mappings of rings and IBs on >= SI" change from your
tree as well for now.


-- 
Earthling Michel Dänzer            |                  http://www.amd.com
Libre software enthusiast          |                Mesa and X developer

* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-19  1:15       ` Michel Dänzer
@ 2014-07-21  8:07         ` Christian König
  2014-07-23  3:54           ` Michel Dänzer
  0 siblings, 1 reply; 38+ messages in thread
From: Christian König @ 2014-07-21  8:07 UTC (permalink / raw)
  To: Michel Dänzer, Alex Deucher; +Cc: mesa-dev, dri-devel

Am 19.07.2014 03:15, schrieb Michel Dänzer:
> On 19.07.2014 00:47, Christian König wrote:
>> Am 18.07.2014 05:07, schrieb Michel Dänzer:
>>>>> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI
>>>> I'm still not very keen with this change since I still don't understand
>>>> the reason why it's faster than with GTT. Definitely needs more testing
>>>> on a wider range of systems.
>>> Sure. If anyone wants to give this patch a spin and see if they can
>>> measure any performance difference, good or bad, that would be
>>> interesting.
>>>
>>>> Maybe limit it to APUs for now?
>>> But IIRC, CPU writes to VRAM vs. write-combined GTT are actually an even
>>> bigger win with dedicated GPUs than with the Kaveri built-in GPU on my
>>> system. I suspect it may depend on the bandwidth available for PCIe vs.
>>> system memory though.
>> I've made a few tests today with the kernel part of the patches running
>> Xonotic on Ultra in 1920 x 1080.
>>
>> Without any patches I get around ~47.0fps on average with my dedicated
>> HD7870.
>>
>> Adding only "drm/radeon: Use write-combined CPU mappings of rings and
>> IBs on >= SI" and that goes down to ~45.3fps.
>>
>> Adding on top of that "drm/radeon: Use VRAM for indirect buffers on >=
>> SI" and the frame rate goes down to ~27.74fps.
> Hmm, looks like I'll need to do more benchmarking of 3D workloads as well.
>
> Alex, given those numbers, it's probably best if you remove the "Use
> write-combined CPU mappings of rings and IBs on >= SI" change from your
> tree as well for now.

I wouldn't go as far as reverting the patch. It just needs a bit more 
fine tuning and that can happen in the 3.17rc cycle.

My tests clearly show that we still can use USWC for the ring buffer on 
SI and probably earlier chips as well. The performance drop comes from 
reading the IB content for command stream validation on SI.

Putting the IB into VRAM still doesn't seem to be a good idea on 
dedicated cards, but it might actually make sense on APUs.

Regards,
Christian.

* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-21  8:07         ` Christian König
@ 2014-07-23  3:54           ` Michel Dänzer
  2014-07-23  6:42             ` Christian König
  2014-07-23 13:59             ` Alex Deucher
  0 siblings, 2 replies; 38+ messages in thread
From: Michel Dänzer @ 2014-07-23  3:54 UTC (permalink / raw)
  To: Christian König, Alex Deucher; +Cc: mesa-dev, dri-devel

On 21.07.2014 17:07, Christian König wrote:
> Am 19.07.2014 03:15, schrieb Michel Dänzer:
>> On 19.07.2014 00:47, Christian König wrote:
>>> Am 18.07.2014 05:07, schrieb Michel Dänzer:
>>>>>> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI
>>>>> I'm still not very keen with this change since I still don't
>>>>> understand
>>>>> the reason why it's faster than with GTT. Definitely needs more
>>>>> testing
>>>>> on a wider range of systems.
>>>> Sure. If anyone wants to give this patch a spin and see if they can
>>>> measure any performance difference, good or bad, that would be
>>>> interesting.
>>>>
>>>>> Maybe limit it to APUs for now?
>>>> But IIRC, CPU writes to VRAM vs. write-combined GTT are actually an
>>>> even
>>>> bigger win with dedicated GPUs than with the Kaveri built-in GPU on my
>>>> system. I suspect it may depend on the bandwidth available for PCIe vs.
>>>> system memory though.
>>> I've made a few tests today with the kernel part of the patches running
>>> Xonotic on Ultra in 1920 x 1080.
>>>
>>> Without any patches I get around ~47.0fps on average with my dedicated
>>> HD7870.
>>>
>>> Adding only "drm/radeon: Use write-combined CPU mappings of rings and
>>> IBs on >= SI" and that goes down to ~45.3fps.
>>>
>>> Adding on top of that "drm/radeon: Use VRAM for indirect buffers on >=
>>> SI" and the frame rate goes down to ~27.74fps.
>> Hmm, looks like I'll need to do more benchmarking of 3D workloads as
>> well.

I haven't been able to consistently[0] measure any significant
difference between all placements of the rings and IBs with Xonotic or
Reaction Quake with my Bonaire. I'd expect Xonotic to be shader / GPU
memory bandwidth bound rather than CS bound anyway, so a ~40% hit from
that kernel patch alone is very surprising. Are you sure it wasn't just
the same kind of variation as described below?

[0] There were slightly different results sometimes, but next time I
tried the same setup again, it was back to the same as always. So it
seemed to depend more on the particular system boot / test run / moon
phase / ... than the kernel patches themselves.


>> Alex, given those numbers, it's probably best if you remove the "Use
>> write-combined CPU mappings of rings and IBs on >= SI" change from your
>> tree as well for now.
> 
> I wouldn't go as far as reverting the patch. It just needs a bit more
> fine tuning and that can happen in the 3.17rc cycle.

There's no need to revert it, just drop it from the tree. I'd still
prefer that for now.


> My tests clearly show that we still can use USWC for the ring buffer on
> SI and probably earlier chips as well.

Yeah, that might be the safest approach for now.


-- 
Earthling Michel Dänzer            |                  http://www.amd.com
Libre software enthusiast          |                Mesa and X developer


* Re: [Mesa-dev] [PATCH 4/5] r600g, radeonsi: Use write-combined persistent GTT mappings
  2014-07-18 11:45       ` Marek Olšák
  2014-07-18 19:50         ` [Mesa-dev] " Grigori Goronzy
@ 2014-07-23  6:32         ` Michel Dänzer
  2014-07-23 11:52           ` Marek Olšák
  1 sibling, 1 reply; 38+ messages in thread
From: Michel Dänzer @ 2014-07-23  6:32 UTC (permalink / raw)
  To: Marek Olšák; +Cc: mesa-dev, dri-devel

On 18.07.2014 20:45, Marek Olšák wrote:
> If the requirements of GL_MAP_COHERENT_BIT are satisfied, then the
> patch is okay.

AFAICT GL_MAP_COHERENT_BIT simply means the app doesn't need to call
glMemoryBarrier() to ensure coherency between access to the mapping and
GL commands. Since we don't need to do anything for glMemoryBarrier(),
GL_MAP_COHERENT_BIT doesn't make any difference for us.
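
For reference, the client-side pattern in question is a persistent,
coherent mapping via ARB_buffer_storage. A minimal illustrative
fragment (standard GL 4.4 API, not code from these patches; assumes a
current GL context, error handling omitted):

```c
#include <GL/gl.h>

/* Illustrative fragment only: standard GL 4.4 / ARB_buffer_storage
 * usage, assuming a current GL context. */
GLuint buf;
GLsizeiptr size = 1 << 20;
GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                   GL_MAP_COHERENT_BIT;

glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferStorage(GL_ARRAY_BUFFER, size, NULL, flags);

/* With GL_MAP_COHERENT_BIT, the app may write through ptr at any
 * time without an intervening glMemoryBarrier() call. */
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
```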

That said, I think I need to add code in the kernel ensuring we always
flush the HDP cache before submitting a command stream to the hardware,
bump the minor version, and only use VRAM for streaming when the DRM
minor version is >= the bumped version.
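
As a sketch of that gating logic (the constant name and the minor
version number here are hypothetical placeholders, not the actual
radeon interface):

```c
#include <assert.h>

/* Hypothetical sketch of the userspace gate described above: only
 * prefer VRAM for streaming buffers when the kernel advertises a DRM
 * minor version that guarantees an HDP cache flush before each CS
 * submission. Both the constant and the value 40 are made up. */
enum placement { DOMAIN_GTT, DOMAIN_VRAM };

#define DRM_MINOR_HDP_FLUSH 40  /* hypothetical bumped minor version */

static enum placement streaming_domain(int drm_minor)
{
    return drm_minor >= DRM_MINOR_HDP_FLUSH ? DOMAIN_VRAM : DOMAIN_GTT;
}
```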


-- 
Earthling Michel Dänzer            |                  http://www.amd.com
Libre software enthusiast          |                Mesa and X developer
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-23  3:54           ` Michel Dänzer
@ 2014-07-23  6:42             ` Christian König
  2014-07-23  7:21               ` Michel Dänzer
  2014-07-23 13:59             ` Alex Deucher
  1 sibling, 1 reply; 38+ messages in thread
From: Christian König @ 2014-07-23  6:42 UTC (permalink / raw)
  To: Michel Dänzer, Alex Deucher; +Cc: mesa-dev, dri-devel

Am 23.07.2014 05:54, schrieb Michel Dänzer:
> On 21.07.2014 17:07, Christian König wrote:
>> Am 19.07.2014 03:15, schrieb Michel Dänzer:
>>> On 19.07.2014 00:47, Christian König wrote:
>>>> Am 18.07.2014 05:07, schrieb Michel Dänzer:
>>>>>>> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI
>>>>>> I'm still not very keen with this change since I still don't
>>>>>> understand
>>>>>> the reason why it's faster than with GTT. Definitely needs more
>>>>>> testing
>>>>>> on a wider range of systems.
>>>>> Sure. If anyone wants to give this patch a spin and see if they can
>>>>> measure any performance difference, good or bad, that would be
>>>>> interesting.
>>>>>
>>>>>> Maybe limit it to APUs for now?
>>>>> But IIRC, CPU writes to VRAM vs. write-combined GTT are actually an
>>>>> even
>>>>> bigger win with dedicated GPUs than with the Kaveri built-in GPU on my
>>>>> system. I suspect it may depend on the bandwidth available for PCIe vs.
>>>>> system memory though.
>>>> I've made a few tests today with the kernel part of the patches running
>>>> Xonotic on Ultra in 1920 x 1080.
>>>>
>>>> Without any patches I get around ~47.0fps on average with my dedicated
>>>> HD7870.
>>>>
>>>> Adding only "drm/radeon: Use write-combined CPU mappings of rings and
>>>> IBs on >= SI" and that goes down to ~45.3fps.
>>>>
>>>> Adding on top of that "drm/radeon: Use VRAM for indirect buffers on >=
>>>> SI" and the frame rate goes down to ~27.74fps.
>>> Hmm, looks like I'll need to do more benchmarking of 3D workloads as
>>> well.
> I haven't been able to consistently[0] measure any significant
> difference between all placements of the rings and IBs with Xonotic or
> Reaction Quake with my Bonaire. I'd expect Xonotic to be shader / GPU
> memory bandwidth bound rather than CS bound anyway, so a ~40% hit from
> that kernel patch alone is very surprising. Are you sure it wasn't just
> the same kind of variation as described below?

Yes, I've measured that multiple times and the results were quite
consistent.

But I didn't measure it on a Bonaire, where the bottleneck probably
isn't the CPU load. I measured it on a fast Pitcairn and there Xonotic
was clearly affected by the patches.

>
> [0] There were slightly different results sometimes, but next time I
> tried the same setup again, it was back to the same as always. So it
> seemed to depend more on the particular system boot / test run / moon
> phase / ... than the kernel patches themselves.
>
>
>>> Alex, given those numbers, it's probably best if you remove the "Use
>>> write-combined CPU mappings of rings and IBs on >= SI" change from your
>>> tree as well for now.
>> I wouldn't go as far as reverting the patch. It just needs a bit more
>> fine tuning and that can happen in the 3.17rc cycle.
> There's no need to revert it, just drop it from the tree. I'd still
> prefer that for now.
>
>
>> My tests clearly show that we still can use USWC for the ring buffer on
>> SI and probably earlier chips as well.
> Yeah, that might be the safest approach for now.
How about using USWC for the rings on all chips since R600 and for the 
IB only on CIK? As far as I can see that should do the trick quite well.
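
Expressed as a sketch (the family ordering mirrors the chronological
enum used by the driver, heavily abbreviated; this illustrates the
proposal, it is not actual radeon code):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustration of the proposed policy, not actual driver code. */
enum chip_family { FAMILY_R600, FAMILY_R700, FAMILY_EVERGREEN,
                   FAMILY_NI, FAMILY_SI, FAMILY_CIK };

/* USWC mappings for the ring buffer on everything since R600. */
static bool ring_use_wc(enum chip_family f)
{
    return f >= FAMILY_R600;
}

/* USWC mappings for IBs only where the kernel no longer reads the IB
 * contents back for graphics command stream checking, i.e. >= CIK. */
static bool ib_use_wc(enum chip_family f)
{
    return f >= FAMILY_CIK;
}
```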

Christian.


* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-23  6:42             ` Christian König
@ 2014-07-23  7:21               ` Michel Dänzer
  2014-07-23  7:32                 ` Christian König
  2014-07-23 12:07                 ` Marek Olšák
  0 siblings, 2 replies; 38+ messages in thread
From: Michel Dänzer @ 2014-07-23  7:21 UTC (permalink / raw)
  To: Christian König, Alex Deucher; +Cc: mesa-dev, dri-devel

On 23.07.2014 15:42, Christian König wrote:
> Am 23.07.2014 05:54, schrieb Michel Dänzer:
>> On 21.07.2014 17:07, Christian König wrote:
>>> Am 19.07.2014 03:15, schrieb Michel Dänzer:
>>>> On 19.07.2014 00:47, Christian König wrote:
>>>>> Am 18.07.2014 05:07, schrieb Michel Dänzer:
>>>>>>>> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI
>>>>>>> I'm still not very keen with this change since I still don't
>>>>>>> understand
>>>>>>> the reason why it's faster than with GTT. Definitely needs more
>>>>>>> testing
>>>>>>> on a wider range of systems.
>>>>>> Sure. If anyone wants to give this patch a spin and see if they can
>>>>>> measure any performance difference, good or bad, that would be
>>>>>> interesting.
>>>>>>
>>>>>>> Maybe limit it to APUs for now?
>>>>>> But IIRC, CPU writes to VRAM vs. write-combined GTT are actually an
>>>>>> even
>>>>>> bigger win with dedicated GPUs than with the Kaveri built-in GPU
>>>>>> on my
>>>>>> system. I suspect it may depend on the bandwidth available for
>>>>>> PCIe vs.
>>>>>> system memory though.
>>>>> I've made a few tests today with the kernel part of the patches
>>>>> running
>>>>> Xonotic on Ultra in 1920 x 1080.
>>>>>
>>>>> Without any patches I get around ~47.0fps on average with my dedicated
>>>>> HD7870.
>>>>>
>>>>> Adding only "drm/radeon: Use write-combined CPU mappings of rings and
>>>>> IBs on >= SI" and that goes down to ~45.3fps.
>>>>>
>>>>> Adding on top of that "drm/radeon: Use VRAM for indirect buffers on >=
>>>>> SI" and the frame rate goes down to ~27.74fps.
>>>> Hmm, looks like I'll need to do more benchmarking of 3D workloads as
>>>> well.
>> I haven't been able to consistently[0] measure any significant
>> difference between all placements of the rings and IBs with Xonotic or
>> Reaction Quake with my Bonaire. I'd expect Xonotic to be shader / GPU
>> memory bandwidth bound rather than CS bound anyway, so a ~40% hit from
>> that kernel patch alone is very surprising. Are you sure it wasn't just
>> the same kind of variation as described below?
> 
> Yes, I've measured that multiple times and the results were quite
> consistent.
>
> But I didn't measure it on a Bonaire, where the bottleneck probably
> isn't the CPU load. I measured it on a fast Pitcairn

Ahem, my Bonaire is cranking out ~90fps of Xonotic Ultra at 1920x1080.
:) (And AFAIK there are even faster Bonaire variants)

> and there Xonotic was clearly affected by the patches.

Okay, I hadn't realized we're not doing any command stream checking as
of CIK; that probably explains the difference.


>>> My tests clearly show that we still can use USWC for the ring buffer on
>>> SI and probably earlier chips as well.
>> Yeah, that might be the safest approach for now.
> How about using USWC for the rings on all chips since R600

Any particular reason against doing it for older chips which support
unsnooped access as well?

> and for the IB only on CIK? As far as I can see that should do the trick
> quite well.

Yeah, sounds good.


-- 
Earthling Michel Dänzer            |                  http://www.amd.com
Libre software enthusiast          |                Mesa and X developer


* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-23  7:21               ` Michel Dänzer
@ 2014-07-23  7:32                 ` Christian König
  2014-07-23 12:07                 ` Marek Olšák
  1 sibling, 0 replies; 38+ messages in thread
From: Christian König @ 2014-07-23  7:32 UTC (permalink / raw)
  To: Michel Dänzer, Alex Deucher; +Cc: mesa-dev, dri-devel

Am 23.07.2014 09:21, schrieb Michel Dänzer:
> On 23.07.2014 15:42, Christian König wrote:
>> Am 23.07.2014 05:54, schrieb Michel Dänzer:
>>> On 21.07.2014 17:07, Christian König wrote:
>>>> Am 19.07.2014 03:15, schrieb Michel Dänzer:
>>>>> On 19.07.2014 00:47, Christian König wrote:
>>>>>> Am 18.07.2014 05:07, schrieb Michel Dänzer:
>>>>>>>>> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI
>>>>>>>> I'm still not very keen with this change since I still don't
>>>>>>>> understand
>>>>>>>> the reason why it's faster than with GTT. Definitely needs more
>>>>>>>> testing
>>>>>>>> on a wider range of systems.
>>>>>>> Sure. If anyone wants to give this patch a spin and see if they can
>>>>>>> measure any performance difference, good or bad, that would be
>>>>>>> interesting.
>>>>>>>
>>>>>>>> Maybe limit it to APUs for now?
>>>>>>> But IIRC, CPU writes to VRAM vs. write-combined GTT are actually an
>>>>>>> even
>>>>>>> bigger win with dedicated GPUs than with the Kaveri built-in GPU
>>>>>>> on my
>>>>>>> system. I suspect it may depend on the bandwidth available for
>>>>>>> PCIe vs.
>>>>>>> system memory though.
>>>>>> I've made a few tests today with the kernel part of the patches
>>>>>> running
>>>>>> Xonotic on Ultra in 1920 x 1080.
>>>>>>
>>>>>> Without any patches I get around ~47.0fps on average with my dedicated
>>>>>> HD7870.
>>>>>>
>>>>>> Adding only "drm/radeon: Use write-combined CPU mappings of rings and
>>>>>> IBs on >= SI" and that goes down to ~45.3fps.
>>>>>>
>>>>>> Adding on top of that "drm/radeon: Use VRAM for indirect buffers on >=
>>>>>> SI" and the frame rate goes down to ~27.74fps.
>>>>> Hmm, looks like I'll need to do more benchmarking of 3D workloads as
>>>>> well.
>>> I haven't been able to consistently[0] measure any significant
>>> difference between all placements of the rings and IBs with Xonotic or
>>> Reaction Quake with my Bonaire. I'd expect Xonotic to be shader / GPU
>>> memory bandwidth bound rather than CS bound anyway, so a ~40% hit from
>>> that kernel patch alone is very surprising. Are you sure it wasn't just
>>> the same kind of variation as described below?
>> Yes, I've measured that multiple times and the results were quite
>> consistent.
>>
>> But I didn't measure it on a Bonaire, where the bottleneck probably
>> isn't the CPU load. I measured it on a fast Pitcairn
> Ahem, my Bonaire is cranking out ~90fps of Xonotic Ultra at 1920x1080.
> :) (And AFAIK there are even faster Bonaire variants)

My Bonaire only manages around 17fps with Xonotic Ultra at 1920x1080;
might be a good idea to figure out why at some point.

>
>> and there Xonotic was clearly affected by the patches.
> Okay, I hadn't realized we're not doing any command stream checking as
> of CIK, that probably explains the difference.

Good point, I should probably test the patch putting IBs in VRAM with
my Bonaire as well.

>>>> My tests clearly show that we still can use USWC for the ring buffer on
>>>> SI and probably earlier chips as well.
>>> Yeah, that might be the safest approach for now.
>> How about using USWC for the rings on all chips since R600
> Any particular reason against doing it for older chips which support
> unsnooped access as well?

Not really, I just hadn't noticed that older chips can do this as well.

Christian.

>
>> and for the IB only on CIK? As far as I can see that should do the trick
>> quite well.
> Yeah, sounds good.
>
>


* Re: [PATCH 4/5] r600g, radeonsi: Use write-combined persistent GTT mappings
  2014-07-23  6:32         ` Michel Dänzer
@ 2014-07-23 11:52           ` Marek Olšák
  0 siblings, 0 replies; 38+ messages in thread
From: Marek Olšák @ 2014-07-23 11:52 UTC (permalink / raw)
  To: Michel Dänzer; +Cc: mesa-dev, dri-devel

This sounds good to me.

Marek

On Wed, Jul 23, 2014 at 8:32 AM, Michel Dänzer <michel@daenzer.net> wrote:
> On 18.07.2014 20:45, Marek Olšák wrote:
>> If the requirements of GL_MAP_COHERENT_BIT are satisfied, then the
>> patch is okay.
>
> AFAICT GL_MAP_COHERENT_BIT simply means the app doesn't need to call
> glMemoryBarrier() to ensure coherency between access to the mapping and
> GL commands. Since we don't need to do anything for glMemoryBarrier(),
> GL_MAP_COHERENT_BIT doesn't make any difference for us.
>
> That said, I think I need to add code in the kernel ensuring we always
> flush the HDP cache before submitting a command stream to the hardware,
> bump the minor version, and only use VRAM for streaming when the DRM
> minor version is >= the bumped version.
>
>
> --
> Earthling Michel Dänzer            |                  http://www.amd.com
> Libre software enthusiast          |                Mesa and X developer
_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev


* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-23  7:21               ` Michel Dänzer
  2014-07-23  7:32                 ` Christian König
@ 2014-07-23 12:07                 ` Marek Olšák
  1 sibling, 0 replies; 38+ messages in thread
From: Marek Olšák @ 2014-07-23 12:07 UTC (permalink / raw)
  To: Michel Dänzer; +Cc: mesa-dev, dri-devel

On Wed, Jul 23, 2014 at 9:21 AM, Michel Dänzer <michel@daenzer.net> wrote:
> On 23.07.2014 15:42, Christian König wrote:
>> Am 23.07.2014 05:54, schrieb Michel Dänzer:
>>> On 21.07.2014 17:07, Christian König wrote:
>>>> Am 19.07.2014 03:15, schrieb Michel Dänzer:
>>>>> On 19.07.2014 00:47, Christian König wrote:
>>>>>> Am 18.07.2014 05:07, schrieb Michel Dänzer:
>>>>>>>>> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI
>>>>>>>> I'm still not very keen with this change since I still don't
>>>>>>>> understand
>>>>>>>> the reason why it's faster than with GTT. Definitely needs more
>>>>>>>> testing
>>>>>>>> on a wider range of systems.
>>>>>>> Sure. If anyone wants to give this patch a spin and see if they can
>>>>>>> measure any performance difference, good or bad, that would be
>>>>>>> interesting.
>>>>>>>
>>>>>>>> Maybe limit it to APUs for now?
>>>>>>> But IIRC, CPU writes to VRAM vs. write-combined GTT are actually an
>>>>>>> even
>>>>>>> bigger win with dedicated GPUs than with the Kaveri built-in GPU
>>>>>>> on my
>>>>>>> system. I suspect it may depend on the bandwidth available for
>>>>>>> PCIe vs.
>>>>>>> system memory though.
>>>>>> I've made a few tests today with the kernel part of the patches
>>>>>> running
>>>>>> Xonotic on Ultra in 1920 x 1080.
>>>>>>
>>>>>> Without any patches I get around ~47.0fps on average with my dedicated
>>>>>> HD7870.
>>>>>>
>>>>>> Adding only "drm/radeon: Use write-combined CPU mappings of rings and
>>>>>> IBs on >= SI" and that goes down to ~45.3fps.
>>>>>>
>>>>>> Adding on top of that "drm/radeon: Use VRAM for indirect buffers on >=
>>>>>> SI" and the frame rate goes down to ~27.74fps.
>>>>> Hmm, looks like I'll need to do more benchmarking of 3D workloads as
>>>>> well.
>>> I haven't been able to consistently[0] measure any significant
>>> difference between all placements of the rings and IBs with Xonotic or
>>> Reaction Quake with my Bonaire. I'd expect Xonotic to be shader / GPU
>>> memory bandwidth bound rather than CS bound anyway, so a ~40% hit from
>>> that kernel patch alone is very surprising. Are you sure it wasn't just
>>> the same kind of variation as described below?
>>
>> Yes, I've measured that multiple times and the results were quite
>> consistent.
>>
>> But I didn't measure it on a Bonaire, where the bottleneck probably
>> isn't the CPU load. I measured it on a fast Pitcairn
>
> Ahem, my Bonaire is cranking out ~90fps of Xonotic Ultra at 1920x1080.
> :) (And AFAIK there are even faster Bonaire variants)
>
>> and there Xonotic was clearly affected by the patches.
>
> Okay, I hadn't realized we're not doing any command stream checking as
> of CIK, that probably explains the difference.

I think CIK is doing CS checking for VCE, but not for graphics. SI is
doing CS checking for everything.

Marek


* Re: [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT
  2014-07-23  3:54           ` Michel Dänzer
  2014-07-23  6:42             ` Christian König
@ 2014-07-23 13:59             ` Alex Deucher
  1 sibling, 0 replies; 38+ messages in thread
From: Alex Deucher @ 2014-07-23 13:59 UTC (permalink / raw)
  To: Michel Dänzer; +Cc: mesa-dev, Maling list - DRI developers

On Tue, Jul 22, 2014 at 11:54 PM, Michel Dänzer <michel@daenzer.net> wrote:
> On 21.07.2014 17:07, Christian König wrote:
>> Am 19.07.2014 03:15, schrieb Michel Dänzer:
>>> On 19.07.2014 00:47, Christian König wrote:
>>>> Am 18.07.2014 05:07, schrieb Michel Dänzer:
>>>>>>> [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers on >= SI
>>>>>> I'm still not very keen with this change since I still don't
>>>>>> understand
>>>>>> the reason why it's faster than with GTT. Definitely needs more
>>>>>> testing
>>>>>> on a wider range of systems.
>>>>> Sure. If anyone wants to give this patch a spin and see if they can
>>>>> measure any performance difference, good or bad, that would be
>>>>> interesting.
>>>>>
>>>>>> Maybe limit it to APUs for now?
>>>>> But IIRC, CPU writes to VRAM vs. write-combined GTT are actually an
>>>>> even
>>>>> bigger win with dedicated GPUs than with the Kaveri built-in GPU on my
>>>>> system. I suspect it may depend on the bandwidth available for PCIe vs.
>>>>> system memory though.
>>>> I've made a few tests today with the kernel part of the patches running
>>>> Xonotic on Ultra in 1920 x 1080.
>>>>
>>>> Without any patches I get around ~47.0fps on average with my dedicated
>>>> HD7870.
>>>>
>>>> Adding only "drm/radeon: Use write-combined CPU mappings of rings and
>>>> IBs on >= SI" and that goes down to ~45.3fps.
>>>>
>>>> Adding on top of that "drm/radeon: Use VRAM for indirect buffers on >=
>>>> SI" and the frame rate goes down to ~27.74fps.
>>> Hmm, looks like I'll need to do more benchmarking of 3D workloads as
>>> well.
>
> I haven't been able to consistently[0] measure any significant
> difference between all placements of the rings and IBs with Xonotic or
> Reaction Quake with my Bonaire. I'd expect Xonotic to be shader / GPU
> memory bandwidth bound rather than CS bound anyway, so a ~40% hit from
> that kernel patch alone is very surprising. Are you sure it wasn't just
> the same kind of variation as described below?
>
> [0] There were slightly different results sometimes, but next time I
> tried the same setup again, it was back to the same as always. So it
> seemed to depend more on the particular system boot / test run / moon
> phase / ... than the kernel patches themselves.
>
>
>>> Alex, given those numbers, it's probably best if you remove the "Use
>>> write-combined CPU mappings of rings and IBs on >= SI" change from your
>>> tree as well for now.
>>
>> I wouldn't go as far as reverting the patch. It just needs a bit more
>> fine tuning and that can happen in the 3.17rc cycle.
>
> There's no need to revert it, just drop it from the tree. I'd still
> prefer that for now.
>

I can drop it.  Maybe just map the rings WC for now?

Alex

>
>> My tests clearly show that we still can use USWC for the ring buffer on
>> SI and probably earlier chips as well.
>
> Yeah, that might be the safest approach for now.
>
>
> --
> Earthling Michel Dänzer            |                  http://www.amd.com
> Libre software enthusiast          |                Mesa and X developer


end of thread, other threads:[~2014-07-23 13:59 UTC | newest]

Thread overview: 38+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-07-17 10:01 [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Michel Dänzer
2014-07-17 10:01 ` [PATCH 1/5] drm/radeon: Remove radeon_gart_restore() Michel Dänzer
2014-07-17 10:01 ` [PATCH 2/5] drm/radeon: Pass GART page flags to radeon_gart_set_page() explicitly Michel Dänzer
2014-07-17 10:01 ` [PATCH 3/5] drm/radeon: Allow write-combined CPU mappings of BOs in GTT Michel Dänzer
2014-07-17 10:01 ` [PATCH 4/5] drm/radeon: Use write-combined CPU mappings of rings and IBs on >= SI Michel Dänzer
2014-07-17 10:01 ` [PATCH 5/5] drm/radeon: Use VRAM for indirect buffers " Michel Dänzer
2014-07-17 10:01 ` [PATCH 1/5] winsys/radeon: Use separate caching buffer managers for VRAM and GTT Michel Dänzer
2014-07-17 10:01 ` [PATCH 2/5] r600g/radeonsi: Use write-combined CPU mappings of some BOs in GTT Michel Dänzer
2014-07-17 10:01 ` [PATCH 3/5] r600g/radeonsi: Prefer VRAM for CPU -> GPU streaming buffers Michel Dänzer
2014-07-17 10:01 ` [PATCH 4/5] r600g, radeonsi: Use write-combined persistent GTT mappings Michel Dänzer
2014-07-17 10:56   ` Grigori Goronzy
2014-07-17 12:00   ` Marek Olšák
2014-07-18  3:19     ` [Mesa-dev] " Michel Dänzer
2014-07-18 11:45       ` Marek Olšák
2014-07-18 19:50         ` [Mesa-dev] " Grigori Goronzy
2014-07-18 20:52           ` Marek Olšák
2014-07-23  6:32         ` Michel Dänzer
2014-07-23 11:52           ` Marek Olšák
2014-07-18 20:55       ` Marek Olšák
2014-07-17 10:01 ` [PATCH 5/5] r600g, radeonsi: Prefer VRAM for persistent mappings Michel Dänzer
2014-07-17 12:17   ` Marek Olšák
2014-07-17 10:09 ` [PATCH 0/5] radeon: Write-combined CPU mappings of BOs in GTT Christian König
2014-07-18  3:07   ` Michel Dänzer
2014-07-18  3:58     ` Dieter Nützel
2014-07-18  9:52       ` Michel Dänzer
2014-07-18 15:47     ` Christian König
2014-07-18 17:47       ` Marek Olšák
2014-07-18 22:03         ` Marek Olšák
2014-07-19  1:15       ` Michel Dänzer
2014-07-21  8:07         ` Christian König
2014-07-23  3:54           ` Michel Dänzer
2014-07-23  6:42             ` Christian König
2014-07-23  7:21               ` Michel Dänzer
2014-07-23  7:32                 ` Christian König
2014-07-23 12:07                 ` Marek Olšák
2014-07-23 13:59             ` Alex Deucher
2014-07-17 12:25 ` Marek Olšák
2014-07-17 13:35 ` Alex Deucher
