linux-kernel.vger.kernel.org archive mirror
From: Stephen Rothwell <sfr@canb.auug.org.au>
To: Dave Airlie <airlied@linux.ie>
Cc: linux-next@vger.kernel.org, linux-kernel@vger.kernel.org,
	Jerome Glisse <jglisse@redhat.com>,
	Alex Deucher <alexander.deucher@amd.com>
Subject: linux-next: manual merge of the drm tree with Linus' tree
Date: Wed, 13 Feb 2013 15:48:59 +1100	[thread overview]
Message-ID: <20130213154859.8cc81206f401e6f30e673178@canb.auug.org.au> (raw)


Hi Dave,

Today's linux-next merge of the drm tree got a conflict in
drivers/gpu/drm/radeon/evergreen_cs.c between commit de0babd60d8d
("drm/radeon: enforce use of radeon_get_ib_value when reading user cmd")
from Linus' tree and commit 0fcb6155cb5c ("radeon/kms: cleanup async dma
packet checking") from the drm tree.

I fixed it up (I think; I did it fairly mechanically, see below) and
can carry the fix as necessary. No action is required, but it may be
worth doing this merge yourself before asking Linus to pull: you could
simply merge the above commit from Linus' tree (or the head of the
branch that Linus merged).
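The suggestion above (resolve the conflict once, in your own tree, before the pull request) can be sketched as a throwaway demo. Everything here is made up for illustration: repo layout, file contents, and branch names are not the real kernel trees.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email next@example.org
git config user.name "linux-next demo"
main=$(git symbolic-ref --short HEAD)

# base version both sides start from
echo 'dst = ib[1];' > evergreen_cs.c
git add evergreen_cs.c
git commit -qm 'base'

# one side rewrites the read sites (stand-in for de0babd60d8d)
git checkout -qb linus
echo 'dst = radeon_get_ib_value(p, 1);' > evergreen_cs.c
git commit -qam 'enforce accessor (Linus side)'

# the other side rewrites the control flow (stand-in for 0fcb6155cb5c)
git checkout -q "$main"
git checkout -qb drm
echo 'switch (sub_cmd) { case 8: dst = ib[1]; }' > evergreen_cs.c
git commit -qam 'switch on sub_cmd (drm side)'

# the pre-emptive merge: resolve the conflict here, applying both
# changes, so the upstream maintainer never sees it
if ! git merge -q -m 'Merge linus into drm' linus 2>/dev/null; then
    echo 'switch (sub_cmd) { case 8: dst = radeon_get_ib_value(p, 1); }' > evergreen_cs.c
    git add evergreen_cs.c
    git commit -qm 'Merge linus into drm (conflict resolved)'
fi
git log --oneline -1
```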

-- 
Cheers,
Stephen Rothwell                    sfr@canb.auug.org.au

diff --cc drivers/gpu/drm/radeon/evergreen_cs.c
index ee4cff5,d8f5d5f..0000000
--- a/drivers/gpu/drm/radeon/evergreen_cs.c
+++ b/drivers/gpu/drm/radeon/evergreen_cs.c
@@@ -2908,15 -2708,19 +2708,19 @@@ int evergreen_dma_cs_parse(struct radeo
  				DRM_ERROR("bad DMA_PACKET_WRITE\n");
  				return -EINVAL;
  			}
- 			if (tiled) {
+ 			switch (sub_cmd) {
+ 			/* tiled */
+ 			case 8:
 -				dst_offset = ib[idx+1];
 +				dst_offset = radeon_get_ib_value(p, idx+1);
  				dst_offset <<= 8;
  
  				ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset >> 8);
  				p->idx += count + 7;
- 			} else {
+ 				break;
+ 			/* linear */
+ 			case 0:
 -				dst_offset = ib[idx+1];
 -				dst_offset |= ((u64)(ib[idx+2] & 0xff)) << 32;
 +				dst_offset = radeon_get_ib_value(p, idx+1);
 +				dst_offset |= ((u64)(radeon_get_ib_value(p, idx+2) & 0xff)) << 32;
  
  				ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset & 0xfffffffc);
  				ib[idx+2] += upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
@@@ -2939,338 -2747,330 +2747,330 @@@
  				DRM_ERROR("bad DMA_PACKET_COPY\n");
  				return -EINVAL;
  			}
- 			if (tiled) {
- 				idx_value = radeon_get_ib_value(p, idx + 2);
- 				if (new_cmd) {
- 					switch (misc) {
- 					case 0:
- 						/* L2T, frame to fields */
- 						if (idx_value & (1 << 31)) {
- 							DRM_ERROR("bad L2T, frame to fields DMA_PACKET_COPY\n");
- 							return -EINVAL;
- 						}
- 						r = r600_dma_cs_next_reloc(p, &dst2_reloc);
- 						if (r) {
- 							DRM_ERROR("bad L2T, frame to fields DMA_PACKET_COPY\n");
- 							return -EINVAL;
- 						}
- 						dst_offset = radeon_get_ib_value(p, idx+1);
- 						dst_offset <<= 8;
- 						dst2_offset = radeon_get_ib_value(p, idx+2);
- 						dst2_offset <<= 8;
- 						src_offset = radeon_get_ib_value(p, idx+8);
- 						src_offset |= ((u64)(radeon_get_ib_value(p, idx+9) & 0xff)) << 32;
- 						if ((src_offset + (count * 4)) > radeon_bo_size(src_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2T, frame to fields src buffer too small (%llu %lu)\n",
- 								 src_offset + (count * 4), radeon_bo_size(src_reloc->robj));
- 							return -EINVAL;
- 						}
- 						if ((dst_offset + (count * 4)) > radeon_bo_size(dst_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2T, frame to fields buffer too small (%llu %lu)\n",
- 								 dst_offset + (count * 4), radeon_bo_size(dst_reloc->robj));
- 							return -EINVAL;
- 						}
- 						if ((dst2_offset + (count * 4)) > radeon_bo_size(dst2_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2T, frame to fields buffer too small (%llu %lu)\n",
- 								 dst2_offset + (count * 4), radeon_bo_size(dst2_reloc->robj));
- 							return -EINVAL;
- 						}
- 						ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset >> 8);
- 						ib[idx+2] += (u32)(dst2_reloc->lobj.gpu_offset >> 8);
- 						ib[idx+8] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
- 						ib[idx+9] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
- 						p->idx += 10;
- 						break;
- 					case 1:
- 						/* L2T, T2L partial */
- 						if (p->family < CHIP_CAYMAN) {
- 							DRM_ERROR("L2T, T2L Partial is cayman only !\n");
- 							return -EINVAL;
- 						}
- 						/* detile bit */
- 						if (idx_value & (1 << 31)) {
- 							/* tiled src, linear dst */
- 							ib[idx+1] += (u32)(src_reloc->lobj.gpu_offset >> 8);
- 
- 							ib[idx+7] += (u32)(dst_reloc->lobj.gpu_offset & 0xfffffffc);
- 							ib[idx+8] += upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
- 						} else {
- 							/* linear src, tiled dst */
- 							ib[idx+7] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
- 							ib[idx+8] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
- 
- 							ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset >> 8);
- 						}
- 						p->idx += 12;
- 						break;
- 					case 3:
- 						/* L2T, broadcast */
- 						if (idx_value & (1 << 31)) {
- 							DRM_ERROR("bad L2T, broadcast DMA_PACKET_COPY\n");
- 							return -EINVAL;
- 						}
- 						r = r600_dma_cs_next_reloc(p, &dst2_reloc);
- 						if (r) {
- 							DRM_ERROR("bad L2T, broadcast DMA_PACKET_COPY\n");
- 							return -EINVAL;
- 						}
- 						dst_offset = radeon_get_ib_value(p, idx+1);
- 						dst_offset <<= 8;
- 						dst2_offset = radeon_get_ib_value(p, idx+2);
- 						dst2_offset <<= 8;
- 						src_offset = radeon_get_ib_value(p, idx+8);
- 						src_offset |= ((u64)(radeon_get_ib_value(p, idx+9) & 0xff)) << 32;
- 						if ((src_offset + (count * 4)) > radeon_bo_size(src_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2T, broadcast src buffer too small (%llu %lu)\n",
- 								 src_offset + (count * 4), radeon_bo_size(src_reloc->robj));
- 							return -EINVAL;
- 						}
- 						if ((dst_offset + (count * 4)) > radeon_bo_size(dst_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2T, broadcast dst buffer too small (%llu %lu)\n",
- 								 dst_offset + (count * 4), radeon_bo_size(dst_reloc->robj));
- 							return -EINVAL;
- 						}
- 						if ((dst2_offset + (count * 4)) > radeon_bo_size(dst2_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2T, broadcast dst2 buffer too small (%llu %lu)\n",
- 								 dst2_offset + (count * 4), radeon_bo_size(dst2_reloc->robj));
- 							return -EINVAL;
- 						}
- 						ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset >> 8);
- 						ib[idx+2] += (u32)(dst2_reloc->lobj.gpu_offset >> 8);
- 						ib[idx+8] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
- 						ib[idx+9] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
- 						p->idx += 10;
- 						break;
- 					case 4:
- 						/* L2T, T2L */
- 						/* detile bit */
- 						if (idx_value & (1 << 31)) {
- 							/* tiled src, linear dst */
- 							src_offset = radeon_get_ib_value(p, idx+1);
- 							src_offset <<= 8;
- 							ib[idx+1] += (u32)(src_reloc->lobj.gpu_offset >> 8);
- 
- 							dst_offset = radeon_get_ib_value(p, idx+7);
- 							dst_offset |= ((u64)(radeon_get_ib_value(p, idx+8) & 0xff)) << 32;
- 							ib[idx+7] += (u32)(dst_reloc->lobj.gpu_offset & 0xfffffffc);
- 							ib[idx+8] += upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
- 						} else {
- 							/* linear src, tiled dst */
- 							src_offset = radeon_get_ib_value(p, idx+7);
- 							src_offset |= ((u64)(radeon_get_ib_value(p, idx+8) & 0xff)) << 32;
- 							ib[idx+7] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
- 							ib[idx+8] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
- 
- 							dst_offset = radeon_get_ib_value(p, idx+1);
- 							dst_offset <<= 8;
- 							ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset >> 8);
- 						}
- 						if ((src_offset + (count * 4)) > radeon_bo_size(src_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2T, T2L src buffer too small (%llu %lu)\n",
- 								 src_offset + (count * 4), radeon_bo_size(src_reloc->robj));
- 							return -EINVAL;
- 						}
- 						if ((dst_offset + (count * 4)) > radeon_bo_size(dst_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2T, T2L dst buffer too small (%llu %lu)\n",
- 								 dst_offset + (count * 4), radeon_bo_size(dst_reloc->robj));
- 							return -EINVAL;
- 						}
- 						p->idx += 9;
- 						break;
- 					case 5:
- 						/* T2T partial */
- 						if (p->family < CHIP_CAYMAN) {
- 							DRM_ERROR("L2T, T2L Partial is cayman only !\n");
- 							return -EINVAL;
- 						}
- 						ib[idx+1] += (u32)(src_reloc->lobj.gpu_offset >> 8);
- 						ib[idx+4] += (u32)(dst_reloc->lobj.gpu_offset >> 8);
- 						p->idx += 13;
- 						break;
- 					case 7:
- 						/* L2T, broadcast */
- 						if (idx_value & (1 << 31)) {
- 							DRM_ERROR("bad L2T, broadcast DMA_PACKET_COPY\n");
- 							return -EINVAL;
- 						}
- 						r = r600_dma_cs_next_reloc(p, &dst2_reloc);
- 						if (r) {
- 							DRM_ERROR("bad L2T, broadcast DMA_PACKET_COPY\n");
- 							return -EINVAL;
- 						}
- 						dst_offset = radeon_get_ib_value(p, idx+1);
- 						dst_offset <<= 8;
- 						dst2_offset = radeon_get_ib_value(p, idx+2);
- 						dst2_offset <<= 8;
- 						src_offset = radeon_get_ib_value(p, idx+8);
- 						src_offset |= ((u64)(radeon_get_ib_value(p, idx+9) & 0xff)) << 32;
- 						if ((src_offset + (count * 4)) > radeon_bo_size(src_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2T, broadcast src buffer too small (%llu %lu)\n",
- 								 src_offset + (count * 4), radeon_bo_size(src_reloc->robj));
- 							return -EINVAL;
- 						}
- 						if ((dst_offset + (count * 4)) > radeon_bo_size(dst_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2T, broadcast dst buffer too small (%llu %lu)\n",
- 								 dst_offset + (count * 4), radeon_bo_size(dst_reloc->robj));
- 							return -EINVAL;
- 						}
- 						if ((dst2_offset + (count * 4)) > radeon_bo_size(dst2_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2T, broadcast dst2 buffer too small (%llu %lu)\n",
- 								 dst2_offset + (count * 4), radeon_bo_size(dst2_reloc->robj));
- 							return -EINVAL;
- 						}
- 						ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset >> 8);
- 						ib[idx+2] += (u32)(dst2_reloc->lobj.gpu_offset >> 8);
- 						ib[idx+8] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
- 						ib[idx+9] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
- 						p->idx += 10;
- 						break;
- 					default:
- 						DRM_ERROR("bad DMA_PACKET_COPY misc %u\n", misc);
- 						return -EINVAL;
- 					}
+ 			switch (sub_cmd) {
+ 			/* Copy L2L, DW aligned */
+ 			case 0x00:
+ 				/* L2L, dw */
 -				src_offset = ib[idx+2];
 -				src_offset |= ((u64)(ib[idx+4] & 0xff)) << 32;
 -				dst_offset = ib[idx+1];
 -				dst_offset |= ((u64)(ib[idx+3] & 0xff)) << 32;
++				src_offset = radeon_get_ib_value(p, idx+2);
++				src_offset |= ((u64)(radeon_get_ib_value(p, idx+4) & 0xff)) << 32;
++				dst_offset = radeon_get_ib_value(p, idx+1);
++				dst_offset |= ((u64)(radeon_get_ib_value(p, idx+3) & 0xff)) << 32;
+ 				if ((src_offset + (count * 4)) > radeon_bo_size(src_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2L, dw src buffer too small (%llu %lu)\n",
+ 							src_offset + (count * 4), radeon_bo_size(src_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				if ((dst_offset + (count * 4)) > radeon_bo_size(dst_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2L, dw dst buffer too small (%llu %lu)\n",
+ 							dst_offset + (count * 4), radeon_bo_size(dst_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset & 0xfffffffc);
+ 				ib[idx+2] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
+ 				ib[idx+3] += upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
+ 				ib[idx+4] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
+ 				p->idx += 5;
+ 				break;
+ 			/* Copy L2T/T2L */
+ 			case 0x08:
+ 				/* detile bit */
 -				if (ib[idx + 2] & (1 << 31)) {
++				if (radeon_get_ib_value(p, idx + 2) & (1 << 31)) {
+ 					/* tiled src, linear dst */
 -					src_offset = ib[idx+1];
++					src_offset = radeon_get_ib_value(p, idx+1);
+ 					src_offset <<= 8;
+ 					ib[idx+1] += (u32)(src_reloc->lobj.gpu_offset >> 8);
+ 
+ 					dst_offset = radeon_get_ib_value(p, idx + 7);
 -					dst_offset |= ((u64)(ib[idx+8] & 0xff)) << 32;
++					dst_offset |= ((u64)(radeon_get_ib_value(p, idx+8) & 0xff)) << 32;
+ 					ib[idx+7] += (u32)(dst_reloc->lobj.gpu_offset & 0xfffffffc);
+ 					ib[idx+8] += upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
  				} else {
- 					switch (misc) {
- 					case 0:
- 						/* detile bit */
- 						if (idx_value & (1 << 31)) {
- 							/* tiled src, linear dst */
- 							src_offset = radeon_get_ib_value(p, idx+1);
- 							src_offset <<= 8;
- 							ib[idx+1] += (u32)(src_reloc->lobj.gpu_offset >> 8);
- 
- 							dst_offset = radeon_get_ib_value(p, idx+7);
- 							dst_offset |= ((u64)(radeon_get_ib_value(p, idx+8) & 0xff)) << 32;
- 							ib[idx+7] += (u32)(dst_reloc->lobj.gpu_offset & 0xfffffffc);
- 							ib[idx+8] += upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
- 						} else {
- 							/* linear src, tiled dst */
- 							src_offset = radeon_get_ib_value(p, idx+7);
- 							src_offset |= ((u64)(radeon_get_ib_value(p, idx+8) & 0xff)) << 32;
- 							ib[idx+7] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
- 							ib[idx+8] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
- 
- 							dst_offset = radeon_get_ib_value(p, idx+1);
- 							dst_offset <<= 8;
- 							ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset >> 8);
- 						}
- 						if ((src_offset + (count * 4)) > radeon_bo_size(src_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2T, broadcast src buffer too small (%llu %lu)\n",
- 								 src_offset + (count * 4), radeon_bo_size(src_reloc->robj));
- 							return -EINVAL;
- 						}
- 						if ((dst_offset + (count * 4)) > radeon_bo_size(dst_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2T, broadcast dst buffer too small (%llu %lu)\n",
- 								 dst_offset + (count * 4), radeon_bo_size(dst_reloc->robj));
- 							return -EINVAL;
- 						}
- 						p->idx += 9;
- 						break;
- 					default:
- 						DRM_ERROR("bad DMA_PACKET_COPY misc %u\n", misc);
- 						return -EINVAL;
- 					}
+ 					/* linear src, tiled dst */
 -					src_offset = ib[idx+7];
 -					src_offset |= ((u64)(ib[idx+8] & 0xff)) << 32;
++					src_offset = radeon_get_ib_value(p, idx+7);
++					src_offset |= ((u64)(radeon_get_ib_value(p, idx+8) & 0xff)) << 32;
+ 					ib[idx+7] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
+ 					ib[idx+8] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
+ 
 -					dst_offset = ib[idx+1];
++					dst_offset = radeon_get_ib_value(p, idx+1);
+ 					dst_offset <<= 8;
+ 					ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset >> 8);
  				}
- 			} else {
- 				if (new_cmd) {
- 					switch (misc) {
- 					case 0:
- 						/* L2L, byte */
- 						src_offset = radeon_get_ib_value(p, idx+2);
- 						src_offset |= ((u64)(radeon_get_ib_value(p, idx+4) & 0xff)) << 32;
- 						dst_offset = radeon_get_ib_value(p, idx+1);
- 						dst_offset |= ((u64)(radeon_get_ib_value(p, idx+3) & 0xff)) << 32;
- 						if ((src_offset + count) > radeon_bo_size(src_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2L, byte src buffer too small (%llu %lu)\n",
- 								 src_offset + count, radeon_bo_size(src_reloc->robj));
- 							return -EINVAL;
- 						}
- 						if ((dst_offset + count) > radeon_bo_size(dst_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2L, byte dst buffer too small (%llu %lu)\n",
- 								 dst_offset + count, radeon_bo_size(dst_reloc->robj));
- 							return -EINVAL;
- 						}
- 						ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset & 0xffffffff);
- 						ib[idx+2] += (u32)(src_reloc->lobj.gpu_offset & 0xffffffff);
- 						ib[idx+3] += upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
- 						ib[idx+4] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
- 						p->idx += 5;
- 						break;
- 					case 1:
- 						/* L2L, partial */
- 						if (p->family < CHIP_CAYMAN) {
- 							DRM_ERROR("L2L Partial is cayman only !\n");
- 							return -EINVAL;
- 						}
- 						ib[idx+1] += (u32)(src_reloc->lobj.gpu_offset & 0xffffffff);
- 						ib[idx+2] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
- 						ib[idx+4] += (u32)(dst_reloc->lobj.gpu_offset & 0xffffffff);
- 						ib[idx+5] += upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
- 
- 						p->idx += 9;
- 						break;
- 					case 4:
- 						/* L2L, dw, broadcast */
- 						r = r600_dma_cs_next_reloc(p, &dst2_reloc);
- 						if (r) {
- 							DRM_ERROR("bad L2L, dw, broadcast DMA_PACKET_COPY\n");
- 							return -EINVAL;
- 						}
- 						dst_offset = radeon_get_ib_value(p, idx+1);
- 						dst_offset |= ((u64)(radeon_get_ib_value(p, idx+4) & 0xff)) << 32;
- 						dst2_offset = radeon_get_ib_value(p, idx+2);
- 						dst2_offset |= ((u64)(radeon_get_ib_value(p, idx+5) & 0xff)) << 32;
- 						src_offset = radeon_get_ib_value(p, idx+3);
- 						src_offset |= ((u64)(radeon_get_ib_value(p, idx+6) & 0xff)) << 32;
- 						if ((src_offset + (count * 4)) > radeon_bo_size(src_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2L, dw, broadcast src buffer too small (%llu %lu)\n",
- 								 src_offset + (count * 4), radeon_bo_size(src_reloc->robj));
- 							return -EINVAL;
- 						}
- 						if ((dst_offset + (count * 4)) > radeon_bo_size(dst_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2L, dw, broadcast dst buffer too small (%llu %lu)\n",
- 								 dst_offset + (count * 4), radeon_bo_size(dst_reloc->robj));
- 							return -EINVAL;
- 						}
- 						if ((dst2_offset + (count * 4)) > radeon_bo_size(dst2_reloc->robj)) {
- 							dev_warn(p->dev, "DMA L2L, dw, broadcast dst2 buffer too small (%llu %lu)\n",
- 								 dst2_offset + (count * 4), radeon_bo_size(dst2_reloc->robj));
- 							return -EINVAL;
- 						}
- 						ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset & 0xfffffffc);
- 						ib[idx+2] += (u32)(dst2_reloc->lobj.gpu_offset & 0xfffffffc);
- 						ib[idx+3] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
- 						ib[idx+4] += upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
- 						ib[idx+5] += upper_32_bits(dst2_reloc->lobj.gpu_offset) & 0xff;
- 						ib[idx+6] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
- 						p->idx += 7;
- 						break;
- 					default:
- 						DRM_ERROR("bad DMA_PACKET_COPY misc %u\n", misc);
- 						return -EINVAL;
- 					}
+ 				if ((src_offset + (count * 4)) > radeon_bo_size(src_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2T, src buffer too small (%llu %lu)\n",
+ 							src_offset + (count * 4), radeon_bo_size(src_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				if ((dst_offset + (count * 4)) > radeon_bo_size(dst_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2T, dst buffer too small (%llu %lu)\n",
+ 							dst_offset + (count * 4), radeon_bo_size(dst_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				p->idx += 9;
+ 				break;
+ 			/* Copy L2L, byte aligned */
+ 			case 0x40:
+ 				/* L2L, byte */
 -				src_offset = ib[idx+2];
 -				src_offset |= ((u64)(ib[idx+4] & 0xff)) << 32;
 -				dst_offset = ib[idx+1];
 -				dst_offset |= ((u64)(ib[idx+3] & 0xff)) << 32;
++				src_offset = radeon_get_ib_value(p, idx+2);
++				src_offset |= ((u64)(radeon_get_ib_value(p, idx+4) & 0xff)) << 32;
++				dst_offset = radeon_get_ib_value(p, idx+1);
++				dst_offset |= ((u64)(radeon_get_ib_value(p, idx+3) & 0xff)) << 32;
+ 				if ((src_offset + count) > radeon_bo_size(src_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2L, byte src buffer too small (%llu %lu)\n",
+ 							src_offset + count, radeon_bo_size(src_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				if ((dst_offset + count) > radeon_bo_size(dst_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2L, byte dst buffer too small (%llu %lu)\n",
+ 							dst_offset + count, radeon_bo_size(dst_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset & 0xffffffff);
+ 				ib[idx+2] += (u32)(src_reloc->lobj.gpu_offset & 0xffffffff);
+ 				ib[idx+3] += upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
+ 				ib[idx+4] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
+ 				p->idx += 5;
+ 				break;
+ 			/* Copy L2L, partial */
+ 			case 0x41:
+ 				/* L2L, partial */
+ 				if (p->family < CHIP_CAYMAN) {
+ 					DRM_ERROR("L2L Partial is cayman only !\n");
+ 					return -EINVAL;
+ 				}
+ 				ib[idx+1] += (u32)(src_reloc->lobj.gpu_offset & 0xffffffff);
+ 				ib[idx+2] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
+ 				ib[idx+4] += (u32)(dst_reloc->lobj.gpu_offset & 0xffffffff);
+ 				ib[idx+5] += upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
+ 
+ 				p->idx += 9;
+ 				break;
+ 			/* Copy L2L, DW aligned, broadcast */
+ 			case 0x44:
+ 				/* L2L, dw, broadcast */
+ 				r = r600_dma_cs_next_reloc(p, &dst2_reloc);
+ 				if (r) {
+ 					DRM_ERROR("bad L2L, dw, broadcast DMA_PACKET_COPY\n");
+ 					return -EINVAL;
+ 				}
 -				dst_offset = ib[idx+1];
 -				dst_offset |= ((u64)(ib[idx+4] & 0xff)) << 32;
 -				dst2_offset = ib[idx+2];
 -				dst2_offset |= ((u64)(ib[idx+5] & 0xff)) << 32;
 -				src_offset = ib[idx+3];
 -				src_offset |= ((u64)(ib[idx+6] & 0xff)) << 32;
++				dst_offset = radeon_get_ib_value(p, idx+1);
++				dst_offset |= ((u64)(radeon_get_ib_value(p, idx+4) & 0xff)) << 32;
++				dst2_offset = radeon_get_ib_value(p, idx+2);
++				dst2_offset |= ((u64)(radeon_get_ib_value(p, idx+5) & 0xff)) << 32;
++				src_offset = radeon_get_ib_value(p, idx+3);
++				src_offset |= ((u64)(radeon_get_ib_value(p, idx+6) & 0xff)) << 32;
+ 				if ((src_offset + (count * 4)) > radeon_bo_size(src_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2L, dw, broadcast src buffer too small (%llu %lu)\n",
+ 							src_offset + (count * 4), radeon_bo_size(src_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				if ((dst_offset + (count * 4)) > radeon_bo_size(dst_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2L, dw, broadcast dst buffer too small (%llu %lu)\n",
+ 							dst_offset + (count * 4), radeon_bo_size(dst_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				if ((dst2_offset + (count * 4)) > radeon_bo_size(dst2_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2L, dw, broadcast dst2 buffer too small (%llu %lu)\n",
+ 							dst2_offset + (count * 4), radeon_bo_size(dst2_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset & 0xfffffffc);
+ 				ib[idx+2] += (u32)(dst2_reloc->lobj.gpu_offset & 0xfffffffc);
+ 				ib[idx+3] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
+ 				ib[idx+4] += upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
+ 				ib[idx+5] += upper_32_bits(dst2_reloc->lobj.gpu_offset) & 0xff;
+ 				ib[idx+6] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
+ 				p->idx += 7;
+ 				break;
+ 			/* Copy L2T Frame to Field */
+ 			case 0x48:
 -				if (ib[idx + 2] & (1 << 31)) {
++				if (radeon_get_ib_value(p, idx + 2) & (1 << 31)) {
+ 					DRM_ERROR("bad L2T, frame to fields DMA_PACKET_COPY\n");
+ 					return -EINVAL;
+ 				}
+ 				r = r600_dma_cs_next_reloc(p, &dst2_reloc);
+ 				if (r) {
+ 					DRM_ERROR("bad L2T, frame to fields DMA_PACKET_COPY\n");
+ 					return -EINVAL;
+ 				}
 -				dst_offset = ib[idx+1];
++				dst_offset = radeon_get_ib_value(p, idx+1);
+ 				dst_offset <<= 8;
 -				dst2_offset = ib[idx+2];
++				dst2_offset = radeon_get_ib_value(p, idx+2);
+ 				dst2_offset <<= 8;
 -				src_offset = ib[idx+8];
 -				src_offset |= ((u64)(ib[idx+9] & 0xff)) << 32;
++				src_offset = radeon_get_ib_value(p, idx+8);
++				src_offset |= ((u64)(radeon_get_ib_value(p, idx+9) & 0xff)) << 32;
+ 				if ((src_offset + (count * 4)) > radeon_bo_size(src_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2T, frame to fields src buffer too small (%llu %lu)\n",
+ 							src_offset + (count * 4), radeon_bo_size(src_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				if ((dst_offset + (count * 4)) > radeon_bo_size(dst_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2T, frame to fields buffer too small (%llu %lu)\n",
+ 							dst_offset + (count * 4), radeon_bo_size(dst_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				if ((dst2_offset + (count * 4)) > radeon_bo_size(dst2_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2T, frame to fields buffer too small (%llu %lu)\n",
+ 							dst2_offset + (count * 4), radeon_bo_size(dst2_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset >> 8);
+ 				ib[idx+2] += (u32)(dst2_reloc->lobj.gpu_offset >> 8);
+ 				ib[idx+8] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
+ 				ib[idx+9] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
+ 				p->idx += 10;
+ 				break;
+ 			/* Copy L2T/T2L, partial */
+ 			case 0x49:
+ 				/* L2T, T2L partial */
+ 				if (p->family < CHIP_CAYMAN) {
+ 					DRM_ERROR("L2T, T2L Partial is cayman only !\n");
+ 					return -EINVAL;
+ 				}
+ 				/* detile bit */
 -				if (ib[idx + 2 ] & (1 << 31)) {
++				if (radeon_get_ib_value(p, idx + 2 ) & (1 << 31)) {
+ 					/* tiled src, linear dst */
+ 					ib[idx+1] += (u32)(src_reloc->lobj.gpu_offset >> 8);
+ 
+ 					ib[idx+7] += (u32)(dst_reloc->lobj.gpu_offset & 0xfffffffc);
+ 					ib[idx+8] += upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
+ 				} else {
+ 					/* linear src, tiled dst */
+ 					ib[idx+7] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
+ 					ib[idx+8] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
+ 
+ 					ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset >> 8);
+ 				}
+ 				p->idx += 12;
+ 				break;
+ 			/* Copy L2T broadcast */
+ 			case 0x4b:
+ 				/* L2T, broadcast */
 -				if (ib[idx + 2] & (1 << 31)) {
++				if (radeon_get_ib_value(p, idx + 2) & (1 << 31)) {
+ 					DRM_ERROR("bad L2T, broadcast DMA_PACKET_COPY\n");
+ 					return -EINVAL;
+ 				}
+ 				r = r600_dma_cs_next_reloc(p, &dst2_reloc);
+ 				if (r) {
+ 					DRM_ERROR("bad L2T, broadcast DMA_PACKET_COPY\n");
+ 					return -EINVAL;
+ 				}
 -				dst_offset = ib[idx+1];
++				dst_offset = radeon_get_ib_value(p, idx+1);
+ 				dst_offset <<= 8;
 -				dst2_offset = ib[idx+2];
++				dst2_offset = radeon_get_ib_value(p, idx+2);
+ 				dst2_offset <<= 8;
 -				src_offset = ib[idx+8];
 -				src_offset |= ((u64)(ib[idx+9] & 0xff)) << 32;
++				src_offset = radeon_get_ib_value(p, idx+8);
++				src_offset |= ((u64)(radeon_get_ib_value(p, idx+9) & 0xff)) << 32;
+ 				if ((src_offset + (count * 4)) > radeon_bo_size(src_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2T, broadcast src buffer too small (%llu %lu)\n",
+ 							src_offset + (count * 4), radeon_bo_size(src_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				if ((dst_offset + (count * 4)) > radeon_bo_size(dst_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2T, broadcast dst buffer too small (%llu %lu)\n",
+ 							dst_offset + (count * 4), radeon_bo_size(dst_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				if ((dst2_offset + (count * 4)) > radeon_bo_size(dst2_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2T, broadcast dst2 buffer too small (%llu %lu)\n",
+ 							dst2_offset + (count * 4), radeon_bo_size(dst2_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset >> 8);
+ 				ib[idx+2] += (u32)(dst2_reloc->lobj.gpu_offset >> 8);
+ 				ib[idx+8] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
+ 				ib[idx+9] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
+ 				p->idx += 10;
+ 				break;
+ 			/* Copy L2T/T2L (tile units) */
+ 			case 0x4c:
+ 				/* L2T, T2L */
+ 				/* detile bit */
 -				if (ib[idx + 2] & (1 << 31)) {
++				if (radeon_get_ib_value(p, idx + 2) & (1 << 31)) {
+ 					/* tiled src, linear dst */
 -					src_offset = ib[idx+1];
++					src_offset = radeon_get_ib_value(p, idx+1);
+ 					src_offset <<= 8;
+ 					ib[idx+1] += (u32)(src_reloc->lobj.gpu_offset >> 8);
+ 
 -					dst_offset = ib[idx+7];
 -					dst_offset |= ((u64)(ib[idx+8] & 0xff)) << 32;
++					dst_offset = radeon_get_ib_value(p, idx+7);
++					dst_offset |= ((u64)(radeon_get_ib_value(p, idx+8) & 0xff)) << 32;
+ 					ib[idx+7] += (u32)(dst_reloc->lobj.gpu_offset & 0xfffffffc);
+ 					ib[idx+8] += upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
  				} else {
- 					/* L2L, dw */
- 					src_offset = radeon_get_ib_value(p, idx+2);
- 					src_offset |= ((u64)(radeon_get_ib_value(p, idx+4) & 0xff)) << 32;
+ 					/* linear src, tiled dst */
 -					src_offset = ib[idx+7];
 -					src_offset |= ((u64)(ib[idx+8] & 0xff)) << 32;
++					src_offset = radeon_get_ib_value(p, idx+7);
++					src_offset |= ((u64)(radeon_get_ib_value(p, idx+8) & 0xff)) << 32;
+ 					ib[idx+7] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
+ 					ib[idx+8] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
+ 
 -					dst_offset = ib[idx+1];
 +					dst_offset = radeon_get_ib_value(p, idx+1);
- 					dst_offset |= ((u64)(radeon_get_ib_value(p, idx+3) & 0xff)) << 32;
- 					if ((src_offset + (count * 4)) > radeon_bo_size(src_reloc->robj)) {
- 						dev_warn(p->dev, "DMA L2L, dw src buffer too small (%llu %lu)\n",
- 							 src_offset + (count * 4), radeon_bo_size(src_reloc->robj));
- 						return -EINVAL;
- 					}
- 					if ((dst_offset + (count * 4)) > radeon_bo_size(dst_reloc->robj)) {
- 						dev_warn(p->dev, "DMA L2L, dw dst buffer too small (%llu %lu)\n",
- 							 dst_offset + (count * 4), radeon_bo_size(dst_reloc->robj));
- 						return -EINVAL;
- 					}
- 					ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset & 0xfffffffc);
- 					ib[idx+2] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
- 					ib[idx+3] += upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
- 					ib[idx+4] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
- 					p->idx += 5;
+ 					dst_offset <<= 8;
+ 					ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset >> 8);
  				}
+ 				if ((src_offset + (count * 4)) > radeon_bo_size(src_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2T, T2L src buffer too small (%llu %lu)\n",
+ 							src_offset + (count * 4), radeon_bo_size(src_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				if ((dst_offset + (count * 4)) > radeon_bo_size(dst_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2T, T2L dst buffer too small (%llu %lu)\n",
+ 							dst_offset + (count * 4), radeon_bo_size(dst_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				p->idx += 9;
+ 				break;
+ 			/* Copy T2T, partial (tile units) */
+ 			case 0x4d:
+ 				/* T2T partial */
+ 				if (p->family < CHIP_CAYMAN) {
+ 					DRM_ERROR("L2T, T2L Partial is cayman only !\n");
+ 					return -EINVAL;
+ 				}
+ 				ib[idx+1] += (u32)(src_reloc->lobj.gpu_offset >> 8);
+ 				ib[idx+4] += (u32)(dst_reloc->lobj.gpu_offset >> 8);
+ 				p->idx += 13;
+ 				break;
+ 			/* Copy L2T broadcast (tile units) */
+ 			case 0x4f:
+ 				/* L2T, broadcast */
 -				if (ib[idx + 2] & (1 << 31)) {
++				if (radeon_get_ib_value(p, idx + 2) & (1 << 31)) {
+ 					DRM_ERROR("bad L2T, broadcast DMA_PACKET_COPY\n");
+ 					return -EINVAL;
+ 				}
+ 				r = r600_dma_cs_next_reloc(p, &dst2_reloc);
+ 				if (r) {
+ 					DRM_ERROR("bad L2T, broadcast DMA_PACKET_COPY\n");
+ 					return -EINVAL;
+ 				}
 -				dst_offset = ib[idx+1];
++				dst_offset = radeon_get_ib_value(p, idx+1);
+ 				dst_offset <<= 8;
 -				dst2_offset = ib[idx+2];
++				dst2_offset = radeon_get_ib_value(p, idx+2);
+ 				dst2_offset <<= 8;
 -				src_offset = ib[idx+8];
 -				src_offset |= ((u64)(ib[idx+9] & 0xff)) << 32;
++				src_offset = radeon_get_ib_value(p, idx+8);
++				src_offset |= ((u64)(radeon_get_ib_value(p, idx+9) & 0xff)) << 32;
+ 				if ((src_offset + (count * 4)) > radeon_bo_size(src_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2T, broadcast src buffer too small (%llu %lu)\n",
+ 							src_offset + (count * 4), radeon_bo_size(src_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				if ((dst_offset + (count * 4)) > radeon_bo_size(dst_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2T, broadcast dst buffer too small (%llu %lu)\n",
+ 							dst_offset + (count * 4), radeon_bo_size(dst_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				if ((dst2_offset + (count * 4)) > radeon_bo_size(dst2_reloc->robj)) {
+ 					dev_warn(p->dev, "DMA L2T, broadcast dst2 buffer too small (%llu %lu)\n",
+ 							dst2_offset + (count * 4), radeon_bo_size(dst2_reloc->robj));
+ 					return -EINVAL;
+ 				}
+ 				ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset >> 8);
+ 				ib[idx+2] += (u32)(dst2_reloc->lobj.gpu_offset >> 8);
+ 				ib[idx+8] += (u32)(src_reloc->lobj.gpu_offset & 0xfffffffc);
+ 				ib[idx+9] += upper_32_bits(src_reloc->lobj.gpu_offset) & 0xff;
+ 				p->idx += 10;
+ 				break;
+ 			default:
 -				DRM_ERROR("bad DMA_PACKET_COPY [%6d] 0x%08x invalid sub cmd\n", idx, ib[idx+0]);
++				DRM_ERROR("bad DMA_PACKET_COPY [%6d] 0x%08x invalid sub cmd\n", idx, radeon_get_ib_value(p, idx+0));
+ 				return -EINVAL;
  			}
  			break;
  		case DMA_PACKET_CONSTANT_FILL:

[-- Attachment #2: Type: application/pgp-signature, Size: 836 bytes --]
