* drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:483:35: sparse: sparse: incorrect type in assignment (different base types)
@ 2020-09-06  8:52 kernel test robot
  2020-09-07  6:24 ` [PATCH] crypto: sun4i-ss - Fix SHA1 hash on A33-variant with BE CPU Herbert Xu
  0 siblings, 1 reply; 14+ messages in thread
From: kernel test robot @ 2020-09-06  8:52 UTC (permalink / raw)
  To: Corentin Labbe; +Cc: kbuild-all, linux-kernel, Herbert Xu

[-- Attachment #1: Type: text/plain, Size: 15747 bytes --]

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
head:   dd9fb9bb3340c791a2be106fdc895db75f177343
commit: 1e02e6fbdadb3a0cb56294ff37eeeb8109e1f493 crypto: sun4i-ss - add the A33 variant of SS
date:   9 months ago
config: arm64-randconfig-s031-20200906 (attached as .config)
compiler: aarch64-linux-gcc (GCC) 9.3.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # apt-get install sparse
        # sparse version: v0.6.2-191-g10164920-dirty
        git checkout 1e02e6fbdadb3a0cb56294ff37eeeb8109e1f493
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=arm64 

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)

   drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:412:28: sparse: sparse: invalid assignment: &=
   drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:412:28: sparse:    left side has type restricted __le32
   drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:412:28: sparse:    right side has type unsigned long
   drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:419:12: sparse: sparse: invalid assignment: |=
   drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:419:12: sparse:    left side has type restricted __le32
   drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:419:12: sparse:    right side has type int
>> drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:483:35: sparse: sparse: incorrect type in assignment (different base types) @@     expected unsigned int [assigned] [usertype] v @@     got restricted __le32 [usertype] @@
   drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:483:35: sparse:     expected unsigned int [assigned] [usertype] v
   drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:483:35: sparse:     got restricted __le32 [usertype]
   drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:485:35: sparse: sparse: incorrect type in assignment (different base types) @@     expected unsigned int [assigned] [usertype] v @@     got restricted __be32 [usertype] @@
   drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:485:35: sparse:     expected unsigned int [assigned] [usertype] v
   drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:485:35: sparse:     got restricted __be32 [usertype]
   drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:490:27: sparse: sparse: incorrect type in assignment (different base types) @@     expected unsigned int [addressable] [assigned] [usertype] v @@     got restricted __le32 [usertype] @@
   drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:490:27: sparse:     expected unsigned int [addressable] [assigned] [usertype] v
   drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:490:27: sparse:     got restricted __le32 [usertype]
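
For context, the warnings above at lines 483, 485 and 490 come from assigning the
restricted __le32/__be32 values returned by cpu_to_le32()/cpu_to_be32() to the plain
"unsigned int v" before it is memcpy'd into areq->result. Below is a minimal sketch of
one way to keep the endianness annotations consistent in that digest read-out, using
the kernel's put_unaligned_le32()/put_unaligned_be32() helpers from <asm/unaligned.h>
instead of a plain temporary. This is an illustration only, not the patch that was
eventually merged in this thread:

    /* Hedged sketch only: write each digest word to areq->result in the
     * requested byte order, without going through a plain u32 temporary. */
    if (op->mode == SS_OP_SHA1) {
            for (i = 0; i < 5; i++) {
                    u32 w = readl(ss->base + SS_MD0 + i * 4);

                    if (ss->variant->sha1_in_be)
                            put_unaligned_le32(w, areq->result + i * 4);
                    else
                            put_unaligned_be32(w, areq->result + i * 4);
            }
    } else {
            for (i = 0; i < 4; i++)
                    put_unaligned_le32(readl(ss->base + SS_MD0 + i * 4),
                                       areq->result + i * 4);
    }

put_unaligned_le32(val, p) stores val at p as little-endian bytes, which is byte for
byte what the cpu_to_le32() + memcpy() pair produced, so sparse only ever sees plain
u32 values here.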

# https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1e02e6fbdadb3a0cb56294ff37eeeb8109e1f493
git remote add linus https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
git fetch --no-tags linus master
git checkout 1e02e6fbdadb3a0cb56294ff37eeeb8109e1f493
vim +483 drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c

   148	
   149	/*
   150	 * sun4i_hash_update: update hash engine
   151	 *
   152	 * Could be used for both SHA1 and MD5
   153	 * Write data in steps of 32 bits and put them in the SS.
   154	 *
   155	 * Since we cannot leave partial data and hash state in the engine,
   156	 * we need to get the hash state at the end of this function.
   157	 * We can get the hash state every 64 bytes
   158	 *
   159	 * So the first task is to get the number of bytes to write to the SS modulo 64
   160	 * The extra bytes will go to a temporary buffer op->buf storing op->len bytes
   161	 *
   162	 * So at the beginning of update()
   163	 * if op->len + areq->nbytes < 64
   164	 * => all data will be written to the wait buffer (op->buf) and end=0
   165	 * if not, write all data from op->buf to the device and set end so that
   166	 * we complete a multiple of 64 bytes
   167	 *
   168	 * example 1:
   169	 * update1 with 60 bytes => op->len=60
   170	 * update2 with 60 bytes => one more word is needed to reach 64 bytes
   171	 * end=4
   172	 * so write all data from op->buf plus one word from the SGs,
   173	 * then store the remaining data in op->buf
   174	 * final state: op->len=56
   175	 */
   176	static int sun4i_hash(struct ahash_request *areq)
   177	{
   178		/*
   179		 * i is the total bytes read from SGs, to be compared to areq->nbytes
   180		 * i is important because we cannot rely on SG length since the sum of
   181		 * SG->length could be greater than areq->nbytes
   182		 *
   183	 * end is the position where we need to stop writing to the device,
   184		 * to be compared to i
   185		 *
   186	 * in_i: offset within the current SG
   187		 */
   188		unsigned int i = 0, end, fill, min_fill, nwait, nbw = 0, j = 0, todo;
   189		unsigned int in_i = 0;
   190		u32 spaces, rx_cnt = SS_RX_DEFAULT, bf[32] = {0}, v, ivmode = 0;
   191		struct sun4i_req_ctx *op = ahash_request_ctx(areq);
   192		struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
   193		struct sun4i_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
   194		struct sun4i_ss_ctx *ss = tfmctx->ss;
   195		struct scatterlist *in_sg = areq->src;
   196		struct sg_mapping_iter mi;
   197		int in_r, err = 0;
   198		size_t copied = 0;
   199		__le32 wb = 0;
   200	
   201		dev_dbg(ss->dev, "%s %s bc=%llu len=%u mode=%x wl=%u h0=%0x",
   202			__func__, crypto_tfm_alg_name(areq->base.tfm),
   203			op->byte_count, areq->nbytes, op->mode,
   204			op->len, op->hash[0]);
   205	
   206		if (unlikely(!areq->nbytes) && !(op->flags & SS_HASH_FINAL))
   207			return 0;
   208	
   209		/* protect against overflow */
   210		if (unlikely(areq->nbytes > UINT_MAX - op->len)) {
   211			dev_err(ss->dev, "Cannot process too large request\n");
   212			return -EINVAL;
   213		}
   214	
   215		if (op->len + areq->nbytes < 64 && !(op->flags & SS_HASH_FINAL)) {
   216			/* linearize data to op->buf */
   217			copied = sg_pcopy_to_buffer(areq->src, sg_nents(areq->src),
   218						    op->buf + op->len, areq->nbytes, 0);
   219			op->len += copied;
   220			return 0;
   221		}
   222	
   223		spin_lock_bh(&ss->slock);
   224	
   225		/*
   226		 * if some data have been processed before,
   227		 * we need to restore the partial hash state
   228		 */
   229		if (op->byte_count) {
   230			ivmode = SS_IV_ARBITRARY;
   231			for (i = 0; i < crypto_ahash_digestsize(tfm) / 4; i++)
   232				writel(op->hash[i], ss->base + SS_IV0 + i * 4);
   233		}
   234		/* Enable the device */
   235		writel(op->mode | SS_ENABLED | ivmode, ss->base + SS_CTL);
   236	
   237		if (!(op->flags & SS_HASH_UPDATE))
   238			goto hash_final;
   239	
   240		/* start of handling data */
   241		if (!(op->flags & SS_HASH_FINAL)) {
   242			end = ((areq->nbytes + op->len) / 64) * 64 - op->len;
   243	
   244			if (end > areq->nbytes || areq->nbytes - end > 63) {
   245				dev_err(ss->dev, "ERROR: Bound error %u %u\n",
   246					end, areq->nbytes);
   247				err = -EINVAL;
   248				goto release_ss;
   249			}
   250		} else {
   251			/* Since we have the flag final, we can go up to modulo 4 */
   252			if (areq->nbytes < 4)
   253				end = 0;
   254			else
   255				end = ((areq->nbytes + op->len) / 4) * 4 - op->len;
   256		}
   257	
   258		/* TODO if SGlen % 4 and !op->len then DMA */
   259		i = 1;
   260		while (in_sg && i == 1) {
   261			if (in_sg->length % 4)
   262				i = 0;
   263			in_sg = sg_next(in_sg);
   264		}
   265		if (i == 1 && !op->len && areq->nbytes)
   266			dev_dbg(ss->dev, "We can DMA\n");
   267	
   268		i = 0;
   269		sg_miter_start(&mi, areq->src, sg_nents(areq->src),
   270			       SG_MITER_FROM_SG | SG_MITER_ATOMIC);
   271		sg_miter_next(&mi);
   272		in_i = 0;
   273	
   274		do {
   275			/*
   276		 * we need to linearize in two cases:
   277		 * - the buffer is already used
   278		 * - the SG does not have enough bytes remaining (< 4)
   279			 */
   280			if (op->len || (mi.length - in_i) < 4) {
   281				/*
   282			 * if we have entered here we have two reasons to stop:
   283			 * - the buffer is full
   284			 * - we have reached the end
   285				 */
   286				while (op->len < 64 && i < end) {
   287					/* how many bytes we can read from current SG */
   288					in_r = min(end - i, 64 - op->len);
   289					in_r = min_t(size_t, mi.length - in_i, in_r);
   290					memcpy(op->buf + op->len, mi.addr + in_i, in_r);
   291					op->len += in_r;
   292					i += in_r;
   293					in_i += in_r;
   294					if (in_i == mi.length) {
   295						sg_miter_next(&mi);
   296						in_i = 0;
   297					}
   298				}
   299				if (op->len > 3 && !(op->len % 4)) {
   300					/* write buf to the device */
   301					writesl(ss->base + SS_RXFIFO, op->buf,
   302						op->len / 4);
   303					op->byte_count += op->len;
   304					op->len = 0;
   305				}
   306			}
   307			if (mi.length - in_i > 3 && i < end) {
   308				/* how many bytes we can read from current SG */
   309				in_r = min_t(size_t, mi.length - in_i, areq->nbytes - i);
   310				in_r = min_t(size_t, ((mi.length - in_i) / 4) * 4, in_r);
   311			/* how many bytes we can write to the device */
   312				todo = min3((u32)(end - i) / 4, rx_cnt, (u32)in_r / 4);
   313				writesl(ss->base + SS_RXFIFO, mi.addr + in_i, todo);
   314				op->byte_count += todo * 4;
   315				i += todo * 4;
   316				in_i += todo * 4;
   317				rx_cnt -= todo;
   318				if (!rx_cnt) {
   319					spaces = readl(ss->base + SS_FCSR);
   320					rx_cnt = SS_RXFIFO_SPACES(spaces);
   321				}
   322				if (in_i == mi.length) {
   323					sg_miter_next(&mi);
   324					in_i = 0;
   325				}
   326			}
   327		} while (i < end);
   328	
   329		/*
   330		 * Now we have written to the device all that we can,
   331		 * store the remaining bytes in op->buf
   332		 */
   333		if ((areq->nbytes - i) < 64) {
   334			while (i < areq->nbytes && in_i < mi.length && op->len < 64) {
   335				/* how many bytes we can read from current SG */
   336				in_r = min(areq->nbytes - i, 64 - op->len);
   337				in_r = min_t(size_t, mi.length - in_i, in_r);
   338				memcpy(op->buf + op->len, mi.addr + in_i, in_r);
   339				op->len += in_r;
   340				i += in_r;
   341				in_i += in_r;
   342				if (in_i == mi.length) {
   343					sg_miter_next(&mi);
   344					in_i = 0;
   345				}
   346			}
   347		}
   348	
   349		sg_miter_stop(&mi);
   350	
   351		/*
   352	 * End of data processing
   353	 * Now, if the final flag is set, go to the finalize part.
   354	 * If not, store the partial hash.
   355		 */
   356		if (op->flags & SS_HASH_FINAL)
   357			goto hash_final;
   358	
   359		writel(op->mode | SS_ENABLED | SS_DATA_END, ss->base + SS_CTL);
   360		i = 0;
   361		do {
   362			v = readl(ss->base + SS_CTL);
   363			i++;
   364		} while (i < SS_TIMEOUT && (v & SS_DATA_END));
   365		if (unlikely(i >= SS_TIMEOUT)) {
   366			dev_err_ratelimited(ss->dev,
   367					    "ERROR: hash end timeout %d>%d ctl=%x len=%u\n",
   368					    i, SS_TIMEOUT, v, areq->nbytes);
   369			err = -EIO;
   370			goto release_ss;
   371		}
   372	
   373		/*
   374		 * The datasheet isn't very clear about when to retrieve the digest. The
   375		 * bit SS_DATA_END is cleared when the engine has processed the data and
   376		 * when the digest is computed *but* it doesn't mean the digest is
   377		 * available in the digest registers. Hence the delay to be sure we can
   378		 * read it.
   379		 */
   380		ndelay(1);
   381	
   382		for (i = 0; i < crypto_ahash_digestsize(tfm) / 4; i++)
   383			op->hash[i] = readl(ss->base + SS_MD0 + i * 4);
   384	
   385		goto release_ss;
   386	
   387	/*
   388	 * hash_final: finalize hashing operation
   389	 *
   390	 * If we have some remaining bytes, we write them.
   391	 * Then ask the SS to finalize the hashing operation.
   392	 *
   393	 * The RX FIFO size is not checked in this function since it holds 32 words
   394	 * after each enabling and this function never writes more than 32 words.
   395	 * If we come from the update part, we cannot have more than
   396	 * 3 remaining bytes to write and the SS is fast enough not to care about it.
   397	 */
   398	
   399	hash_final:
   400	
   401		/* write the remaining words of the wait buffer */
   402		if (op->len) {
   403			nwait = op->len / 4;
   404			if (nwait) {
   405				writesl(ss->base + SS_RXFIFO, op->buf, nwait);
   406				op->byte_count += 4 * nwait;
   407			}
   408	
   409			nbw = op->len - 4 * nwait;
   410			if (nbw) {
   411				wb = cpu_to_le32(*(u32 *)(op->buf + nwait * 4));
   412				wb &= GENMASK((nbw * 8) - 1, 0);
   413	
   414				op->byte_count += nbw;
   415			}
   416		}
   417	
   418		/* write the remaining bytes of the nbw buffer */
   419		wb |= ((1 << 7) << (nbw * 8));
   420		bf[j++] = le32_to_cpu(wb);
   421	
   422		/*
   423		 * number of bytes to pad to reach 64 bytes, minus 8 (size) minus 4 (final 1)
   424		 * The operations are taken from other MD5/SHA1 implementations
   425		 */
   426	
   427		/* last block size */
   428		fill = 64 - (op->byte_count % 64);
   429		min_fill = 2 * sizeof(u32) + (nbw ? 0 : sizeof(u32));
   430	
   431		/* if the mandatory padding does not fit, jump to the next 64-byte block */
   432		if (fill < min_fill)
   433			fill += 64;
   434	
   435		j += (fill - min_fill) / sizeof(u32);
   436	
   437		/* write the length of data */
   438		if (op->mode == SS_OP_SHA1) {
   439			__be64 *bits = (__be64 *)&bf[j];
   440			*bits = cpu_to_be64(op->byte_count << 3);
   441			j += 2;
   442		} else {
   443			__le64 *bits = (__le64 *)&bf[j];
   444			*bits = cpu_to_le64(op->byte_count << 3);
   445			j += 2;
   446		}
   447		writesl(ss->base + SS_RXFIFO, bf, j);
   448	
   449		/* Tell the SS to stop the hashing */
   450		writel(op->mode | SS_ENABLED | SS_DATA_END, ss->base + SS_CTL);
   451	
   452		/*
   453		 * Wait for SS to finish the hash.
   454		 * The timeout could happen only in case of bad overclocking
   455		 * or driver bug.
   456		 */
   457		i = 0;
   458		do {
   459			v = readl(ss->base + SS_CTL);
   460			i++;
   461		} while (i < SS_TIMEOUT && (v & SS_DATA_END));
   462		if (unlikely(i >= SS_TIMEOUT)) {
   463			dev_err_ratelimited(ss->dev,
   464					    "ERROR: hash end timeout %d>%d ctl=%x len=%u\n",
   465					    i, SS_TIMEOUT, v, areq->nbytes);
   466			err = -EIO;
   467			goto release_ss;
   468		}
   469	
   470		/*
   471		 * The datasheet isn't very clear about when to retrieve the digest. The
   472		 * bit SS_DATA_END is cleared when the engine has processed the data and
   473		 * when the digest is computed *but* it doesn't mean the digest is
   474		 * available in the digest registers. Hence the delay to be sure we can
   475		 * read it.
   476		 */
   477		ndelay(1);
   478	
   479		/* Get the hash from the device */
   480		if (op->mode == SS_OP_SHA1) {
   481			for (i = 0; i < 5; i++) {
   482				if (ss->variant->sha1_in_be)
 > 483					v = cpu_to_le32(readl(ss->base + SS_MD0 + i * 4));
   484				else
   485					v = cpu_to_be32(readl(ss->base + SS_MD0 + i * 4));
   486				memcpy(areq->result + i * 4, &v, 4);
   487			}
   488		} else {
   489			for (i = 0; i < 4; i++) {
   490				v = cpu_to_le32(readl(ss->base + SS_MD0 + i * 4));
   491				memcpy(areq->result + i * 4, &v, 4);
   492			}
   493		}
   494	
   495	release_ss:
   496		writel(0, ss->base + SS_CTL);
   497		spin_unlock_bh(&ss->slock);
   498		return err;
   499	}
   500	
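
As background on why line 483 is flagged at all: __le32 and __be32 are ordinary 32-bit
integers at run time, but the kernel headers declare them __bitwise (see the typedefs in
include/uapi/linux/types.h), so sparse treats them as "restricted" types that may not be
mixed with plain integers except through the conversion helpers or an explicit __force
cast. A tiny self-contained sketch that triggers the same class of diagnostic when
checked with make C=1 CF=-D__CHECK_ENDIAN__ (the function name is made up for
illustration):

    #include <linux/types.h>
    #include <linux/string.h>
    #include <asm/byteorder.h>

    static void endian_demo(void *dst, u32 host_word)
    {
            u32 v;
            __le32 le;

            /* restricted __le32 assigned to plain u32: same warning as line 483 */
            v = cpu_to_le32(host_word);
            /* plain u32 assigned to restricted __le32: the reverse complaint */
            le = host_word;

            memcpy(dst, &le, sizeof(le));
            memcpy((u8 *)dst + sizeof(le), &v, sizeof(v));
    }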

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 43366 bytes --]


Thread overview: 14+ messages
2020-09-06  8:52 drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c:483:35: sparse: sparse: incorrect type in assignment (different base types) kernel test robot
2020-09-07  6:24 ` [PATCH] crypto: sun4i-ss - Fix SHA1 hash on A33-variant with BE CPU Herbert Xu
2020-09-07 14:55   ` Corentin Labbe
2020-09-07 16:00   ` Corentin Labbe
2020-09-08  5:00     ` [v2 PATCH] crypto: sun4i-ss - Fix sparse endianness markers Herbert Xu
2020-09-10 12:22       ` Corentin Labbe
2020-09-11  4:13         ` Herbert Xu
2020-09-14  7:45           ` Corentin Labbe
2020-09-14 10:40           ` Corentin Labbe
2020-09-24  3:08             ` Herbert Xu
2020-09-24 13:27               ` Corentin Labbe
2020-10-08  5:52                 ` Herbert Xu
2020-10-08  6:36                   ` Corentin Labbe
2020-10-08 23:35                     ` Eric Biggers
