Sender: Arindam Nath
From: Arindam Nath
Subject: [PATCH 2/4] ntb_perf: send command in response to EAGAIN
Date: Wed, 5 Feb 2020 22:46:56 +0530
To: Jon Mason, Dave Jiang, Allen Hubbe, Sanjay R Mehta
Cc: linux-ntb@googlegroups.com, linux-kernel@vger.kernel.org, Arindam Nath

perf_spad_cmd_send() and perf_msg_cmd_send() return -EAGAIN after trying
to send commands for a maximum of MSG_TRIES retries, but currently there
is no handling for this error. These functions are invoked from
perf_service_work() through function pointers, so simply calling them
once is not enough; we need to make sure they are invoked again in case
of -EAGAIN. Since the peer status bits were cleared before calling these
functions, we set the same status bits again before re-queueing the work
for later invocation. This way we do not go ahead and wrongly initialize
the XLAT registers if sending the very first command itself fails.

Signed-off-by: Arindam Nath
---
 drivers/ntb/test/ntb_perf.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/drivers/ntb/test/ntb_perf.c b/drivers/ntb/test/ntb_perf.c
index 0e9b9efe74a4..5116655f0211 100644
--- a/drivers/ntb/test/ntb_perf.c
+++ b/drivers/ntb/test/ntb_perf.c
@@ -625,14 +625,24 @@ static void perf_service_work(struct work_struct *work)
 {
         struct perf_peer *peer = to_peer_service(work);
 
-        if (test_and_clear_bit(PERF_CMD_SSIZE, &peer->sts))
-                perf_cmd_send(peer, PERF_CMD_SSIZE, peer->outbuf_size);
+        if (test_and_clear_bit(PERF_CMD_SSIZE, &peer->sts)) {
+                if (perf_cmd_send(peer, PERF_CMD_SSIZE, peer->outbuf_size)
+                                == -EAGAIN) {
+                        set_bit(PERF_CMD_SSIZE, &peer->sts);
+                        (void)queue_work(system_highpri_wq, &peer->service);
+                }
+        }
 
         if (test_and_clear_bit(PERF_CMD_RSIZE, &peer->sts))
                 perf_setup_inbuf(peer);
 
-        if (test_and_clear_bit(PERF_CMD_SXLAT, &peer->sts))
-                perf_cmd_send(peer, PERF_CMD_SXLAT, peer->inbuf_xlat);
+        if (test_and_clear_bit(PERF_CMD_SXLAT, &peer->sts)) {
+                if (perf_cmd_send(peer, PERF_CMD_SXLAT, peer->inbuf_xlat)
+                                == -EAGAIN) {
+                        set_bit(PERF_CMD_SXLAT, &peer->sts);
+                        (void)queue_work(system_highpri_wq, &peer->service);
+                }
+        }
 
         if (test_and_clear_bit(PERF_CMD_RXLAT, &peer->sts))
                 perf_setup_outbuf(peer);
-- 
2.17.1
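
[Editor's sketch, not part of the patch] For context on where the -EAGAIN
handled above comes from: the sender helpers retry for a bounded number of
attempts and only then give up. The following is a simplified, paraphrased
sketch of that shape, assuming the MSG_TRIES/MSG_UDELAY_LOW/MSG_UDELAY_HIGH
constants and perf_link_is_up() from ntb_perf.c; spad_cmd_slot_busy() and
spad_write_cmd() are hypothetical placeholders for the driver's scratchpad
read/write sequence, so this is illustrative only, not the exact driver code.

/*
 * Sketch of the retry pattern that yields -EAGAIN: try up to MSG_TRIES
 * times to find the peer's command slot free, backing off briefly between
 * attempts, and report -EAGAIN only if every attempt found it busy so the
 * caller (perf_service_work) can re-queue itself and try again later.
 */
static int perf_spad_cmd_send_sketch(struct perf_peer *peer,
                                     enum perf_cmd cmd, u64 data)
{
        int try;

        for (try = 0; try < MSG_TRIES; try++) {
                if (!perf_link_is_up(peer))
                        return -ENOLINK;

                /* Peer still processing the previous command? Back off. */
                if (spad_cmd_slot_busy(peer)) {          /* hypothetical */
                        usleep_range(MSG_UDELAY_LOW, MSG_UDELAY_HIGH);
                        continue;
                }

                /* Slot free: write command and data, notify the peer. */
                spad_write_cmd(peer, cmd, data);          /* hypothetical */
                return 0;
        }

        /* Every attempt found the slot busy: let the caller retry later. */
        return -EAGAIN;
}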