netdev.vger.kernel.org archive mirror
* [PATCH net] ibmvnic: Fall back to 16 H_SEND_SUB_CRQ_INDIRECT entries with old FW
@ 2020-04-27 17:33 Juliet Kim
From: Juliet Kim @ 2020-04-27 17:33 UTC (permalink / raw)
  To: netdev; +Cc: linuxppc-dev, julietk, tlfalcon

The maximum number of entries for H_SEND_SUB_CRQ_INDIRECT has
increased on some platforms from 16 to 128. If Live Partition Mobility
is used to migrate a running OS image from a newer source platform to
an older target platform, then H_SEND_SUB_CRQ_INDIRECT will fail with
H_PARAMETER if 128 entries are queued.

Fix this by falling back to 16 entries if H_PARAMETER is returned
from the hcall.

Signed-off-by: Juliet Kim <julietk@linux.vnet.ibm.com>
---
 drivers/net/ethernet/ibm/ibmvnic.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 4bd33245bad6..b66c2f26a427 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -1656,6 +1656,17 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 		lpar_rc = send_subcrq_indirect(adapter, handle_array[queue_num],
 					       (u64)tx_buff->indir_dma,
 					       (u64)num_entries);
+
+		/* Old firmware accepts max 16 num_entries */
+		if (lpar_rc == H_PARAMETER && num_entries > 16) {
+			tx_crq.v1.n_crq_elem = 16;
+			tx_buff->num_entries = 16;
+			lpar_rc = send_subcrq_indirect(adapter,
+						       handle_array[queue_num],
+						       (u64)tx_buff->indir_dma,
+						       16);
+		}
+
 		dma_unmap_single(dev, tx_buff->indir_dma,
 				 sizeof(tx_buff->indir_arr), DMA_TO_DEVICE);
 	} else {
-- 
2.18.1
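
[Editor's note: a minimal standalone sketch of the fallback pattern the
commit message describes. The mock_send_subcrq_indirect() wrapper and the
OLD_FW_MAX_ENTRIES constant are stand-ins, not the driver's actual helpers;
the mock simply rejects oversized requests the way an older platform would.]

	#include <stdio.h>

	#define H_SUCCESS		0
	#define H_PARAMETER		-4
	#define OLD_FW_MAX_ENTRIES	16

	/* Stand-in for the real H_SEND_SUB_CRQ_INDIRECT hcall. */
	static long mock_send_subcrq_indirect(unsigned long num_entries,
					      unsigned long fw_max)
	{
		return num_entries > fw_max ? H_PARAMETER : H_SUCCESS;
	}

	static long send_with_fallback(unsigned long num_entries,
				       unsigned long fw_max)
	{
		long rc = mock_send_subcrq_indirect(num_entries, fw_max);

		/* Older firmware accepts at most 16 entries: retry capped. */
		if (rc == H_PARAMETER && num_entries > OLD_FW_MAX_ENTRIES)
			rc = mock_send_subcrq_indirect(OLD_FW_MAX_ENTRIES,
						       fw_max);

		return rc;
	}

	int main(void)
	{
		/* 128 entries queued against firmware that accepts only 16. */
		printf("rc = %ld\n", send_with_fallback(128, 16));
		return 0;
	}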



* Re: [PATCH net] ibmvnic: Fall back to 16 H_SEND_SUB_CRQ_INDIRECT entries with old FW
@ 2020-04-28 15:35 ` Thomas Falcon
From: Thomas Falcon @ 2020-04-28 15:35 UTC (permalink / raw)
  To: Juliet Kim, netdev; +Cc: linuxppc-dev

On 4/27/20 12:33 PM, Juliet Kim wrote:
> The maximum entries for H_SEND_SUB_CRQ_INDIRECT has increased on
> some platforms from 16 to 128. If Live Partition Mobility is used
> to migrate a running OS image from a newer source platform to an
> older target platform, then H_SEND_SUB_CRQ_INDIRECT will fail with
> H_PARAMETER if 128 entries are queued.
>
> Fix this by falling back to 16 entries if H_PARAMETER is returned
> from the hcall().

Thanks for the submission, but I am having a hard time believing that
this is what is happening, since the driver does not support sending
multiple frames per hypervisor call at this time. Even if it did, this
approach would omit frame data needed by the VF, so the second attempt
may still fail. Are there system logs available showing the driver
attempting to send transmissions with more than 16 descriptors?

Thanks,

Tom
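
[Editor's note: to make the concern above concrete, assume (hypothetically)
that each indirect entry maps one header or data segment of the frame. A
resend capped at 16 entries then never describes the remaining segments to
firmware. A toy accounting sketch with made-up segment sizes, not the
driver's real descriptor layout:]

	#include <stdio.h>

	/* Hypothetical model: one indirect entry per mapped frame segment. */
	struct toy_entry {
		unsigned int len;	/* bytes of frame data this entry covers */
	};

	int main(void)
	{
		struct toy_entry entries[128];
		unsigned int described = 0, resent = 0;
		int num_entries = 20, i;	/* e.g. header descs plus many frags */

		for (i = 0; i < num_entries; i++) {
			entries[i].len = 1024;
			described += entries[i].len;
		}

		/* A retry capped at 16 entries omits everything past entry 15. */
		for (i = 0; i < 16 && i < num_entries; i++)
			resent += entries[i].len;

		printf("bytes described by all entries: %u\n", described);
		printf("bytes described by capped retry: %u\n", resent);
		return 0;
	}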




* Re: [PATCH net] ibmvnic: Fall back to 16 H_SEND_SUB_CRQ_INDIRECT entries with old FW
@ 2020-04-28 22:29   ` Juliet Kim
From: Juliet Kim @ 2020-04-28 22:29 UTC (permalink / raw)
  To: Thomas Falcon, netdev; +Cc: linuxppc-dev


On 4/28/20 10:35 AM, Thomas Falcon wrote:
> On 4/27/20 12:33 PM, Juliet Kim wrote:
>> The maximum entries for H_SEND_SUB_CRQ_INDIRECT has increased on
>> some platforms from 16 to 128. If Live Partition Mobility is used
>> to migrate a running OS image from a newer source platform to an
>> older target platform, then H_SEND_SUB_CRQ_INDIRECT will fail with
>> H_PARAMETER if 128 entries are queued.
>>
>> Fix this by falling back to 16 entries if H_PARAMETER is returned
>> from the hcall().
>
> Thanks for the submission, but I am having a hard time believing that this is what is happening since the driver does not support sending multiple frames per hypervisor call at this time. Even if it were the case, this approach would omit frame data needed by the VF, so the second attempt may still fail. Are there system logs available that show the driver is attempting to send transmissions with greater than 16 descriptors?
>
> Thanks,
>
> Tom
>
>
I am trying to confirm.

Juliet



