Linux-NVME Archive on lore.kernel.org
From: James Smart <jsmart2021@gmail.com>
To: Benjamin Block <bblock@linux.ibm.com>,
	Muneendra <muneendra.kumar@broadcom.com>,
	hare@suse.de
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	tj@kernel.org, linux-nvme@lists.infradead.org, emilne@redhat.com,
	mkumar@redhat.com,
	Gaurav Srivastava <gaurav.srivastava@broadcom.com>,
	Steffen Maier <maier@linux.ibm.com>
Subject: Re: [PATCH v9 07/13] lpfc: vmid: Implements ELS commands for appid patch
Date: Wed, 21 Apr 2021 15:55:15 -0700
Message-ID: <d9d57857-83f5-9ff7-a427-0817d37f5f84@gmail.com> (raw)
In-Reply-To: <YH7LPd8c4PZa1qFC@t480-pf1aa2c2.linux.ibm.com>

On 4/20/2021 5:38 AM, Benjamin Block wrote:
...
>> +	len = *((u32 *)(pcmd + 4));
>> +	len = be32_to_cpu(len);
>> +	memcpy(vport->qfpa_res, pcmd, len + 8);
>> +	len = len / LPFC_PRIORITY_RANGE_DESC_SIZE;
>> +
>> +	desc = (struct priority_range_desc *)(pcmd + 8);
>> +	vmid_range = vport->vmid_priority.vmid_range;
>> +	if (!vmid_range) {
>> +		vmid_range = kcalloc(MAX_PRIORITY_DESC, sizeof(*vmid_range),
>> +				     GFP_KERNEL);
>> +		if (!vmid_range) {
>> +			kfree(vport->qfpa_res);
>> +			goto out;
>> +		}
>> +		vport->vmid_priority.vmid_range = vmid_range;
>> +	}
>> +	vport->vmid_priority.num_descriptors = len;
>> +
>> +	for (i = 0; i < len; i++, vmid_range++, desc++) {
>> +		lpfc_printf_vlog(vport, KERN_DEBUG, LOG_ELS,
>> +				 "6539 vmid values low=%d, high=%d, qos=%d, "
>> +				 "local ve id=%d\n", desc->lo_range,
>> +				 desc->hi_range, desc->qos_priority,
>> +				 desc->local_ve_id);
>> +
>> +		vmid_range->low = desc->lo_range << 1;
>> +		if (desc->local_ve_id == QFPA_ODD_ONLY)
>> +			vmid_range->low++;
>> +		if (desc->qos_priority)
>> +			vport->vmid_flag |= LPFC_VMID_QOS_ENABLED;
>> +		vmid_range->qos = desc->qos_priority;
> 
> I'm curious: if the FC-switch signals here that it supports QoS for a
> range, how exactly does this interact with the VM IDs that you seem to
> allocate dynamically during runtime for cgroups that request specific
> App IDs? You don't seem to use `LPFC_VMID_QOS_ENABLED` anywhere else
> in the series.
>
> Would different cgroups get different QoS classes/guarantees depending
> on the selected VM ID (higher VM ID gets better QoS class, or something
> like that?)? Would the tagged traffic be handled differently than the
> ordinary traffic in the fabric?

The simple answer is that there is no interaction with the cgroup on
priority. And no, we really don't look at or use it. The ranges don't
really have hard priority values. The way it works is that all values
within a range are equal; a value in the first range is "higher priority"
than a value in the second range; a value in the second range is higher
than those in the third range; and so on. It doesn't really matter
whether the range was marked Best Effort or H/M/L. There's no real
"weight".

What you see is the driver simply recording the different ranges so that
it knows what to allocate from later on. The driver creates a flat
bitmap of all possible values (max of 255) from all ranges, then
allocates values on a first-set-bit basis. I know at one point we were
going to auto-assign only if there was a single range, and if there were
multiple ranges we were going to defer to a management authority to tell
us which range to use, but this obviously doesn't do that.

Also... although this is coded to support the full breadth of what the
standard allows, it may well be that in practice the switch only
implements one range.

> 
> I tried to get something from FC-LS (-5) or FC-FS (-6), but they are
> extremely sparse somehow. FC-LS-5 just says "QoS priority provided"
> for the field, and FC-FS doesn't say anything regarding QoS if the
> tagging extension in CS_CTL is used.

Yes - most of the discussion on how this form of VMID is used/performed 
was given in the T11 proposals, but as most of that is informational and 
non-normative, very little ends up getting into the spec.

FC-LS-5 section 9 "Priority Tagging" is what you want to look at.

The other form of VMID is the Application Tag (up to 32 bits), which is
described in FC-GS-8 section 6.9, Application Server. Both forms map a
value to a UUID, and the switch may apply some QoS level to the value
when it sees it.

The priority tagging method seems to tie in more to QoS, but the
application tag can equally be used, although any QoS aspects are
solely in the switch and not exported to the driver/host.

-- james

