From: Bart Van Assche <bvanassche@acm.org>
To: "André Almeida" <andrealmeid@collabora.com>,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: axboe@kernel.dk, kernel@collabora.com, krisman@collabora.com
Subject: Re: [PATCH v2 1/1] blk-mq: fill header with kernel-doc
Date: Tue, 8 Oct 2019 11:12:45 -0700
Message-ID: <16a10539-c0a0-e411-8f9a-1f0830579986@acm.org>
In-Reply-To: <20191008001416.12656-1-andrealmeid@collabora.com>

On 10/7/19 5:14 PM, André Almeida wrote:
>   struct blk_mq_hw_ctx {
>   	struct {
> +		/** @lock: Lock for accessing dispatch queue */
>   		spinlock_t		lock;

This spinlock protects not only accesses to the dispatch list but also 
modifications of it. How about changing that comment into 
"protects the dispatch list"?

> +		/**
> +		 * @dispatch: Queue of dispatched requests, waiting for
> +		 * workers to send them to the hardware.
> +		 */
>   		struct list_head	dispatch;

What is a worker? That word is not used anywhere in the block layer. How 
about changing that comment into "requests owned by this hardware queue"?
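
Taken together, something like this is what I have in mind for those two
members (just the wording, not a formal patch):

		/** @lock: Protects the dispatch list. */
		spinlock_t		lock;
		/** @dispatch: Requests owned by this hardware queue. */
		struct list_head	dispatch;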

> -		unsigned long		state;		/* BLK_MQ_S_* flags */
> +		 /**
> +		  * @state: BLK_MQ_S_* flags. Define the state of the hw
                                               ^^^^^^
                                               Defines?
>   
> +	/**
> +	 * @run_work: Worker for dispatch scheduled requests to hardware.
> +	 * BLK_MQ_CPU_WORK_BATCH workers will be assigned to a CPU, then the
> +	 * following ones will be assigned to the next_cpu.
> +	 */
>   	struct delayed_work	run_work;

I'd prefer it if algorithm details were left out of structure 
documentation, since such documentation easily becomes outdated. How 
about using something like the following description: "used for 
scheduling a hardware queue run at a later time"?
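
That is (sketch only):

	/**
	 * @run_work: Used for scheduling a hardware queue run at a later
	 * time.
	 */
	struct delayed_work	run_work;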

> +	/**
> +	 * @next_cpu: Index of CPU/workqueue to be used in the next batch
> +	 * of workers.
> +	 */

The word "workers" here is confusing. How about the following 
description: "used by blk_mq_hctx_next_cpu() for round-robin CPU 
selection from @cpumask"?
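
E.g. (again just a sketch):

	/**
	 * @next_cpu: Used by blk_mq_hctx_next_cpu() for round-robin CPU
	 * selection from @cpumask.
	 */
	int			next_cpu;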

> +	/**
> +	 * @next_cpu_batch: Counter of how many works letf in the batch before
                                                       ^^^^
                                                       left?
> +	 * changing to the next CPU. A batch has the size
> +	 * of BLK_MQ_CPU_WORK_BATCH.
> +	 */
>   	int			next_cpu_batch;

Again, I think this is too much detail about the actual algorithm.

>   
> -	unsigned long		flags;		/* BLK_MQ_F_* flags */
> +	/** @flags: BLK_MQ_F_* flags. Define the behaviour of the queue. */
                                       ^^^^^^
                                       Defines?
> +	unsigned long		flags;
>   
> +	/** @sched_data: Data to support schedulers. */
>   	void			*sched_data;

That's a rather vague description. How about mentioning that this 
pointer is owned by the I/O scheduler that has been attached to a 
request queue and that the I/O scheduler decides how to use this pointer?
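
Something along these lines, for example:

	/**
	 * @sched_data: Pointer owned by the I/O scheduler attached to a
	 * request queue. It's up to the I/O scheduler how to use this
	 * pointer.
	 */
	void			*sched_data;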

> +	/** @queue: Queue of request to be dispatched. */
>   	struct request_queue	*queue;

How about "pointer to the request queue that owns this hardware context"?

> +	/**
> +	 * @ctx_map: Bitmap for each software queue. If bit is on, there is a
> +	 * pending request.
> +	 */
>   	struct sbitmap		ctx_map;

Please add " in that software queue" at the end of the description.

>   
> +	/**
> +	 * @dispatch_from: Queue to be used when there is no scheduler
> +	 * was selected.
> +	 */
>   	struct blk_mq_ctx	*dispatch_from;

Does the word "queue" refer to a request queue, software queue or 
hardware queue? Please make that clear.

> +	/**
> +	 * @dispatch_wait: Waitqueue for dispatched requests. Request here will
> +	 * be processed when percpu_ref_is_zero(&q->q_usage_counter) evaluates
> +	 * true for q as a request_queue.
> +	 */
>   	wait_queue_entry_t	dispatch_wait;

That description doesn't look correct to me. I think that @dispatch_wait 
is used to wait if no tags are available. The comment above 
blk_mq_mark_tag_wait() is as follows:

/*
 * Mark us waiting for a tag. For shared tags, this involves hooking us
 * into the tag wakeups. For non-shared tags, we can simply mark us
 * needing a restart. For both cases, take care to check the condition
 * again after marking us as waiting.
 */
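
So maybe something like this instead (wording is only a suggestion):

	/**
	 * @dispatch_wait: Waitqueue entry used to wait until a tag becomes
	 * available when tag allocation fails.
	 */
	wait_queue_entry_t	dispatch_wait;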

> +	/** @wait_index: Index of next wait queue to be used. */
>   	atomic_t		wait_index;

To be used by what?

> +	/** @tags: Request tags. */
>   	struct blk_mq_tags	*tags;
> +	/** @sched_tags: Request tags for schedulers. */
>   	struct blk_mq_tags	*sched_tags;

I think @tags represents tags owned by the block driver and @sched_tags 
represents tags owned by the I/O scheduler. If no I/O scheduler is 
associated with a request queue, only a driver tag is allocated. If an 
I/O scheduler has been associated with a request queue, a request is 
assigned a tag from @sched_tags when that request is allocated. A tag 
from @tags is only assigned when a request is dispatched to a hardware 
queue. See also blk_mq_get_driver_tag().
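
Based on that, a sketch of what I would write for these two members:

	/**
	 * @tags: Tags owned by the block driver. A tag from @tags is only
	 * assigned once a request is dispatched to a hardware queue.
	 */
	struct blk_mq_tags	*tags;
	/**
	 * @sched_tags: Tags owned by the I/O scheduler. A request is
	 * assigned a tag from @sched_tags at allocation time if an I/O
	 * scheduler is associated with the request queue.
	 */
	struct blk_mq_tags	*sched_tags;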

> +	/** @nr_active:	Number of active tags. */
>   	atomic_t		nr_active;

Isn't this the number of active requests instead of the number of active 
tags? Please mention that this member is only used when a tag set is 
shared across request queues.
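
E.g.:

	/**
	 * @nr_active: Number of active requests. Only used when a tag set
	 * is shared across request queues.
	 */
	atomic_t		nr_active;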

> +/**
> + * struct blk_mq_queue_data - Data about a request inserted in a queue
> + *
> + * @rq:   Data about the block IO request.

How about changing this into "Request pointer"?

> +/**
> + * struct blk_mq_ops - list of callback functions that implements block driver
> + * behaviour.
> + */

Is this really a list?

>    * Driver command data is immediately after the request. So subtract request
> - * size to get back to the original request, add request size to get the PDU.
> + * size to get back to the original request.
>    */
>   static inline struct request *blk_mq_rq_from_pdu(void *pdu)
>   {
>   	return pdu - sizeof(struct request);
>   }

I'm not sure this change is an improvement.
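
FWIW, the part of the sentence that is being removed describes what the
companion helper in the same header does. If I remember correctly that
helper is simply:

static inline void *blk_mq_rq_to_pdu(struct request *rq)
{
	return rq + 1;
}

so keeping both halves of the comment seems useful to me.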

Bart.
