From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Martin K. Petersen"
Subject: Re: [PATCHv5 00/14] dm-zoned: metadata version 2
Date: Wed, 13 May 2020 22:19:38 -0400
Message-ID:
References: <20200508090332.40716-1-hare@suse.de>
 <2553e593-795d-6aed-f983-e990a283e2ff@suse.de>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: (Damien Le Moal's message of "Thu, 14 May 2020 00:55:10 +0000")
Sender: dm-devel-bounces@redhat.com
Errors-To: dm-devel-bounces@redhat.com
To: Damien Le Moal
Cc: Mike Snitzer, "dm-devel@redhat.com", "Martin K. Petersen", Bob Liu
List-Id: dm-devel.ids

Damien,

> Indeed. It is an NVMe M.2 consumer grade SSD. Nothing fancy. If you
> look at nvme/host/core.c nvme_update_disk_info(), you will see that
> io_opt is set to the block size... This is probably abusing this
> limit. So I guess the most elegant fix may be to have nvme stop doing
> that ?

Yeah, I'd prefer for io_opt to only be set if the device actually
reports NOWS.

The purpose of io_min is to be the preferred lower I/O size boundary.
One should not submit I/Os smaller than this. And io_opt is the
preferred upper boundary for I/Os. One should not issue I/Os larger
than this value. Setting io_opt to the logical block size kind of
defeats that intent.

That said, we should probably handle the case where the pbs gets scaled
up but io_opt doesn't more gracefully.

-- 
Martin K. Petersen	Oracle Linux Engineering
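
[For illustration, a minimal sketch of the idea discussed above: only
advertise io_opt when the namespace actually reports NOWS, and leave it
at 0 otherwise so the block layer treats it as "no optimal size". The
helper name nvme_set_io_limits is hypothetical; the NVME_NS_FEAT_IO_OPT
bit and the npwg/nows identify-namespace fields are the ones used by the
v5.7-era nvme driver, but the real logic lives in nvme_update_disk_info()
and differs in detail.]

/*
 * Sketch only, not the mainline change: keep io_opt unset (0) unless
 * the namespace reports an optimal write size (NOWS).  Helper name is
 * made up for illustration; bs is the logical block size in bytes.
 */
#include <linux/blkdev.h>
#include <linux/nvme.h>

static void nvme_set_io_limits(struct request_queue *q,
			       struct nvme_id_ns *id, unsigned int bs)
{
	unsigned int phys_bs = bs;	/* physical block size, defaults to lbs */
	unsigned int io_opt = 0;	/* 0 == no optimal I/O size advertised */

	if (id->nsfeat & NVME_NS_FEAT_IO_OPT) {
		/* NPWG: Namespace Preferred Write Granularity */
		phys_bs = bs * (1 + le16_to_cpu(id->npwg));
		/* NOWS: Namespace Optimal Write Size */
		io_opt = bs * (1 + le16_to_cpu(id->nows));
	}

	blk_queue_physical_block_size(q, phys_bs);
	blk_queue_io_min(q, phys_bs);
	blk_queue_io_opt(q, io_opt);
}

[With io_opt left at 0, blk_stack_limits() would simply inherit the other
device's io_opt via lcm_not_zero() instead of keeping a value smaller than
a scaled-up physical block size, which is the mismatch mentioned in the
last paragraph above.]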