Date: Sat, 24 Dec 2016 10:49:05 +0100
From: Christoph Hellwig
To: Jens Axboe
Cc: Linus Torvalds, Christoph Hellwig, Chris Leech, Ming Lei,
	Dave Chinner, Johannes Weiner, Linux Kernel Mailing List,
	Lee Duncan, open-iscsi@googlegroups.com, Linux SCSI List,
	linux-block, "Michael S. Tsirkin"
Subject: Re: [4.10, panic, regression] iscsi: null pointer deref at iscsi_tcp_segment_done+0x20d/0x2e0
Message-ID: <20161224094905.GA16518@lst.de>
References: <20161222001303.nvrtm22szn3hgxar@straylight.hirudinean.org>
	<20161222051322.GF4758@dastard>
	<20161222065012.GI4758@dastard>
	<20161222185030.so4btkuzzkih3owz@straylight.hirudinean.org>
	<20161223000356.dxwkgsei32w7hc4f@straylight.hirudinean.org>
	<20161223100014.GA29467@lst.de>

On Fri, Dec 23, 2016 at 07:45:45PM -0700, Jens Axboe wrote:
> It's not that it's technically hard to fix up, it's more that it's a
> pain in the ass to have to do it.  For instance, for blk_execute_rq(),
> we either should enforce that the caller allocates it dynamically and
> then frees it, or we need a nasty hack where the caller needs to know
> he has to free it.  Pretty obvious what I would prefer there.
>
> And yes, there would be a good chunk of other places where this would
> need to be fixed up...

My planned rework of the BLOCK_PC code (splitting all the fields used
only by those requests out of struct request and moving them into a
separate, driver-allocated structure) would fix this up as a
side effect.  I really wanted to get it into 4.10, but I didn't manage
to finish it in time.  I'll try to get it into 4.11 early.
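
To make the lifetime question above concrete, here is a minimal sketch
of the convention we have today, written against the 4.9-era block API
(blk_get_request/blk_execute_rq/blk_put_request); issue_pc_req_sketch()
is a made-up caller, not code from any tree:

#include <linux/blkdev.h>
#include <linux/err.h>

static int issue_pc_req_sketch(struct request_queue *q)
{
	struct request *rq;
	int err;

	/* The caller allocates the request dynamically ... */
	rq = blk_get_request(q, READ, GFP_KERNEL);
	if (IS_ERR(rq))
		return PTR_ERR(rq);
	blk_rq_set_block_pc(rq);

	/* ... issues it and waits for completion synchronously ... */
	err = blk_execute_rq(q, NULL, rq, 0);

	/*
	 * ... and must remember to free it itself.  Moving this
	 * blk_put_request() into the completion path instead would
	 * silently break every caller written against this model,
	 * which is the "nasty hack" Jens objects to.
	 */
	blk_put_request(rq);
	return err;
}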
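
And for reference, a rough sketch of the direction the rework takes;
the structure and helper names here are my own invention for
illustration, not from an actual patch:

#include <linux/blkdev.h>
#include <linux/blk-mq.h>

/*
 * The BLOCK_PC-only fields leave struct request and live in a
 * structure the driver embeds at the start of its blk-mq
 * per-request payload.
 */
struct pc_request_sketch {
	unsigned char	cmd[BLK_MAX_CDB];	/* was rq->cmd */
	unsigned short	cmd_len;		/* was rq->cmd_len */
	unsigned int	sense_len;		/* was rq->sense_len */
	unsigned int	resid_len;		/* was rq->resid_len */
	void		*sense;			/* was rq->sense */
};

/*
 * Drivers that support passthrough size their tag-set payload
 * accordingly and recover the structure from a request they own:
 */
static inline struct pc_request_sketch *pc_req(struct request *rq)
{
	return blk_mq_rq_to_pdu(rq);
}

That would keep struct request lean for normal filesystem I/O and
confine the passthrough state to the drivers that actually use it.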