> > diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h
> > index dd79b12218e1..fe6094bb357a 100644
> > --- a/drivers/scsi/mpi3mr/mpi3mr.h
> > +++ b/drivers/scsi/mpi3mr/mpi3mr.h
> > @@ -71,6 +71,12 @@ extern struct list_head mrioc_list;
> >  #define MPI3MR_ADMIN_REQ_FRAME_SZ	128
> >  #define MPI3MR_ADMIN_REPLY_FRAME_SZ	16
> >
> > +/* Operational queue management definitions */
> > +#define MPI3MR_OP_REQ_Q_QD		512
> > +#define MPI3MR_OP_REP_Q_QD		4096
> > +#define MPI3MR_OP_REQ_Q_SEG_SIZE	4096
> > +#define MPI3MR_OP_REP_Q_SEG_SIZE	4096
> > +#define MPI3MR_MAX_SEG_LIST_SIZE	4096
>
> Do I read this correctly?
> The reply queue depth is larger than the request queue depth?
> Why is that?

Hannes,

You are correct. The request queue descriptor unit size is 128 bytes, while
the reply queue descriptor unit size is 16 bytes. With the current queue
depths, we create a 64 KB request pool and a 64 KB reply pool. To avoid
memory allocation failures, we chose realistic queue depths that can be
satisfied in most cases without hurting performance.

BTW, we also have an improvement planned in this area. You can see the
segmented queue "enable_segqueue" field in the same patch. We plan to
improve this further based on test results with "enable_segqueue".

> > /**
> > @@ -220,6 +220,8 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> >  	spin_lock_init(&mrioc->sbq_lock);
> >
> >  	mpi3mr_init_drv_cmd(&mrioc->init_cmds, MPI3MR_HOSTTAG_INITCMDS);
> > +	if (pdev->revision)
> > +		mrioc->enable_segqueue = true;
> >
> >  	mrioc->logging_level = logging_level;
> >  	mrioc->shost = shost;
>
> Other than that:
>
> Reviewed-by: Hannes Reinecke
>
> Cheers,
>
> Hannes
> --
> Dr. Hannes Reinecke		Kernel Storage Architect
> hare@suse.de			+49 911 74053 688
> SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
> HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer