From: Muneendra Kumar M
Subject: deterministic io throughput in multipath
Date: Mon, 19 Dec 2016 11:50:36 +0000
To: dm-devel@redhat.com

Customers using Linux hosts (mostly RHEL hosts) with a SAN network for block storage complain that the Linux multipath stack is not resilient enough to handle non-deterministic storage network behavior. This has caused many customers to move away to non-Linux-based servers. The intent of the patch below, and the prevailing issues, are described below. With this design we are seeing the Linux multipath stack become resilient to such network issues. We hope that getting this patch accepted will help adoption of more Linux servers that use a SAN network.

I have already sent the design details to the community in a different mail thread; the details are available at the link below.

https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html

Can you please go through the design and send your comments to us?

Regards,
Muneendra.
From: Hannes Reinecke
Subject: Re: deterministic io throughput in multipath
Date: Mon, 19 Dec 2016 13:09:00 +0100

On 12/19/2016 12:50 PM, Muneendra Kumar M wrote:
> Customers using Linux host (mostly RHEL host) using a SAN network for
> block storage, complain the Linux multipath stack is not resilient to
> handle non-deterministic storage network behaviors.
> [...]

This issue comes up from time to time.
The standard answer here is that using 'service-time' as the path selector _should_ already give you the expected results: any path exhibiting intermittent I/O errors should have a higher latency than any functional path, hence the 'service-time' path selector should automatically switch away from those paths.

Have you tried this?

Cheers,

Hannes
--
Dr. Hannes Reinecke                   Teamlead Storage & Networking
hare@suse.de                          +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

From: Benjamin Marzinski
Subject: Re: deterministic io throughput in multipath
Date: Wed, 21 Dec 2016 10:09:40 -0600

Have you looked into the delay_watch_checks and delay_wait_checks configuration parameters? The idea behind them is to minimize the use of paths that are intermittently failing.

-Ben

From: Muneendra Kumar M
Subject: Re: deterministic io throughput in multipath
Date: Thu, 22 Dec 2016 05:39:36 +0000

Hi Ben,

Thanks for the reply. I will look into these parameters, do the internal testing, and let you know the results.

Regards,
Muneendra.
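In multipath.conf, the two suggestions above would look roughly like this. This is an illustrative sketch, not a tested recommendation: the values are placeholders, and the delay_* values are counted in path-checker cycles rather than seconds:

```
defaults {
    # Prefer the path with the lowest estimated service time, so that
    # intermittently erroring (and therefore slow) paths receive less I/O.
    path_selector        "service-time 0"

    # Watch a recovered path for 30 checker cycles; if it fails again in
    # that window, do not reinstate it until it has passed 60 consecutive
    # checks.
    delay_watch_checks   30
    delay_wait_checks    60
}
```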
From: Muneendra Kumar M
Subject: Re: deterministic io throughput in multipath
Date: Mon, 26 Dec 2016 09:42:48 +0000

Hi Ben,

If there are two paths on a dm-1, say sda and sdb, as below:

# multipath -ll
mpathd (3600110d001ee7f0102050001cc0b6751) dm-1 SANBlaze,VLUN MyLun
size=8.0M features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 8:0:1:0 sda 8:48 active ready running
  `- 9:0:1:0 sdb 8:64 active ready running

and on sda I am seeing a lot of errors, due to which the sda path keeps fluctuating from the failed state to the active state and vice versa.
My requirement is something like this: if sda has failed more than 5 times within an hour, I want to keep sda in the failed state for a few hours (3 hrs), and the data should travel only through the sdb path.

Will this be possible with the parameters below? Can you let me know what values I should use for delay_watch_checks and delay_wait_checks?

Regards,
Muneendra.

From: Benjamin Marzinski
Subject: Re: deterministic io throughput in multipath
Date: Tue, 3 Jan 2017 11:12:00 -0600

On Mon, Dec 26, 2016 at 09:42:48AM +0000, Muneendra Kumar M wrote:
> My requirement is something like this: if sda has failed more than 5
> times within an hour, I want to keep sda in the failed state for a few
> hours (3 hrs), and the data should travel only through the sdb path.
> Will this be possible with the parameters below?

No. delay_watch_checks sets for how many path checks multipathd watches a path that has recently come back from the failed state. If the path fails again within this time, multipathd delays it. This means that the delay is always triggered by two failures within the time limit. It's possible to adapt this to count the number of failures, and act after a certain number within a certain timeframe, but it would take a bit more work.

delay_wait_checks doesn't guarantee that it will delay for any set length of time. Instead, it sets the number of consecutive successful path checks that must occur before the path is usable again. You could set this to 3 hours' worth of path checks, but if a check failed during this time, you would restart the 3 hours over again.

-Ben
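The distinction Ben draws (delay_wait_checks counts consecutive successful checks, whereas the requested behavior counts failures inside a rolling time window) can be sketched as follows. The class and its API are invented here for illustration; this is not multipath-tools code:

```python
import collections

class PathErrorWindow:
    """Disable a path once it accumulates `threshold` failures within a
    rolling `window` (seconds); keep it disabled for `recovery_time`."""

    def __init__(self, threshold, window, recovery_time):
        self.threshold = threshold          # e.g. 5 failures
        self.window = window                # e.g. 3600 s (one hour)
        self.recovery_time = recovery_time  # e.g. 3 * 3600 s
        self.failures = collections.deque() # timestamps of recent failures
        self.disabled_until = None

    def record_failure(self, now):
        self.failures.append(now)
        # Forget failures that fell out of the rolling window.
        while self.failures and now - self.failures[0] > self.window:
            self.failures.popleft()
        if len(self.failures) >= self.threshold:
            self.disabled_until = now + self.recovery_time

    def usable(self, now):
        # A path is usable unless it is inside its recovery period.
        return self.disabled_until is None or now >= self.disabled_until
```

Contrast this with delay_wait_checks, where a single failed check during the wait period restarts the whole count; here the path is reinstated unconditionally once the recovery period elapses.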
From: Muneendra Kumar M
Subject: Re: deterministic io throughput in multipath
Date: Wed, 4 Jan 2017 13:26:01 +0000

Hi Ben,

Thanks for the information.

Regards,
Muneendra.

From: Muneendra Kumar M
Subject: Re: deterministic io throughput in multipath
Date: Mon, 16 Jan 2017 11:19:19 +0000
Hi Ben,

After the discussion below, we came up with an approach that meets our requirement. I have attached the patch, which has been working well in our field tests. Could you please review the attached patch and give us your comments?

Below are the files that have been changed:

 libmultipath/config.c      |  3 +++
 libmultipath/config.h      |  9 +++++++++
 libmultipath/configure.c   |  3 +++
 libmultipath/defaults.h    |  1 +
 libmultipath/dict.c        | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 libmultipath/dict.h        |  1 +
 libmultipath/propsel.c     | 44 ++++++++++++++++++++++++++++++++++++++++++++
 libmultipath/propsel.h     |  6 ++++++
 libmultipath/structs.h     | 12 +++++++++++-
 libmultipath/structs_vec.c | 10 ++++++++++
 multipath/multipath.conf.5 | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 multipathd/main.c          | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--

We have added three new config parameters, described below.

1. san_path_err_threshold: If set to a value greater than 0, multipathd will watch paths and check how many times a path has failed due to errors. If the number of failures on a particular path is greater than san_path_err_threshold, the path will not be reinstated until san_path_err_recovery_time. These path failures must occur within a san_path_err_threshold_window time frame; if they do not, we consider the path good enough to reinstate.

2. san_path_err_threshold_window: If set to a value greater than 0, multipathd will check whether the path failures have exceeded san_path_err_threshold within this time frame (san_path_err_threshold_window). If so, the path will not be reinstated until san_path_err_recovery_time.
3. san_path_err_recovery_time: If set to a value greater than 0, multipathd will make sure that, when the path failures have exceeded san_path_err_threshold within san_path_err_threshold_window, the path is placed in the failed state for the san_path_err_recovery_time duration. Once san_path_err_recovery_time has elapsed, the failed path will be reinstated.

Regards,
Muneendra.
It's possible to adapt this to count numbers of failures, and act after a certain number within a certain timeframe, but it would take a bit more work. delay_wait_checks doesn't guarantee that it will delay for any set length of time. Instead, it sets the number of consecutive successful path checks that must occur before the path is usable again. You could set this for 3 hours of path checks, but if a check failed during this time, you would restart the 3 hours over again. -Ben > Can you just let me know what values I should add for delay_watch_checks and delay_wait_checks. > > Regards, > Muneendra. > > > > -----Original Message----- > From: Muneendra Kumar M > Sent: Thursday, December 22, 2016 11:10 AM > To: 'Benjamin Marzinski' > > Cc: dm-devel@redhat.com > Subject: RE: [dm-devel] deterministic io throughput in multipath > > Hi Ben, > > Thanks for the reply. > I will look into this parameters will do the internal testing and let you know the results. > > Regards, > Muneendra. > > -----Original Message----- > From: Benjamin Marzinski [mailto:bmarzins@redhat.com] > Sent: Wednesday, December 21, 2016 9:40 PM > To: Muneendra Kumar M > > Cc: dm-devel@redhat.com > Subject: Re: [dm-devel] deterministic io throughput in multipath > > Have you looked into the delay_watch_checks and delay_wait_checks configuration parameters? The idea behind them is to minimize the use of paths that are intermittently failing. > > -Ben > > On Mon, Dec 19, 2016 at 11:50:36AM +0000, Muneendra Kumar M wrote: > > Customers using Linux host (mostly RHEL host) using a SAN network for > > block storage, complain the Linux multipath stack is not resilient to > > handle non-deterministic storage network behaviors. This has caused many > > customer move away to non-linux based servers. The intent of the below > > patch and the prevailing issues are given below. With the below design we > > are seeing the Linux multipath stack becoming resilient to such network > > issues. 
We hope by getting this patch accepted will help in more Linux > > server adoption that use SAN network. > > > > I have already sent the design details to the community in a different > > mail chain and the details are available in the below link. > > > > [1]https://urldefense.proofpoint.com/v2/url?u=https-3A__www.redhat.com_archives_dm-2Ddevel_2016-2DDecember_msg00122.html&d=DgIDAw&c=IL_XqQWOjubgfqINi2jTzg&r=E3ftc47B6BGtZ4fVaYvkuv19wKvC_Mc6nhXaA1sBIP0&m=vfwpVp6e1KXtRA0ctwHYJ7cDmPsLi2C1L9pox7uexsY&s=q5OI-lfefNC2CHKmyUkokgiyiPo_Uj7MRu52hG3MKzM&e= . > > > > Can you please go through the design and send the comments to us. > > > > > > > > Regards, > > > > Muneendra. > > > > > > > > > > > > References > > > > Visible links > > 1. > > https://urldefense.proofpoint.com/v2/url?u=https-3A__www.redhat.com_ > > ar > > chives_dm-2Ddevel_2016-2DDecember_msg00122.html&d=DgIDAw&c=IL_XqQWOj > > ub > > gfqINi2jTzg&r=E3ftc47B6BGtZ4fVaYvkuv19wKvC_Mc6nhXaA1sBIP0&m=vfwpVp6e > > 1K > > XtRA0ctwHYJ7cDmPsLi2C1L9pox7uexsY&s=q5OI-lfefNC2CHKmyUkokgiyiPo_Uj7M > > Ru > > 52hG3MKzM&e= > > > -- > > dm-devel mailing list > > dm-devel@redhat.com > > https://urldefense.proofpoint.com/v2/url?u=https-3A__www.redhat.com_ > > ma > > ilman_listinfo_dm-2Ddevel&d=DgIDAw&c=IL_XqQWOjubgfqINi2jTzg&r=E3ftc4 > > 7B6BGtZ4fVaYvkuv19wKvC_Mc6nhXaA1sBIP0&m=vfwpVp6e1KXtRA0ctwHYJ7cDmPsL > > i2C1L9pox7uexsY&s=UyE46dXOrNTbPz_TVGtpoHl3J3h_n0uYhI4TI-PgyWg&e= --_000_4dfed25f04c04771a732580a4a8cc834BRMWPEXMB12corpbrocadec_ Content-Type: text/html; charset="us-ascii" Content-Transfer-Encoding: quoted-printable
Hi Ben,
After the discussion below, we came up with an approach that meets our requirement.
I have attached the patch, which is working well in our field tests.
Could you please review the attached patch and provide your comments.
Below are the files that have been changed.
 
libmultipath/config.c      |  3 +++
libmultipath/config.h      |  9 +++++++++
libmultipath/configure.c   |  3 +++
libmultipath/defaults.h    |  1 +
libmultipath/dict.c        | 80 +++++++++++++++++++++++++++++++++++++++++++++
libmultipath/dict.h        |  1 +
libmultipath/propsel.c     | 44 ++++++++++++++++++++++++++
libmultipath/propsel.h     |  6 ++++++
libmultipath/structs.h     | 12 +++++++++++-
libmultipath/structs_vec.c | 10 ++++++++++
multipath/multipath.conf.5 | 58 ++++++++++++++++++++++++++++++++++
multipathd/main.c          | 61 ++++++++++++++++++++++++++++++++++++--
 
We have added three new config parameters, described below.
1. san_path_err_threshold:
        If set to a value greater than 0, multipathd will watch paths and check how many times a path has failed due to errors. If the number of failures on a particular path is greater than san_path_err_threshold, the path will not be reinstated until san_path_err_recovery_time has passed. These path failures must occur within a san_path_err_threshold_window time frame; if they do not, the path is considered good enough to reinstate.
 
2. san_path_err_threshold_window:
        If set to a value greater than 0, multipathd will check whether the path failures have exceeded san_path_err_threshold within this time frame, i.e. san_path_err_threshold_window. If so, the path will not be reinstated until san_path_err_recovery_time has passed.
 
3. san_path_err_recovery_time:
        If set to a value greater than 0, multipathd will make sure that when path failures have exceeded san_path_err_threshold within san_path_err_threshold_window, the path is placed in the failed state for the san_path_err_recovery_time duration. Once san_path_err_recovery_time has elapsed, the failed path is reinstated.
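As a concrete illustration (hypothetical values, assuming this patch is applied), the requirement discussed in this thread — more than 5 failures within an hour keeping the path failed for 3 hours — would be expressed in the defaults section of multipath.conf as (window and recovery times in seconds):

```
defaults {
        san_path_err_threshold         5
        san_path_err_threshold_window  3600
        san_path_err_recovery_time     10800
}
```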
 
Regards,
Muneendra.
 
-----Original Message-----
From: Muneendra Kumar M
Sent: Wednesday, January 04, 2017 6:56 PM
To: 'Benjamin Marzinski' <bmarzins@redhat.com>
Cc: dm-devel@redhat.com
Subject: RE: [dm-devel] deterministic io throughput in multipath
 
Hi Ben,
Thanks for the information.
 
Regards,
Muneendra.
 
-----Original Message-----
From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
Sent: Tuesday, January 03, 2017 10:42 PM
To: Muneendra Kumar M <mmandala@Brocade.com>
Cc: dm-devel@redhat.com
Subject: Re: [dm-devel] deterministic io throughput in multipath
 
On Mon, Dec 26, 2016 at 09:42:48AM +0000, Muneendra Kumar M wrote:
> Hi Ben,
>
> If there are two paths on a dm-1 say sda and sdb as below.
>
> #  multipath -ll
>        mpathd (3600110d001ee7f0102050001cc0b6751) dm-1 SANBlaze,VLUN MyLun
>        size=8.0M features='0' hwhandler='0' wp=rw
>        `-+- policy='round-robin 0' prio=50 status=active
>          |- 8:0:1:0  sda 8:48 active ready  running
>          `- 9:0:1:0  sdb 8:64 active ready  running
>
> And on sda I am seeing a lot of errors, due to which the sda path is fluctuating between the failed and active states and vice versa.
>
> My requirement is something like this: if sda fails more than 5
> times in an hour, then I want to keep sda in the failed state
> for a few hours (3 hrs).
>
> And the data should travel only through the sdb path.
> Will this be possible with the below parameters?
 
No. delay_watch_checks sets for how many path checks multipathd watches a path that has recently come back from the failed state. If the path fails again within this time, the multipath device delays it.  This means that the delay is always triggered by two failures within the time limit.  It's possible to adapt this to count numbers of failures, and act after a certain number within a certain timeframe, but it would take a bit more work.
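The adaptation described here — counting failures and acting once a certain number occur within a timeframe — is essentially what the attached patch implements. A minimal sketch of the idea (illustrative Python, not multipathd's actual code; all names are invented):

```python
class PathErrTracker:
    """Disable a path when it fails more than `threshold` times within
    `window` seconds, and keep it disabled for `recovery` seconds."""

    def __init__(self, threshold, window, recovery):
        self.threshold = threshold
        self.window = window
        self.recovery = recovery
        self.failures = 0
        self.first_failure = None   # start of the current failure window
        self.disabled_at = None     # when the path was taken out of service

    def record_failure(self, now):
        # Remember when the current burst of failures started.
        if self.failures == 0:
            self.first_failure = now
        self.failures += 1

    def reinstate_allowed(self, now):
        if self.disabled_at is not None:
            # Currently disabled: only reinstate after the recovery time.
            if now - self.disabled_at > self.recovery:
                self.disabled_at = None
                self.failures = 0
                return True
            return False
        if self.failures > self.threshold:
            if now - self.first_failure < self.window:
                # Too many failures too quickly: take the path down.
                self.disabled_at = now
                return False
            # Failures were spread out over more than `window` seconds;
            # forget them and start watching again.
            self.failures = 0
        return True
```

For example, with threshold=5, window=3600 and recovery=10800, six failures within a few seconds would keep the path down for the next three hours of checker passes.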
 
delay_wait_checks doesn't guarantee that it will delay for any set length of time.  Instead, it sets the number of consecutive successful path checks that must occur before the path is usable again. You could set this for 3 hours of path checks, but if a check failed during this time, you would restart the 3 hours over again.
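The reset behavior described here — consecutive successful checks, with any failure restarting the count — can be illustrated like this (a hypothetical sketch, not multipathd's implementation):

```python
def checks_until_usable(check_results, wait_checks):
    """Return how many checker passes it takes before the path becomes
    usable, given a sequence of checker results (True = success) and a
    delay_wait_checks-style requirement of consecutive successes.
    Returns None if the requirement is never met."""
    consecutive = 0
    for i, ok in enumerate(check_results, start=1):
        # Any single failure restarts the consecutive-success count.
        consecutive = consecutive + 1 if ok else 0
        if consecutive >= wait_checks:
            return i
    return None
```

With a 5-second checker interval, 3 hours would correspond to 2160 consecutive successful checks; a single failure at check 2000 restarts the whole count.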
 
-Ben
 
> Can you just let me know what values I should add for delay_watch_checks and delay_wait_checks.
>
> Regards,
> Muneendra.
>
>
>
> -----Original Message-----
> From: Muneendra Kumar M
> Sent: Thursday, December 22, 2016 11:10 AM
> To: 'Benjamin Marzinski' <bmarzins@redhat.com>
> Subject: RE: [dm-devel] deterministic io throughput in multipath
>
> Hi Ben,
>
> Thanks for the reply.
> I will look into these parameters, do the internal testing, and let you know the results.
>
> Regards,
> Muneendra.
>
> -----Original Message-----
> Sent: Wednesday, December 21, 2016 9:40 PM
> To: Muneendra Kumar M <mmandala@Brocade.com>
> Subject: Re: [dm-devel] deterministic io throughput in multipath
>
> Have you looked into the delay_watch_checks and delay_wait_checks configuration parameters?  The idea behind them is to minimize the use of paths that are intermittently failing.
>
> -Ben
>
> On Mon, Dec 19, 2016 at 11:50:36AM +0000, Muneendra Kumar M wrote:
> >    Customers using Linux hosts (mostly RHEL hosts) with a SAN network for
> >    block storage complain that the Linux multipath stack is not resilient in
> >    handling non-deterministic storage network behaviors. This has caused many
> >    customers to move away to non-Linux based servers. The intent of the below
> >    patch and the prevailing issues are given below. With the below design we
> >    are seeing the Linux multipath stack becoming resilient to such network
> >    issues. We hope that getting this patch accepted will help in more Linux
> >    server adoption that uses SAN networks.
> >
> >    I have already sent the design details to the community in a different
> >    mail chain and the details are available in the below link.
> >
> >
> >    Can you please go through the design and send the comments to us.
> >
> >     
> >
> >    Regards,
> >
> >    Muneendra.
> >
> >     
> >
> >     
> >
> > References
> >
> >    Visible links
> >    1. https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html
>
> > --
> > dm-devel mailing list
> > https://www.redhat.com/mailman/listinfo/dm-devel
 
--_000_4dfed25f04c04771a732580a4a8cc834BRMWPEXMB12corpbrocadec_--

[Attachment: san_path_err.patch, 20594 bytes, base64-encoded (creation-date Mon, 16 Jan 2017 06:39:48 GMT). The patch adds the san_path_err_threshold, san_path_err_threshold_window and san_path_err_recovery_time options across libmultipath/config.c, config.h, configure.c, defaults.h, dict.c, dict.h, propsel.c, propsel.h, structs.h, structs_vec.c, multipath/multipath.conf.5 and multipathd/main.c, per the diffstat above.]
bnVtKGRlbGF5X3dhaXRfY2hlY2tzKTsKIAltZXJnZV9udW0oc2tpcF9rcGFydHgpOwogCW1lcmdl X251bShtYXhfc2VjdG9yc19rYik7CisJbWVyZ2VfbnVtKHNhbl9wYXRoX2Vycl90aHJlc2hvbGQp OworCW1lcmdlX251bShzYW5fcGF0aF9lcnJfdGhyZXNob2xkX3dpbmRvdyk7CisJbWVyZ2VfbnVt KHNhbl9wYXRoX2Vycl9yZWNvdmVyeV90aW1lKTsKIAogCS8qCiAJICogTWFrZSBzdXJlIGZlYXR1 cmVzIGlzIGNvbnNpc3RlbnQgd2l0aApkaWZmIC0tZ2l0IGEvbGlibXVsdGlwYXRoL2NvbmZpZy5o IGIvbGlibXVsdGlwYXRoL2NvbmZpZy5oCmluZGV4IDk2NzAwMjAuLjI5ODU5NTggMTAwNjQ0Ci0t LSBhL2xpYm11bHRpcGF0aC9jb25maWcuaAorKysgYi9saWJtdWx0aXBhdGgvY29uZmlnLmgKQEAg LTY1LDYgKzY1LDkgQEAgc3RydWN0IGh3ZW50cnkgewogCWludCBkZWZlcnJlZF9yZW1vdmU7CiAJ aW50IGRlbGF5X3dhdGNoX2NoZWNrczsKIAlpbnQgZGVsYXlfd2FpdF9jaGVja3M7CisJaW50IHNh bl9wYXRoX2Vycl90aHJlc2hvbGQ7CisJaW50IHNhbl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93 OworCWludCBzYW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGltZTsKIAlpbnQgc2tpcF9rcGFydHg7CiAJ aW50IG1heF9zZWN0b3JzX2tiOwogCWNoYXIgKiBibF9wcm9kdWN0OwpAQCAtOTMsNiArOTYsOSBA QCBzdHJ1Y3QgbXBlbnRyeSB7CiAJaW50IGRlZmVycmVkX3JlbW92ZTsKIAlpbnQgZGVsYXlfd2F0 Y2hfY2hlY2tzOwogCWludCBkZWxheV93YWl0X2NoZWNrczsKKwlpbnQgc2FuX3BhdGhfZXJyX3Ro cmVzaG9sZDsKKwlpbnQgc2FuX3BhdGhfZXJyX3RocmVzaG9sZF93aW5kb3c7CisJaW50IHNhbl9w YXRoX2Vycl9yZWNvdmVyeV90aW1lOwogCWludCBza2lwX2twYXJ0eDsKIAlpbnQgbWF4X3NlY3Rv cnNfa2I7CiAJdWlkX3QgdWlkOwpAQCAtMTM4LDYgKzE0NCw5IEBAIHN0cnVjdCBjb25maWcgewog CWludCBwcm9jZXNzZWRfbWFpbl9jb25maWc7CiAJaW50IGRlbGF5X3dhdGNoX2NoZWNrczsKIAlp bnQgZGVsYXlfd2FpdF9jaGVja3M7CisJaW50IHNhbl9wYXRoX2Vycl90aHJlc2hvbGQ7CisJaW50 IHNhbl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93OworCWludCBzYW5fcGF0aF9lcnJfcmVjb3Zl cnlfdGltZTsKIAlpbnQgdXhzb2NrX3RpbWVvdXQ7CiAJaW50IHN0cmljdF90aW1pbmc7CiAJaW50 IHJldHJpZ2dlcl90cmllczsKZGlmZiAtLWdpdCBhL2xpYm11bHRpcGF0aC9jb25maWd1cmUuYyBi L2xpYm11bHRpcGF0aC9jb25maWd1cmUuYwppbmRleCBhMGZjYWQ5Li4wZjUwODI2IDEwMDY0NAot LS0gYS9saWJtdWx0aXBhdGgvY29uZmlndXJlLmMKKysrIGIvbGlibXVsdGlwYXRoL2NvbmZpZ3Vy ZS5jCkBAIC0yOTQsNiArMjk0LDkgQEAgaW50IHNldHVwX21hcChzdHJ1Y3QgbXVsdGlwYXRoICpt 
cHAsIGNoYXIgKnBhcmFtcywgaW50IHBhcmFtc19zaXplKQogCXNlbGVjdF9kZWZlcnJlZF9yZW1v dmUoY29uZiwgbXBwKTsKIAlzZWxlY3RfZGVsYXlfd2F0Y2hfY2hlY2tzKGNvbmYsIG1wcCk7CiAJ c2VsZWN0X2RlbGF5X3dhaXRfY2hlY2tzKGNvbmYsIG1wcCk7CisJc2VsZWN0X3Nhbl9wYXRoX2Vy cl90aHJlc2hvbGQoY29uZiwgbXBwKTsKKwlzZWxlY3Rfc2FuX3BhdGhfZXJyX3RocmVzaG9sZF93 aW5kb3coY29uZiwgbXBwKTsKKwlzZWxlY3Rfc2FuX3BhdGhfZXJyX3JlY292ZXJ5X3RpbWUoY29u ZiwgbXBwKTsKIAlzZWxlY3Rfc2tpcF9rcGFydHgoY29uZiwgbXBwKTsKIAlzZWxlY3RfbWF4X3Nl Y3RvcnNfa2IoY29uZiwgbXBwKTsKIApkaWZmIC0tZ2l0IGEvbGlibXVsdGlwYXRoL2RlZmF1bHRz LmggYi9saWJtdWx0aXBhdGgvZGVmYXVsdHMuaAppbmRleCBiOWIwYTM3Li45ZTgwNTljIDEwMDY0 NAotLS0gYS9saWJtdWx0aXBhdGgvZGVmYXVsdHMuaAorKysgYi9saWJtdWx0aXBhdGgvZGVmYXVs dHMuaApAQCAtMjQsNiArMjQsNyBAQAogI2RlZmluZSBERUZBVUxUX0RFVEVDVF9QUklPCURFVEVD VF9QUklPX09OCiAjZGVmaW5lIERFRkFVTFRfREVGRVJSRURfUkVNT1ZFCURFRkVSUkVEX1JFTU9W RV9PRkYKICNkZWZpbmUgREVGQVVMVF9ERUxBWV9DSEVDS1MJREVMQVlfQ0hFQ0tTX09GRgorI2Rl ZmluZSBERUZBVUxUX0VSUl9DSEVDS1MJRVJSX0NIRUNLU19PRkYKICNkZWZpbmUgREVGQVVMVF9V RVZFTlRfU1RBQ0tTSVpFIDI1NgogI2RlZmluZSBERUZBVUxUX1JFVFJJR0dFUl9ERUxBWQkxMAog I2RlZmluZSBERUZBVUxUX1JFVFJJR0dFUl9UUklFUwkzCmRpZmYgLS1naXQgYS9saWJtdWx0aXBh dGgvZGljdC5jIGIvbGlibXVsdGlwYXRoL2RpY3QuYwppbmRleCBkYzIxODQ2Li5hNTY4OWJkIDEw MDY0NAotLS0gYS9saWJtdWx0aXBhdGgvZGljdC5jCisrKyBiL2xpYm11bHRpcGF0aC9kaWN0LmMK QEAgLTEwNzQsNiArMTA3NCw3MiBAQCBkZWNsYXJlX2h3X3NucHJpbnQoZGVsYXlfd2FpdF9jaGVj a3MsIHByaW50X2RlbGF5X2NoZWNrcykKIGRlY2xhcmVfbXBfaGFuZGxlcihkZWxheV93YWl0X2No ZWNrcywgc2V0X2RlbGF5X2NoZWNrcykKIGRlY2xhcmVfbXBfc25wcmludChkZWxheV93YWl0X2No ZWNrcywgcHJpbnRfZGVsYXlfY2hlY2tzKQogCisKK3N0YXRpYyBpbnQKK3NldF9wYXRoX2Vycl9p bmZvKHZlY3RvciBzdHJ2ZWMsIHZvaWQgKnB0cikKK3sKKyAgICAgICAgaW50ICppbnRfcHRyID0g KGludCAqKXB0cjsKKyAgICAgICAgY2hhciAqIGJ1ZmY7CisKKyAgICAgICAgYnVmZiA9IHNldF92 YWx1ZShzdHJ2ZWMpOworICAgICAgICBpZiAoIWJ1ZmYpCisgICAgICAgICAgICAgICAgcmV0dXJu IDE7CisKKyAgICAgICAgaWYgKCFzdHJjbXAoYnVmZiwgIm5vIikgfHwgIXN0cmNtcChidWZmLCAi 
MCIpKQorICAgICAgICAgICAgICAgICppbnRfcHRyID0gRVJSX0NIRUNLU19PRkY7CisgICAgICAg IGVsc2UgaWYgKCgqaW50X3B0ciA9IGF0b2koYnVmZikpIDwgMSkKKyAgICAgICAgICAgICAgICAq aW50X3B0ciA9IEVSUl9DSEVDS1NfVU5ERUY7CisKKyAgICAgICAgRlJFRShidWZmKTsKKyAgICAg ICAgcmV0dXJuIDA7Cit9CisKK2ludAorcHJpbnRfcGF0aF9lcnJfaW5mbyhjaGFyICogYnVmZiwg aW50IGxlbiwgdm9pZCAqcHRyKQoreworICAgICAgICBpbnQgKmludF9wdHIgPSAoaW50ICopcHRy OworCisgICAgICAgIHN3aXRjaCgqaW50X3B0cikgeworICAgICAgICBjYXNlIEVSUl9DSEVDS1Nf VU5ERUY6CisgICAgICAgICAgICAgICAgcmV0dXJuIDA7CisgICAgICAgIGNhc2UgRVJSX0NIRUNL U19PRkY6CisgICAgICAgICAgICAgICAgcmV0dXJuIHNucHJpbnRmKGJ1ZmYsIGxlbiwgIlwib2Zm XCIiKTsKKyAgICAgICAgZGVmYXVsdDoKKyAgICAgICAgICAgICAgICByZXR1cm4gc25wcmludGYo YnVmZiwgbGVuLCAiJWkiLCAqaW50X3B0cik7CisgICAgICAgIH0KK30KKworCisKKworCitkZWNs YXJlX2RlZl9oYW5kbGVyKHNhbl9wYXRoX2Vycl90aHJlc2hvbGQsIHNldF9wYXRoX2Vycl9pbmZv KQorZGVjbGFyZV9kZWZfc25wcmludChzYW5fcGF0aF9lcnJfdGhyZXNob2xkLCBwcmludF9wYXRo X2Vycl9pbmZvKQorZGVjbGFyZV9vdnJfaGFuZGxlcihzYW5fcGF0aF9lcnJfdGhyZXNob2xkLCBz ZXRfcGF0aF9lcnJfaW5mbykKK2RlY2xhcmVfb3ZyX3NucHJpbnQoc2FuX3BhdGhfZXJyX3RocmVz aG9sZCwgcHJpbnRfcGF0aF9lcnJfaW5mbykKK2RlY2xhcmVfaHdfaGFuZGxlcihzYW5fcGF0aF9l cnJfdGhyZXNob2xkLCBzZXRfcGF0aF9lcnJfaW5mbykKK2RlY2xhcmVfaHdfc25wcmludChzYW5f cGF0aF9lcnJfdGhyZXNob2xkLCBwcmludF9wYXRoX2Vycl9pbmZvKQorZGVjbGFyZV9tcF9oYW5k bGVyKHNhbl9wYXRoX2Vycl90aHJlc2hvbGQsIHNldF9wYXRoX2Vycl9pbmZvKQorZGVjbGFyZV9t cF9zbnByaW50KHNhbl9wYXRoX2Vycl90aHJlc2hvbGQsIHByaW50X3BhdGhfZXJyX2luZm8pCisK K2RlY2xhcmVfZGVmX2hhbmRsZXIoc2FuX3BhdGhfZXJyX3RocmVzaG9sZF93aW5kb3csIHNldF9w YXRoX2Vycl9pbmZvKQorZGVjbGFyZV9kZWZfc25wcmludChzYW5fcGF0aF9lcnJfdGhyZXNob2xk X3dpbmRvdywgcHJpbnRfcGF0aF9lcnJfaW5mbykKK2RlY2xhcmVfb3ZyX2hhbmRsZXIoc2FuX3Bh dGhfZXJyX3RocmVzaG9sZF93aW5kb3csIHNldF9wYXRoX2Vycl9pbmZvKQorZGVjbGFyZV9vdnJf c25wcmludChzYW5fcGF0aF9lcnJfdGhyZXNob2xkX3dpbmRvdywgcHJpbnRfcGF0aF9lcnJfaW5m bykKK2RlY2xhcmVfaHdfaGFuZGxlcihzYW5fcGF0aF9lcnJfdGhyZXNob2xkX3dpbmRvdywgc2V0 
X3BhdGhfZXJyX2luZm8pCitkZWNsYXJlX2h3X3NucHJpbnQoc2FuX3BhdGhfZXJyX3RocmVzaG9s ZF93aW5kb3csIHByaW50X3BhdGhfZXJyX2luZm8pCitkZWNsYXJlX21wX2hhbmRsZXIoc2FuX3Bh dGhfZXJyX3RocmVzaG9sZF93aW5kb3csIHNldF9wYXRoX2Vycl9pbmZvKQorZGVjbGFyZV9tcF9z bnByaW50KHNhbl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93LCBwcmludF9wYXRoX2Vycl9pbmZv KQorCisKK2RlY2xhcmVfZGVmX2hhbmRsZXIoc2FuX3BhdGhfZXJyX3JlY292ZXJ5X3RpbWUsIHNl dF9wYXRoX2Vycl9pbmZvKQorZGVjbGFyZV9kZWZfc25wcmludChzYW5fcGF0aF9lcnJfcmVjb3Zl cnlfdGltZSwgcHJpbnRfcGF0aF9lcnJfaW5mbykKK2RlY2xhcmVfb3ZyX2hhbmRsZXIoc2FuX3Bh dGhfZXJyX3JlY292ZXJ5X3RpbWUsIHNldF9wYXRoX2Vycl9pbmZvKQorZGVjbGFyZV9vdnJfc25w cmludChzYW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGltZSwgcHJpbnRfcGF0aF9lcnJfaW5mbykKK2Rl Y2xhcmVfaHdfaGFuZGxlcihzYW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGltZSwgc2V0X3BhdGhfZXJy X2luZm8pCitkZWNsYXJlX2h3X3NucHJpbnQoc2FuX3BhdGhfZXJyX3JlY292ZXJ5X3RpbWUsIHBy aW50X3BhdGhfZXJyX2luZm8pCitkZWNsYXJlX21wX2hhbmRsZXIoc2FuX3BhdGhfZXJyX3JlY292 ZXJ5X3RpbWUsIHNldF9wYXRoX2Vycl9pbmZvKQorZGVjbGFyZV9tcF9zbnByaW50KHNhbl9wYXRo X2Vycl9yZWNvdmVyeV90aW1lLCBwcmludF9wYXRoX2Vycl9pbmZvKQogc3RhdGljIGludAogZGVm X3V4c29ja190aW1lb3V0X2hhbmRsZXIoc3RydWN0IGNvbmZpZyAqY29uZiwgdmVjdG9yIHN0cnZl YykKIHsKQEAgLTE0MDQsNiArMTQ3MCwxMCBAQCBpbml0X2tleXdvcmRzKHZlY3RvciBrZXl3b3Jk cykKIAlpbnN0YWxsX2tleXdvcmQoImNvbmZpZ19kaXIiLCAmZGVmX2NvbmZpZ19kaXJfaGFuZGxl ciwgJnNucHJpbnRfZGVmX2NvbmZpZ19kaXIpOwogCWluc3RhbGxfa2V5d29yZCgiZGVsYXlfd2F0 Y2hfY2hlY2tzIiwgJmRlZl9kZWxheV93YXRjaF9jaGVja3NfaGFuZGxlciwgJnNucHJpbnRfZGVm X2RlbGF5X3dhdGNoX2NoZWNrcyk7CiAJaW5zdGFsbF9rZXl3b3JkKCJkZWxheV93YWl0X2NoZWNr cyIsICZkZWZfZGVsYXlfd2FpdF9jaGVja3NfaGFuZGxlciwgJnNucHJpbnRfZGVmX2RlbGF5X3dh aXRfY2hlY2tzKTsKKyAgICAgICAgaW5zdGFsbF9rZXl3b3JkKCJzYW5fcGF0aF9lcnJfdGhyZXNo b2xkIiwgJmRlZl9zYW5fcGF0aF9lcnJfdGhyZXNob2xkX2hhbmRsZXIsICZzbnByaW50X2RlZl9z YW5fcGF0aF9lcnJfdGhyZXNob2xkKTsKKyAgICAgICAgaW5zdGFsbF9rZXl3b3JkKCJzYW5fcGF0 aF9lcnJfdGhyZXNob2xkX3dpbmRvdyIsICZkZWZfc2FuX3BhdGhfZXJyX3RocmVzaG9sZF93aW5k 
b3dfaGFuZGxlciwgJnNucHJpbnRfZGVmX3Nhbl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93KTsK KyAgICAgICAgaW5zdGFsbF9rZXl3b3JkKCJzYW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGltZSIsICZk ZWZfc2FuX3BhdGhfZXJyX3JlY292ZXJ5X3RpbWVfaGFuZGxlciwgJnNucHJpbnRfZGVmX3Nhbl9w YXRoX2Vycl9yZWNvdmVyeV90aW1lKTsKKwogCWluc3RhbGxfa2V5d29yZCgiZmluZF9tdWx0aXBh dGhzIiwgJmRlZl9maW5kX211bHRpcGF0aHNfaGFuZGxlciwgJnNucHJpbnRfZGVmX2ZpbmRfbXVs dGlwYXRocyk7CiAJaW5zdGFsbF9rZXl3b3JkKCJ1eHNvY2tfdGltZW91dCIsICZkZWZfdXhzb2Nr X3RpbWVvdXRfaGFuZGxlciwgJnNucHJpbnRfZGVmX3V4c29ja190aW1lb3V0KTsKIAlpbnN0YWxs X2tleXdvcmQoInJldHJpZ2dlcl90cmllcyIsICZkZWZfcmV0cmlnZ2VyX3RyaWVzX2hhbmRsZXIs ICZzbnByaW50X2RlZl9yZXRyaWdnZXJfdHJpZXMpOwpAQCAtMTQ4Niw2ICsxNTU2LDkgQEAgaW5p dF9rZXl3b3Jkcyh2ZWN0b3Iga2V5d29yZHMpCiAJaW5zdGFsbF9rZXl3b3JkKCJkZWZlcnJlZF9y ZW1vdmUiLCAmaHdfZGVmZXJyZWRfcmVtb3ZlX2hhbmRsZXIsICZzbnByaW50X2h3X2RlZmVycmVk X3JlbW92ZSk7CiAJaW5zdGFsbF9rZXl3b3JkKCJkZWxheV93YXRjaF9jaGVja3MiLCAmaHdfZGVs YXlfd2F0Y2hfY2hlY2tzX2hhbmRsZXIsICZzbnByaW50X2h3X2RlbGF5X3dhdGNoX2NoZWNrcyk7 CiAJaW5zdGFsbF9rZXl3b3JkKCJkZWxheV93YWl0X2NoZWNrcyIsICZod19kZWxheV93YWl0X2No ZWNrc19oYW5kbGVyLCAmc25wcmludF9od19kZWxheV93YWl0X2NoZWNrcyk7CisgICAgICAgIGlu c3RhbGxfa2V5d29yZCgic2FuX3BhdGhfZXJyX3RocmVzaG9sZCIsICZod19zYW5fcGF0aF9lcnJf dGhyZXNob2xkX2hhbmRsZXIsICZzbnByaW50X2h3X3Nhbl9wYXRoX2Vycl90aHJlc2hvbGQpOwor ICAgICAgICBpbnN0YWxsX2tleXdvcmQoInNhbl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93Iiwg Jmh3X3Nhbl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93X2hhbmRsZXIsICZzbnByaW50X2h3X3Nh bl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93KTsKKyAgICAgICAgaW5zdGFsbF9rZXl3b3JkKCJz YW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGltZSIsICZod19zYW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGlt ZV9oYW5kbGVyLCAmc25wcmludF9od19zYW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGltZSk7CiAJaW5z dGFsbF9rZXl3b3JkKCJza2lwX2twYXJ0eCIsICZod19za2lwX2twYXJ0eF9oYW5kbGVyLCAmc25w cmludF9od19za2lwX2twYXJ0eCk7CiAJaW5zdGFsbF9rZXl3b3JkKCJtYXhfc2VjdG9yc19rYiIs ICZod19tYXhfc2VjdG9yc19rYl9oYW5kbGVyLCAmc25wcmludF9od19tYXhfc2VjdG9yc19rYik7 
CiAJaW5zdGFsbF9zdWJsZXZlbF9lbmQoKTsKQEAgLTE1MTUsNiArMTU4OCwxMCBAQCBpbml0X2tl eXdvcmRzKHZlY3RvciBrZXl3b3JkcykKIAlpbnN0YWxsX2tleXdvcmQoImRlZmVycmVkX3JlbW92 ZSIsICZvdnJfZGVmZXJyZWRfcmVtb3ZlX2hhbmRsZXIsICZzbnByaW50X292cl9kZWZlcnJlZF9y ZW1vdmUpOwogCWluc3RhbGxfa2V5d29yZCgiZGVsYXlfd2F0Y2hfY2hlY2tzIiwgJm92cl9kZWxh eV93YXRjaF9jaGVja3NfaGFuZGxlciwgJnNucHJpbnRfb3ZyX2RlbGF5X3dhdGNoX2NoZWNrcyk7 CiAJaW5zdGFsbF9rZXl3b3JkKCJkZWxheV93YWl0X2NoZWNrcyIsICZvdnJfZGVsYXlfd2FpdF9j aGVja3NfaGFuZGxlciwgJnNucHJpbnRfb3ZyX2RlbGF5X3dhaXRfY2hlY2tzKTsKKyAgICAgICAg aW5zdGFsbF9rZXl3b3JkKCJzYW5fcGF0aF9lcnJfdGhyZXNob2xkIiwgJm92cl9zYW5fcGF0aF9l cnJfdGhyZXNob2xkX2hhbmRsZXIsICZzbnByaW50X292cl9zYW5fcGF0aF9lcnJfdGhyZXNob2xk KTsKKyAgICAgICAgaW5zdGFsbF9rZXl3b3JkKCJzYW5fcGF0aF9lcnJfdGhyZXNob2xkX3dpbmRv dyIsICZvdnJfc2FuX3BhdGhfZXJyX3RocmVzaG9sZF93aW5kb3dfaGFuZGxlciwgJnNucHJpbnRf b3ZyX3Nhbl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93KTsKKyAgICAgICAgaW5zdGFsbF9rZXl3 b3JkKCJzYW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGltZSIsICZvdnJfc2FuX3BhdGhfZXJyX3JlY292 ZXJ5X3RpbWVfaGFuZGxlciwgJnNucHJpbnRfb3ZyX3Nhbl9wYXRoX2Vycl9yZWNvdmVyeV90aW1l KTsKKwogCWluc3RhbGxfa2V5d29yZCgic2tpcF9rcGFydHgiLCAmb3ZyX3NraXBfa3BhcnR4X2hh bmRsZXIsICZzbnByaW50X292cl9za2lwX2twYXJ0eCk7CiAJaW5zdGFsbF9rZXl3b3JkKCJtYXhf c2VjdG9yc19rYiIsICZvdnJfbWF4X3NlY3RvcnNfa2JfaGFuZGxlciwgJnNucHJpbnRfb3ZyX21h eF9zZWN0b3JzX2tiKTsKIApAQCAtMTU0Myw2ICsxNjIwLDkgQEAgaW5pdF9rZXl3b3Jkcyh2ZWN0 b3Iga2V5d29yZHMpCiAJaW5zdGFsbF9rZXl3b3JkKCJkZWZlcnJlZF9yZW1vdmUiLCAmbXBfZGVm ZXJyZWRfcmVtb3ZlX2hhbmRsZXIsICZzbnByaW50X21wX2RlZmVycmVkX3JlbW92ZSk7CiAJaW5z dGFsbF9rZXl3b3JkKCJkZWxheV93YXRjaF9jaGVja3MiLCAmbXBfZGVsYXlfd2F0Y2hfY2hlY2tz X2hhbmRsZXIsICZzbnByaW50X21wX2RlbGF5X3dhdGNoX2NoZWNrcyk7CiAJaW5zdGFsbF9rZXl3 b3JkKCJkZWxheV93YWl0X2NoZWNrcyIsICZtcF9kZWxheV93YWl0X2NoZWNrc19oYW5kbGVyLCAm c25wcmludF9tcF9kZWxheV93YWl0X2NoZWNrcyk7CisJaW5zdGFsbF9rZXl3b3JkKCJzYW5fcGF0 aF9lcnJfdGhyZXNob2xkIiwgJm1wX3Nhbl9wYXRoX2Vycl90aHJlc2hvbGRfaGFuZGxlciwgJnNu 
cHJpbnRfbXBfc2FuX3BhdGhfZXJyX3RocmVzaG9sZCk7CisJaW5zdGFsbF9rZXl3b3JkKCJzYW5f cGF0aF9lcnJfdGhyZXNob2xkX3dpbmRvdyIsICZtcF9zYW5fcGF0aF9lcnJfdGhyZXNob2xkX3dp bmRvd19oYW5kbGVyLCAmc25wcmludF9tcF9zYW5fcGF0aF9lcnJfdGhyZXNob2xkX3dpbmRvdyk7 CisJaW5zdGFsbF9rZXl3b3JkKCJzYW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGltZSIsICZtcF9zYW5f cGF0aF9lcnJfcmVjb3ZlcnlfdGltZV9oYW5kbGVyLCAmc25wcmludF9tcF9zYW5fcGF0aF9lcnJf cmVjb3ZlcnlfdGltZSk7CiAJaW5zdGFsbF9rZXl3b3JkKCJza2lwX2twYXJ0eCIsICZtcF9za2lw X2twYXJ0eF9oYW5kbGVyLCAmc25wcmludF9tcF9za2lwX2twYXJ0eCk7CiAJaW5zdGFsbF9rZXl3 b3JkKCJtYXhfc2VjdG9yc19rYiIsICZtcF9tYXhfc2VjdG9yc19rYl9oYW5kbGVyLCAmc25wcmlu dF9tcF9tYXhfc2VjdG9yc19rYik7CiAJaW5zdGFsbF9zdWJsZXZlbF9lbmQoKTsKZGlmZiAtLWdp dCBhL2xpYm11bHRpcGF0aC9kaWN0LmggYi9saWJtdWx0aXBhdGgvZGljdC5oCmluZGV4IDRjZDAz YzUuLmFkYWE5ZjEgMTAwNjQ0Ci0tLSBhL2xpYm11bHRpcGF0aC9kaWN0LmgKKysrIGIvbGlibXVs dGlwYXRoL2RpY3QuaApAQCAtMTUsNSArMTUsNiBAQCBpbnQgcHJpbnRfZmFzdF9pb19mYWlsKGNo YXIgKiBidWZmLCBpbnQgbGVuLCB2b2lkICpwdHIpOwogaW50IHByaW50X2Rldl9sb3NzKGNoYXIg KiBidWZmLCBpbnQgbGVuLCB2b2lkICpwdHIpOwogaW50IHByaW50X3Jlc2VydmF0aW9uX2tleShj aGFyICogYnVmZiwgaW50IGxlbiwgdm9pZCAqIHB0cik7CiBpbnQgcHJpbnRfZGVsYXlfY2hlY2tz KGNoYXIgKiBidWZmLCBpbnQgbGVuLCB2b2lkICpwdHIpOworaW50IHByaW50X3BhdGhfZXJyX2lu Zm8oY2hhciAqIGJ1ZmYsIGludCBsZW4sIHZvaWQgKnB0cik7CiAKICNlbmRpZiAvKiBfRElDVF9I ICovCmRpZmYgLS1naXQgYS9saWJtdWx0aXBhdGgvcHJvcHNlbC5jIGIvbGlibXVsdGlwYXRoL3By b3BzZWwuYwppbmRleCBjMGJjNjE2Li5mNGNhMzc4IDEwMDY0NAotLS0gYS9saWJtdWx0aXBhdGgv cHJvcHNlbC5jCisrKyBiL2xpYm11bHRpcGF0aC9wcm9wc2VsLmMKQEAgLTY0Myw3ICs2NDMsNTEg QEAgb3V0OgogCXJldHVybiAwOwogCiB9CitpbnQgc2VsZWN0X3Nhbl9wYXRoX2Vycl90aHJlc2hv bGQoc3RydWN0IGNvbmZpZyAqY29uZiwgc3RydWN0IG11bHRpcGF0aCAqbXApCit7CisgICAgICAg IGNoYXIgKm9yaWdpbiwgYnVmZlsxMl07CisKKyAgICAgICAgbXBfc2V0X21wZShzYW5fcGF0aF9l cnJfdGhyZXNob2xkKTsKKyAgICAgICAgbXBfc2V0X292cihzYW5fcGF0aF9lcnJfdGhyZXNob2xk KTsKKyAgICAgICAgbXBfc2V0X2h3ZShzYW5fcGF0aF9lcnJfdGhyZXNob2xkKTsKKyAgICAgICAg 
bXBfc2V0X2NvbmYoc2FuX3BhdGhfZXJyX3RocmVzaG9sZCk7CisgICAgICAgIG1wX3NldF9kZWZh dWx0KHNhbl9wYXRoX2Vycl90aHJlc2hvbGQsIERFRkFVTFRfRVJSX0NIRUNLUyk7CitvdXQ6Cisg ICAgICAgIHByaW50X3BhdGhfZXJyX2luZm8oYnVmZiwgMTIsICZtcC0+c2FuX3BhdGhfZXJyX3Ro cmVzaG9sZCk7CisgICAgICAgIGNvbmRsb2coMywgIiVzOiBzYW5fcGF0aF9lcnJfdGhyZXNob2xk ID0gJXMgJXMiLCBtcC0+YWxpYXMsIGJ1ZmYsIG9yaWdpbik7CisgICAgICAgIHJldHVybiAwOwor fQorCitpbnQgc2VsZWN0X3Nhbl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93KHN0cnVjdCBjb25m aWcgKmNvbmYsIHN0cnVjdCBtdWx0aXBhdGggKm1wKQoreworICAgICAgICBjaGFyICpvcmlnaW4s IGJ1ZmZbMTJdOworCisgICAgICAgIG1wX3NldF9tcGUoc2FuX3BhdGhfZXJyX3RocmVzaG9sZF93 aW5kb3cpOworICAgICAgICBtcF9zZXRfb3ZyKHNhbl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93 KTsKKyAgICAgICAgbXBfc2V0X2h3ZShzYW5fcGF0aF9lcnJfdGhyZXNob2xkX3dpbmRvdyk7Cisg ICAgICAgIG1wX3NldF9jb25mKHNhbl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93KTsKKyAgICAg ICAgbXBfc2V0X2RlZmF1bHQoc2FuX3BhdGhfZXJyX3RocmVzaG9sZF93aW5kb3csIERFRkFVTFRf RVJSX0NIRUNLUyk7CitvdXQ6CisgICAgICAgIHByaW50X3BhdGhfZXJyX2luZm8oYnVmZiwgMTIs ICZtcC0+c2FuX3BhdGhfZXJyX3RocmVzaG9sZF93aW5kb3cpOworICAgICAgICBjb25kbG9nKDMs ICIlczogc2FuX3BhdGhfZXJyX3RocmVzaG9sZF93aW5kb3cgPSAlcyAlcyIsIG1wLT5hbGlhcywg YnVmZiwgb3JpZ2luKTsKKyAgICAgICAgcmV0dXJuIDA7CisKK30KK2ludCBzZWxlY3Rfc2FuX3Bh dGhfZXJyX3JlY292ZXJ5X3RpbWUoc3RydWN0IGNvbmZpZyAqY29uZiwgc3RydWN0IG11bHRpcGF0 aCAqbXApCit7CisgICAgICAgIGNoYXIgKm9yaWdpbiwgYnVmZlsxMl07CiAKKyAgICAgICAgbXBf c2V0X21wZShzYW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGltZSk7CisgICAgICAgIG1wX3NldF9vdnIo c2FuX3BhdGhfZXJyX3JlY292ZXJ5X3RpbWUpOworICAgICAgICBtcF9zZXRfaHdlKHNhbl9wYXRo X2Vycl9yZWNvdmVyeV90aW1lKTsKKyAgICAgICAgbXBfc2V0X2NvbmYoc2FuX3BhdGhfZXJyX3Jl Y292ZXJ5X3RpbWUpOworICAgICAgICBtcF9zZXRfZGVmYXVsdChzYW5fcGF0aF9lcnJfcmVjb3Zl cnlfdGltZSwgREVGQVVMVF9FUlJfQ0hFQ0tTKTsKK291dDoKKyAgICAgICAgcHJpbnRfcGF0aF9l cnJfaW5mbyhidWZmLCAxMiwgJm1wLT5zYW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGltZSk7CisgICAg ICAgIGNvbmRsb2coMywgIiVzOiBzYW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGltZSA9ICVzICVzIiwg 
bXAtPmFsaWFzLCBidWZmLCBvcmlnaW4pOworICAgICAgICByZXR1cm4gMDsKKworfQogaW50IHNl bGVjdF9za2lwX2twYXJ0eCAoc3RydWN0IGNvbmZpZyAqY29uZiwgc3RydWN0IG11bHRpcGF0aCAq IG1wKQogewogCWNoYXIgKm9yaWdpbjsKZGlmZiAtLWdpdCBhL2xpYm11bHRpcGF0aC9wcm9wc2Vs LmggYi9saWJtdWx0aXBhdGgvcHJvcHNlbC5oCmluZGV4IGFkOThmYTUuLjg4YjU4NDAgMTAwNjQ0 Ci0tLSBhL2xpYm11bHRpcGF0aC9wcm9wc2VsLmgKKysrIGIvbGlibXVsdGlwYXRoL3Byb3BzZWwu aApAQCAtMjQsMyArMjQsOSBAQCBpbnQgc2VsZWN0X2RlbGF5X3dhdGNoX2NoZWNrcyAoc3RydWN0 IGNvbmZpZyAqY29uZiwgc3RydWN0IG11bHRpcGF0aCAqIG1wKTsKIGludCBzZWxlY3RfZGVsYXlf d2FpdF9jaGVja3MgKHN0cnVjdCBjb25maWcgKmNvbmYsIHN0cnVjdCBtdWx0aXBhdGggKiBtcCk7 CiBpbnQgc2VsZWN0X3NraXBfa3BhcnR4IChzdHJ1Y3QgY29uZmlnICpjb25mLCBzdHJ1Y3QgbXVs dGlwYXRoICogbXApOwogaW50IHNlbGVjdF9tYXhfc2VjdG9yc19rYiAoc3RydWN0IGNvbmZpZyAq Y29uZiwgc3RydWN0IG11bHRpcGF0aCAqIG1wKTsKK2ludCBzZWxlY3Rfc2FuX3BhdGhfZXJyX3Ro cmVzaG9sZF93aW5kb3coc3RydWN0IGNvbmZpZyAqY29uZiwgc3RydWN0IG11bHRpcGF0aCAqbXAp OworaW50IHNlbGVjdF9zYW5fcGF0aF9lcnJfdGhyZXNob2xkKHN0cnVjdCBjb25maWcgKmNvbmYs IHN0cnVjdCBtdWx0aXBhdGggKm1wKTsKK2ludCBzZWxlY3Rfc2FuX3BhdGhfZXJyX3JlY292ZXJ5 X3RpbWUoc3RydWN0IGNvbmZpZyAqY29uZiwgc3RydWN0IG11bHRpcGF0aCAqbXApOworCisKKwpk aWZmIC0tZ2l0IGEvbGlibXVsdGlwYXRoL3N0cnVjdHMuaCBiL2xpYm11bHRpcGF0aC9zdHJ1Y3Rz LmgKaW5kZXggMzk2ZjY5ZC4uOGI3YTgwMyAxMDA2NDQKLS0tIGEvbGlibXVsdGlwYXRoL3N0cnVj dHMuaAorKysgYi9saWJtdWx0aXBhdGgvc3RydWN0cy5oCkBAIC0xNTYsNiArMTU2LDEwIEBAIGVu dW0gZGVsYXlfY2hlY2tzX3N0YXRlcyB7CiAJREVMQVlfQ0hFQ0tTX09GRiA9IC0xLAogCURFTEFZ X0NIRUNLU19VTkRFRiA9IDAsCiB9OworZW51bSBlcnJfY2hlY2tzX3N0YXRlcyB7CisJRVJSX0NI RUNLU19PRkYgPSAtMSwKKwlFUlJfQ0hFQ0tTX1VOREVGID0gMCwKK307CiAKIGVudW0gaW5pdGlh bGl6ZWRfc3RhdGVzIHsKIAlJTklUX0ZBSUxFRCwKQEAgLTIyMyw3ICsyMjcsMTAgQEAgc3RydWN0 IHBhdGggewogCWludCBpbml0aWFsaXplZDsKIAlpbnQgcmV0cmlnZ2VyczsKIAlpbnQgd3dpZF9j aGFuZ2VkOwotCisJdW5zaWduZWQgaW50IHBhdGhfZmFpbHVyZXM7CisJdGltZV90ICAgZmFpbHVy ZV9zdGFydF90aW1lOworCXRpbWVfdCBkaXNfcmVpbnN0YW50ZV90aW1lOworCWludCBkaXNhYmxl 
X3JlaW5zdGF0ZTsKIAkvKiBjb25maWdsZXQgcG9pbnRlcnMgKi8KIAlzdHJ1Y3QgaHdlbnRyeSAq IGh3ZTsKIH07CkBAIC0yNTUsNiArMjYyLDkgQEAgc3RydWN0IG11bHRpcGF0aCB7CiAJaW50IGRl ZmVycmVkX3JlbW92ZTsKIAlpbnQgZGVsYXlfd2F0Y2hfY2hlY2tzOwogCWludCBkZWxheV93YWl0 X2NoZWNrczsKKwlpbnQgc2FuX3BhdGhfZXJyX3RocmVzaG9sZDsKKwlpbnQgc2FuX3BhdGhfZXJy X3RocmVzaG9sZF93aW5kb3c7CisJaW50IHNhbl9wYXRoX2Vycl9yZWNvdmVyeV90aW1lOwogCWlu dCBza2lwX2twYXJ0eDsKIAlpbnQgbWF4X3NlY3RvcnNfa2I7CiAJdW5zaWduZWQgaW50IGRldl9s b3NzOwpkaWZmIC0tZ2l0IGEvbGlibXVsdGlwYXRoL3N0cnVjdHNfdmVjLmMgYi9saWJtdWx0aXBh dGgvc3RydWN0c192ZWMuYwppbmRleCAyMmJlOGUwLi5iZjg0YjE3IDEwMDY0NAotLS0gYS9saWJt dWx0aXBhdGgvc3RydWN0c192ZWMuYworKysgYi9saWJtdWx0aXBhdGgvc3RydWN0c192ZWMuYwpA QCAtNTQ2LDYgKzU0Niw3IEBAIGludCB1cGRhdGVfbXVsdGlwYXRoIChzdHJ1Y3QgdmVjdG9ycyAq dmVjcywgY2hhciAqbWFwbmFtZSwgaW50IHJlc2V0KQogCXN0cnVjdCBwYXRoZ3JvdXAgICpwZ3A7 CiAJc3RydWN0IHBhdGggKnBwOwogCWludCBpLCBqOworCXN0cnVjdCB0aW1lc3BlYyBzdGFydF90 aW1lOwogCiAJbXBwID0gZmluZF9tcF9ieV9hbGlhcyh2ZWNzLT5tcHZlYywgbWFwbmFtZSk7CiAK QEAgLTU3MCw2ICs1NzEsMTUgQEAgaW50IHVwZGF0ZV9tdWx0aXBhdGggKHN0cnVjdCB2ZWN0b3Jz ICp2ZWNzLCBjaGFyICptYXBuYW1lLCBpbnQgcmVzZXQpCiAJCQkJaW50IG9sZHN0YXRlID0gcHAt PnN0YXRlOwogCQkJCWNvbmRsb2coMiwgIiVzOiBtYXJrIGFzIGZhaWxlZCIsIHBwLT5kZXYpOwog CQkJCW1wcC0+c3RhdF9wYXRoX2ZhaWx1cmVzKys7CisJCQkJLypDYXB0dXJlZCB0aGUgdGltZSB3 aGVuIHdlIHNlZSB0aGUgZmlyc3QgZmFpbHVyZSBvbiB0aGUgcGF0aCovCisJCQkJaWYocHAtPnBh dGhfZmFpbHVyZXMgPT0gMCkgeworCQkJCQlpZiAoY2xvY2tfZ2V0dGltZShDTE9DS19NT05PVE9O SUMsICZzdGFydF90aW1lKSAhPSAwKQorCQkJCQkJc3RhcnRfdGltZS50dl9zZWMgPSAwOworCQkJ CQlwcC0+ZmFpbHVyZV9zdGFydF90aW1lID0gc3RhcnRfdGltZS50dl9zZWM7CisJCisJCQkJfQor CQkJCS8qSW5jcmVtZW50IHRoZSBudW1iZXIgb2YgcGF0aCBmYWlsdXJlcyovCisJCQkJcHAtPnBh dGhfZmFpbHVyZXMrKzsKIAkJCQlwcC0+c3RhdGUgPSBQQVRIX0RPV047CiAJCQkJaWYgKG9sZHN0 YXRlID09IFBBVEhfVVAgfHwKIAkJCQkgICAgb2xkc3RhdGUgPT0gUEFUSF9HSE9TVCkKZGlmZiAt LWdpdCBhL211bHRpcGF0aC9tdWx0aXBhdGguY29uZi41IGIvbXVsdGlwYXRoL211bHRpcGF0aC5j 
b25mLjUKaW5kZXggMzY1ODlmNS4uN2RmZDQ4YSAxMDA2NDQKLS0tIGEvbXVsdGlwYXRoL211bHRp cGF0aC5jb25mLjUKKysrIGIvbXVsdGlwYXRoL211bHRpcGF0aC5jb25mLjUKQEAgLTc1MSw2ICs3 NTEsNDYgQEAgVGhlIGRlZmF1bHQgaXM6IFxmQi9ldGMvbXVsdGlwYXRoL2NvbmYuZC9cZlIKIC4K IC4KIC5UUAorLkIgc2FuX3BhdGhfZXJyX3RocmVzaG9sZAorSWYgc2V0IHRvIGEgdmFsdWUgZ3Jl YXRlciB0aGFuIDAsIG11bHRpcGF0aGQgd2lsbCB3YXRjaCBwYXRocyBhbmQgY2hlY2sgaG93IG1h bnkKK3RpbWVzIGEgcGF0aCBoYXMgYmVlbiBmYWlsZWQgZHVlIHRvIGVycm9ycy5JZiB0aGUgbnVt YmVyIG9mIGZhaWx1cmVzIG9uIGEgcGFydGljdWxhcgorcGF0aCBpcyBncmVhdGVyIHRoZW4gdGhl IHNhbl9wYXRoX2Vycl90aHJlc2hvbGQgdGhlbiB0aGUgcGF0aCB3aWxsIG5vdCAgcmVpbnN0YW50 ZQordGlsbCBzYW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGltZS5UaGVzZSBwYXRoIGZhaWx1cmVzIHNo b3VsZCBvY2N1ciB3aXRoaW4gYSAKK3Nhbl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93IHRpbWUg ZnJhbWUsIGlmIG5vdCB3ZSB3aWxsIGNvbnNpZGVyIHRoZSBwYXRoIGlzIGdvb2QgZW5vdWdoCit0 byByZWluc3RhbnRhdGUuCisuUlMKKy5UUAorVGhlIGRlZmF1bHQgaXM6IFxmQm5vXGZSCisuUkUK Ky4KKy4KKy5UUAorLkIgc2FuX3BhdGhfZXJyX3RocmVzaG9sZF93aW5kb3cKK0lmIHNldCB0byBh IHZhbHVlIGdyZWF0ZXIgdGhhbiAwLCBtdWx0aXBhdGhkIHdpbGwgY2hlY2sgd2hldGhlciB0aGUg cGF0aCBmYWlsdXJlcworaGFzIGV4Y2VlZGVkICB0aGUgc2FuX3BhdGhfZXJyX3RocmVzaG9sZCB3 aXRoaW4gdGhpcyB0aW1lIGZyYW1lIGkuZSAKK3Nhbl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93 IC4gSWYgc28gd2Ugd2lsbCBub3QgcmVpbnN0YW50ZSB0aGUgcGF0aCB0aWxsCitzYW5fcGF0aF9l cnJfcmVjb3ZlcnlfdGltZS4KK3Nhbl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93IHZhbHVlIHNo b3VsZCBiZSBpbiBzZWNzLgorLlJTCisuVFAKK1RoZSBkZWZhdWx0IGlzOiBcZkJub1xmUgorLlJF CisuCisuCisuVFAKKy5CIHNhbl9wYXRoX2Vycl9yZWNvdmVyeV90aW1lCitJZiBzZXQgdG8gYSB2 YWx1ZSBncmVhdGVyIHRoYW4gMCwgbXVsdGlwYXRoZCB3aWxsIG1ha2Ugc3VyZSB0aGF0IHdoZW4g cGF0aCBmYWlsdXJlcworaGFzIGV4Y2VlZGVkIHRoZSBzYW5fcGF0aF9lcnJfdGhyZXNob2xkIHdp dGhpbiBzYW5fcGF0aF9lcnJfdGhyZXNob2xkX3dpbmRvdyB0aGVuIHRoZSBwYXRoCit3aWxsIGJl IHBsYWNlZCBpbiBmYWlsZWQgc3RhdGUgZm9yIHNhbl9wYXRoX2Vycl9yZWNvdmVyeV90aW1lIGR1 cmF0aW9uLk9uY2Ugc2FuX3BhdGhfZXJyX3JlY292ZXJ5X3RpbWUKK2hhcyB0aW1lb3V0ICB3ZSB3 
aWxsIHJlaW5zdGFudGUgdGhlIGZhaWxlZCBwYXRoIC4KK3Nhbl9wYXRoX2Vycl9yZWNvdmVyeV90 aW1lIHZhbHVlIHNob3VsZCBiZSBpbiBzZWNzLgorLlJTCisuVFAKK1RoZSBkZWZhdWx0IGlzOiBc ZkJub1xmUgorLlJFCisuCisuCisuVFAKIC5CIGRlbGF5X3dhdGNoX2NoZWNrcwogSWYgc2V0IHRv IGEgdmFsdWUgZ3JlYXRlciB0aGFuIDAsIG11bHRpcGF0aGQgd2lsbCB3YXRjaCBwYXRocyB0aGF0 IGhhdmUKIHJlY2VudGx5IGJlY29tZSB2YWxpZCBmb3IgdGhpcyBtYW55IGNoZWNrcy4gSWYgdGhl eSBmYWlsIGFnYWluIHdoaWxlIHRoZXkgYXJlCkBAIC0xMDE1LDYgKzEwNTUsMTIgQEAgYXJlIHRh a2VuIGZyb20gdGhlIFxmSWRlZmF1bHRzXGZSIG9yIFxmSWRldmljZXNcZlIgc2VjdGlvbjoKIC5U UAogLkIgZGVmZXJyZWRfcmVtb3ZlCiAuVFAKKy5CIHNhbl9wYXRoX2Vycl90aHJlc2hvbGQKKy5U UAorLkIgc2FuX3BhdGhfZXJyX3RocmVzaG9sZF93aW5kb3cKKy5UUAorLkIgc2FuX3BhdGhfZXJy X3JlY292ZXJ5X3RpbWUKKy5UUAogLkIgZGVsYXlfd2F0Y2hfY2hlY2tzCiAuVFAKIC5CIGRlbGF5 X3dhaXRfY2hlY2tzCkBAIC0xMTI4LDYgKzExNzQsMTIgQEAgc2VjdGlvbjoKIC5UUAogLkIgZGVm ZXJyZWRfcmVtb3ZlCiAuVFAKKy5CIHNhbl9wYXRoX2Vycl90aHJlc2hvbGQKKy5UUAorLkIgc2Fu X3BhdGhfZXJyX3RocmVzaG9sZF93aW5kb3cKKy5UUAorLkIgc2FuX3BhdGhfZXJyX3JlY292ZXJ5 X3RpbWUKKy5UUAogLkIgZGVsYXlfd2F0Y2hfY2hlY2tzCiAuVFAKIC5CIGRlbGF5X3dhaXRfY2hl Y2tzCkBAIC0xMTkyLDYgKzEyNDQsMTIgQEAgdGhlIHZhbHVlcyBhcmUgdGFrZW4gZnJvbSB0aGUg XGZJZGV2aWNlc1xmUiBvciBcZklkZWZhdWx0c1xmUiBzZWN0aW9uczoKIC5UUAogLkIgZGVmZXJy ZWRfcmVtb3ZlCiAuVFAKKy5CIHNhbl9wYXRoX2Vycl90aHJlc2hvbGQKKy5UUAorLkIgc2FuX3Bh dGhfZXJyX3RocmVzaG9sZF93aW5kb3cKKy5UUAorLkIgc2FuX3BhdGhfZXJyX3JlY292ZXJ5X3Rp bWUKKy5UUAogLkIgZGVsYXlfd2F0Y2hfY2hlY2tzCiAuVFAKIC5CIGRlbGF5X3dhaXRfY2hlY2tz CmRpZmYgLS1naXQgYS9tdWx0aXBhdGhkL21haW4uYyBiL211bHRpcGF0aGQvbWFpbi5jCmluZGV4 IGFkYzMyNTguLmZhY2ZjMDMgMTAwNjQ0Ci0tLSBhL211bHRpcGF0aGQvbWFpbi5jCisrKyBiL211 bHRpcGF0aGQvbWFpbi5jCkBAIC0xNDg2LDcgKzE0ODYsNTQgQEAgdm9pZCByZXBhaXJfcGF0aChz dHJ1Y3QgcGF0aCAqIHBwKQogCWNoZWNrZXJfcmVwYWlyKCZwcC0+Y2hlY2tlcik7CiAJTE9HX01T RygxLCBjaGVja2VyX21lc3NhZ2UoJnBwLT5jaGVja2VyKSk7CiB9CitzdGF0aWMgaW50IGNoZWNr X3BhdGhfdmFsaWRpdHlfZXJyKCBzdHJ1Y3QgcGF0aCAqIHBwKXsKKwlzdHJ1Y3QgdGltZXNwZWMg 
c3RhcnRfdGltZTsKKwlpbnQgZGlzYWJsZV9yZWluc3RhdGUgPSAwOworCisJaWYgKGNsb2NrX2dl dHRpbWUoQ0xPQ0tfTU9OT1RPTklDLCAmc3RhcnRfdGltZSkgIT0gMCkKKwkJc3RhcnRfdGltZS50 dl9zZWMgPSAwOworCisJCS8qSWYgbnVtYmVyIG9mIHBhdGggZmFpbHVyZXMgYXJlIG1vcmUgdGhl biB0aGUgc2FuX3BhdGhfZXJyX3RocmVzaG9sZCovCisJCWlmKChwcC0+bXBwLT5zYW5fcGF0aF9l cnJfdGhyZXNob2xkID4gMCkmJiAocHAtPnBhdGhfZmFpbHVyZXMgPiBwcC0+bXBwLT5zYW5fcGF0 aF9lcnJfdGhyZXNob2xkKSl7CisJCQljb25kbG9nKDMsIlxucGF0aCAlcyA6aGl0IHRoZSBlcnJv ciB0aHJlc2hvbGRcbiIscHAtPmRldik7CisKKwkJCWlmKCFwcC0+ZGlzYWJsZV9yZWluc3RhdGUp eworCQkJCS8qaWYgdGhlIGVycm9yIHRocmVzaG9sZCBoYXMgaGl0IGhpdCB3aXRoaW4gdGhlIHNh bl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93CisJCQkJICogdGltZSBmcmFtZSBkb25vdCByZWlu c3RhbnRlIHRoZSBwYXRoIHRpbGwgdGhlIHNhbl9wYXRoX2Vycl9yZWNvdmVyeV90aW1lCisJCQkJ ICogcGxhY2UgdGhlIHBhdGggaW4gZmFpbGVkIHN0YXRlIHRpbGwgc2FuX3BhdGhfZXJyX3JlY292 ZXJ5X3RpbWUgc28gdGhhdCB0aGUgCisJCQkJICogY3V0b21lciBjYW4gcmVjdGlmeSB0aGUgaXNz dWUgd2l0aGluIHRoaXMgdGltZSAuT25jZSB0aGUgY29wbGV0aW9uIG9mIAorCQkJCSAqIHNhbl9w YXRoX2Vycl9yZWNvdmVyeV90aW1lIGl0IHNob3VsZCBhdXRvbWF0aWNhbGx5IHJlaW5zdGFudGF0 ZSB0aGUgcGF0aAorCQkJCSAqICovCisJCQkJaWYoKHBwLT5tcHAtPnNhbl9wYXRoX2Vycl90aHJl c2hvbGRfd2luZG93ID4gMCkgJiYgCisJCQkJICAgKChzdGFydF90aW1lLnR2X3NlYyAtIHBwLT5m YWlsdXJlX3N0YXJ0X3RpbWUpIDwgcHAtPm1wcC0+c2FuX3BhdGhfZXJyX3RocmVzaG9sZF93aW5k b3cpKXsKKwkJCQkJY29uZGxvZygzLCJcbnBhdGggJXMgOmhpdCB0aGUgZXJyb3IgdGhyZXNob2xk IHdpdGhpbiB0aGUgdGhyc2hvbGQgd2luZG93IHRpbWVcbiIscHAtPmRldik7CisJCQkJCWRpc2Fi bGVfcmVpbnN0YXRlID0gMTsgCisJCQkJCXBwLT5kaXNfcmVpbnN0YW50ZV90aW1lID0gc3RhcnRf dGltZS50dl9zZWMgOworCQkJCQlwcC0+ZGlzYWJsZV9yZWluc3RhdGUgPSAxOworCQkJCX1lbHNl eworCQkJCQkvKmV2ZW4gdGhvdWdoIHRoZSBudW1iZXIgb2YgZXJyb3JzIGFyZSBncmVhdGVyIHRo ZW4gdGhlIHNhbl9wYXRoX2Vycl90aHJlc2hvbGQKKwkJCQkJICpzaW5jZSBpdCBkb2Vzbm90IGhp dCB3aXRoaW4gdGhlIHNhbl9wYXRoX2Vycl90aHJlc2hvbGRfd2luZG93IHRpbWUgIHdlIHNob3Vs ZCBub3QgdGFrZSB0aGVzZQorCQkJCQkgKiBlcnJyb3MgaW50byBhY2NvdW50IGFuZCB3ZSBoYXZl 
IHRvIHJld2F0Y2ggdGhlIGVycm9ycworCQkJCQkgKi8KKwkJCQkJcHAtPnBhdGhfZmFpbHVyZXMg PSAwOworCQkJCQlwcC0+ZGlzYWJsZV9yZWluc3RhdGUgPSAwOworCisJCQkJfQorCQkJfQorCQkJ aWYocHAtPmRpc2FibGVfcmVpbnN0YXRlKXsKKwkJCQlkaXNhYmxlX3JlaW5zdGF0ZSA9IDE7CisJ CQkJaWYoKHBwLT5tcHAtPnNhbl9wYXRoX2Vycl9yZWNvdmVyeV90aW1lID4gMCkgJiYgCisJCQkJ ICAgKHN0YXJ0X3RpbWUudHZfc2VjIC0gcHAtPmRpc19yZWluc3RhbnRlX3RpbWUgKSA+IHBwLT5t cHAtPnNhbl9wYXRoX2Vycl9yZWNvdmVyeV90aW1lKXsKKwkJCQkJZGlzYWJsZV9yZWluc3RhdGUg PTA7CisJCQkJCXBwLT5wYXRoX2ZhaWx1cmVzID0gMDsKKwkJCQkJcHAtPmRpc2FibGVfcmVpbnN0 YXRlID0gMDsKKwkJCQkJIGNvbmRsb2coMywiXG5wYXRoICVzIDpyZWluc3RhdGUgdGhlIHBhdGgg YWZ0ZXIgZXJyIHJlY292ZXJ5IHRpbWVcbiIscHAtPmRldik7CisJCQkJfQogCisJCQl9CisJCX0K KwlyZXR1cm4gIGRpc2FibGVfcmVpbnN0YXRlOworfQogLyoKICAqIFJldHVybnMgJzEnIGlmIHRo ZSBwYXRoIGhhcyBiZWVuIGNoZWNrZWQsICctMScgaWYgaXQgd2FzIGJsYWNrbGlzdGVkCiAgKiBh bmQgJzAnIG90aGVyd2lzZQpAQCAtMTUwMyw3ICsxNTUwLDExIEBAIGNoZWNrX3BhdGggKHN0cnVj dCB2ZWN0b3JzICogdmVjcywgc3RydWN0IHBhdGggKiBwcCwgaW50IHRpY2tzKQogCWludCByZXRy aWdnZXJfdHJpZXMsIGNoZWNraW50OwogCXN0cnVjdCBjb25maWcgKmNvbmY7CiAJaW50IHJldDsK KwlzdHJ1Y3QgdGltZXNwZWMgc3RhcnRfdGltZTsKIAorCWlmIChjbG9ja19nZXR0aW1lKENMT0NL X01PTk9UT05JQywgJnN0YXJ0X3RpbWUpICE9IDApCisJCXN0YXJ0X3RpbWUudHZfc2VjID0gMDsK KwkKIAlpZiAoKHBwLT5pbml0aWFsaXplZCA9PSBJTklUX09LIHx8CiAJICAgICBwcC0+aW5pdGlh bGl6ZWQgPT0gSU5JVF9SRVFVRVNURURfVURFVikgJiYgIXBwLT5tcHApCiAJCXJldHVybiAwOwpA QCAtMTYxNSwxMiArMTY2NiwxOCBAQCBjaGVja19wYXRoIChzdHJ1Y3QgdmVjdG9ycyAqIHZlY3Ms IHN0cnVjdCBwYXRoICogcHAsIGludCB0aWNrcykKIAkgKiBhbmQgaWYgdGFyZ2V0IHN1cHBvcnRz IG9ubHkgaW1wbGljaXQgdHBncyBtb2RlLgogCSAqIHRoaXMgd2lsbCBwcmV2ZW50IHVubmVjZXNz YXJ5IGkvbyBieSBkbSBvbiBzdGFuZC1ieQogCSAqIHBhdGhzIGlmIHRoZXJlIGFyZSBubyBvdGhl ciBhY3RpdmUgcGF0aHMgaW4gbWFwLgorCSAqCisJICogd2hlbiBwYXRoIGZhaWx1cmVzIGhhcyBl eGNlZWRlZCB0aGUgc2FuX3BhdGhfZXJyX3RocmVzaG9sZCAKKwkgKiB3aXRoaW4gc2FuX3BhdGhf ZXJyX3RocmVzaG9sZF93aW5kb3cgdGhlbiB3ZSBkb24ndCByZWluc3RhdGUKKwkgKiBmYWlsZWQg 
From: "Benjamin Marzinski"
Subject: Re: deterministic io throughput in multipath
Date: Mon, 16 Jan 2017 19:04:47 -0600
Message-ID: <20170117010447.GW2732@octiron.msp.redhat.com>
In-Reply-To: <4dfed25f04c04771a732580a4a8cc834@BRMWP-EXMB12.corp.brocade.com>
To: Muneendra Kumar M
Cc: "dm-devel@redhat.com"
List-Id: dm-devel.ids

On Mon, Jan 16, 2017 at 11:19:19AM +0000, Muneendra Kumar M wrote:
> Hi Ben,
> After the below discussion we came with the approach which will meet our
> requirement.
> I have attached the patch, which is working well in our field tests.
> Could you please review the attached patch and provide us your valuable
> comments.

I can see a number of issues with this patch.

First, some nit-picks:
- I assume "dis_reinstante_time" should be "dis_reinstate_time"

- The indenting in check_path_validity_err is wrong, which made it
  confusing until I noticed that

if (clock_gettime(CLOCK_MONOTONIC, &start_time) != 0)

  doesn't have an open brace, and shouldn't indent the rest of the
  function.

- You call clock_gettime in check_path, but never use the result.

- In dict.c, instead of writing your own functions that are the same as
  the *_delay_checks functions, you could make those functions generic
  and use them for both.  To match the other generic function names
  they would probably be something like

set_off_int_undef

print_off_int_undef

  You would also need to change DELAY_CHECKS_* and ERR_CHECKS_* to
  point to some common enum that you created, the way
  user_friendly_names_states (to name one of many) does. The generic
  enum used by *_off_int_undef would be something like

enum no_undef {
	NU_NO = -1,
	NU_UNDEF = 0,
}

  The idea is to try to cut down on the number of functions that are
  simply copy-pasting other functions in dict.c.

Those are all minor cleanup issues, but there are some bigger problems.

Instead of checking whether san_path_err_threshold,
san_path_err_threshold_window, and san_path_err_recovery_time are
greater than zero separately, you should probably check them all at the
start of check_path_validity_err, and return 0 unless they all are set.
Right now, if a user sets san_path_err_threshold and
san_path_err_threshold_window but not san_path_err_recovery_time, their
path will never recover after it hits the error threshold.  I'm pretty
sure that you don't mean to permanently disable the paths.
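That combined guard could be sketched roughly as follows (a hypothetical illustration only: `struct mpp_cfg`, `err_params_all_set`, and the simplified `check_path_validity_err` signature are made up here; the real patch works on multipath-tools structures):

```c
#include <assert.h>

/* Hypothetical subset of the patch's per-multipath settings,
 * for illustration only (field names follow the patch). */
struct mpp_cfg {
	int san_path_err_threshold;
	int san_path_err_threshold_window;
	int san_path_err_recovery_time;
};

/* The error-tracking feature is active only when ALL three tunables
 * are configured, so a partial configuration can never leave a path
 * disabled forever. */
static int err_params_all_set(const struct mpp_cfg *cfg)
{
	return cfg->san_path_err_threshold > 0 &&
	       cfg->san_path_err_threshold_window > 0 &&
	       cfg->san_path_err_recovery_time > 0;
}

static int check_path_validity_err(const struct mpp_cfg *cfg)
{
	if (!err_params_all_set(cfg))
		return 0;	/* feature off: never block reinstating */
	/* ... window/threshold accounting would go here ... */
	return 0;
}
```

The point of the single early return is that every later code path can assume a complete configuration.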
time_t is a signed type, which means that if you get the clock time in
update_multipath and then fail to get the clock time in
check_path_validity_err, this check:

start_time.tv_sec - pp->failure_start_time) < pp->mpp->san_path_err_threshold_window

will always be true.  I realize that clock_gettime is very unlikely to
fail.  But if it does, probably the safest thing to do is to just
immediately return 0 in check_path_validity_err.

The way you set path_failures in update_multipath may not get you what
you want.  It will only count path failures found by the kernel, and
not the path checker.  If check_path finds the error, pp->state will be
set to PATH_DOWN before pp->dmstate is set to PSTATE_FAILED.  That means
you will not increment path_failures.  Perhaps this is what you want,
but I would assume that you would want to count every time the path goes
down, regardless of whether multipathd or the kernel noticed it.

I'm not super enthusiastic about how the san_path_err_threshold_window
works.  First, it starts counting from when the path goes down, so if
the path takes long enough to get restored, and then fails immediately,
it can just keep failing and it will never hit the
san_path_err_threshold_window, since it spends so much of that time with
the path failed.  Also, the window gets set on the first error, and
never reset until the number of errors is over the threshold.  This
means that if you get one early error and then a bunch of errors much
later, you can go for (2 x san_path_err_threshold) - 1 errors until you
stop reinstating the path, because the window resets in the middle of
the string of errors.  It seems like a better idea would be to have
check_path_validity_err reset path_failures as soon as it notices that
you are past san_path_err_threshold_window, instead of waiting till the
number of errors hits san_path_err_threshold.
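The reset described above can be sketched like this (hypothetical: `struct perr` and `record_failure` are invented names that mirror the patch's per-path fields, not the actual code):

```c
#include <assert.h>
#include <time.h>

/* Hypothetical per-path error state, mirroring the patch's fields. */
struct perr {
	int path_failures;
	time_t failure_start_time;
};

/* Record one path failure, first dropping counts whose window has
 * already expired, so one old stray error cannot stretch the window
 * across a later burst (the "(2 x threshold) - 1" problem above). */
static void record_failure(struct perr *pp, time_t now, int window)
{
	if (pp->path_failures > 0 &&
	    now - pp->failure_start_time > window)
		pp->path_failures = 0;		/* window expired: forget */
	if (pp->path_failures == 0)
		pp->failure_start_time = now;	/* window restarts here */
	pp->path_failures++;
}
```

Because the expiry check runs on every recorded failure, the window can never be anchored to an error that is already stale.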
If I was going to design this, I think I would have san_path_err_threshold
and san_path_err_recovery_time like you do, but instead of having a
san_path_err_threshold_window, I would have something like
san_path_err_forget_rate.  The idea is that every san_path_err_forget_rate
number of successful path checks, you decrement path_failures by 1.  This
means that there is no window after which you reset.  If the path failures
come in faster than the forget rate, you will eventually hit the error
threshold.  This also has the benefit of easily not counting time when the
path was down as time where the path wasn't having problems.  But if you
don't like my idea, yours will work fine with some polish.

-Ben

> Below are the files that have been changed.
>
> libmultipath/config.c      |  3 +++
> libmultipath/config.h      |  9 +++++++++
> libmultipath/configure.c   |  3 +++
> libmultipath/defaults.h    |  1 +
> libmultipath/dict.c        | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> libmultipath/dict.h        |  1 +
> libmultipath/propsel.c     | 44 ++++++++++++++++++++++++++++++++++++++++++++
> libmultipath/propsel.h     |  6 ++++++
> libmultipath/structs.h     | 12 +++++++++++-
> libmultipath/structs_vec.c | 10 ++++++++++
> multipath/multipath.conf.5 | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> multipathd/main.c          | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
>
> We have added three new config parameters whose description is below.
> 1. san_path_err_threshold:
>        If set to a value greater than 0, multipathd will watch paths and
> check how many times a path has failed due to errors.
If the number
> of failures on a particular path is greater than the san_path_err_threshold,
> then the path will not be reinstated until
> san_path_err_recovery_time.  These path failures should occur within a
> san_path_err_threshold_window time frame; if not, we will consider the path
> good enough to reinstate.
>
> 2. san_path_err_threshold_window:
>        If set to a value greater than 0, multipathd will check whether
> the path failures have exceeded the san_path_err_threshold within this
> time frame, i.e. san_path_err_threshold_window.  If so, we will not reinstate
> the path until san_path_err_recovery_time.
>
> 3. san_path_err_recovery_time:
> If set to a value greater than 0, multipathd will make sure that when path
> failures have exceeded the san_path_err_threshold within
> san_path_err_threshold_window, the path will be placed in the failed
> state for the san_path_err_recovery_time duration.  Once
> san_path_err_recovery_time has timed out, we will reinstate the failed
> path.
>
> Regards,
> Muneendra.
>
> -----Original Message-----
> From: Muneendra Kumar M
> Sent: Wednesday, January 04, 2017 6:56 PM
> To: 'Benjamin Marzinski'
> Cc: dm-devel@redhat.com
> Subject: RE: [dm-devel] deterministic io throughput in multipath
>
> Hi Ben,
> Thanks for the information.
>
> Regards,
> Muneendra.
>
> -----Original Message-----
> From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
> Sent: Tuesday, January 03, 2017 10:42 PM
> To: Muneendra Kumar M <mmandala@Brocade.com>
> Cc: dm-devel@redhat.com
> Subject: Re: [dm-devel] deterministic io throughput in multipath
>
> On Mon, Dec 26, 2016 at 09:42:48AM +0000, Muneendra Kumar M wrote:
> > Hi Ben,
> >
> > If there are two paths on a dm-1, say sda and sdb, as below:
> >
> > # multipath -ll
> >        mpathd (3600110d001ee7f0102050001cc0b6751) dm-1 SANBlaze,VLUN MyLun
> >        size=8.0M features='0' hwhandler='0' wp=rw
> >        `-+- policy='round-robin 0' prio=50 status=active
> >          |- 8:0:1:0  sda 8:48 active ready  running
> >          `- 9:0:1:0  sdb 8:64 active ready  running
> >
> > And on sda I am seeing a lot of errors, due to which the sda path is
> fluctuating from failed state to active state and vice versa.
> >
> > My requirement is something like this: if sda has failed more than 5
> > times in an hour, then I want to keep sda in the failed state
> > for a few hours (3 hrs),
> >
> > and the data should travel only through the sdb path.
> > Will this be possible with the below parameters?
>
> No. delay_watch_checks sets for how many path checks you watch a path that
> has recently come back from the failed state.  If the path fails again
> within this time, the multipath device delays it.  This means that the
> delay is always triggered by two failures within the time limit.  It's
> possible to adapt this to count numbers of failures, and act after a
> certain number within a certain timeframe, but it would take a bit more
> work.
>
> delay_wait_checks doesn't guarantee that it will delay for any set length
> of time.  Instead, it sets the number of consecutive successful path
> checks that must occur before the path is usable again. You could set this
> for 3 hours of path checks, but if a check failed during this time, you
> would restart the 3 hours over again.
>
> -Ben
>
> > Can you just let me know what values I should add for delay_watch_checks
> > and delay_wait_checks.
> >
> > Regards,
> > Muneendra.
> >
> >
> >
> > -----Original Message-----
> > From: Muneendra Kumar M
> > Sent: Thursday, December 22, 2016 11:10 AM
> > To: 'Benjamin Marzinski' <bmarzins@redhat.com>
> > Cc: dm-devel@redhat.com
> > Subject: RE: [dm-devel] deterministic io throughput in multipath
> >
> > Hi Ben,
> >
> > Thanks for the reply.
> > I will look into these parameters, do the internal testing, and let
> you know the results.
> >
> > Regards,
> > Muneendra.
> >
> > -----Original Message-----
> > From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
> > Sent: Wednesday, December 21, 2016 9:40 PM
> > To: Muneendra Kumar M <mmandala@Brocade.com>
> > Cc: dm-devel@redhat.com
> > Subject: Re: [dm-devel] deterministic io throughput in multipath
> >
> > Have you looked into the delay_watch_checks and delay_wait_checks
> configuration parameters?  The idea behind them is to minimize the use of
> paths that are intermittently failing.
> >
> > -Ben
> >
> > On Mon, Dec 19, 2016 at 11:50:36AM +0000, Muneendra Kumar M wrote:
> > >    Customers using Linux hosts (mostly RHEL hosts) with a SAN network for
> > >    block storage complain that the Linux multipath stack is not resilient
> > >    enough to handle non-deterministic storage network behaviors. This has
> > >    caused many customers to move away to non-Linux based servers. The
> > >    intent of the below patch and the prevailing issues are given below.
> > >    With the below design we are seeing the Linux multipath stack becoming
> > >    resilient to such network issues. We hope that getting this patch
> > >    accepted will help in more Linux server adoption on SAN networks.
> > >
> > >    I have already sent the design details to the community in a different
> > >    mail chain and the details are available in the below link.
> > >
> > >    https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html
> > >
> > >    Can you please go through the design and send the comments to us.
> > >
> > >    Regards,
> > >
> > >    Muneendra.
> > >
> > > --
> > > dm-devel mailing list
> > > dm-devel@redhat.com
> > > https://www.redhat.com/mailman/listinfo/dm-devel

From mboxrd@z Thu Jan 1 00:00:00 1970
From: Muneendra Kumar M
Subject: Re: deterministic io throughput in multipath
Date: Tue, 17 Jan 2017 10:43:25 +0000
Message-ID: <8afcfb54df9c4d7c8525948917e64080@BRMWP-EXMB12.corp.brocade.com>
References: <1649d4b8538d4b4cb1efacdfe8cf31eb@BRMWP-EXMB12.corp.brocade.com> <20161221160940.GG19659@octiron.msp.redhat.com> <8cd4cc5f20b540a1b8312ad485711152@BRMWP-EXMB12.corp.brocade.com> <20170103171159.GA2732@octiron.msp.redhat.com> <4dfed25f04c04771a732580a4a8cc834@BRMWP-EXMB12.corp.brocade.com> <20170117010447.GW2732@octiron.msp.redhat.com>
In-Reply-To: <20170117010447.GW2732@octiron.msp.redhat.com>
To: Benjamin Marzinski
Cc: "dm-devel@redhat.com"
List-Id: dm-devel.ids

Hi Ben,
Thanks for the review.
In dict.c I will make sure I make generic functions which will be used by
both delay_checks and err_checks.
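For reference, the san_path_err_forget_rate scheme Ben proposed above could be sketched like this (a hypothetical illustration only: `struct perr_state` and `on_good_check` are invented names, not the code that was eventually submitted):

```c
#include <assert.h>

/* Hypothetical per-path counters for the forget-rate scheme. */
struct perr_state {
	int path_failures;	/* failures currently held against the path */
	int good_checks;	/* successful checks since the last decrement */
};

/* Called on every successful path check: after forget_rate consecutive
 * good checks, forget one recorded failure.  There is no window to
 * reset, and time spent with the path down never counts as
 * trouble-free time. */
static void on_good_check(struct perr_state *st, int forget_rate)
{
	if (forget_rate <= 0 || st->path_failures == 0)
		return;
	if (++st->good_checks >= forget_rate) {
		st->path_failures--;
		st->good_checks = 0;
	}
}
```

If failures arrive faster than they are forgotten, path_failures climbs toward the threshold; a genuinely recovered path drains back to zero.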
We want to increment the path failures every time the path goes down,
regardless of whether multipathd or the kernel noticed the failure.
Thanks for pointing this out.

I completely agree with the idea you mention below of replacing the
san_path_err_threshold_window with a san_path_err_forget_rate. This will
avoid counting time when the path was down as time where the path wasn't
having problems.

I will incorporate all the changes mentioned below and will resend the
patch once the testing is done.

Regards,
Muneendra.

-----Original Message-----
From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
Sent: Tuesday, January 17, 2017 6:35 AM
To: Muneendra Kumar M <mmandala@Brocade.com>
Cc: dm-devel@redhat.com
Subject: Re: [dm-devel] deterministic io throughput in multipath

[...]
 
-Ben
 
 
>    Below are the files that has been changed .
>     
>    libmultipath/config.c    &n= bsp; |  3 +++
>    libmultipath/config.h    &n= bsp; |  9 +++++++++
>    libmultipath/configure.c   |  3 = +++
>    libmultipath/defaults.h    |&nbs= p; 1 +
>    libmultipath/dict.c    &nbs= p;        | 80
>    +++++++++&#= 43;++++++++++++++&#= 43;++++++++++++++&#= 43;++++++++++++++&#= 43;++++++++++++++&#= 43;++++++++++
>    libmultipath/dict.h    &nbs= p;   |  1 +
>    libmultipath/propsel.c     = | 44
>    +++++++++&#= 43;++++++++++++++&#= 43;++++++++++++++&#= 43;++++
>    libmultipath/propsel.h     = |  6 ++++++
>    libmultipath/structs.h     = | 12 +++++++++++-
>    libmultipath/structs_vec.c | 10 +++= +++++++
>    multipath/multipath.conf.5 | 58
>    +++++++++&#= 43;++++++++++++++&#= 43;++++++++++++++&#= 43;++++++++++++++&#= 43;+++
>    multipathd/main.c     =      | 61
>    +++++++++&#= 43;++++++++++++++&#= 43;++++++++++++++&#= 43;++++++++++++++&#= 43;++++--
>     
>    We have added three new config parameters whose= description is below.
>    1.san_path_err_threshold:
>            If s= et to a value greater than 0, multipathd will watch paths and
>    check how many times a path has been failed due= to errors. If the number
>    of failures on a particular path is greater the= n the
>    san_path_err_threshold then the path will not&n= bsp; reinstate  till
>    san_path_err_recovery_time. These path failures= should occur within a
>    san_path_err_threshold_window time frame, if no= t we will consider the path
>    is good enough to reinstate.
>     
>    2.san_path_err_threshold_window:
>            If s= et to a value greater than 0, multipathd will check whether
>    the path failures has exceeded  the san_pa= th_err_threshold within this
>    time frame i.e san_path_err_threshold_window . = If so we will not reinstate
>    the path till     &nbs= p;    san_path_err_recovery_time.
>     
>    3.san_path_err_recovery_time:
>    If set to a value greater than 0, multipathd wi= ll make sure that when path
>    failures has exceeded the san_path_err_threshol= d within
>    san_path_err_threshold_window then the path&nbs= p; will be placed in failed
>    state for san_path_err_recovery_time duration. = Once
>    san_path_err_recovery_time has timeout  we= will reinstate the failed path
>    .
>     
>    Regards,
>    Muneendra.
>     
>    -----Original Message-----
>    From: Muneendra Kumar M
>    Sent: Wednesday, January 04, 2017 6:56 PM
>    To: 'Benjamin Marzinski' <bmarzins@redhat.com>
>    Subject: RE: [dm-devel] deterministic io throug= hput in multipath
>     
>    Hi Ben,
>    Thanks for the information.
>     
>    Regards,
>    Muneendra.
>     
>    -----Original Message-----
>    From: Benjamin Marzinski [[1]mailto:bmarzins@redhat.com]
>    Sent: Tuesday, January 03, 2017 10:42 PM
>    To: Muneendra Kumar M <[2]mmandala@Brocade.com>
>    Cc: [3]d= m-devel@redhat.com
>    Subject: Re: [dm-devel] deterministic io throug= hput in multipath
>     
>    On Mon, Dec 26, 2016 at 09:42:48AM +0000, M= uneendra Kumar M wrote:
>    > Hi Ben,
>    >
>    > If there are two paths on a dm-1 say sda a= nd sdb as below.
>    >
>    > #  multipath -ll
>    >        = mpathd (3600110d001ee7f0102050001cc0b6751) dm-1 SANBlaze,VLUN
>    MyLun
>    >        = size=3D8.0M features=3D'0' hwhandler=3D'0' wp=3Drw
>    >        = `-+- policy=3D'round-robin 0' prio=3D50 status=3Dactive
>    >       &= nbsp;  |- 8:0:1:0  sda 8:48 active ready  running
>    >       &= nbsp;  `- 9:0:1:0  sdb 8:64 active ready  running  = ;       
>    >
>    > And on sda if iam seeing lot of errors due= to which the sda path is
>    fluctuating from failed state to active state a= nd vicevera.
>    >
>    > My requirement is something like this if s= da is failed for more then 5
>    > times in a hour duration ,then I want to k= eep the sda in failed state
>    > for few hours (3hrs)
>    >
>    > And the data should travel only thorugh sd= b path.
>    > Will this be possible with the below param= eters.
>     
>    No. delay_watch_checks sets how may path checks= you watch a path that has
>    recently come back from the failed state. If th= e path fails again within
>    this time, multipath device delays it.  Th= is means that the delay is
>    always trigger by two failures within the time = limit.  It's possible to
>    adapt this to count numbers of failures, and ac= t after a certain number
>    within a certain timeframe, but it would take a= bit more work.
>     
>    delay_wait_checks doesn't guarantee that it wil= l delay for any set length
>    of time.  Instead, it sets the number of c= onsecutive successful path
>    checks that must occur before the path is usabl= e again. You could set this
>    for 3 hours of path checks, but if a check fail= ed during this time, you
>    would restart the 3 hours over again.
>     
>    -Ben
>     
>    > Can you just let me know what values I sho= uld add for delay_watch_checks
>    and delay_wait_checks.
>    >
>    > Regards,
>    > Muneendra.
>    >
>    >
>    >
>    > -----Original Message-----
>    > From: Muneendra Kumar M
>    > Sent: Thursday, December 22, 2016 11:10 AM=
>    > To: 'Benjamin Marzinski' <[4]bmarzins@redhat.com>
>    > Cc: [5]dm-devel@redhat.com
>    > Subject: RE: [dm-devel] deterministic io t= hroughput in multipath
>    >
>    > Hi Ben,
>    >
>    > Thanks for the reply.
>    > I will look into this parameters will do t= he internal testing and let
>    you know the results.
>    >
>    > Regards,
>    > Muneendra.
>    >
>    > -----Original Message-----
>    > From: Benjamin Marzinski [[6]mailto:bmarzins@redhat.com]
>    > Sent: Wednesday, December 21, 2016 9:40 PM=
>    > To: Muneendra Kumar M <[7]mmandala@Brocade.com>
>    > Cc: [8]dm-devel@redhat.com
>    > Subject: Re: [dm-devel] deterministic io t= hroughput in multipath
>    >
>    > Have you looked into the delay_watch_check= s and delay_wait_checks
>    configuration parameters?  The idea behind= them is to minimize the use of
>    paths that are intermittently failing.
>    >
>    > -Ben
>    >
>    > On Mon, Dec 19, 2016 at 11:50:36AM +00= 00, Muneendra Kumar M wrote:
>    > >    Customers using Lin= ux host (mostly RHEL host) using a SAN network
>    for
>    > >    block storage, comp= lain the Linux multipath stack is not resilient
>    to
>    > >    handle non-determin= istic storage network behaviors. This has caused
>    many
>    > >    customer move away = to non-linux based servers. The intent of the
>    below
>    > >    patch and the preva= iling issues are given below. With the below
>    design we
>    > >    are seeing the Linu= x multipath stack becoming resilient to such
>    network
>    > >    issues. We hope by = getting this patch accepted will help in more
>    Linux
>    > >    server adoption tha= t use SAN network.
>    > >
>    > >    I have already sent= the design details to the community in a
>    different
>    > >    mail chain and the = details are available in the below link.
>    > >
>    > >   
>    .
>    > >
>    > >    Can you please go t= hrough the design and send the comments to us.
>    > >
>    > >     
>    > >
>    > >    Regards,
>    > >
>    > >    Muneendra.
>    > >
>    > >     
>    > >
>    > >     
>    > >
>    > > References
>    > >
>    > >    Visible links
>    > >    1.
>    > >
>    > > ar
>    > > chives_dm-2Ddevel_2016-2DDecember_msg= 00122.html&d=3DDgIDAw&c=3DIL_XqQWOj
>    > > ub
>    > > gfqINi2jTzg&r=3DE3ftc47B6BGtZ4fVa= Yvkuv19wKvC_Mc6nhXaA1sBIP0&m=3DvfwpVp6e
>    > > 1K
>    > > XtRA0ctwHYJ7cDmPsLi2C1L9pox7uexsY&= ;s=3Dq5OI-lfefNC2CHKmyUkokgiyiPo_Uj7M
>    > > Ru
>    > > 52hG3MKzM&e=3D
>    >
>    > > --
>    > > dm-devel mailing list
>    > > [11]dm-devel@redhat.com
>    > >
>    > > ma
>    > > ilman_listinfo_dm-2Ddevel&d=3DDgI= DAw&c=3DIL_XqQWOjubgfqINi2jTzg&r=3DE3ftc4
>    > > 7B6BGtZ4fVaYvkuv19wKvC_Mc6nhXaA1sBIP0= &m=3DvfwpVp6e1KXtRA0ctwHYJ7cDmPsL
>    > >
> i2C1L9pox7uexsY&s=3DUyE46dXOrNTbPz_TVGtpoHl3J3h_n0uYhI4TI-Pgy= Wg&e=3D
>     
>
> References
>
>    Visible links
>   12.
 
 
 
 
From: Muneendra Kumar M
Subject: Re: deterministic io throughput in multipath
Date: Mon, 23 Jan 2017 11:02:42 +0000
Message-ID: <26d8e0b78873443c8e15b863bc33922d@BRMWP-EXMB12.corp.brocade.com>
To: Benjamin Marzinski
Cc: dm-devel@redhat.com

Hi Ben,

I have made the changes as per the review comments below. Could you please review the attached patch and send us your comments.

Below are the files that have been changed:
libmultipath/config.c      |  3 +++
libmultipath/config.h      |  9 +++++++++
libmultipath/configure.c   |  3 +++
libmultipath/defaults.h    |  3 ++-
libmultipath/dict.c        | 84 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------
libmultipath/dict.h        |  3 +--
libmultipath/propsel.c     | 48 ++++++++++++++++++++++++++++++++++++++++++++++--
libmultipath/propsel.h     |  3 +++
libmultipath/structs.h     | 14 ++++++++++----
libmultipath/structs_vec.c |  6 ++++++
multipath/multipath.conf.5 | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
multipathd/main.c          | 70 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---

Regards,
Muneendra.

_____________________________________________
From: Muneendra Kumar M
Sent: Tuesday, January 17, 2017 4:13 PM
To: 'Benjamin Marzinski' <bmarzins@redhat.com>
Cc: dm-devel@redhat.com
Subject: RE: [dm-devel] deterministic io throughput in multipath

Hi Ben,

Thanks for the review.

In dict.c I will make the functions generic, so that they are used by both the delay_checks and err_checks options.

We want to increment the path failures every time the path goes down, regardless of whether multipathd or the kernel noticed the failure. Thanks for pointing this out.

I completely agree with the idea you mention below of replacing san_path_err_threshold_window with san_path_err_forget_rate. This avoids counting time when the path was down as time where the path wasn't having problems.

I will incorporate all the changes mentioned below and resend the patch once the testing is done.

Regards,
Muneendra.

-----Original Message-----
From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
Sent: Tuesday, January 17, 2017 6:35 AM
To: Muneendra Kumar M <mmandala@Brocade.com>
Cc: dm-devel@redhat.com
Subject: Re: [dm-devel] deterministic io throughput in multipath

On Mon, Jan 16, 2017 at 11:19:19AM +0000, Muneendra Kumar M wrote:
> Hi Ben,
> After the below discussion we came up with an approach that meets our
> requirement.
> I have attached the patch, which is working well in our field tests.
> Could you please review the attached patch and provide us your valuable
> comments.

I can see a number of issues with this patch.

First, some nit-picks:

- I assume "dis_reinstante_time" should be "dis_reinstate_time".

- The indenting in check_path_validity_err is wrong, which made it
  confusing until I noticed that

	if (clock_gettime(CLOCK_MONOTONIC, &start_time) != 0)

  doesn't have an open brace, and shouldn't indent the rest of the
  function.

- You call clock_gettime in check_path, but never use the result.

- In dict.c, instead of writing your own functions that are the same as
  the *_delay_checks functions, you could make those functions generic
  and use them for both. To match the other generic function names,
  they would probably be something like

	set_off_int_undef
	print_off_int_undef

  You would also need to change DELAY_CHECKS_* and ERR_CHECKS_* to
  point to some common enum that you created, the way
  user_friendly_names_states (to name one of many) does. The generic
  enum used by *_off_int_undef would be something like:

	enum no_undef {
		NU_NO = -1,
		NU_UNDEF = 0,
	};

  The idea is to cut down on the number of functions that are simply
  copy-pasting other functions in dict.c.

Those are all minor cleanup issues, but there are some bigger problems.

Instead of checking whether san_path_err_threshold,
san_path_err_threshold_window, and san_path_err_recovery_time are greater
than zero separately, you should probably check them all at the start of
check_path_validity_err, and return 0 unless they are all set. Right now,
if a user sets san_path_err_threshold and san_path_err_threshold_window
but not san_path_err_recovery_time, their path will never recover after
it hits the error threshold. I'm pretty sure that you don't mean to
permanently disable the paths.
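As an illustration of the dict.c point above, a minimal sketch of what shared set_off_int_undef / print_off_int_undef helpers might look like. This is only the shape of the logic — the real dict.c handlers have different (vector-based) signatures, so everything here beyond the enum and function names from the mail is a simplified stand-in:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Shared enum, as suggested above; values > 0 are the configured count. */
enum no_undef {
	NU_NO = -1,
	NU_UNDEF = 0,
};

/* Simplified stand-in for a generic dict.c setter: "no"/"0" disables,
 * positive integers enable. Returns 0 on success, 1 on a parse error. */
static int set_off_int_undef(const char *val, int *dst)
{
	int n;

	if (!strcmp(val, "no") || !strcmp(val, "0")) {
		*dst = NU_NO;
		return 0;
	}
	if (sscanf(val, "%d", &n) == 1 && n > 0) {
		*dst = n;
		return 0;
	}
	return 1;	/* parse error: leave *dst untouched */
}

/* Matching generic printer, usable by both the delay_*_checks and the
 * new san_path_err_* options. Returns the number of characters written,
 * 0 when the option is unset. */
static int print_off_int_undef(char *buf, int len, int v)
{
	if (v == NU_UNDEF)
		return 0;			/* unset: print nothing */
	if (v == NU_NO)
		return snprintf(buf, len, "\"no\"");
	return snprintf(buf, len, "%d", v);
}
```

The DELAY_CHECKS_* and ERR_CHECKS_* code paths could then share these two helpers instead of carrying four near-identical copies.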
time_t is a signed type, which means that if you get the clock time in
update_multipath and then fail to get the clock time in
check_path_validity_err, this check:

	(start_time.tv_sec - pp->failure_start_time) < pp->mpp->san_path_err_threshold_window

will always be true. I realize that clock_gettime is very unlikely to
fail. But if it does, probably the safest thing to do is to just
immediately return 0 in check_path_validity_err.

The way you set path_failures in update_multipath may not get you what
you want. It will only count path failures found by the kernel, and not
by the path checker. If check_path finds the error, pp->state will be set
to PATH_DOWN before pp->dmstate is set to PSTATE_FAILED; that means you
will not increment path_failures. Perhaps this is what you want, but I
would assume that you would want to count every time the path goes down,
regardless of whether multipathd or the kernel noticed it.

I'm not super enthusiastic about how san_path_err_threshold_window works.
First, it starts counting from when the path goes down, so if the path
takes long enough to get restored and then fails immediately, it can just
keep failing and never reach san_path_err_threshold within the window,
since it spends so much of that time with the path failed. Also, the
window gets set on the first error, and is never reset until the number
of errors is over the threshold. This means that if you get one early
error and then a bunch of errors much later, you can go for
(2 x san_path_err_threshold) - 1 errors before you stop reinstating the
path, because of the window reset in the middle of the string of errors.
It seems like a better idea would be to have check_path_validity_err
reset path_failures as soon as it notices that you are past
san_path_err_threshold_window, instead of waiting until the number of
errors hits san_path_err_threshold.
If I were going to design this, I think I would keep
san_path_err_threshold and san_path_err_recovery_time as you have them,
but instead of san_path_err_threshold_window I would have something like
san_path_err_forget_rate. The idea is that every san_path_err_forget_rate
successful path checks, you decrement path_failures by 1, so there is no
window after which you reset. If path failures come in faster than the
forget rate, you will eventually hit the error threshold. This also has
the benefit of naturally not counting time when the path was down as time
where the path wasn't having problems. But if you don't like my idea,
yours will work fine with some polish.

-Ben

> Below are the files that have been changed:
>
> libmultipath/config.c      |  3 +++
> libmultipath/config.h      |  9 +++++++++
> libmultipath/configure.c   |  3 +++
> libmultipath/defaults.h    |  1 +
> libmultipath/dict.c        | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> libmultipath/dict.h        |  1 +
> libmultipath/propsel.c     | 44 ++++++++++++++++++++++++++++++++++++++++++++
> libmultipath/propsel.h     |  6 ++++++
> libmultipath/structs.h     | 12 +++++++++++-
> libmultipath/structs_vec.c | 10 ++++++++++
> multipath/multipath.conf.5 | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> multipathd/main.c          | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
>
> We have added three new config parameters, whose descriptions are below.
>
> 1. san_path_err_threshold:
>    If set to a value greater than 0, multipathd will watch paths and
>    check how many times a path has failed due to errors. If the number
>    of failures on a particular path is greater than
>    san_path_err_threshold, the path will not be reinstated until
>    san_path_err_recovery_time. These path failures should occur within
>    the san_path_err_threshold_window time frame; if not, we consider
>    the path good enough to reinstate.
> 2. san_path_err_threshold_window:
>    If set to a value greater than 0, multipathd will check whether the
>    path failures have exceeded san_path_err_threshold within this time
>    frame, i.e. san_path_err_threshold_window. If so, we will not
>    reinstate the path until san_path_err_recovery_time.
>
> 3. san_path_err_recovery_time:
>    If set to a value greater than 0, multipathd will make sure that
>    when path failures have exceeded san_path_err_threshold within
>    san_path_err_threshold_window, the path will be placed in the failed
>    state for the san_path_err_recovery_time duration. Once
>    san_path_err_recovery_time has timed out, we will reinstate the
>    failed path.
>
> Regards,
> Muneendra.
>
> -----Original Message-----
> From: Muneendra Kumar M
> Sent: Wednesday, January 04, 2017 6:56 PM
> To: 'Benjamin Marzinski' <bmarzins@redhat.com>
> Cc: dm-devel@redhat.com
> Subject: RE: [dm-devel] deterministic io throughput in multipath
>
> Hi Ben,
> Thanks for the information.
>
> Regards,
> Muneendra.
>
> -----Original Message-----
> From: Benjamin Marzinski [[1]mailto:bmarzins@redhat.com]
> Sent: Tuesday, January 03, 2017 10:42 PM
> To: Muneendra Kumar M <[2]mmandala@Brocade.com>
> Cc: [3]dm-devel@redhat.com
> Subject: Re: [dm-devel] deterministic io throughput in multipath
>
> On Mon, Dec 26, 2016 at 09:42:48AM +0000, Muneendra Kumar M wrote:
> > Hi Ben,
> >
> > If there are two paths on a dm-1, say sda and sdb, as below:
> >
> > # multipath -ll
> >   mpathd (3600110d001ee7f0102050001cc0b6751) dm-1 SANBlaze,VLUN MyLun
> >   size=8.0M features='0' hwhandler='0' wp=rw
> >   `-+- policy='round-robin 0' prio=50 status=active
> >      |- 8:0:1:0  sda 8:48 active ready  running
> >      `- 9:0:1:0  sdb 8:64 active ready  running
> >
> > And on sda I am seeing a lot of errors, due to which the sda path is
> > fluctuating from the failed state to the active state and vice versa.
> > My requirement is something like this: if sda has failed more than 5
> > times in a one-hour duration, then I want to keep sda in the failed
> > state for a few hours (3 hrs).
> >
> > And the data should travel only through the sdb path.
> > Will this be possible with the below parameters?
>
> No. delay_watch_checks sets for how many path checks you watch a path
> that has recently come back from the failed state. If the path fails
> again within this time, the multipath device delays it. This means that
> the delay is always triggered by two failures within the time limit.
> It's possible to adapt this to count numbers of failures, and act after
> a certain number within a certain timeframe, but it would take a bit
> more work.
>
> delay_wait_checks doesn't guarantee that it will delay for any set
> length of time. Instead, it sets the number of consecutive successful
> path checks that must occur before the path is usable again. You could
> set this to 3 hours worth of path checks, but if a check failed during
> this time, you would restart the 3 hours over again.
>
> -Ben
>
> > Can you just let me know what values I should add for
> > delay_watch_checks and delay_wait_checks?
> >
> > Regards,
> > Muneendra.
> >
> > -----Original Message-----
> > From: Muneendra Kumar M
> > Sent: Thursday, December 22, 2016 11:10 AM
> > To: 'Benjamin Marzinski' <[4]bmarzins@redhat.com>
> > Cc: [5]dm-devel@redhat.com
> > Subject: RE: [dm-devel] deterministic io throughput in multipath
> >
> > Hi Ben,
> >
> > Thanks for the reply.
> > I will look into these parameters, do the internal testing, and let
> > you know the results.
> >
> > Regards,
> > Muneendra.
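Ben's point above about delay_wait_checks — a single failed check restarts the whole wait — can be seen in a toy model. The names here are illustrative, not multipathd's actual code:

```c
#include <assert.h>

/* Toy model of delay_wait_checks: the path only becomes usable again
 * after wait_checks *consecutive* successful checks, so one failure
 * restarts the count (e.g. restarting "3 hours worth of checks"). */
struct wait_state {
	int consecutive_ok;
};

/* Feed one checker result; returns 1 once the path may be reinstated. */
static int wait_checks_step(struct wait_state *ws, int check_ok,
			    int wait_checks)
{
	if (check_ok)
		ws->consecutive_ok++;
	else
		ws->consecutive_ok = 0;	/* any failure resets the wait */
	return ws->consecutive_ok >= wait_checks;
}
```

This is why delay_wait_checks cannot bound the outage: on a flaky path the count keeps resetting, no matter how long the real elapsed time is.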
> > -----Original Message-----
> > From: Benjamin Marzinski [[6]mailto:bmarzins@redhat.com]
> > Sent: Wednesday, December 21, 2016 9:40 PM
> > To: Muneendra Kumar M <[7]mmandala@Brocade.com>
> > Cc: [8]dm-devel@redhat.com
> > Subject: Re: [dm-devel] deterministic io throughput in multipath
> >
> > Have you looked into the delay_watch_checks and delay_wait_checks
> > configuration parameters? The idea behind them is to minimize the use
> > of paths that are intermittently failing.
> >
> > -Ben
> >
> > On Mon, Dec 19, 2016 at 11:50:36AM +0000, Muneendra Kumar M wrote:
> > > Customers using Linux hosts (mostly RHEL hosts) using a SAN network
> > > for block storage complain that the Linux multipath stack is not
> > > resilient enough to handle non-deterministic storage network
> > > behaviors. This has caused many customers to move away to non-Linux
> > > based servers. The intent of the below patch and the prevailing
> > > issues are given below. With the below design we are seeing the
> > > Linux multipath stack becoming resilient to such network issues. We
> > > hope that getting this patch accepted will help in more Linux
> > > server adoption on SAN networks.
> > >
> > > I have already sent the design details to the community in a
> > > different mail chain, and the details are available in the below
> > > link:
> > >
> > > [1][9]https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html
> > >
> > > Can you please go through the design and send your comments to us.
> > >
> > > Regards,
> > >
> > > Muneendra.
> > >
> > > References
> > >
> > > Visible links
> > > 1.
> > > https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html
> >
> > --
> > dm-devel mailing list
> > [11]dm-devel@redhat.com
> > https://www.redhat.com/mailman/listinfo/dm-devel
>
> References
>
> Visible links
> 1. mailto:bmarzins@redhat.com
> 2. mailto:mmandala@brocade.com
> 3. mailto:dm-devel@redhat.com
> 4. mailto:bmarzins@redhat.com
> 5. mailto:dm-devel@redhat.com
> 6. mailto:bmarzins@redhat.com
> 7. mailto:mmandala@brocade.com
> 8. mailto:dm-devel@redhat.com
> 9. https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html
> 10. https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html
> 11. mailto:dm-devel@redhat.com
> 12. https://www.redhat.com/mailman/listinfo/dm-devel
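The san_path_err_forget_rate scheme Ben proposes in his Jan 17 review — forget one recorded failure after every forget_rate successful checks — could be sketched roughly as below. All names are illustrative; in a real patch this state would hang off multipath-tools' struct path:

```c
#include <assert.h>

/* Illustrative per-path accounting for the proposed
 * san_path_err_forget_rate scheme. */
struct err_state {
	int path_failures;	/* failures recorded so far */
	int good_checks;	/* successful checks since last decrement */
};

/* Called on every path failure, whether multipathd's checker or the
 * kernel noticed it. */
static void on_path_failure(struct err_state *st)
{
	st->path_failures++;
	st->good_checks = 0;
}

/* Called on every successful path check: after forget_rate good checks
 * in a row, forget one old failure. Time spent with the path down never
 * dilutes the count, because no checks succeed while it is down. */
static void on_good_check(struct err_state *st, int forget_rate)
{
	if (st->path_failures == 0 || forget_rate <= 0)
		return;
	if (++st->good_checks >= forget_rate) {
		st->path_failures--;
		st->good_checks = 0;
	}
}

/* Once failures reach the threshold, the path would be held down for
 * san_path_err_recovery_time (not modeled here). */
static int over_threshold(const struct err_state *st, int threshold)
{
	return threshold > 0 && st->path_failures >= threshold;
}
```

If failures arrive faster than one per forget_rate good checks, path_failures ratchets up to the threshold; otherwise it drains back to zero — with no window to reset, which is the fix for the (2 x threshold) - 1 corner case described in the review.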
Hi Ben,
I have made the changes as per the below revie= w comments .
 
Could you please review the attached patch and= provide us your valuable comments .
Below are the files that has been changed .
 
libmultipath/config.c    &= nbsp; |  3 +++
libmultipath/config.h    =   |  9 +++++++++
libmultipath/configure.c   |  = 3 +++
libmultipath/defaults.h    |&n= bsp; 3 ++-
libmultipath/dict.c    &n= bsp;   | 84 ++++++++++= 3;++++++++++++++= 3;++++++++++++++= 3;++++++++++++++= 3;++++------------------------
libmultipath/dict.h    &n= bsp;   |  3 +--
libmultipath/propsel.c    = ; | 48 +++++++++++++= 3;++++++++++++++= 3;++++++++++++++= 3;++--
libmultipath/propsel.h    = ; |  3 +++
libmultipath/structs.h    = ; | 14 ++++++++++----
libmultipath/structs_vec.c |  6 += 3;++++
multipath/multipath.conf.5 | 57 ++= 3;++++++++++++++= 3;++++++++++++++= 3;++++++++++++++= 3;+++++++++
multipathd/main.c    &nbs= p;     | 70 ++++++++= 3;++++++++++++++= 3;++++++++++++++= 3;++++++++++++++= 3;+++++++++++++---
 
 
Regards,
Muneendra.
 
_____________________________________________
From: Muneendra Kumar M
Sent: Tuesday, January 17, 2017 4:13 PM
To: 'Benjamin Marzinski' <bmarzins@redhat.com>
Cc: dm-devel@redhat.com
Subject: RE: [dm-devel] deterministic io throughput in multipath
 
 
Hi Ben,
Thanks for the review.
In dict.c  I will make sure I will make generic functions which w= ill be used by both delay_checks and err_checks.
 
We want to increment the path failures every time the path goes down r= egardless of whether multipathd or the kernel noticed the failure of paths.=
Thanks for pointing this.
 
I will completely agree with the idea which you mentioned below by rec= onsidering the san_path_err_threshold_window with
san_path_err_forget_rate. This will avoid counting time when the path = was down as time where the path wasn't having problems.
 
I will incorporate all the changes mentioned below and will resend the= patch once the testing is done.
 
Regards,
Muneendra.
 
 
 
-----Original Message-----
From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
Sent: Tuesday, January 17, 2017 6:35 AM
To: Muneendra Kumar M <mmandala@Brocade.com>
Cc: dm-de= vel@redhat.com
Subject: Re: [dm-devel] deterministic io throughput in multipath
 
On Mon, Jan 16, 2017 at 11:19:19AM +0000, Muneendra Kumar M wrote:=
>    Hi Ben,
>    After the below discussion we  came with t= he approach which will meet our
>    requirement.
>    I have attached the patch which is working good= in our field tests.
>    Could you please review the attached patch and = provide us your valuable
>    comments .
 
I can see a number of issues with this patch.
 
First, some nit-picks:
- I assume "dis_reinstante_time" should be "dis_reinsta= te_time"
 
- The indenting in check_path_validity_err is wrong, which made it
  confusing until I noticed that
 
if (clock_gettime(CLOCK_MONOTONIC, &start_time) !=3D 0)
 
  doesn't have an open brace, and shouldn't indent the rest of th= e
  function.
 
- You call clock_gettime in check_path, but never use the result.
 
- In dict.c, instead of writing your own functions that are the same a= s
  the *_delay_checks functions, you could make those functions ge= neric
  and use them for both.  To go match the other generic func= tion names
  they would probably be something like
 
set_off_int_undef
 
print_off_int_undef
 
  You would also need to change DELAY_CHECKS_* and ERR_CHECKS_* t= o
  point to some common enum that you created, the way
  user_friendly_names_states (to name one of many) does. The gene= ric
  enum used by *_off_int_undef would be something like.
 
enum no_undef {
        NU_NO =3D -1,
        NU_UNDEF =3D 0,
}
 
  The idea is to try to cut down on the number of functions that = are
  simply copy-pasting other functions in dict.c.
 
 
Those are all minor cleanup issues, but there are some bigger problems= .
 
Instead of checking if san_path_err_threshold, san_path_err_threshold_= window, and san_path_err_recovery_time are greater than zero seperately, yo= u should probably check them all at the start of check_path_validity_err, a= nd return 0 unless they all are set.
Right now, if a user sets san_path_err_threshold and san_path_err_thre= shold_window but not san_path_err_recovery_time, their path will never reco= ver after it hits the error threshold.  I pretty sure that you don't m= ean to permanently disable the paths.
 
 
time_t is a signed type, which means that if you get the clock time in= update_multpath and then fail to get the clock time in check_path_validity= _err, this check:
 
start_time.tv_sec - pp->failure_start_time) < pp->mpp->san= _path_err_threshold_window
 
will always be true.  I realize that clock_gettime is very unlike= ly to fail.  But if it does, probably the safest thing to so is to jus= t immediately return 0 in check_path_validity_err.
 
 
The way you set path_failures in update_multipath may not get you what= you want.  It will only count path failures found by the kernel, and = not the path checker.  If the check_path finds the error, pp->state= will be set to PATH_DOWN before pp->dmstate is set to PSTATE_FAILED. That means you will not increment path_failures. P= erhaps this is what you want, but I would assume that you would want to cou= nt every time the path goes down regardless of whether multipathd or the ke= rnel noticed it.
 
 
I'm not super enthusiastic about how the san_path_err_threshold_window works.  First, it starts counting from when the path goes down, so if the path takes long enough to get restored, and then fails immediately, it can just keep failing and it will never hit san_path_err_threshold within the san_path_err_threshold_window, since it spends so much of that time with the path failed.  Also, the window gets set on the first error, and never reset until the number of errors is over the threshold.  This means that if you get one early error and then a bunch of errors much later, you will go for (2 x san_path_err_threshold) - 1 errors until you stop reinstating the path, because of the window reset in the middle of the string of errors.  It seems like a better idea would be to have check_path_validity_err reset path_failures as soon as it notices that you are past san_path_err_threshold_window, instead of waiting till the number of errors hits san_path_err_threshold.
 
 
If I was going to design this, I think I would have san_path_err_threshold and san_path_err_recovery_time like you do, but instead of having a san_path_err_threshold_window, I would have something like san_path_err_forget_rate.  The idea is that every san_path_err_forget_rate number of successful path checks you decrement path_failures by 1. This means that there is no window after which you reset.  If the path failures come in faster than the forget rate, you will eventually hit the error threshold. This also has the benefit of easily not counting time when the path was down as time where the path wasn't having problems. But if you don't like my idea, yours will work fine with some polish.
 
-Ben
 
 
>    Below are the files that have been changed.
>
>    libmultipath/config.c      |  3 +++
>    libmultipath/config.h      |  9 +++++++++
>    libmultipath/configure.c   |  3 +++
>    libmultipath/defaults.h    |  1 +
>    libmultipath/dict.c        | 80 ++++++++++++++++++++++++++++++++++++
>    libmultipath/dict.h        |  1 +
>    libmultipath/propsel.c     | 44 ++++++++++++++++++++
>    libmultipath/propsel.h     |  6 ++++++
>    libmultipath/structs.h     | 12 +++++++++++-
>    libmultipath/structs_vec.c | 10 ++++++++++
>    multipath/multipath.conf.5 | 58 ++++++++++++++++++++++++++
>    multipathd/main.c          | 61 +++++++++++++++++++++++++++--
>     
>    We have added three new config parameters whose description is below.
>    1. san_path_err_threshold:
>            If set to a value greater than 0, multipathd will watch paths and
>    check how many times a path has failed due to errors. If the number
>    of failures on a particular path is greater than
>    san_path_err_threshold, then the path will not be reinstated till
>    san_path_err_recovery_time. These path failures should occur within a
>    san_path_err_threshold_window time frame; if not, we will consider the
>    path good enough to reinstate.
>
>    2. san_path_err_threshold_window:
>            If set to a value greater than 0, multipathd will check whether
>    the path failures have exceeded the san_path_err_threshold within this
>    time frame, i.e. san_path_err_threshold_window. If so, we will not
>    reinstate the path till san_path_err_recovery_time.
>
>    3. san_path_err_recovery_time:
>    If set to a value greater than 0, multipathd will make sure that when
>    path failures have exceeded the san_path_err_threshold within
>    san_path_err_threshold_window, the path will be placed in the failed
>    state for the san_path_err_recovery_time duration. Once
>    san_path_err_recovery_time has timed out, we will reinstate the
>    failed path.
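As an illustrative multipath.conf fragment using the three parameters described above (the values are hypothetical, chosen to match the "5 failures in an hour, keep failed for 3 hours" scenario discussed later in this thread; times assumed to be in seconds):

```
defaults {
        san_path_err_threshold         5
        san_path_err_threshold_window  3600
        san_path_err_recovery_time     10800
}
```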
>     
>    Regards,
>    Muneendra.
>     
>    -----Original Message-----
>    From: Muneendra Kumar M
>    Sent: Wednesday, January 04, 2017 6:56 PM
>    To: 'Benjamin Marzinski' <bmarzins@redhat.com>
>    Subject: RE: [dm-devel] deterministic io throughput in multipath
>     
>    Hi Ben,
>    Thanks for the information.
>     
>    Regards,
>    Muneendra.
>     
>    -----Original Message-----
>    From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
>    Sent: Tuesday, January 03, 2017 10:42 PM
>    To: Muneendra Kumar M <mmandala@Brocade.com>
>    Cc: dm-devel@redhat.com
>    Subject: Re: [dm-devel] deterministic io throughput in multipath
>     
>    On Mon, Dec 26, 2016 at 09:42:48AM +0000, Muneendra Kumar M wrote:
>    > Hi Ben,
>    >
>    > If there are two paths on a dm-1, say sda and sdb as below.
>    >
>    > #  multipath -ll
>    >        mpathd (3600110d001ee7f0102050001cc0b6751) dm-1 SANBlaze,VLUN MyLun
>    >        size=8.0M features='0' hwhandler='0' wp=rw
>    >        `-+- policy='round-robin 0' prio=50 status=active
>    >           |- 8:0:1:0  sda 8:48 active ready  running
>    >           `- 9:0:1:0  sdb 8:64 active ready  running
>    >
>    > And on sda I am seeing a lot of errors, due to which the sda path is
>    fluctuating from failed state to active state and vice versa.
>    >
>    > My requirement is something like this: if sda has failed more than 5
>    > times in an hour, then I want to keep sda in the failed state
>    > for a few hours (3 hrs),
>    >
>    > and the data should travel only through the sdb path.
>    > Will this be possible with the below parameters?
>     
>    No. delay_watch_checks sets for how many path checks you watch a path that
>    has recently come back from the failed state. If the path fails again within
>    this time, the multipath device delays it.  This means that the delay is
>    always triggered by two failures within the time limit.  It's possible to
>    adapt this to count numbers of failures, and act after a certain number
>    within a certain timeframe, but it would take a bit more work.
>
>    delay_wait_checks doesn't guarantee that it will delay for any set length
>    of time.  Instead, it sets the number of consecutive successful path
>    checks that must occur before the path is usable again. You could set this
>    for 3 hours of path checks, but if a check failed during this time, you
>    would restart the 3 hours over again.
>     
>    -Ben
>     
>    > Can you just let me know what values I should add for delay_watch_checks
>    and delay_wait_checks.
>    >
>    > Regards,
>    > Muneendra.
>    >
>    >
>    >
>    > -----Original Message-----
>    > From: Muneendra Kumar M
>    > Sent: Thursday, December 22, 2016 11:10 AM
>    > To: 'Benjamin Marzinski' <bmarzins@redhat.com>
>    > Cc: dm-devel@redhat.com
>    > Subject: RE: [dm-devel] deterministic io throughput in multipath
>    >
>    > Hi Ben,
>    >
>    > Thanks for the reply.
>    > I will look into these parameters, do the internal testing, and let
>    you know the results.
>    >
>    > Regards,
>    > Muneendra.
>    >
>    > -----Original Message-----
>    > From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
>    > Sent: Wednesday, December 21, 2016 9:40 PM
>    > To: Muneendra Kumar M <mmandala@Brocade.com>
>    > Cc: dm-devel@redhat.com
>    > Subject: Re: [dm-devel] deterministic io throughput in multipath
>    >
>    > Have you looked into the delay_watch_checks and delay_wait_checks
>    configuration parameters?  The idea behind them is to minimize the use of
>    paths that are intermittently failing.
>    >
>    > -Ben
>    >
>    > On Mon, Dec 19, 2016 at 11:50:36AM +0000, Muneendra Kumar M wrote:
>    > >    Customers using Linux host (mostly RHEL host) using a SAN network for
>    > >    block storage, complain the Linux multipath stack is not resilient to
>    > >    handle non-deterministic storage network behaviors. This has caused many
>    > >    customer move away to non-linux based servers. The intent of the below
>    > >    patch and the prevailing issues are given below. With the below design we
>    > >    are seeing the Linux multipath stack becoming resilient to such network
>    > >    issues. We hope by getting this patch accepted will help in more Linux
>    > >    server adoption that use SAN network.
>    > >
>    > >    I have already sent the design details to the community in a different
>    > >    mail chain and the details are available in the below link.
>    > >
>    > >    https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html
>    > >
>    > >    Can you please go through the design and send the comments to us.
>    > >
>    > >
>    > >    Regards,
>    > >
>    > >    Muneendra.
>    > >
>    > > References
>    > >
>    > >    Visible links
>    > >    1. https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html
>    >
>    > > --
>    > > dm-devel mailing list
>    > > dm-devel@redhat.com
>     
 
 
 
[Attachment: san_path_error.patch, 23007 bytes (base64-encoded patch omitted)]
cHAtPm1wcC0+c2FuX3BhdGhfZXJyX2ZvcmdldF9yYXRlID4gMCkgJiYKKwkgICAgKHBwLT5tcHAt PnNhbl9wYXRoX2Vycl9yZWNvdmVyeV90aW1lID4wKSkpIHsKKwkJcmV0dXJuIGRpc2FibGVfcmVp bnN0YXRlOworCX0KKwkKKwlpZiAoY2xvY2tfZ2V0dGltZShDTE9DS19NT05PVE9OSUMsICZzdGFy dF90aW1lKSAhPSAwKSB7CisJCXJldHVybiBkaXNhYmxlX3JlaW5zdGF0ZTsJCisJfQorCisJaWYg KCFwcC0+ZGlzYWJsZV9yZWluc3RhdGUpIHsKKwkJaWYgKHBwLT5wYXRoX2ZhaWx1cmVzKSB7CisJ CQkvKmlmIHRoZSBlcnJvciB0aHJlc2hvbGQgaGFzIGhpdCBoaXQgd2l0aGluIHRoZSBzYW5fcGF0 aF9lcnJfZm9yZ2V0X3JhdGUKKwkJCSAqY3ljbGVzIGRvbm90IHJlaW5zdGFudGUgdGhlIHBhdGgg dGlsbCB0aGUgc2FuX3BhdGhfZXJyX3JlY292ZXJ5X3RpbWUKKwkJCSAqcGxhY2UgdGhlIHBhdGgg aW4gZmFpbGVkIHN0YXRlIHRpbGwgc2FuX3BhdGhfZXJyX3JlY292ZXJ5X3RpbWUgc28gdGhhdCB0 aGUKKwkJCSAqY3V0b21lciBjYW4gcmVjdGlmeSB0aGUgaXNzdWUgd2l0aGluIHRoaXMgdGltZSAu T25jZSB0aGUgY29tcGxldGlvbiBvZgorCQkJICpzYW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGltZSBp dCBzaG91bGQgYXV0b21hdGljYWxseSByZWluc3RhbnRhdGUgdGhlIHBhdGgKKwkJCSAqLworCQkJ aWYgKChwcC0+cGF0aF9mYWlsdXJlcyA+IHBwLT5tcHAtPnNhbl9wYXRoX2Vycl90aHJlc2hvbGQp ICYmCisJCQkJCShwcC0+c2FuX3BhdGhfZXJyX2ZvcmdldF9yYXRlID4gMCkpIHsKKwkJCQlwcmlu dGYoIlxuJXM6JWQ6ICVzIGhpdCBlcnJvciB0aHJlc2hvbGQgXG4iLF9fZnVuY19fLF9fTElORV9f LHBwLT5kZXYpOworCQkJCXBwLT5kaXNfcmVpbnN0YXRlX3RpbWUgPSBzdGFydF90aW1lLnR2X3Nl YyA7CisJCQkJcHAtPmRpc2FibGVfcmVpbnN0YXRlID0gMTsKKwkJCQlkaXNhYmxlX3JlaW5zdGF0 ZSA9IDE7CisJCQl9IGVsc2UgaWYgKChwcC0+c2FuX3BhdGhfZXJyX2ZvcmdldF9yYXRlID4gMCkp IHsKKwkJCQlwcC0+c2FuX3BhdGhfZXJyX2ZvcmdldF9yYXRlLS07CisJCQl9IGVsc2UgeworCQkJ CS8qZm9yIGV2ZXJ5IHNhbl9wYXRoX2Vycl9mb3JnZXRfcmF0ZSBudW1iZXIKKwkJCQkgKm9mIHN1 Y2Nlc3NmdWwgcGF0aCBjaGVja3MgZGVjcmVtZW50IHBhdGhfZmFpbHVyZXMgYnkgMQorCQkJCSAq LworCQkJCXBwLT5wYXRoX2ZhaWx1cmVzIC0tOworCQkJCXBwLT5zYW5fcGF0aF9lcnJfZm9yZ2V0 X3JhdGUgPSBwcC0+bXBwLT5zYW5fcGF0aF9lcnJfZm9yZ2V0X3JhdGU7CisJCQl9CisJCX0KKwl9 IGVsc2UgeworCQlkaXNhYmxlX3JlaW5zdGF0ZSA9IDE7CisJCWlmICgocHAtPm1wcC0+c2FuX3Bh dGhfZXJyX3JlY292ZXJ5X3RpbWUgPiAwKSAmJgorCQkJCShzdGFydF90aW1lLnR2X3NlYyAtIHBw 
LT5kaXNfcmVpbnN0YXRlX3RpbWUgKSA+IHBwLT5tcHAtPnNhbl9wYXRoX2Vycl9yZWNvdmVyeV90 aW1lKSB7CisJCQlkaXNhYmxlX3JlaW5zdGF0ZSA9MDsKKwkJCXBwLT5wYXRoX2ZhaWx1cmVzID0g MDsKKwkJCXBwLT5kaXNhYmxlX3JlaW5zdGF0ZSA9IDA7CisJCQlwcC0+c2FuX3BhdGhfZXJyX2Zv cmdldF9yYXRlID0gcHAtPm1wcC0+c2FuX3BhdGhfZXJyX2ZvcmdldF9yYXRlOworCQkJY29uZGxv ZygzLCJcbnBhdGggJXMgOnJlaW5zdGF0ZSB0aGUgcGF0aCBhZnRlciBlcnIgcmVjb3ZlcnkgdGlt ZVxuIixwcC0+ZGV2KTsKKwkJfQorCX0KKwlyZXR1cm4gIGRpc2FibGVfcmVpbnN0YXRlOworfQog LyoKICAqIFJldHVybnMgJzEnIGlmIHRoZSBwYXRoIGhhcyBiZWVuIGNoZWNrZWQsICctMScgaWYg aXQgd2FzIGJsYWNrbGlzdGVkCiAgKiBhbmQgJzAnIG90aGVyd2lzZQpAQCAtMTUwMiw3ICsxNTUy LDcgQEAgY2hlY2tfcGF0aCAoc3RydWN0IHZlY3RvcnMgKiB2ZWNzLCBzdHJ1Y3QgcGF0aCAqIHBw LCBpbnQgdGlja3MpCiAJaW50IG9sZGNoa3JzdGF0ZSA9IHBwLT5jaGtyc3RhdGU7CiAJaW50IHJl dHJpZ2dlcl90cmllcywgY2hlY2tpbnQ7CiAJc3RydWN0IGNvbmZpZyAqY29uZjsKLQlpbnQgcmV0 OworCWludCByZXQ7CQogCiAJaWYgKChwcC0+aW5pdGlhbGl6ZWQgPT0gSU5JVF9PSyB8fAogCSAg ICAgcHAtPmluaXRpYWxpemVkID09IElOSVRfUkVRVUVTVEVEX1VERVYpICYmICFwcC0+bXBwKQpA QCAtMTYxMCwxNyArMTY2MCwzMSBAQCBjaGVja19wYXRoIChzdHJ1Y3QgdmVjdG9ycyAqIHZlY3Ms IHN0cnVjdCBwYXRoICogcHAsIGludCB0aWNrcykKIAkJCXBwLT53YWl0X2NoZWNrcyA9IDA7CiAJ fQogCisJaWYgKG5ld3N0YXRlID09IFBBVEhfRE9XTiB8fCBuZXdzdGF0ZSA9PSBQQVRIX0dIT1NU KSB7CisJCS8qYXNzaWduZWQgIHRoZSBwYXRoX2Vycl9mb3JnZXRfcmF0ZSB3aGVuIHdlIHNlZSB0 aGUgZmlyc3QgZmFpbHVyZSBvbiB0aGUgcGF0aCovCisJCWlmKHBwLT5wYXRoX2ZhaWx1cmVzID09 IDApeworCQkJcHAtPnNhbl9wYXRoX2Vycl9mb3JnZXRfcmF0ZSA9IHBwLT5tcHAtPnNhbl9wYXRo X2Vycl9mb3JnZXRfcmF0ZTsKKwkJfQorCQlwcC0+cGF0aF9mYWlsdXJlcysrOworCX0KKwogCS8q CiAJICogZG9uJ3QgcmVpbnN0YXRlIGZhaWxlZCBwYXRoLCBpZiBpdHMgaW4gc3RhbmQtYnkKIAkg KiBhbmQgaWYgdGFyZ2V0IHN1cHBvcnRzIG9ubHkgaW1wbGljaXQgdHBncyBtb2RlLgogCSAqIHRo aXMgd2lsbCBwcmV2ZW50IHVubmVjZXNzYXJ5IGkvbyBieSBkbSBvbiBzdGFuZC1ieQogCSAqIHBh dGhzIGlmIHRoZXJlIGFyZSBubyBvdGhlciBhY3RpdmUgcGF0aHMgaW4gbWFwLgorCSAqCisJICog d2hlbiBwYXRoIGZhaWx1cmVzIGhhcyBleGNlZWRlZCB0aGUgc2FuX3BhdGhfZXJyX3RocmVzaG9s 
ZCAKKwkgKiB3aXRoaW4gc2FuX3BhdGhfZXJyX2ZvcmdldF9yYXRlIHRoZW4gd2UgZG9uJ3QgcmVp bnN0YXRlCisJICogZmFpbGVkIHBhdGggZm9yIHNhbl9wYXRoX2Vycl9yZWNvdmVyeV90aW1lCiAJ ICovCi0JZGlzYWJsZV9yZWluc3RhdGUgPSAobmV3c3RhdGUgPT0gUEFUSF9HSE9TVCAmJgorCWRp c2FibGVfcmVpbnN0YXRlID0gKChuZXdzdGF0ZSA9PSBQQVRIX0dIT1NUICYmCiAJCQkgICAgcHAt Pm1wcC0+bnJfYWN0aXZlID09IDAgJiYKLQkJCSAgICBwcC0+dHBncyA9PSBUUEdTX0lNUExJQ0lU KSA/IDEgOiAwOworCQkJICAgIHBwLT50cGdzID09IFRQR1NfSU1QTElDSVQpID8gMSA6CisJCQkg ICAgY2hlY2tfcGF0aF92YWxpZGl0eV9lcnIocHApKTsKIAogCXBwLT5jaGtyc3RhdGUgPSBuZXdz dGF0ZTsKKwogCWlmIChuZXdzdGF0ZSAhPSBwcC0+c3RhdGUpIHsKIAkJaW50IG9sZHN0YXRlID0g cHAtPnN0YXRlOwogCQlwcC0+c3RhdGUgPSBuZXdzdGF0ZTsK --_004_26d8e0b78873443c8e15b863bc33922dBRMWPEXMB12corpbrocadec_ Content-Type: text/plain; charset="us-ascii" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit Content-Disposition: inline --_004_26d8e0b78873443c8e15b863bc33922dBRMWPEXMB12corpbrocadec_-- From mboxrd@z Thu Jan 1 00:00:00 1970 From: "Benjamin Marzinski" Subject: Re: deterministic io throughput in multipath Date: Wed, 25 Jan 2017 03:28:46 -0600 Message-ID: <20170125092846.GA2732@octiron.msp.redhat.com> References: <1649d4b8538d4b4cb1efacdfe8cf31eb@BRMWP-EXMB12.corp.brocade.com> <20161221160940.GG19659@octiron.msp.redhat.com> <8cd4cc5f20b540a1b8312ad485711152@BRMWP-EXMB12.corp.brocade.com> <20170103171159.GA2732@octiron.msp.redhat.com> <4dfed25f04c04771a732580a4a8cc834@BRMWP-EXMB12.corp.brocade.com> <20170117010447.GW2732@octiron.msp.redhat.com> <26d8e0b78873443c8e15b863bc33922d@BRMWP-EXMB12.corp.brocade.com> Mime-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable Return-path: Content-Disposition: inline In-Reply-To: <26d8e0b78873443c8e15b863bc33922d@BRMWP-EXMB12.corp.brocade.com> List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: dm-devel-bounces@redhat.com Errors-To: dm-devel-bounces@redhat.com To: Muneendra Kumar M Cc: "dm-devel@redhat.com" List-Id: dm-devel.ids This 
looks fine to me. If this is what you want to push, I'm o.k. with it. But
I'd like to make some suggestions that you are free to ignore.

Right now you have to check in two places to see if the path failed (in
update_multipath and check_path). If you look at the delayed_*_checks code,
it flags the path failures when you reinstate the path in check_path, since
this will only happen there.

Next, right now you use the disable_reinstate code to deal with the devices
when they shouldn't be reinstated. The issue with this is that the path
appears to be up when people look at its state, but still isn't being used.
If you do the check early and set the path state to PATH_DELAYED, like
delayed_*_checks does, then the path is clearly marked when users look to
see why it isn't being used. Also, if you exit check_path early, then you
won't be running the prioritizer on these likely-unstable paths.

Finally, the way you use dis_reinstate_time, a flakey device can get
reinstated as soon as it comes back up, as long as it was down for long
enough, simply because pp->dis_reinstate_time reached
mpp->san_path_err_recovery_time while the device was failed.
delayed_*_checks depends on a number of successful path checks, so you know
that the device has at least been nominally functional for
san_path_err_recovery_time.

Like I said, you don't have to change any of this to make me happy with
your patch. But if you did change all of these, then the current
delay_*_checks code would just end up being a special case of your code.
I'd really like to pull out the delayed_*_checks code and just keep your
version, since it seems more useful. It would be nice to keep the same
functionality. But even if you don't make these changes, I still think we
should pull out the delayed_*_checks code, since they both do the same
general thing, and your code does it better.

-Ben

On Mon, Jan 23, 2017 at 11:02:42AM +0000, Muneendra Kumar M wrote:
> Hi Ben,
> I have made the changes as per the below review comments.
>
> Could you please review the attached patch and provide us your valuable
> comments.
> Below are the files that have been changed.
>
> libmultipath/config.c      |  3 +++
> libmultipath/config.h      |  9 +++++++++
> libmultipath/configure.c   |  3 +++
> libmultipath/defaults.h    |  3 ++-
> libmultipath/dict.c        | 84 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------
> libmultipath/dict.h        |  3 +--
> libmultipath/propsel.c     | 48 ++++++++++++++++++++++++++++++++++++++++++++++--
> libmultipath/propsel.h     |  3 +++
> libmultipath/structs.h     | 14 ++++++++++----
> libmultipath/structs_vec.c |  6 ++++++
> multipath/multipath.conf.5 | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> multipathd/main.c          | 70 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
>
> Regards,
> Muneendra.
>
> _____________________________________________
> From: Muneendra Kumar M
> Sent: Tuesday, January 17, 2017 4:13 PM
> To: 'Benjamin Marzinski'
> Cc: dm-devel@redhat.com
> Subject: RE: [dm-devel] deterministic io throughput in multipath
>
> Hi Ben,
> Thanks for the review.
> In dict.c I will make sure to write generic functions that will be used
> by both delay_checks and err_checks.
>
> We want to increment the path failures every time the path goes down,
> regardless of whether multipathd or the kernel noticed the failure of
> the path. Thanks for pointing this out.
>
> I completely agree with the idea you mentioned below of replacing
> san_path_err_threshold_window with san_path_err_forget_rate. This will
> avoid counting time when the path was down as time where the path wasn't
> having problems.
>
> I will incorporate all the changes mentioned below and will resend the
> patch once the testing is done.
>
> Regards,
> Muneendra.
>
> -----Original Message-----
> From: Benjamin Marzinski [[1]mailto:bmarzins@redhat.com]
> Sent: Tuesday, January 17, 2017 6:35 AM
> To: Muneendra Kumar M <[2]mmandala@Brocade.com>
> Cc: [3]dm-devel@redhat.com
> Subject: Re: [dm-devel] deterministic io throughput in multipath
>
> On Mon, Jan 16, 2017 at 11:19:19AM +0000, Muneendra Kumar M wrote:
> >    Hi Ben,
> >    After the below discussion we came up with an approach which will
> >    meet our requirement.
> >    I have attached the patch, which is working well in our field tests.
> >    Could you please review the attached patch and provide us your
> >    valuable comments.
>
> I can see a number of issues with this patch.
>
> First, some nit-picks:
> - I assume "dis_reinstante_time" should be "dis_reinstate_time"
>
> - The indenting in check_path_validity_err is wrong, which made it
>   confusing until I noticed that
>
> if (clock_gettime(CLOCK_MONOTONIC, &start_time) != 0)
>
>   doesn't have an open brace, and shouldn't indent the rest of the
>   function.
>
> - You call clock_gettime in check_path, but never use the result.
>
> - In dict.c, instead of writing your own functions that are the same as
>   the *_delay_checks functions, you could make those functions generic
>   and use them for both. To match the other generic function names
>   they would probably be something like
>
> set_off_int_undef
>
> print_off_int_undef
>
>   You would also need to change DELAY_CHECKS_* and ERR_CHECKS_* to
>   point to some common enum that you created, the way
>   user_friendly_names_states (to name one of many) does. The generic
>   enum used by *_off_int_undef would be something like.
>
> enum no_undef {
>         NU_NO = -1,
>         NU_UNDEF = 0,
> }
>
>   The idea is to try to cut down on the number of functions that are
>   simply copy-pasting other functions in dict.c.
>
> Those are all minor cleanup issues, but there are some bigger problems.
>
> Instead of checking if san_path_err_threshold,
> san_path_err_threshold_window, and san_path_err_recovery_time are greater
> than zero separately, you should probably check them all at the start of
> check_path_validity_err, and return 0 unless they all are set.
> Right now, if a user sets san_path_err_threshold and
> san_path_err_threshold_window but not san_path_err_recovery_time, their
> path will never recover after it hits the error threshold. I'm pretty sure
> that you don't mean to permanently disable the paths.
>
> time_t is a signed type, which means that if you get the clock time in
> update_multipath and then fail to get the clock time in
> check_path_validity_err, this check:
>
> (start_time.tv_sec - pp->failure_start_time) <
> pp->mpp->san_path_err_threshold_window
>
> will always be true. I realize that clock_gettime is very unlikely to
> fail. But if it does, probably the safest thing to do is to just
> immediately return 0 in check_path_validity_err.
>
> The way you set path_failures in update_multipath may not get you what you
> want. It will only count path failures found by the kernel, and not the
> path checker. If check_path finds the error, pp->state will be set to
> PATH_DOWN before pp->dmstate is set to PSTATE_FAILED. That means you will
> not increment path_failures. Perhaps this is what you want, but I would
> assume that you would want to count every time the path goes down
> regardless of whether multipathd or the kernel noticed it.
>
> I'm not super enthusiastic about how the san_path_err_threshold_window
> works. First, it starts counting from when the path goes down, so if the
> path takes long enough to get restored, and then fails immediately, it can
> just keep failing and it will never hit the san_path_err_threshold_window,
> since it spends so much of that time with the path failed. Also, the
> window gets set on the first error, and never reset until the number of
> errors is over the threshold. This means that if you get one early error
> and then a bunch of errors much later, you will go for (2 x
> san_path_err_threshold) - 1 errors until you stop reinstating the path,
> because of the window reset in the middle of the string of errors. It
> seems like a better idea would be to have check_path_validity_err reset
> path_failures as soon as it notices that you are past
> san_path_err_threshold_window, instead of waiting till the number of
> errors hits san_path_err_threshold.
>
> If I was going to design this, I think I would have san_path_err_threshold
> and san_path_err_recovery_time like you do, but instead of having a
> san_path_err_threshold_window, I would have something like
> san_path_err_forget_rate. The idea is that every san_path_err_forget_rate
> number of successful path checks you decrement path_failures by 1. This
> means that there is no window after which you reset. If the path failures
> come in faster than the forget rate, you will eventually hit the error
> threshold. This also has the benefit of easily not counting time when the
> path was down as time where the path wasn't having problems. But if you
> don't like my idea, yours will work fine with some polish.
>
> -Ben
>
> >    Below are the files that have been changed.
> >
> >    libmultipath/config.c      |  3 +++
> >    libmultipath/config.h      |  9 +++++++++
> >    libmultipath/configure.c   |  3 +++
> >    libmultipath/defaults.h    |  1 +
> >    libmultipath/dict.c        | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >    libmultipath/dict.h        |  1 +
> >    libmultipath/propsel.c     | 44 ++++++++++++++++++++++++++++++++++++++++++++
> >    libmultipath/propsel.h     |  6 ++++++
> >    libmultipath/structs.h     | 12 +++++++++++-
> >    libmultipath/structs_vec.c | 10 ++++++++++
> >    multipath/multipath.conf.5 | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >    multipathd/main.c          | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
> >
> >    We have added three new config parameters whose description is below.
> >    1. san_path_err_threshold:
> >        If set to a value greater than 0, multipathd will watch paths and
> >    check how many times a path has failed due to errors. If the number of
> >    failures on a particular path is greater than san_path_err_threshold,
> >    the path will not be reinstated till san_path_err_recovery_time. These
> >    path failures should occur within a san_path_err_threshold_window time
> >    frame; if not, we will consider the path good enough to reinstate.
> >
> >    2. san_path_err_threshold_window:
> >        If set to a value greater than 0, multipathd will check whether the
> >    path failures have exceeded san_path_err_threshold within this time
> >    frame, i.e. san_path_err_threshold_window. If so, we will not reinstate
> >    the path till san_path_err_recovery_time.
> >
> >    3. san_path_err_recovery_time:
> >    If set to a value greater than 0, multipathd will make sure that when
> >    path failures have exceeded san_path_err_threshold within
> >    san_path_err_threshold_window, the path will be placed in the failed
> >    state for the san_path_err_recovery_time duration. Once
> >    san_path_err_recovery_time has timed out, we will reinstate the failed
> >    path.
> >
> >    Regards,
> >    Muneendra.
> >
> >    -----Original Message-----
> >    From: Muneendra Kumar M
> >    Sent: Wednesday, January 04, 2017 6:56 PM
> >    To: 'Benjamin Marzinski' <[4]bmarzins@redhat.com>
> >    Cc: [5]dm-devel@redhat.com
> >    Subject: RE: [dm-devel] deterministic io throughput in multipath
> >
> >    Hi Ben,
> >    Thanks for the information.
> >
> >    Regards,
> >    Muneendra.
> >
> >    -----Original Message-----
> >    From: Benjamin Marzinski [[1][6]mailto:bmarzins@redhat.com]
> >    Sent: Tuesday, January 03, 2017 10:42 PM
> >    To: Muneendra Kumar M <[2][7]mmandala@Brocade.com>
> >    Cc: [3][8]dm-devel@redhat.com
> >    Subject: Re: [dm-devel] deterministic io throughput in multipath
> >
> >    On Mon, Dec 26, 2016 at 09:42:48AM +0000, Muneendra Kumar M wrote:
> >    > Hi Ben,
> >    >
> >    > If there are two paths on a dm-1, say sda and sdb, as below:
> >    >
> >    > #  multipath -ll
> >    >        mpathd (3600110d001ee7f0102050001cc0b6751) dm-1 SANBlaze,VLUN MyLun
> >    >        size=8.0M features='0' hwhandler='0' wp=rw
> >    >        `-+- policy='round-robin 0' prio=50 status=active
> >    >          |- 8:0:1:0  sda 8:48 active ready  running
> >    >          `- 9:0:1:0  sdb 8:64 active ready  running
> >    >
> >    > And on sda I am seeing a lot of errors, due to which the sda path
> >    > is fluctuating from failed state to active state and vice versa.
> >    >
> >    > My requirement is something like this: if sda has failed more than 5
> >    > times in an hour, then I want to keep sda in the failed state
> >    > for a few hours (3 hrs),
> >    > and the data should travel only through the sdb path.
> >    > Will this be possible with the below parameters?
> >
> >    No. delay_watch_checks sets how many path checks you watch a path that
> >    has recently come back from the failed state. If the path fails again
> >    within this time, the multipath device delays it. This means that the
> >    delay is always triggered by two failures within the time limit. It's
> >    possible to adapt this to count numbers of failures, and act after a
> >    certain number within a certain timeframe, but it would take a bit
> >    more work.
> >
> >    delay_wait_checks doesn't guarantee that it will delay for any set
> >    length of time. Instead, it sets the number of consecutive successful
> >    path checks that must occur before the path is usable again. You could
> >    set this for 3 hours of path checks, but if a check failed during this
> >    time, you would restart the 3 hours over again.
> >
> >    -Ben
> >
> >    > Can you just let me know what values I should add for
> >    delay_watch_checks and delay_wait_checks.
> >    >
> >    > Regards,
> >    > Muneendra.
> >    >
> >    > -----Original Message-----
> >    > From: Muneendra Kumar M
> >    > Sent: Thursday, December 22, 2016 11:10 AM
> >    > To: 'Benjamin Marzinski' <[4][9]bmarzins@redhat.com>
> >    > Cc: [5][10]dm-devel@redhat.com
> >    > Subject: RE: [dm-devel] deterministic io throughput in multipath
> >    >
> >    > Hi Ben,
> >    >
> >    > Thanks for the reply.
> >    > I will look into these parameters, do the internal testing, and let
> >    you know the results.
> >    >
> >    > Regards,
> >    > Muneendra.
> >    >
> >    > -----Original Message-----
> >    > From: Benjamin Marzinski [[6][11]mailto:bmarzins@redhat.com]
> >    > Sent: Wednesday, December 21, 2016 9:40 PM
> >    > To: Muneendra Kumar M <[7][12]mmandala@Brocade.com>
> >    > Cc: [8][13]dm-devel@redhat.com
> >    > Subject: Re: [dm-devel] deterministic io throughput in multipath
> >    >
> >    > Have you looked into the delay_watch_checks and delay_wait_checks
> >    configuration parameters? The idea behind them is to minimize the use
> >    of paths that are intermittently failing.
> >    >
> >    > -Ben
> >    >
> >    > On Mon, Dec 19, 2016 at 11:50:36AM +0000, Muneendra Kumar M wrote:
> >    > >    Customers using Linux hosts (mostly RHEL hosts) with a SAN
> >    > >    network for block storage complain that the Linux multipath
> >    > >    stack is not resilient enough to handle non-deterministic
> >    > >    storage network behaviors. This has caused many customers to
> >    > >    move away to non-Linux-based servers. The intent of the below
> >    > >    patch and the prevailing issues are given below. With the below
> >    > >    design we are seeing the Linux multipath stack becoming
> >    > >    resilient to such network issues. We hope that getting this
> >    > >    patch accepted will help in more Linux server adoption that
> >    > >    uses SAN networks.
> >    > >
> >    > >    I have already sent the design details to the community in a
> >    > >    different mail chain and the details are available in the below
> >    > >    link.
> >=A0=A0=A0 > > > >=A0=A0=A0 > >=A0=A0=A0 > >=A0=A0=A0 > [1][9][14]https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.r= edhat.com_archives_dm-2Ddevel_2016-2DDecember_msg00122.html&d=3DDgIDAw&c=3D= IL_XqQWOjubgfqINi2jTzg&r=3DE3ftc47B6BGtZ4fVaYvkuv19wKvC_Mc6nhXaA1sBIP0&m=3D= vfwpVp6e1KXtRA0ctwHYJ7cDmPsLi2C1L9pox7uexsY&s=3Dq5OI-lfefNC2CHKmyUkokgiyiPo= _Uj7MRu52hG3MKzM&e=3D > >=A0=A0=A0 . > >=A0=A0=A0 > > > >=A0=A0=A0 > >=A0=A0=A0 Can you please go through the design and send = the comments to > us. > >=A0=A0=A0 > > > >=A0=A0=A0 > >=A0=A0=A0 =A0 > >=A0=A0=A0 > > > >=A0=A0=A0 > >=A0=A0=A0 Regards, > >=A0=A0=A0 > > > >=A0=A0=A0 > >=A0=A0=A0 Muneendra. > >=A0=A0=A0 > > > >=A0=A0=A0 > >=A0=A0=A0 =A0 > >=A0=A0=A0 > > > >=A0=A0=A0 > >=A0=A0=A0 =A0 > >=A0=A0=A0 > > > >=A0=A0=A0 > > References > >=A0=A0=A0 > > > >=A0=A0=A0 > >=A0=A0=A0 Visible links > >=A0=A0=A0 > >=A0=A0=A0 1. > >=A0=A0=A0 > > > >=A0=A0=A0 > [10][15]https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.red= hat.com_ > >=A0=A0=A0 > > ar > >=A0=A0=A0 > > > chives_dm-2Ddevel_2016-2DDecember_msg00122.html&d=3DDgIDAw&c=3DIL_XqQW= Oj > >=A0=A0=A0 > > ub > >=A0=A0=A0 > > > gfqINi2jTzg&r=3DE3ftc47B6BGtZ4fVaYvkuv19wKvC_Mc6nhXaA1sBIP0&m=3DvfwpVp= 6e > >=A0=A0=A0 > > 1K > >=A0=A0=A0 > > > XtRA0ctwHYJ7cDmPsLi2C1L9pox7uexsY&s=3Dq5OI-lfefNC2CHKmyUkokgiyiPo_Uj7M > >=A0=A0=A0 > > Ru > >=A0=A0=A0 > > 52hG3MKzM&e=3D > >=A0=A0=A0 > > >=A0=A0=A0 > > -- > >=A0=A0=A0 > > dm-devel mailing list > >=A0=A0=A0 > > [11][16]dm-devel@redhat.com > >=A0=A0=A0 > > > >=A0=A0=A0 > [12][17]https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.red= hat.com_ > >=A0=A0=A0 > > ma > >=A0=A0=A0 > > > ilman_listinfo_dm-2Ddevel&d=3DDgIDAw&c=3DIL_XqQWOjubgfqINi2jTzg&r=3DE3= ftc4 > >=A0=A0=A0 > > > 7B6BGtZ4fVaYvkuv19wKvC_Mc6nhXaA1sBIP0&m=3DvfwpVp6e1KXtRA0ctwHYJ7cDmPsL > >=A0=A0=A0 > > > > i2C1L9pox7uexsY&s=3DUyE46dXOrNTbPz_TVGtpoHl3J3h_n0uYhI4TI-PgyWg&e=3D > >=A0=A0=A0 =A0 > > > > References > > > >=A0=A0=A0 Visible links > >=A0=A0=A0 
1. [18]mailto:bmarzins@redhat.com > >=A0=A0=A0 2. [19]mailto:mmandala@brocade.com > >=A0=A0=A0 3. [20]mailto:dm-devel@redhat.com > >=A0=A0=A0 4. [21]mailto:bmarzins@redhat.com > >=A0=A0=A0 5. [22]mailto:dm-devel@redhat.com > >=A0=A0=A0 6. [23]mailto:bmarzins@redhat.com > >=A0=A0=A0 7. [24]mailto:mmandala@brocade.com > >=A0=A0=A0 8. [25]mailto:dm-devel@redhat.com > >=A0=A0=A0 9. > [26]https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.redhat.= com_archives_dm-2Ddevel_2016-2DDecember_msg00122.html&d=3DDgIDAw&c=3DIL_XqQ= WOjubgfqINi2jTzg&r=3DE3ftc47B6BGtZ4fVaYvkuv19wKvC_Mc6nhXaA1sBIP0&m=3DvfwpVp= 6e1KXtRA0ctwHYJ7cDmPsLi2C1L9pox7uexsY&s=3Dq5OI-lfefNC2CHKmyUkokgiyiPo_Uj7MR= u52hG3MKzM&e > >=A0=A0 10. > [27]https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.redhat.= com_ > >=A0=A0 11. [28]mailto:dm-devel@redhat.com > >=A0=A0 12. > > [29]https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.redha= t.com_ > =A0 > =A0 > =A0 > =A0 > = > References > = > Visible links > 1. mailto:bmarzins@redhat.com > 2. mailto:mmandala@brocade.com > 3. mailto:dm-devel@redhat.com > 4. mailto:bmarzins@redhat.com > 5. mailto:dm-devel@redhat.com > 6. mailto:bmarzins@redhat.com > 7. mailto:mmandala@brocade.com > 8. mailto:dm-devel@redhat.com > 9. mailto:bmarzins@redhat.com > 10. mailto:dm-devel@redhat.com > 11. mailto:bmarzins@redhat.com > 12. mailto:mmandala@brocade.com > 13. mailto:dm-devel@redhat.com > 14. https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.redhat.c= om_archives_dm-2Ddevel_2016-2DDecember_msg00122.html&d=3DDgIDAw&c=3DIL_XqQW= OjubgfqINi2jTzg&r=3DE3ftc47B6BGtZ4fVaYvkuv19wKvC_Mc6nhXaA1sBIP0&m=3DvfwpVp6= e1KXtRA0ctwHYJ7cDmPsLi2C1L9pox7uexsY&s=3Dq5OI-lfefNC2CHKmyUkokgiyiPo_Uj7MRu= 52hG3MKzM&e > 15. https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.redhat.c= om_ > 16. mailto:dm-devel@redhat.com > 17. https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.redhat.c= om_ > 18. mailto:bmarzins@redhat.com > 19. mailto:mmandala@brocade.com > 20. 
mailto:dm-devel@redhat.com > 21. mailto:bmarzins@redhat.com > 22. mailto:dm-devel@redhat.com > 23. mailto:bmarzins@redhat.com > 24. mailto:mmandala@brocade.com > 25. mailto:dm-devel@redhat.com > 26. https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.redhat.c= om_archives_dm-2Ddevel_2016-2DDecember_msg00122.html&d=3DDgIDAw&c=3DIL_XqQW= OjubgfqINi2jTzg&r=3DE3ftc47B6BGtZ4fVaYvkuv19wKvC_Mc6nhXaA1sBIP0&m=3DvfwpVp6= e1KXtRA0ctwHYJ7cDmPsLi2C1L9pox7uexsY&s=3Dq5OI-lfefNC2CHKmyUkokgiyiPo_Uj7MRu= 52hG3MKzM&e > 27. https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.redhat.c= om_ > 28. mailto:dm-devel@redhat.com > 29. https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.redhat.c= om_ From mboxrd@z Thu Jan 1 00:00:00 1970 From: Muneendra Kumar M Subject: Re: deterministic io throughput in multipath Date: Wed, 25 Jan 2017 11:48:33 +0000 Message-ID: References: <1649d4b8538d4b4cb1efacdfe8cf31eb@BRMWP-EXMB12.corp.brocade.com> <20161221160940.GG19659@octiron.msp.redhat.com> <8cd4cc5f20b540a1b8312ad485711152@BRMWP-EXMB12.corp.brocade.com> <20170103171159.GA2732@octiron.msp.redhat.com> <4dfed25f04c04771a732580a4a8cc834@BRMWP-EXMB12.corp.brocade.com> <20170117010447.GW2732@octiron.msp.redhat.com> <26d8e0b78873443c8e15b863bc33922d@BRMWP-EXMB12.corp.brocade.com> <20170125092846.GA2732@octiron.msp.redhat.com> Mime-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable Return-path: In-Reply-To: <20170125092846.GA2732@octiron.msp.redhat.com> Content-Language: en-US List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: dm-devel-bounces@redhat.com Errors-To: dm-devel-bounces@redhat.com To: Benjamin Marzinski Cc: "dm-devel@redhat.com" List-Id: dm-devel.ids Hi Ben, Thanks for the review . I will consider the below points and will do the necessary changes. I have two general questions which may not be related to this. 
1) Are there any standard tests that we need to run to check the functionality of the multipath daemon?
2) I am new to git; are there any standard steps which we generally follow to push the changes?

Regards,
Muneendra.

-----Original Message-----
From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
Sent: Wednesday, January 25, 2017 2:59 PM
To: Muneendra Kumar M
Cc: dm-devel@redhat.com
Subject: Re: [dm-devel] deterministic io throughput in multipath

This looks fine to me. If this is what you want to push, I'm o.k. with it.
But I'd like to make some suggestions that you are free to ignore.

Right now you have to check in two places to see if the path failed (in update_multipath and check_path). If you look at the delayed_*_checks code, it flags the path failures when you reinstate the path in check_path, since this will only happen there.

Next, right now you use the disable_reinstate code to deal with the devices when they shouldn't be reinstated. The issue with this is that the path appears to be up when people look at its state, but still isn't being used. If you do the check early and set the path state to PATH_DELAYED, like delayed_*_checks does, then the path is clearly marked when users look to see why it isn't being used. Also, if you exit check_path early, then you won't be running the prioritizer on these likely-unstable paths.

Finally, the way you use dis_reinstate_time, a flaky device can get reinstated as soon as it comes back up, as long as it was down for long enough, simply because pp->dis_reinstate_time reached mpp->san_path_err_recovery_time while the device was failed. delayed_*_checks depends on a number of successful path checks, so you know that the device has at least been nominally functional for san_path_err_recovery_time.

Like I said, you don't have to change any of this to make me happy with your patch.
But if you did change all of these, then the current delay_*_checks code would just end up being a special case of your code.
I'd really like to pull out the delayed_*_checks code and just keep your version, since it seems more useful. It would be nice to keep the same functionality. But even if you don't make these changes, I still think we should pull out the delayed_*_checks code, since they both do the same general thing, and your code does it better.

-Ben

On Mon, Jan 23, 2017 at 11:02:42AM +0000, Muneendra Kumar M wrote:
> Hi Ben,
> I have made the changes as per the review comments below.
>
> Could you please review the attached patch and provide us your valuable
> comments.
> Below are the files that have been changed:
>
> libmultipath/config.c      |  3 +++
> libmultipath/config.h      |  9 +++++++++
> libmultipath/configure.c   |  3 +++
> libmultipath/defaults.h    |  3 ++-
> libmultipath/dict.c        | 84 ++++++++++++++++++++++++++++----------
> libmultipath/dict.h        |  3 +--
> libmultipath/propsel.c     | 48 ++++++++++++++++++++++++++++++++++--
> libmultipath/propsel.h     |  3 +++
> libmultipath/structs.h     | 14 ++++++++++----
> libmultipath/structs_vec.c |  6 ++++++
> multipath/multipath.conf.5 | 57 ++++++++++++++++++++++++++++++++++++
> multipathd/main.c          | 70 ++++++++++++++++++++++++++++++++++---
>
> Regards,
> Muneendra.
>
> _____________________________________________
> From: Muneendra Kumar M
> Sent: Tuesday, January 17, 2017 4:13 PM
> To: 'Benjamin Marzinski'
> Cc: dm-devel@redhat.com
> Subject: RE: [dm-devel] deterministic io throughput in multipath
>
> Hi Ben,
> Thanks for the review.
> In dict.c I will make sure I make generic functions which will be
> used by both delay_checks and err_checks.
>
> We want to increment the path failures every time the path goes down,
> regardless of whether multipathd or the kernel noticed the failure.
> Thanks for pointing this out.
>
> I completely agree with the idea you mention below of replacing
> san_path_err_threshold_window with san_path_err_forget_rate. This will
> avoid counting time when the path was down as time where the path
> wasn't having problems.
>
> I will incorporate all the changes mentioned below and resend the
> patch once the testing is done.
>
> Regards,
> Muneendra.
>
> -----Original Message-----
> From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
> Sent: Tuesday, January 17, 2017 6:35 AM
> To: Muneendra Kumar M <mmandala@Brocade.com>
> Cc: dm-devel@redhat.com
> Subject: Re: [dm-devel] deterministic io throughput in multipath
>
> On Mon, Jan 16, 2017 at 11:19:19AM +0000, Muneendra Kumar M wrote:
> >    Hi Ben,
> >    After the discussion below we came up with an approach which will meet
> our
> >    requirement.
> >    I have attached the patch, which is working well in our field tests.
> >    Could you please review the attached patch and provide us your
> valuable
> >    comments.
>
> I can see a number of issues with this patch.
>
> First, some nit-picks:
> - I assume "dis_reinstante_time" should be "dis_reinstate_time"
>
> - The indenting in check_path_validity_err is wrong, which made it
>   confusing until I noticed that
>
>   if (clock_gettime(CLOCK_MONOTONIC, &start_time) != 0)
>
>   doesn't have an open brace, and shouldn't indent the rest of the
>   function.
>
> - You call clock_gettime in check_path, but never use the result.
>
> - In dict.c, instead of writing your own functions that are the same as
>   the *_delay_checks functions, you could make those functions generic
>   and use them for both. To match the other generic function names
>   they would probably be something like
>
>   set_off_int_undef
>
>   print_off_int_undef
>
>   You would also need to change DELAY_CHECKS_* and ERR_CHECKS_* to
>   point to some common enum that you created, the way
>   user_friendly_names_states (to name one of many) does. The generic
>   enum used by *_off_int_undef would be something like
>
>   enum no_undef {
>           NU_NO = -1,
>           NU_UNDEF = 0,
>   };
>
>   The idea is to try to cut down on the number of functions that are
>   simply copy-pasting other functions in dict.c.
>
> Those are all minor cleanup issues, but there are some bigger problems.
>
> Instead of checking if san_path_err_threshold,
> san_path_err_threshold_window, and san_path_err_recovery_time are greater
> than zero separately, you should probably check them all at the start of
> check_path_validity_err, and return 0 unless they all are set.
> Right now, if a user sets san_path_err_threshold and
> san_path_err_threshold_window but not san_path_err_recovery_time, their
> path will never recover after it hits the error threshold. I'm pretty sure
> that you don't mean to permanently disable the paths.
>
> time_t is a signed type, which means that if you get the clock time in
> update_multipath and then fail to get the clock time in
> check_path_validity_err, this check:
>
> (start_time.tv_sec - pp->failure_start_time) <
> pp->mpp->san_path_err_threshold_window
>
> will always be true. I realize that clock_gettime is very unlikely to
> fail. But if it does, probably the safest thing to do is to just
> immediately return 0 in check_path_validity_err.
>
> The way you set path_failures in update_multipath may not get you what you
> want. It will only count path failures found by the kernel, and not the
> path checker. If check_path finds the error, pp->state will be set to
> PATH_DOWN before pp->dmstate is set to PSTATE_FAILED. That means you will
> not increment path_failures. Perhaps this is what you want, but I would
> assume that you would want to count every time the path goes down,
> regardless of whether multipathd or the kernel noticed it.
>
> I'm not super enthusiastic about how the san_path_err_threshold_window
> works. First, it starts counting from when the path goes down, so if the
> path takes long enough to get restored, and then fails immediately, it can
> just keep failing and it will never hit the san_path_err_threshold_window,
> since it spends so much of that time with the path failed. Also, the
> window gets set on the first error, and never reset until the number of
> errors is over the threshold. This means that if you get one early error
> and then a bunch of errors much later, you will go for (2 x
> san_path_err_threshold) - 1 errors until you stop reinstating the path,
> because of the window reset in the middle of the string of errors. It
> seems like a better idea would be to have check_path_validity_err reset
> path_failures as soon as it notices that you are past
> san_path_err_threshold_window, instead of waiting till the number of
> errors hits san_path_err_threshold.
>
> If I was going to design this, I think I would have san_path_err_threshold
> and san_path_err_recovery_time like you do, but instead of having a
> san_path_err_threshold_window, I would have something like
> san_path_err_forget_rate. The idea is that every san_path_err_forget_rate
> number of successful path checks you decrement path_failures by 1.
This
> means that there is no window after which you reset. If the path failures
> come in faster than the forget rate, you will eventually hit the error
> threshold. This also has the benefit of easily not counting time when the
> path was down as time where the path wasn't having problems. But if you
> don't like my idea, yours will work fine with some polish.
>
> -Ben
>
> >    Below are the files that have been changed:
> >
> >    libmultipath/config.c      |  3 +++
> >    libmultipath/config.h      |  9 +++++++++
> >    libmultipath/configure.c   |  3 +++
> >    libmultipath/defaults.h    |  1 +
> >    libmultipath/dict.c        | 80 ++++++++++++++++++++++++++++++++++
> >    libmultipath/dict.h        |  1 +
> >    libmultipath/propsel.c     | 44 ++++++++++++++++++++++++++++++++
> >    libmultipath/propsel.h     |  6 ++++++
> >    libmultipath/structs.h     | 12 +++++++++++-
> >    libmultipath/structs_vec.c | 10 ++++++++++
> >    multipath/multipath.conf.5 | 58 ++++++++++++++++++++++++++++++++++
> >    multipathd/main.c          | 61 +++++++++++++++++++++++++++++++--
> >
> >    We have added three new config parameters whose description is below.
> >    1. san_path_err_threshold:
> >        If set to a value greater than 0, multipathd will watch paths and
> >    check how many times a path has failed due to errors.
If the number
> >    of failures on a particular path is greater than the
> >    san_path_err_threshold, then the path will not be reinstated till
> >    san_path_err_recovery_time. These path failures should occur within a
> >    san_path_err_threshold_window time frame; if not, we will consider the
> >    path good enough to reinstate.
> >
> >    2. san_path_err_threshold_window:
> >        If set to a value greater than 0, multipathd will check whether
> >    the path failures have exceeded the san_path_err_threshold within
> >    this time frame, i.e. san_path_err_threshold_window. If so, we will not
> >    reinstate the path till san_path_err_recovery_time.
> >
> >    3. san_path_err_recovery_time:
> >    If set to a value greater than 0, multipathd will make sure that when
> >    path failures have exceeded the san_path_err_threshold within
> >    san_path_err_threshold_window, the path will be placed in the failed
> >    state for the san_path_err_recovery_time duration. Once
> >    san_path_err_recovery_time has timed out, we will reinstate the failed
> >    path.
> >
> >    Regards,
> >    Muneendra.
> >
> >    -----Original Message-----
> >    From: Muneendra Kumar M
> >    Sent: Wednesday, January 04, 2017 6:56 PM
> >    To: 'Benjamin Marzinski' <bmarzins@redhat.com>
> >    Cc: dm-devel@redhat.com
> >    Subject: RE: [dm-devel] deterministic io throughput in multipath
> >
> >    Hi Ben,
> >    Thanks for the information.
> >
> >    Regards,
> >    Muneendra.
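[Editorial note: based on the descriptions in this thread, the three proposed options might look like this in /etc/multipath.conf. This is only a sketch; the patch was still under review, the values are illustrative (they match the earlier request of "more than 5 failures in an hour, then hold the path down for 3 hours"), and it assumes the window and recovery time are given in seconds.]

```
defaults {
        # more than 5 failures within the window blocks reinstatement
        san_path_err_threshold        5
        # count failures over a 1-hour window (3600 seconds)
        san_path_err_threshold_window 3600
        # keep the path failed for 3 hours (10800 seconds)
        san_path_err_recovery_time    10800
}
```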
> >
> >    -----Original Message-----
> >    From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
> >    Sent: Tuesday, January 03, 2017 10:42 PM
> >    To: Muneendra Kumar M <mmandala@Brocade.com>
> >    Cc: dm-devel@redhat.com
> >    Subject: Re: [dm-devel] deterministic io throughput in multipath
> >
> >    On Mon, Dec 26, 2016 at 09:42:48AM +0000, Muneendra Kumar M wrote:
> >    > Hi Ben,
> >    >
> >    > If there are two paths on a dm-1, say sda and sdb, as below:
> >    >
> >    > #  multipath -ll
> >    >        mpathd (3600110d001ee7f0102050001cc0b6751) dm-1 SANBlaze,VLUN MyLun
> >    >        size=8.0M features='0' hwhandler='0' wp=rw
> >    >        `-+- policy='round-robin 0' prio=50 status=active
> >    >          |- 8:0:1:0  sda 8:48 active ready  running
> >    >          `- 9:0:1:0  sdb 8:64 active ready  running
> >    >
> >    > And on sda I am seeing a lot of errors, due to which the sda path is
> >    fluctuating from failed state to active state and vice versa.
> >    >
> >    > My requirement is something like this: if sda has failed more than 5
> >    > times in an hour, then I want to keep sda in the failed state
> >    > for a few hours (3 hrs),
> >    >
> >    > and the data should travel only through the sdb path.
> >    > Will this be possible with the below parameters?
> >
> >    No. delay_watch_checks sets for how many path checks you watch a path that
> >    has recently come back from the failed state.
If the path fails again
> >    within this time, the multipath device delays it. This means that the
> >    delay is always triggered by two failures within the time limit. It's
> >    possible to adapt this to count numbers of failures, and act after a
> >    certain number within a certain timeframe, but it would take a bit more
> >    work.
> >
> >    delay_wait_checks doesn't guarantee that it will delay for any set
> >    length of time. Instead, it sets the number of consecutive successful
> >    path checks that must occur before the path is usable again. You could
> >    set this for 3 hours of path checks, but if a check failed during this
> >    time, you would restart the 3 hours over again.
> >
> >    -Ben
> >
> >    > Can you just let me know what values I should add for delay_watch_checks
> >    and delay_wait_checks?
> >    >
> >    > Regards,
> >    > Muneendra.
> >    >
> >    > -----Original Message-----
> >    > From: Muneendra Kumar M
> >    > Sent: Thursday, December 22, 2016 11:10 AM
> >    > To: 'Benjamin Marzinski' <bmarzins@redhat.com>
> >    > Cc: dm-devel@redhat.com
> >    > Subject: RE: [dm-devel] deterministic io throughput in multipath
> >    >
> >    > Hi Ben,
> >    >
> >    > Thanks for the reply.
> >    > I will look into these parameters, do the internal testing, and let
> >    you know the results.
> >    >
> >    > Regards,
> >    > Muneendra.
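[Editorial note: for comparison, the existing options Ben describes above could be configured roughly as follows. The values are illustrative only; both options count path-checker cycles, so assuming the default polling_interval of 5 seconds, 2160 consecutive successful checks correspond to roughly the 3 hours mentioned above.]

```
defaults {
        # watch a freshly reinstated path for 12 checker cycles;
        # a second failure within them delays the path
        delay_watch_checks 12
        # require 2160 consecutive successful checks (~3 hours at a
        # 5-second polling_interval) before the path is used again
        delay_wait_checks  2160
}
```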
> >    >
> >    > -----Original Message-----
> >    > From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
> >    > Sent: Wednesday, December 21, 2016 9:40 PM
> >    > To: Muneendra Kumar M <mmandala@Brocade.com>
> >    > Cc: dm-devel@redhat.com
> >    > Subject: Re: [dm-devel] deterministic io throughput in multipath
> >    >
> >    > Have you looked into the delay_watch_checks and delay_wait_checks
> >    configuration parameters? The idea behind them is to minimize the use of
> >    paths that are intermittently failing.
> >    >
> >    > -Ben
> >    >
> >    > On Mon, Dec 19, 2016 at 11:50:36AM +0000, Muneendra Kumar M wrote:
> >    > >    Customers using Linux hosts (mostly RHEL hosts) with a SAN
> >    network for block storage complain that the Linux multipath stack is not
> >    resilient in handling non-deterministic storage network behaviors. This
> >    has caused many customers to move away to non-Linux-based servers. The
> >    intent of the patch below and the prevailing issues are given below.
> >    With the below design we are seeing the Linux multipath stack becoming
> >    resilient to such network issues. We hope that getting this patch
> >    accepted will help in more Linux server adoption on SAN networks.
> >    > >
> >    > >    I have already sent the design details to the community in a
> >    different mail chain, and the details are available in the below link.
> >    > >
> >    > >    https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html
> >    > >
> >    > >    Can you please go through the design and send the comments to us?
> >    > >
> >    > >    Regards,
> >    > >    Muneendra.
> >
> > --
> > dm-devel mailing list
> > dm-devel@redhat.com
> > https://www.redhat.com/mailman/listinfo/dm-devel
From mboxrd@z Thu Jan 1 00:00:00 1970 From: "Benjamin Marzinski" Subject: Re: deterministic io throughput in multipath Date: Wed, 25 Jan 2017 07:07:03 -0600 Message-ID: <20170125130703.GA22981@octiron.msp.redhat.com> References: <1649d4b8538d4b4cb1efacdfe8cf31eb@BRMWP-EXMB12.corp.brocade.com> <20161221160940.GG19659@octiron.msp.redhat.com> <8cd4cc5f20b540a1b8312ad485711152@BRMWP-EXMB12.corp.brocade.com> <20170103171159.GA2732@octiron.msp.redhat.com> <4dfed25f04c04771a732580a4a8cc834@BRMWP-EXMB12.corp.brocade.com> <20170117010447.GW2732@octiron.msp.redhat.com> <26d8e0b78873443c8e15b863bc33922d@BRMWP-EXMB12.corp.brocade.com> <20170125092846.GA2732@octiron.msp.redhat.com> Mime-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable Return-path: Content-Disposition: inline In-Reply-To: Sender: dm-devel-bounces@redhat.com Errors-To: dm-devel-bounces@redhat.com To: Muneendra Kumar M Cc: "dm-devel@redhat.com" List-Id: dm-devel.ids

On Wed, Jan 25, 2017 at 11:48:33AM +0000, Muneendra Kumar M wrote:
> Hi Ben,
> Thanks for the review.
> I will consider the below points and will do the necessary changes.
>
> I have two general questions which may not be related to this.
> 1) Are there any standard tests that we need to run to check the functionality of the multipath daemon?

No. multipath doesn't have a standard set of regression tests. You need
to do your own testing.

> 2) I am new to git; are there any standard steps which we generally follow to push the changes?

You don't need to use git to push a patch, but it is easier to process
if your patch is inline in the email instead of as an attachment
(assuming your mail client doesn't mangle the patch). If you want to use
git, you just need to commit your patches to a branch off the head of
master. Then you can build patches with

# git format-patch --cover-letter -s -n -o origin

and send them with

# git send-email --to "device-mapper development " --cc "Christophe Varoqui " --no-chain-reply-to --suppress-from

You may first need to set up your git name and email.

-Ben

> Regards,
> Muneendra.
>
> -----Original Message-----
> From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
> Sent: Wednesday, January 25, 2017 2:59 PM
> To: Muneendra Kumar M
> Cc: dm-devel@redhat.com
> Subject: Re: [dm-devel] deterministic io throughput in multipath
>
> This looks fine to me. If this is what you want to push, I'm o.k. with it.
> But I'd like to make some suggestions that you are free to ignore.
>
> Right now you have to check in two places to see if the path failed (in
> update_multipath and check_path). If you look at the delayed_*_checks code,
> it flags the path failures when you reinstate the path in check_path, since
> this will only happen there.
>
> Next, right now you use the disable_reinstate code to deal with the devices
> when they shouldn't be reinstated.
The issue with this is that the path = appears to be up when people look at its state, but still isn't being used.= If you do the check early and set the path state to PATH_DELAYED, like del= ayed_*_checks does, then the path is clearly marked when users look to see = why it isn't being used. Also, if you exit check_path early, then you won't= be running the prioritizer on these likely-unstable paths. > = > Finally, the way you use dis_reinstate_time, a flakey device can get rein= stated as soon as it comes back up, as long it was down for long enough, si= mply because pp->dis_reinstate_time reached > mpp->san_path_err_recovery_time while the device was failed. > delayed_*_checks depends on a number of successful path checks, so you kn= ow that the device has at least been nominally functional for san_path_err_= recovery_time. > = > Like I said, you don't have to change any of this to make me happy with y= our patch. But if you did change all of these, then the current delay_*_che= cks code would just end up being a special case of your code. > I'd really like to pull out the delayed_*_checks code and just keep your = version, since it seems more useful. It would be nice to keep the same func= tionality. But even if you don't make these changes, I still think we shoul= d pull out the delayed_*_checks code, since they both do the same general t= hing, and your code does it better. > = > -Ben > = > On Mon, Jan 23, 2017 at 11:02:42AM +0000, Muneendra Kumar M wrote: > > Hi Ben, > > I have made the changes as per the below review comments . > > =A0 > > Could you please review the attached patch and provide us your valua= ble > > comments . > > Below are the files that has been changed . 
> > =A0 > > libmultipath/config.c=A0=A0=A0=A0=A0 |=A0 3 +++ > > libmultipath/config.h=A0=A0=A0=A0=A0 |=A0 9 +++++++++ > > libmultipath/configure.c=A0=A0 |=A0 3 +++ > > libmultipath/defaults.h=A0=A0=A0 |=A0 3 ++- > > libmultipath/dict.c=A0=A0=A0=A0=A0=A0=A0 | 84 > > ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------= ---------------- > > libmultipath/dict.h=A0=A0=A0=A0=A0=A0=A0 |=A0 3 +-- > > libmultipath/propsel.c=A0=A0=A0=A0 | 48 > > ++++++++++++++++++++++++++++++++++++++++++++++-- > > libmultipath/propsel.h=A0=A0=A0=A0 |=A0 3 +++ > > libmultipath/structs.h=A0=A0=A0=A0 | 14 ++++++++++---- > > libmultipath/structs_vec.c |=A0 6 ++++++ > > multipath/multipath.conf.5 | 57 > > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > > multipathd/main.c=A0=A0=A0=A0=A0=A0=A0=A0=A0 | 70 > > = > > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--- > > =A0 > > =A0 > > Regards, > > Muneendra. > > =A0 > > _____________________________________________ > > From: Muneendra Kumar M > > Sent: Tuesday, January 17, 2017 4:13 PM > > To: 'Benjamin Marzinski' > > Cc: dm-devel@redhat.com > > Subject: RE: [dm-devel] deterministic io throughput in multipath > > =A0 > > =A0 > > Hi Ben, > > Thanks for the review. > > In dict.c=A0 I will make sure I will make generic functions which wi= ll be > > used by both delay_checks and err_checks. > > =A0 > > We want to increment the path failures every time the path goes down > > regardless of whether multipathd or the kernel noticed the failure of > > paths. > > Thanks for pointing this. > > =A0 > > I will completely agree with the idea which you mentioned below by > > reconsidering the san_path_err_threshold_window with > > san_path_err_forget_rate. This will avoid counting time when the pat= h was > > down as time where the path wasn't having problems. > > =A0 > > I will incorporate all the changes mentioned below and will resend t= he > > patch once the testing is done. > > =A0 > > Regards, > > Muneendra. 
> > =A0 > > =A0 > > =A0 > > -----Original Message----- > > From: Benjamin Marzinski [[1]mailto:bmarzins@redhat.com] > > Sent: Tuesday, January 17, 2017 6:35 AM > > To: Muneendra Kumar M <[2]mmandala@Brocade.com> > > Cc: [3]dm-devel@redhat.com > > Subject: Re: [dm-devel] deterministic io throughput in multipath > > =A0 > > On Mon, Jan 16, 2017 at 11:19:19AM +0000, Muneendra Kumar M wrote: > > >=A0=A0=A0 Hi Ben, > > >=A0=A0=A0 After the below discussion we=A0 came with the approach w= hich will meet > > our > > >=A0=A0=A0 requirement. > > >=A0=A0=A0 I have attached the patch which is working good in our fi= eld tests. > > >=A0=A0=A0 Could you please review the attached patch and provide us= your > > valuable > > >=A0=A0=A0 comments . > > =A0 > > I can see a number of issues with this patch. > > =A0 > > First, some nit-picks: > > - I assume "dis_reinstante_time" should be "dis_reinstate_time" > > =A0 > > - The indenting in check_path_validity_err is wrong, which made it > > =A0 confusing until I noticed that > > =A0 > > if (clock_gettime(CLOCK_MONOTONIC, &start_time) !=3D 0) > > =A0 > > =A0 doesn't have an open brace, and shouldn't indent the rest of the > > =A0 function. > > =A0 > > - You call clock_gettime in check_path, but never use the result. > > =A0 > > - In dict.c, instead of writing your own functions that are the same= as > > =A0 the *_delay_checks functions, you could make those functions gen= eric > > =A0 and use them for both.=A0 To go match the other generic function= names > > =A0 they would probably be something like > > =A0 > > set_off_int_undef > > =A0 > > print_off_int_undef > > =A0 > > =A0 You would also need to change DELAY_CHECKS_* and ERR_CHECKS_* to > > =A0 point to some common enum that you created, the way > > =A0 user_friendly_names_states (to name one of many) does. The gener= ic > > =A0 enum used by *_off_int_undef would be something like. 
> > =A0 > > enum no_undef { > > =A0=A0=A0=A0=A0=A0=A0 NU_NO =3D -1, > > =A0=A0=A0=A0=A0=A0=A0 NU_UNDEF =3D 0, > > } > > =A0 > > =A0 The idea is to try to cut down on the number of functions that a= re > > =A0 simply copy-pasting other functions in dict.c. > > =A0 > > =A0 > > Those are all minor cleanup issues, but there are some bigger proble= ms. > > =A0 > > Instead of checking if san_path_err_threshold, > > san_path_err_threshold_window, and san_path_err_recovery_time are gr= eater > > than zero seperately, you should probably check them all at the star= t of > > check_path_validity_err, and return 0 unless they all are set. > > Right now, if a user sets san_path_err_threshold and > > san_path_err_threshold_window but not san_path_err_recovery_time, th= eir > > path will never recover after it hits the error threshold.=A0 I pret= ty sure > > that you don't mean to permanently disable the paths. > > =A0 > > =A0 > > time_t is a signed type, which means that if you get the clock time = in > > update_multpath and then fail to get the clock time in > > check_path_validity_err, this check: > > =A0 > > start_time.tv_sec - pp->failure_start_time) < > > pp->mpp->san_path_err_threshold_window > > =A0 > > will always be true.=A0 I realize that clock_gettime is very unlikel= y to > > fail.=A0 But if it does, probably the safest thing to so is to just > > immediately return 0 in check_path_validity_err. > > =A0 > > =A0 > > The way you set path_failures in update_multipath may not get you wh= at you > > want.=A0 It will only count path failures found by the kernel, and n= ot the > > path checker.=A0 If the check_path finds the error, pp->state will b= e set to > > PATH_DOWN before pp->dmstate is set to PSTATE_FAILED. That means you= will > > not increment path_failures. Perhaps this is what you want, but I wo= uld > > assume that you would want to count every time the path goes down > > regardless of whether multipathd or the kernel noticed it. 
> > 
> > I'm not super enthusiastic about how the san_path_err_threshold_window
> > works.  First, it starts counting from when the path goes down, so if
> > the path takes long enough to get restored, and then fails immediately,
> > it can just keep failing and it will never hit the
> > san_path_err_threshold_window, since it spends so much of that time
> > with the path failed.  Also, the window gets set on the first error,
> > and never reset until the number of errors is over the threshold.  This
> > means that if you get one early error and then a bunch of errors much
> > later, you will go for (2 x san_path_err_threshold) - 1 errors until
> > you stop reinstating the path, because of the window reset in the
> > middle of the string of errors.  It seems like a better idea would be
> > to have check_path_validity_err reset path_failures as soon as it
> > notices that you are past san_path_err_threshold_window, instead of
> > waiting till the number of errors hits san_path_err_threshold.
> > 
> > If I was going to design this, I think I would have
> > san_path_err_threshold and san_path_err_recovery_time like you do, but
> > instead of having a san_path_err_threshold_window, I would have
> > something like san_path_err_forget_rate.  The idea is that every
> > san_path_err_forget_rate number of successful path checks you decrement
> > path_failures by 1. This means that there is no window after which you
> > reset.  If the path failures come in faster than the forget rate, you
> > will eventually hit the error threshold. This also has the benefit of
> > easily not counting time when the path was down as time where the path
> > wasn't having problems. But if you don't like my idea, yours will work
> > fine with some polish.
> > 
> > -Ben
> > 
> > 
> > >    Below are the files that have been changed.
> > >    
> > >    libmultipath/config.c      |  3 +++
> > >    libmultipath/config.h      |  9 +++++++++
> > >    libmultipath/configure.c   |  3 +++
> > >    libmultipath/defaults.h    |  1 +
> > >    libmultipath/dict.c        | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >    libmultipath/dict.h        |  1 +
> > >    libmultipath/propsel.c     | 44 ++++++++++++++++++++++++++++++++++++++++++++
> > >    libmultipath/propsel.h     |  6 ++++++
> > >    libmultipath/structs.h     | 12 +++++++++++-
> > >    libmultipath/structs_vec.c | 10 ++++++++++
> > >    multipath/multipath.conf.5 | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >    multipathd/main.c          | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
> > >    
> > >    We have added three new config parameters whose description is below.
> > >    1. san_path_err_threshold:
> > >       If set to a value greater than 0, multipathd will watch paths and
> > >       check how many times a path has failed due to errors. If the
> > >       number of failures on a particular path is greater than
> > >       san_path_err_threshold, then the path will not be reinstated
> > >       until san_path_err_recovery_time. These path failures should
> > >       occur within a san_path_err_threshold_window time frame; if not,
> > >       we consider the path good enough to reinstate.
> > >    
> > >    2. san_path_err_threshold_window:
> > >       If set to a value greater than 0, multipathd will check whether
> > >       the path failures have exceeded san_path_err_threshold within
> > >       this time frame, i.e. san_path_err_threshold_window. If so, we
> > >       will not reinstate the path until san_path_err_recovery_time.
> > >    
> > >    3. san_path_err_recovery_time:
> > >       If set to a value greater than 0, multipathd will make sure that
> > >       when path failures have exceeded san_path_err_threshold within
> > >       san_path_err_threshold_window, the path will be placed in the
> > >       failed state for the san_path_err_recovery_time duration. Once
> > >       san_path_err_recovery_time has timed out, we will reinstate the
> > >       failed path.
> > >    
> > >    Regards,
> > >    Muneendra.
> > >    
> > >    -----Original Message-----
> > >    From: Muneendra Kumar M
> > >    Sent: Wednesday, January 04, 2017 6:56 PM
> > >    To: 'Benjamin Marzinski' <bmarzins@redhat.com>
> > >    Cc: dm-devel@redhat.com
> > >    Subject: RE: [dm-devel] deterministic io throughput in multipath
> > >    
> > >    Hi Ben,
> > >    Thanks for the information.
> > >    
> > >    Regards,
> > >    Muneendra.
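The three parameters described above would map onto Muneendra's original scenario (fail a path out after more than 5 failures within an hour, and keep it failed for 3 hours) roughly like the fragment below. This is an illustrative sketch only: the parameter names come from the patch under review, but the assumption that the window and recovery time are expressed in seconds is mine and is not confirmed anywhere in the thread.

```
defaults {
        san_path_err_threshold          5
        san_path_err_threshold_window   3600
        san_path_err_recovery_time      10800
}
```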
> > >    
> > >    -----Original Message-----
> > >    From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
> > >    Sent: Tuesday, January 03, 2017 10:42 PM
> > >    To: Muneendra Kumar M <mmandala@Brocade.com>
> > >    Cc: dm-devel@redhat.com
> > >    Subject: Re: [dm-devel] deterministic io throughput in multipath
> > >    
> > >    On Mon, Dec 26, 2016 at 09:42:48AM +0000, Muneendra Kumar M wrote:
> > >    > Hi Ben,
> > >    >
> > >    > If there are two paths on a dm-1, say sda and sdb, as below:
> > >    >
> > >    > #  multipath -ll
> > >    >   mpathd (3600110d001ee7f0102050001cc0b6751) dm-1 SANBlaze,VLUN MyLun
> > >    >   size=8.0M features='0' hwhandler='0' wp=rw
> > >    >   `-+- policy='round-robin 0' prio=50 status=active
> > >    >     |- 8:0:1:0  sda 8:48 active ready  running
> > >    >     `- 9:0:1:0  sdb 8:64 active ready  running
> > >    >
> > >    > and on sda I am seeing a lot of errors, due to which the sda path
> > >    > is fluctuating from failed state to active state and vice versa.
> > >    >
> > >    > My requirement is something like this: if sda has failed more
> > >    > than 5 times within an hour, then I want to keep sda in the
> > >    > failed state for a few hours (3 hrs),
> > >    >
> > >    > and the data should travel only through the sdb path.
> > >    > Will this be possible with the below parameters?
> > >    
> > >    No. delay_watch_checks sets for how many path checks you watch a
> > >    path that has recently come back from the failed state.
If the path fails again
> > >    within this time, multipath delays it.  This means that the delay
> > >    is always triggered by two failures within the time limit.  It's
> > >    possible to adapt this to count numbers of failures, and act after
> > >    a certain number within a certain timeframe, but it would take a
> > >    bit more work.
> > >    
> > >    delay_wait_checks doesn't guarantee that it will delay for any set
> > >    length of time.  Instead, it sets the number of consecutive
> > >    successful path checks that must occur before the path is usable
> > >    again. You could set this for 3 hours of path checks, but if a
> > >    check failed during this time, you would restart the 3 hours over
> > >    again.
> > >    
> > >    -Ben
> > >    
> > >    > Can you just let me know what values I should add for
> > >    > delay_watch_checks and delay_wait_checks.
> > >    >
> > >    > Regards,
> > >    > Muneendra.
> > >    >
> > >    >
> > >    > -----Original Message-----
> > >    > From: Muneendra Kumar M
> > >    > Sent: Thursday, December 22, 2016 11:10 AM
> > >    > To: 'Benjamin Marzinski' <bmarzins@redhat.com>
> > >    > Cc: dm-devel@redhat.com
> > >    > Subject: RE: [dm-devel] deterministic io throughput in multipath
> > >    >
> > >    > Hi Ben,
> > >    >
> > >    > Thanks for the reply.
> > >    > I will look into these parameters, do the internal testing, and
> > >    > let you know the results.
> > >    >
> > >    > Regards,
> > >    > Muneendra.
> > >    
> > >    > -----Original Message-----
> > >    > From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
> > >    > Sent: Wednesday, December 21, 2016 9:40 PM
> > >    > To: Muneendra Kumar M <mmandala@Brocade.com>
> > >    > Cc: dm-devel@redhat.com
> > >    > Subject: Re: [dm-devel] deterministic io throughput in multipath
> > >    >
> > >    > Have you looked into the delay_watch_checks and delay_wait_checks
> > >    > configuration parameters?  The idea behind them is to minimize
> > >    > the use of paths that are intermittently failing.
> > >    >
> > >    > -Ben
> > >    >
> > >    > On Mon, Dec 19, 2016 at 11:50:36AM +0000, Muneendra Kumar M wrote:
> > >    > >    Customers using Linux hosts (mostly RHEL hosts) with a SAN
> > >    > >    network for block storage complain that the Linux multipath
> > >    > >    stack is not resilient to non-deterministic storage network
> > >    > >    behaviors. This has caused many customers to move away to
> > >    > >    non-Linux based servers. The intent of the below patch and
> > >    > >    the prevailing issues are given below. With the below design
> > >    > >    we are seeing the Linux multipath stack becoming resilient
> > >    > >    to such network issues. We hope that getting this patch
> > >    > >    accepted will help in more Linux server adoption on SAN
> > >    > >    networks.
> > >    > >
> > >    > >    I have already sent the design details to the community in a
> > >    > >    different mail chain; the details are available at the below
> > >    > >    link.
> > >    > >
> > >    > >    https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html
> > >    > >
> > >    > >    Can you please go through the design and send the comments
> > >    > >    to us.
> > >    > >
> > >    > >    Regards,
> > >    > >    Muneendra.

From mboxrd@z Thu Jan  1 00:00:00 1970
From: Muneendra Kumar M
Subject: Re: deterministic io throughput in multipath
Date: Wed, 1 Feb 2017 11:58:52 +0000
Message-ID: 
References: <1649d4b8538d4b4cb1efacdfe8cf31eb@BRMWP-EXMB12.corp.brocade.com>
 <20161221160940.GG19659@octiron.msp.redhat.com>
 <8cd4cc5f20b540a1b8312ad485711152@BRMWP-EXMB12.corp.brocade.com>
 <20170103171159.GA2732@octiron.msp.redhat.com>
 <4dfed25f04c04771a732580a4a8cc834@BRMWP-EXMB12.corp.brocade.com>
 <20170117010447.GW2732@octiron.msp.redhat.com>
 <26d8e0b78873443c8e15b863bc33922d@BRMWP-EXMB12.corp.brocade.com>
 <20170125092846.GA2732@octiron.msp.redhat.com>
 <20170125130703.GA22981@octiron.msp.redhat.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="_002_af56a18e992f4a8cb474de87a208e02aBRMWPEXMB12corpbrocadec_"
Return-path: 
In-Reply-To: <20170125130703.GA22981@octiron.msp.redhat.com>
Content-Language: en-US
Sender: dm-devel-bounces@redhat.com
Errors-To: dm-devel-bounces@redhat.com
To: Benjamin Marzinski
Cc: "dm-devel@redhat.com"
List-Id: dm-devel.ids

--_002_af56a18e992f4a8cb474de87a208e02aBRMWPEXMB12corpbrocadec_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

Hi Ben,
I have made the changes as per the below review comments.
Could you please review the attached patch and provide us your valuable comments.
Below are the files that have been changed.

 libmultipath/config.c      |  3 +++
 libmultipath/config.h      |  9 +++++++++
 libmultipath/configure.c   |  3 +++
 libmultipath/defaults.h    |  3 ++-
 libmultipath/dict.c        | 84 ++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------
 libmultipath/dict.h        |  3 +--
 libmultipath/propsel.c     | 48 ++++++++++++++++++++++++++++++++++++++++++++--
 libmultipath/propsel.h     |  3 +++
 libmultipath/structs.h     | 14 ++++++++++----
 multipath/multipath.conf.5 | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 multipathd/main.c          | 97 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++----
 11 files changed, 287 insertions(+), 37 deletions(-)

Thanks for the general info provided below.
I will commit the changes as below once the review is done.

Regards,
Muneendra.

-----Original Message-----
From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
Sent: Wednesday, January 25, 2017 6:37 PM
To: Muneendra Kumar M
Cc: dm-devel@redhat.com
Subject: Re: [dm-devel] deterministic io throughput in multipath

On Wed, Jan 25, 2017 at 11:48:33AM +0000, Muneendra Kumar M wrote:
> Hi Ben,
> Thanks for the review.
> I will consider the below points and will do the necessary changes.
>
> I have two general questions which may not be related to this.
> 1) Are there any standard tests that we need to run to check the functionality of the multipath daemon?

No. multipath doesn't have a standard set of regression tests. You need to do your own testing.
> 2) I am new to git; are there any standard steps we generally follow to push the changes?

You don't need to use git to push a patch, but it is easier to process if your patch is inline in the email instead of as an attachment (assuming your mail client doesn't mangle the patch).

If you want to use git, you just need to commit your patches to a branch off the head of master. Then you can build patches with

# git format-patch --cover-letter -s -n -o origin

and send them with

# git send-email --to "device-mapper development " --cc "Christophe Varoqui " --no-chain-reply-to --suppress-from

You may first need to set up your git name and email.

-Ben

> Regards,
> Muneendra.
>
>
>
> -----Original Message-----
> From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
> Sent: Wednesday, January 25, 2017 2:59 PM
> To: Muneendra Kumar M
> Cc: dm-devel@redhat.com
> Subject: Re: [dm-devel] deterministic io throughput in multipath
>
> This looks fine to me. If this is what you want to push, I'm o.k. with it.
> But I'd like to make some suggestions that you are free to ignore.
>
> Right now you have to check in two places to see if the path failed (in
> update_multipath and check_path). If you look at the delayed_*_checks
> code, it flags the path failures when you reinstate the path in
> check_path, since this will only happen there.
>
> Next, right now you use the disable_reinstate code to deal with the
> devices when they shouldn't be reinstated. The issue with this is that
> the path appears to be up when people look at its state, but still isn't
> being used. If you do the check early and set the path state to
> PATH_DELAYED, like delayed_*_checks does, then the path is clearly
> marked when users look to see why it isn't being used. Also, if you exit
> check_path early, then you won't be running the prioritizer on these
> likely-unstable paths.
>
> Finally, the way you use dis_reinstate_time, a flakey device can get
> reinstated as soon as it comes back up, as long as it was down for long
> enough, simply because pp->dis_reinstate_time reached
> mpp->san_path_err_recovery_time while the device was failed.
> delayed_*_checks depends on a number of successful path checks, so you
> know that the device has at least been nominally functional for
> san_path_err_recovery_time.
>
> Like I said, you don't have to change any of this to make me happy with
> your patch. But if you did change all of these, then the current
> delay_*_checks code would just end up being a special case of your code.
> I'd really like to pull out the delayed_*_checks code and just keep your
> version, since it seems more useful. It would be nice to keep the same
> functionality. But even if you don't make these changes, I still think
> we should pull out the delayed_*_checks code, since they both do the
> same general thing, and your code does it better.
>
> -Ben
>
> On Mon, Jan 23, 2017 at 11:02:42AM +0000, Muneendra Kumar M wrote:
> > Hi Ben,
> > I have made the changes as per the below review comments.
> >
> > Could you please review the attached patch and provide us your
> > valuable comments.
> > Below are the files that have been changed.
> > 
> > libmultipath/config.c      |  3 +++
> > libmultipath/config.h      |  9 +++++++++
> > libmultipath/configure.c   |  3 +++
> > libmultipath/defaults.h    |  3 ++-
> > libmultipath/dict.c        | 84 ++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------
> > libmultipath/dict.h        |  3 +--
> > libmultipath/propsel.c     | 48 ++++++++++++++++++++++++++++++++++++++++++++--
> > libmultipath/propsel.h     |  3 +++
> > libmultipath/structs.h     | 14 ++++++++++----
> > libmultipath/structs_vec.c |  6 ++++++
> > multipath/multipath.conf.5 | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > multipathd/main.c          | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
> > 
> > Regards,
> > Muneendra.
> > 
> > _____________________________________________
> > From: Muneendra Kumar M
> > Sent: Tuesday, January 17, 2017 4:13 PM
> > To: 'Benjamin Marzinski'
> > Cc: dm-devel@redhat.com
> > Subject: RE: [dm-devel] deterministic io throughput in multipath
> > 
> > Hi Ben,
> > Thanks for the review.
> > In dict.c I will make sure I make generic functions which will be
> > used by both delay_checks and err_checks.
> > 
> > We want to increment the path failures every time the path goes down,
> > regardless of whether multipathd or the kernel noticed the failure of
> > paths. Thanks for pointing this out.
> > 
> > I completely agree with the idea you mentioned below of replacing
> > san_path_err_threshold_window with san_path_err_forget_rate. This
> > will avoid counting time when the path was down as time where the
> > path wasn't having problems.
> > 
> > I will incorporate all the changes mentioned below and will resend
> > the patch once the testing is done.
> > 
> > Regards,
> > Muneendra.
> > 
> > -----Original Message-----
> > From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
> > Sent: Tuesday, January 17, 2017 6:35 AM
> > To: Muneendra Kumar M <mmandala@Brocade.com>
> > Cc: dm-devel@redhat.com
> > Subject: Re: [dm-devel] deterministic io throughput in multipath
The gener= ic > > =A0 enum used by *_off_int_undef would be something like. > > =A0 > > enum no_undef { > > =A0=A0=A0=A0=A0=A0=A0 NU_NO =3D -1, > > =A0=A0=A0=A0=A0=A0=A0 NU_UNDEF =3D 0, > > } > > =A0 > > =A0 The idea is to try to cut down on the number of functions that a= re > > =A0 simply copy-pasting other functions in dict.c. > > =A0 > > =A0 > > Those are all minor cleanup issues, but there are some bigger proble= ms. > > =A0 > > Instead of checking if san_path_err_threshold, > > san_path_err_threshold_window, and san_path_err_recovery_time are gr= eater > > than zero seperately, you should probably check them all at the star= t of > > check_path_validity_err, and return 0 unless they all are set. > > Right now, if a user sets san_path_err_threshold and > > san_path_err_threshold_window but not san_path_err_recovery_time, th= eir > > path will never recover after it hits the error threshold.=A0 I pret= ty sure > > that you don't mean to permanently disable the paths. > > =A0 > > =A0 > > time_t is a signed type, which means that if you get the clock time = in > > update_multpath and then fail to get the clock time in > > check_path_validity_err, this check: > > =A0 > > start_time.tv_sec - pp->failure_start_time) < > > pp->mpp->san_path_err_threshold_window > > =A0 > > will always be true.=A0 I realize that clock_gettime is very unlikel= y to > > fail.=A0 But if it does, probably the safest thing to so is to just > > immediately return 0 in check_path_validity_err. > > =A0 > > =A0 > > The way you set path_failures in update_multipath may not get you wh= at you > > want.=A0 It will only count path failures found by the kernel, and n= ot the > > path checker.=A0 If the check_path finds the error, pp->state will b= e set to > > PATH_DOWN before pp->dmstate is set to PSTATE_FAILED. That means you= will > > not increment path_failures. 
Perhaps this is what you want, but I wo= uld > > assume that you would want to count every time the path goes down > > regardless of whether multipathd or the kernel noticed it. > > =A0 > > =A0 > > I'm not super enthusiastic about how the san_path_err_threshold_wind= ow > > works.=A0 First, it starts counting from when the path goes down, so= if the > > path takes long enough to get restored, and then fails immediately, = it can > > just keep failing and it will never hit the san_path_err_threshold_w= indow, > > since it spends so much of that time with the path failed.=A0 Also, = the > > window gets set on the first error, and never reset until the number= of > > errors is over the threshold.=A0 This means that if you get one earl= y error > > and then a bunch of errors much later, you will go for (2 x > > san_path_err_threshold) - 1 errors until you stop reinstating the pa= th, > > because of the window reset in the middle of the string of errors.= =A0 It > > seems like a better idea would be to have check_path_validity_err re= set > > path_failures as soon as it notices that you are past > > san_path_err_threshold_window, instead of waiting till the number of > > errors hits san_path_err_threshold. > > =A0 > > =A0 > > If I was going to design this, I think I would have san_path_err_thr= eshold > > and san_path_err_recovery_time like you do, but instead of having a > > san_path_err_threshold_window, I would have something like > > san_path_err_forget_rate.=A0 The idea is that every san_path_err_for= get_rate > > number of successful path checks you decrement path_failures by 1. T= his > > means that there is no window after which you reset.=A0 If the path = failures > > come in faster than the forget rate, you will eventually hit the err= or > > threshold. This also has the benefit of easily not counting time whe= n the > > path was down as time where the path wasn't having problems. But if = you > > don't like my idea, yours will work fine with some polish. 
> > =A0 > > -Ben > > =A0 > > =A0 > > >=A0=A0=A0 Below are the files that has been changed . > > >=A0=A0=A0 =A0 > > >=A0=A0=A0 libmultipath/config.c=A0=A0=A0=A0=A0 |=A0 3 +++ > > >=A0=A0=A0 libmultipath/config.h=A0=A0=A0=A0=A0 |=A0 9 +++++++++ > > >=A0=A0=A0 libmultipath/configure.c=A0=A0 |=A0 3 +++ > > >=A0=A0=A0 libmultipath/defaults.h=A0=A0=A0 |=A0 1 + > > >=A0=A0=A0 libmultipath/dict.c=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 |= 80 > > >=A0=A0=A0 > > ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++= ++++++++++++ > > >=A0=A0=A0 libmultipath/dict.h=A0=A0=A0=A0=A0=A0=A0 |=A0 1 + > > >=A0=A0=A0 libmultipath/propsel.c=A0=A0=A0=A0 | 44 > > >=A0=A0=A0 ++++++++++++++++++++++++++++++++++++++++++++ > > >=A0=A0=A0 libmultipath/propsel.h=A0=A0=A0=A0 |=A0 6 ++++++ > > >=A0=A0=A0 libmultipath/structs.h=A0=A0=A0=A0 | 12 +++++++++++- > > >=A0=A0=A0 libmultipath/structs_vec.c | 10 ++++++++++ > > >=A0=A0=A0 multipath/multipath.conf.5 | 58 > > >=A0=A0=A0 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++= + > > >=A0=A0=A0 multipathd/main.c=A0=A0=A0=A0=A0=A0=A0=A0=A0 | 61 > > >=A0=A0=A0 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++= ++-- > > >=A0=A0=A0 =A0 > > >=A0=A0=A0 We have added three new config parameters whose descripti= on is below. > > >=A0=A0=A0 1.san_path_err_threshold: > > >=A0=A0=A0 =A0=A0=A0=A0=A0=A0=A0 If set to a value greater than 0, m= ultipathd will watch paths > > and > > >=A0=A0=A0 check how many times a path has been failed due to errors= . If the > > number > > >=A0=A0=A0 of failures on a particular path is greater then the > > >=A0=A0=A0 san_path_err_threshold then the path will not=A0 reinstat= e=A0 till > > >=A0=A0=A0 san_path_err_recovery_time. These path failures should oc= cur within a > > >=A0=A0=A0 san_path_err_threshold_window time frame, if not we will = consider the > > path > > >=A0=A0=A0 is good enough to reinstate. 
> > >=A0=A0=A0 =A0 > > >=A0=A0=A0 2.san_path_err_threshold_window: > > >=A0=A0=A0 =A0=A0=A0=A0=A0=A0=A0 If set to a value greater than 0, m= ultipathd will check > > whether > > >=A0=A0=A0 the path failures has exceeded=A0 the san_path_err_thresh= old within > > this > > >=A0=A0=A0 time frame i.e san_path_err_threshold_window . If so we w= ill not > > reinstate > > >=A0=A0=A0 the path till=A0=A0=A0=A0=A0=A0=A0=A0=A0 san_path_err_rec= overy_time. > > >=A0=A0=A0 =A0 > > >=A0=A0=A0 3.san_path_err_recovery_time: > > >=A0=A0=A0 If set to a value greater than 0, multipathd will make su= re that when > > path > > >=A0=A0=A0 failures has exceeded the san_path_err_threshold within > > >=A0=A0=A0 san_path_err_threshold_window then the path=A0 will be pl= aced in failed > > >=A0=A0=A0 state for san_path_err_recovery_time duration. Once > > >=A0=A0=A0 san_path_err_recovery_time has timeout=A0 we will reinsta= te the failed > > path > > >=A0=A0=A0 . > > >=A0=A0=A0 =A0 > > >=A0=A0=A0 Regards, > > >=A0=A0=A0 Muneendra. > > >=A0=A0=A0 =A0 > > >=A0=A0=A0 -----Original Message----- > > >=A0=A0=A0 From: Muneendra Kumar M > > >=A0=A0=A0 Sent: Wednesday, January 04, 2017 6:56 PM > > >=A0=A0=A0 To: 'Benjamin Marzinski' <[4]bmarzins@redhat.com> > > >=A0=A0=A0 Cc: [5]dm-devel@redhat.com > > >=A0=A0=A0 Subject: RE: [dm-devel] deterministic io throughput in mu= ltipath > > >=A0=A0=A0 =A0 > > >=A0=A0=A0 Hi Ben, > > >=A0=A0=A0 Thanks for the information. > > >=A0=A0=A0 =A0 > > >=A0=A0=A0 Regards, > > >=A0=A0=A0 Muneendra. 
> > >
> > >    -----Original Message-----
> > >    From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
> > >    Sent: Tuesday, January 03, 2017 10:42 PM
> > >    To: Muneendra Kumar M <mmandala@Brocade.com>
> > >    Cc: dm-devel@redhat.com
> > >    Subject: Re: [dm-devel] deterministic io throughput in multipath
> > >
> > >    On Mon, Dec 26, 2016 at 09:42:48AM +0000, Muneendra Kumar M wrote:
> > >    > Hi Ben,
> > >    >
> > >    > Suppose there are two paths, sda and sdb, on a device dm-1 as below:
> > >    >
> > >    > #  multipath -ll
> > >    >        mpathd (3600110d001ee7f0102050001cc0b6751) dm-1 SANBlaze,VLUN MyLun
> > >    >        size=8.0M features='0' hwhandler='0' wp=rw
> > >    >        `-+- policy='round-robin 0' prio=50 status=active
> > >    >          |- 8:0:1:0  sda 8:48 active ready  running
> > >    >          `- 9:0:1:0  sdb 8:64 active ready  running
> > >    >
> > >    > On sda I am seeing a lot of errors, which cause the sda path to
> > >    fluctuate between the failed and active states.
> > >    >
> > >    > My requirement is this: if sda fails more than 5 times within an
> > >    hour, I want to keep sda in the failed state for a few hours (3 hrs),
> > >    > and the data should travel only through the sdb path.
> > >    > Will this be possible with the below parameters?
> > >
> > >    No. delay_watch_checks sets for how many path checks multipathd
> > >    watches a path that has recently come back from the failed state.
If the path fails again
> > >    within this time, the multipath device delays it. This means the
> > >    delay is always triggered by two failures within the time limit.
> > >    It's possible to adapt this to count the number of failures and act
> > >    after a certain number within a certain time frame, but it would
> > >    take a bit more work.
> > >
> > >    delay_wait_checks doesn't guarantee a delay of any set length.
> > >    Instead, it sets the number of consecutive successful path checks
> > >    that must occur before the path is usable again. You could set this
> > >    to cover 3 hours of path checks, but if a check failed during that
> > >    time, the 3 hours would start over again.
> > >
> > >    -Ben
> > >
> > >    > Can you just let me know what values I should use for
> > >    delay_watch_checks and delay_wait_checks?
> > >    >
> > >    > Regards,
> > >    > Muneendra.
> > >    >
> > >    > -----Original Message-----
> > >    > From: Muneendra Kumar M
> > >    > Sent: Thursday, December 22, 2016 11:10 AM
> > >    > To: 'Benjamin Marzinski' <bmarzins@redhat.com>
> > >    > Cc: dm-devel@redhat.com
> > >    > Subject: RE: [dm-devel] deterministic io throughput in multipath
> > >    >
> > >    > Hi Ben,
> > >    >
> > >    > Thanks for the reply.
> > >    > I will look into these parameters, do the internal testing, and
> > >    let you know the results.
> > >    >
> > >    > Regards,
> > >    > Muneendra.
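The counting scheme discussed above (act after N failures within a window, then hold the path down for a recovery period) can be sketched in C. This is an illustrative model only, not code from the attached patch; the names `path_stats`, `note_failure`, and `should_reinstate` are invented here:

```c
#include <assert.h>
#include <time.h>

/*
 * Sketch of the proposed reinstate decision.  Failures are forgotten
 * once they fall outside the threshold window; a path that crosses the
 * failure threshold inside the window is held down for recovery_time.
 */
struct path_stats {
	unsigned failures;     /* failures counted in the current window */
	time_t   window_start; /* when the current window began          */
	time_t   held_down_at; /* 0, or when the path was held down      */
};

/* Record one path failure observed at time `now`. */
static void note_failure(struct path_stats *ps, time_t now,
			 time_t threshold_window)
{
	if (ps->failures == 0 || now - ps->window_start > threshold_window) {
		/* stale window: forget old failures, start a fresh count */
		ps->failures = 0;
		ps->window_start = now;
	}
	ps->failures++;
}

/* May the path be reinstated at time `now`?  1 = yes, 0 = keep failed. */
static int should_reinstate(struct path_stats *ps, time_t now,
			    unsigned err_threshold,
			    time_t threshold_window,
			    time_t recovery_time)
{
	if (ps->held_down_at) {
		if (now - ps->held_down_at < recovery_time)
			return 0;      /* still serving the penalty      */
		ps->held_down_at = 0;  /* penalty over: path may return  */
		ps->failures = 0;
		return 1;
	}
	if (ps->failures > err_threshold &&
	    now - ps->window_start <= threshold_window) {
		ps->held_down_at = now; /* too flaky: hold it down       */
		return 0;
	}
	return 1;                       /* healthy enough to reinstate   */
}
```

With the example values from the thread (threshold 5, a one-hour window, three-hour recovery), six quick failures keep the path failed until the recovery time expires, unlike delay_wait_checks, which restarts on every intervening failure.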
> > >    >
> > >    > -----Original Message-----
> > >    > From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
> > >    > Sent: Wednesday, December 21, 2016 9:40 PM
> > >    > To: Muneendra Kumar M <mmandala@Brocade.com>
> > >    > Cc: dm-devel@redhat.com
> > >    > Subject: Re: [dm-devel] deterministic io throughput in multipath
> > >    >
> > >    > Have you looked into the delay_watch_checks and delay_wait_checks
> > >    configuration parameters? The idea behind them is to minimize the
> > >    use of paths that are intermittently failing.
> > >    >
> > >    > -Ben
> > >    >
> > >    > On Mon, Dec 19, 2016 at 11:50:36AM +0000, Muneendra Kumar M wrote:
> > >    > >    Customers using Linux hosts (mostly RHEL) with a SAN network
> > >    for block storage complain that the Linux multipath stack is not
> > >    resilient to non-deterministic storage network behaviors. This has
> > >    caused many customers to move away to non-Linux-based servers. The
> > >    intent of the patch below, and the prevailing issues, are described
> > >    below. With this design we see the Linux multipath stack becoming
> > >    resilient to such network issues. We hope that getting this patch
> > >    accepted will help further Linux server adoption on SAN networks.
> > >    > >
> > >    > >    I have already sent the design details to the community in a
> > >    different mail thread; the details are available at the link below.
> > >    > >
> > >    > >    https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html
> > >    > >
> > >    > >    Can you please go through the design and send your comments
> > >    to us?
> > >    > >
> > >    > >    Regards,
> > >    > >    Muneendra.
> > >    > >
> > >    > > --
> > >    > > dm-devel mailing list
> > >    > > dm-devel@redhat.com
> > >    > > https://www.redhat.com/mailman/listinfo/dm-devel
https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.redhat= .com_ > > 16. mailto:dm-devel@redhat.com > > 17. https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.redhat= .com_ > > 18. mailto:bmarzins@redhat.com > > 19. mailto:mmandala@brocade.com > > 20. mailto:dm-devel@redhat.com > > 21. mailto:bmarzins@redhat.com > > 22. mailto:dm-devel@redhat.com > > 23. mailto:bmarzins@redhat.com > > 24. mailto:mmandala@brocade.com > > 25. mailto:dm-devel@redhat.com > > 26. https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.redhat= .com_archives_dm-2Ddevel_2016-2DDecember_msg00122.html&d=3DDgIDAw&c=3DIL_Xq= QWOjubgfqINi2jTzg&r=3DE3ftc47B6BGtZ4fVaYvkuv19wKvC_Mc6nhXaA1sBIP0&m=3DvfwpV= p6e1KXtRA0ctwHYJ7cDmPsLi2C1L9pox7uexsY&s=3Dq5OI-lfefNC2CHKmyUkokgiyiPo_Uj7M= Ru52hG3MKzM&e > > 27. https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.redhat= .com_ > > 28. mailto:dm-devel@redhat.com > > 29.=20 > > https://urldefense.proofpoint.com/v2/url?u=3Dhttps-3A__www.redhat.com_ >=20 --_002_af56a18e992f4a8cb474de87a208e02aBRMWPEXMB12corpbrocadec_ Content-Type: application/octet-stream; name="san_path_error.patch" Content-Description: san_path_error.patch Content-Disposition: attachment; filename="san_path_error.patch"; size=23194; creation-date="Wed, 01 Feb 2017 05:58:31 GMT"; modification-date="Wed, 01 Feb 2017 06:03:04 GMT" Content-Transfer-Encoding: base64 ZGlmZiAtLWdpdCBhL2xpYm11bHRpcGF0aC9jb25maWcuYyBiL2xpYm11bHRpcGF0aC9jb25maWcu YwppbmRleCAxNWRkYmQ4Li5iZTM4NGFmIDEwMDY0NAotLS0gYS9saWJtdWx0aXBhdGgvY29uZmln LmMKKysrIGIvbGlibXVsdGlwYXRoL2NvbmZpZy5jCkBAIC0zNDgsNiArMzQ4LDkgQEAgbWVyZ2Vf aHdlIChzdHJ1Y3QgaHdlbnRyeSAqIGRzdCwgc3RydWN0IGh3ZW50cnkgKiBzcmMpCiAJbWVyZ2Vf bnVtKGRlbGF5X3dhaXRfY2hlY2tzKTsKIAltZXJnZV9udW0oc2tpcF9rcGFydHgpOwogCW1lcmdl X251bShtYXhfc2VjdG9yc19rYik7CisJbWVyZ2VfbnVtKHNhbl9wYXRoX2Vycl90aHJlc2hvbGQp OworCW1lcmdlX251bShzYW5fcGF0aF9lcnJfZm9yZ2V0X3JhdGUpOworCW1lcmdlX251bShzYW5f cGF0aF9lcnJfcmVjb3ZlcnlfdGltZSk7CiAKIAkvKgogCSAqIE1ha2Ugc3VyZSBmZWF0dXJlcyBp 
cyBjb25zaXN0ZW50IHdpdGgKZGlmZiAtLWdpdCBhL2xpYm11bHRpcGF0aC9jb25maWcuaCBiL2xp Ym11bHRpcGF0aC9jb25maWcuaAppbmRleCA5NjcwMDIwLi45ZTQ3ODk0IDEwMDY0NAotLS0gYS9s aWJtdWx0aXBhdGgvY29uZmlnLmgKKysrIGIvbGlibXVsdGlwYXRoL2NvbmZpZy5oCkBAIC02NSw2 ICs2NSw5IEBAIHN0cnVjdCBod2VudHJ5IHsKIAlpbnQgZGVmZXJyZWRfcmVtb3ZlOwogCWludCBk ZWxheV93YXRjaF9jaGVja3M7CiAJaW50IGRlbGF5X3dhaXRfY2hlY2tzOworCWludCBzYW5fcGF0 aF9lcnJfdGhyZXNob2xkOworCWludCBzYW5fcGF0aF9lcnJfZm9yZ2V0X3JhdGU7CisJaW50IHNh bl9wYXRoX2Vycl9yZWNvdmVyeV90aW1lOwogCWludCBza2lwX2twYXJ0eDsKIAlpbnQgbWF4X3Nl Y3RvcnNfa2I7CiAJY2hhciAqIGJsX3Byb2R1Y3Q7CkBAIC05Myw2ICs5Niw5IEBAIHN0cnVjdCBt cGVudHJ5IHsKIAlpbnQgZGVmZXJyZWRfcmVtb3ZlOwogCWludCBkZWxheV93YXRjaF9jaGVja3M7 CiAJaW50IGRlbGF5X3dhaXRfY2hlY2tzOworCWludCBzYW5fcGF0aF9lcnJfdGhyZXNob2xkOwor CWludCBzYW5fcGF0aF9lcnJfZm9yZ2V0X3JhdGU7CisJaW50IHNhbl9wYXRoX2Vycl9yZWNvdmVy eV90aW1lOwogCWludCBza2lwX2twYXJ0eDsKIAlpbnQgbWF4X3NlY3RvcnNfa2I7CiAJdWlkX3Qg dWlkOwpAQCAtMTM4LDYgKzE0NCw5IEBAIHN0cnVjdCBjb25maWcgewogCWludCBwcm9jZXNzZWRf bWFpbl9jb25maWc7CiAJaW50IGRlbGF5X3dhdGNoX2NoZWNrczsKIAlpbnQgZGVsYXlfd2FpdF9j aGVja3M7CisJaW50IHNhbl9wYXRoX2Vycl90aHJlc2hvbGQ7CisJaW50IHNhbl9wYXRoX2Vycl9m b3JnZXRfcmF0ZTsKKwlpbnQgc2FuX3BhdGhfZXJyX3JlY292ZXJ5X3RpbWU7CiAJaW50IHV4c29j a190aW1lb3V0OwogCWludCBzdHJpY3RfdGltaW5nOwogCWludCByZXRyaWdnZXJfdHJpZXM7CmRp ZmYgLS1naXQgYS9saWJtdWx0aXBhdGgvY29uZmlndXJlLmMgYi9saWJtdWx0aXBhdGgvY29uZmln dXJlLmMKaW5kZXggYTBmY2FkOS4uNWFkMzAwNyAxMDA2NDQKLS0tIGEvbGlibXVsdGlwYXRoL2Nv bmZpZ3VyZS5jCisrKyBiL2xpYm11bHRpcGF0aC9jb25maWd1cmUuYwpAQCAtMjk0LDYgKzI5NCw5 IEBAIGludCBzZXR1cF9tYXAoc3RydWN0IG11bHRpcGF0aCAqbXBwLCBjaGFyICpwYXJhbXMsIGlu dCBwYXJhbXNfc2l6ZSkKIAlzZWxlY3RfZGVmZXJyZWRfcmVtb3ZlKGNvbmYsIG1wcCk7CiAJc2Vs ZWN0X2RlbGF5X3dhdGNoX2NoZWNrcyhjb25mLCBtcHApOwogCXNlbGVjdF9kZWxheV93YWl0X2No ZWNrcyhjb25mLCBtcHApOworCXNlbGVjdF9zYW5fcGF0aF9lcnJfdGhyZXNob2xkKGNvbmYsIG1w cCk7CisJc2VsZWN0X3Nhbl9wYXRoX2Vycl9mb3JnZXRfcmF0ZShjb25mLCBtcHApOworCXNlbGVj 
dF9zYW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGltZShjb25mLCBtcHApOwogCXNlbGVjdF9za2lwX2tw YXJ0eChjb25mLCBtcHApOwogCXNlbGVjdF9tYXhfc2VjdG9yc19rYihjb25mLCBtcHApOwogCmRp ZmYgLS1naXQgYS9saWJtdWx0aXBhdGgvZGVmYXVsdHMuaCBiL2xpYm11bHRpcGF0aC9kZWZhdWx0 cy5oCmluZGV4IGI5YjBhMzcuLjNlZjE1NzkgMTAwNjQ0Ci0tLSBhL2xpYm11bHRpcGF0aC9kZWZh dWx0cy5oCisrKyBiL2xpYm11bHRpcGF0aC9kZWZhdWx0cy5oCkBAIC0yMyw3ICsyMyw4IEBACiAj ZGVmaW5lIERFRkFVTFRfUkVUQUlOX0hXSEFORExFUiBSRVRBSU5fSFdIQU5ETEVSX09OCiAjZGVm aW5lIERFRkFVTFRfREVURUNUX1BSSU8JREVURUNUX1BSSU9fT04KICNkZWZpbmUgREVGQVVMVF9E RUZFUlJFRF9SRU1PVkUJREVGRVJSRURfUkVNT1ZFX09GRgotI2RlZmluZSBERUZBVUxUX0RFTEFZ X0NIRUNLUwlERUxBWV9DSEVDS1NfT0ZGCisjZGVmaW5lIERFRkFVTFRfREVMQVlfQ0hFQ0tTCU5V X05PCisjZGVmaW5lIERFRkFVTFRfRVJSX0NIRUNLUwlOVV9OTwogI2RlZmluZSBERUZBVUxUX1VF VkVOVF9TVEFDS1NJWkUgMjU2CiAjZGVmaW5lIERFRkFVTFRfUkVUUklHR0VSX0RFTEFZCTEwCiAj ZGVmaW5lIERFRkFVTFRfUkVUUklHR0VSX1RSSUVTCTMKZGlmZiAtLWdpdCBhL2xpYm11bHRpcGF0 aC9kaWN0LmMgYi9saWJtdWx0aXBhdGgvZGljdC5jCmluZGV4IGRjMjE4NDYuLjQ3NTQ1NzIgMTAw NjQ0Ci0tLSBhL2xpYm11bHRpcGF0aC9kaWN0LmMKKysrIGIvbGlibXVsdGlwYXRoL2RpY3QuYwpA QCAtMTAyMyw3ICsxMDIzLDcgQEAgZGVjbGFyZV9tcF9oYW5kbGVyKHJlc2VydmF0aW9uX2tleSwg c2V0X3Jlc2VydmF0aW9uX2tleSkKIGRlY2xhcmVfbXBfc25wcmludChyZXNlcnZhdGlvbl9rZXks IHByaW50X3Jlc2VydmF0aW9uX2tleSkKIAogc3RhdGljIGludAotc2V0X2RlbGF5X2NoZWNrcyh2 ZWN0b3Igc3RydmVjLCB2b2lkICpwdHIpCitzZXRfb2ZmX2ludF91bmRlZih2ZWN0b3Igc3RydmVj LCB2b2lkICpwdHIpCiB7CiAJaW50ICppbnRfcHRyID0gKGludCAqKXB0cjsKIAljaGFyICogYnVm ZjsKQEAgLTEwMzMsNDcgKzEwMzMsNjkgQEAgc2V0X2RlbGF5X2NoZWNrcyh2ZWN0b3Igc3RydmVj LCB2b2lkICpwdHIpCiAJCXJldHVybiAxOwogCiAJaWYgKCFzdHJjbXAoYnVmZiwgIm5vIikgfHwg IXN0cmNtcChidWZmLCAiMCIpKQotCQkqaW50X3B0ciA9IERFTEFZX0NIRUNLU19PRkY7CisJCSpp bnRfcHRyID0gTlVfTk87CiAJZWxzZSBpZiAoKCppbnRfcHRyID0gYXRvaShidWZmKSkgPCAxKQot CQkqaW50X3B0ciA9IERFTEFZX0NIRUNLU19VTkRFRjsKKwkJKmludF9wdHIgPSBOVV9VTkRFRjsK IAogCUZSRUUoYnVmZik7CiAJcmV0dXJuIDA7CiB9CiAKIGludAotcHJpbnRfZGVsYXlfY2hlY2tz 
KGNoYXIgKiBidWZmLCBpbnQgbGVuLCB2b2lkICpwdHIpCitwcmludF9vZmZfaW50X3VuZGVmKGNo YXIgKiBidWZmLCBpbnQgbGVuLCB2b2lkICpwdHIpCiB7CiAJaW50ICppbnRfcHRyID0gKGludCAq KXB0cjsKIAogCXN3aXRjaCgqaW50X3B0cikgewotCWNhc2UgREVMQVlfQ0hFQ0tTX1VOREVGOgor CWNhc2UgTlVfVU5ERUY6CiAJCXJldHVybiAwOwotCWNhc2UgREVMQVlfQ0hFQ0tTX09GRjoKKwlj YXNlIE5VX05POgogCQlyZXR1cm4gc25wcmludGYoYnVmZiwgbGVuLCAiXCJvZmZcIiIpOwogCWRl ZmF1bHQ6CiAJCXJldHVybiBzbnByaW50ZihidWZmLCBsZW4sICIlaSIsICppbnRfcHRyKTsKIAl9 CiB9CiAKLWRlY2xhcmVfZGVmX2hhbmRsZXIoZGVsYXlfd2F0Y2hfY2hlY2tzLCBzZXRfZGVsYXlf Y2hlY2tzKQotZGVjbGFyZV9kZWZfc25wcmludChkZWxheV93YXRjaF9jaGVja3MsIHByaW50X2Rl bGF5X2NoZWNrcykKLWRlY2xhcmVfb3ZyX2hhbmRsZXIoZGVsYXlfd2F0Y2hfY2hlY2tzLCBzZXRf ZGVsYXlfY2hlY2tzKQotZGVjbGFyZV9vdnJfc25wcmludChkZWxheV93YXRjaF9jaGVja3MsIHBy aW50X2RlbGF5X2NoZWNrcykKLWRlY2xhcmVfaHdfaGFuZGxlcihkZWxheV93YXRjaF9jaGVja3Ms IHNldF9kZWxheV9jaGVja3MpCi1kZWNsYXJlX2h3X3NucHJpbnQoZGVsYXlfd2F0Y2hfY2hlY2tz LCBwcmludF9kZWxheV9jaGVja3MpCi1kZWNsYXJlX21wX2hhbmRsZXIoZGVsYXlfd2F0Y2hfY2hl Y2tzLCBzZXRfZGVsYXlfY2hlY2tzKQotZGVjbGFyZV9tcF9zbnByaW50KGRlbGF5X3dhdGNoX2No ZWNrcywgcHJpbnRfZGVsYXlfY2hlY2tzKQotCi1kZWNsYXJlX2RlZl9oYW5kbGVyKGRlbGF5X3dh aXRfY2hlY2tzLCBzZXRfZGVsYXlfY2hlY2tzKQotZGVjbGFyZV9kZWZfc25wcmludChkZWxheV93 YWl0X2NoZWNrcywgcHJpbnRfZGVsYXlfY2hlY2tzKQotZGVjbGFyZV9vdnJfaGFuZGxlcihkZWxh eV93YWl0X2NoZWNrcywgc2V0X2RlbGF5X2NoZWNrcykKLWRlY2xhcmVfb3ZyX3NucHJpbnQoZGVs YXlfd2FpdF9jaGVja3MsIHByaW50X2RlbGF5X2NoZWNrcykKLWRlY2xhcmVfaHdfaGFuZGxlcihk ZWxheV93YWl0X2NoZWNrcywgc2V0X2RlbGF5X2NoZWNrcykKLWRlY2xhcmVfaHdfc25wcmludChk ZWxheV93YWl0X2NoZWNrcywgcHJpbnRfZGVsYXlfY2hlY2tzKQotZGVjbGFyZV9tcF9oYW5kbGVy KGRlbGF5X3dhaXRfY2hlY2tzLCBzZXRfZGVsYXlfY2hlY2tzKQotZGVjbGFyZV9tcF9zbnByaW50 KGRlbGF5X3dhaXRfY2hlY2tzLCBwcmludF9kZWxheV9jaGVja3MpCi0KK2RlY2xhcmVfZGVmX2hh bmRsZXIoZGVsYXlfd2F0Y2hfY2hlY2tzLCBzZXRfb2ZmX2ludF91bmRlZikKK2RlY2xhcmVfZGVm X3NucHJpbnQoZGVsYXlfd2F0Y2hfY2hlY2tzLCBwcmludF9vZmZfaW50X3VuZGVmKQorZGVjbGFy 
ZV9vdnJfaGFuZGxlcihkZWxheV93YXRjaF9jaGVja3MsIHNldF9vZmZfaW50X3VuZGVmKQorZGVj bGFyZV9vdnJfc25wcmludChkZWxheV93YXRjaF9jaGVja3MsIHByaW50X29mZl9pbnRfdW5kZWYp CitkZWNsYXJlX2h3X2hhbmRsZXIoZGVsYXlfd2F0Y2hfY2hlY2tzLCBzZXRfb2ZmX2ludF91bmRl ZikKK2RlY2xhcmVfaHdfc25wcmludChkZWxheV93YXRjaF9jaGVja3MsIHByaW50X29mZl9pbnRf dW5kZWYpCitkZWNsYXJlX21wX2hhbmRsZXIoZGVsYXlfd2F0Y2hfY2hlY2tzLCBzZXRfb2ZmX2lu dF91bmRlZikKK2RlY2xhcmVfbXBfc25wcmludChkZWxheV93YXRjaF9jaGVja3MsIHByaW50X29m Zl9pbnRfdW5kZWYpCitkZWNsYXJlX2RlZl9oYW5kbGVyKGRlbGF5X3dhaXRfY2hlY2tzLCBzZXRf b2ZmX2ludF91bmRlZikKK2RlY2xhcmVfZGVmX3NucHJpbnQoZGVsYXlfd2FpdF9jaGVja3MsIHBy aW50X29mZl9pbnRfdW5kZWYpCitkZWNsYXJlX292cl9oYW5kbGVyKGRlbGF5X3dhaXRfY2hlY2tz LCBzZXRfb2ZmX2ludF91bmRlZikKK2RlY2xhcmVfb3ZyX3NucHJpbnQoZGVsYXlfd2FpdF9jaGVj a3MsIHByaW50X29mZl9pbnRfdW5kZWYpCitkZWNsYXJlX2h3X2hhbmRsZXIoZGVsYXlfd2FpdF9j aGVja3MsIHNldF9vZmZfaW50X3VuZGVmKQorZGVjbGFyZV9od19zbnByaW50KGRlbGF5X3dhaXRf Y2hlY2tzLCBwcmludF9vZmZfaW50X3VuZGVmKQorZGVjbGFyZV9tcF9oYW5kbGVyKGRlbGF5X3dh aXRfY2hlY2tzLCBzZXRfb2ZmX2ludF91bmRlZikKK2RlY2xhcmVfbXBfc25wcmludChkZWxheV93 YWl0X2NoZWNrcywgcHJpbnRfb2ZmX2ludF91bmRlZikKK2RlY2xhcmVfZGVmX2hhbmRsZXIoc2Fu X3BhdGhfZXJyX3RocmVzaG9sZCwgc2V0X29mZl9pbnRfdW5kZWYpCitkZWNsYXJlX2RlZl9zbnBy aW50KHNhbl9wYXRoX2Vycl90aHJlc2hvbGQsIHByaW50X29mZl9pbnRfdW5kZWYpCitkZWNsYXJl X292cl9oYW5kbGVyKHNhbl9wYXRoX2Vycl90aHJlc2hvbGQsIHNldF9vZmZfaW50X3VuZGVmKQor ZGVjbGFyZV9vdnJfc25wcmludChzYW5fcGF0aF9lcnJfdGhyZXNob2xkLCBwcmludF9vZmZfaW50 X3VuZGVmKQorZGVjbGFyZV9od19oYW5kbGVyKHNhbl9wYXRoX2Vycl90aHJlc2hvbGQsIHNldF9v ZmZfaW50X3VuZGVmKQorZGVjbGFyZV9od19zbnByaW50KHNhbl9wYXRoX2Vycl90aHJlc2hvbGQs IHByaW50X29mZl9pbnRfdW5kZWYpCitkZWNsYXJlX21wX2hhbmRsZXIoc2FuX3BhdGhfZXJyX3Ro cmVzaG9sZCwgc2V0X29mZl9pbnRfdW5kZWYpCitkZWNsYXJlX21wX3NucHJpbnQoc2FuX3BhdGhf ZXJyX3RocmVzaG9sZCwgcHJpbnRfb2ZmX2ludF91bmRlZikKK2RlY2xhcmVfZGVmX2hhbmRsZXIo c2FuX3BhdGhfZXJyX2ZvcmdldF9yYXRlLCBzZXRfb2ZmX2ludF91bmRlZikKK2RlY2xhcmVfZGVm 
X3NucHJpbnQoc2FuX3BhdGhfZXJyX2ZvcmdldF9yYXRlLCBwcmludF9vZmZfaW50X3VuZGVmKQor ZGVjbGFyZV9vdnJfaGFuZGxlcihzYW5fcGF0aF9lcnJfZm9yZ2V0X3JhdGUsIHNldF9vZmZfaW50 X3VuZGVmKQorZGVjbGFyZV9vdnJfc25wcmludChzYW5fcGF0aF9lcnJfZm9yZ2V0X3JhdGUsIHBy aW50X29mZl9pbnRfdW5kZWYpCitkZWNsYXJlX2h3X2hhbmRsZXIoc2FuX3BhdGhfZXJyX2Zvcmdl dF9yYXRlLCBzZXRfb2ZmX2ludF91bmRlZikKK2RlY2xhcmVfaHdfc25wcmludChzYW5fcGF0aF9l cnJfZm9yZ2V0X3JhdGUsIHByaW50X29mZl9pbnRfdW5kZWYpCitkZWNsYXJlX21wX2hhbmRsZXIo c2FuX3BhdGhfZXJyX2ZvcmdldF9yYXRlLCBzZXRfb2ZmX2ludF91bmRlZikKK2RlY2xhcmVfbXBf c25wcmludChzYW5fcGF0aF9lcnJfZm9yZ2V0X3JhdGUsIHByaW50X29mZl9pbnRfdW5kZWYpCitk ZWNsYXJlX2RlZl9oYW5kbGVyKHNhbl9wYXRoX2Vycl9yZWNvdmVyeV90aW1lLCBzZXRfb2ZmX2lu dF91bmRlZikKK2RlY2xhcmVfZGVmX3NucHJpbnQoc2FuX3BhdGhfZXJyX3JlY292ZXJ5X3RpbWUs IHByaW50X29mZl9pbnRfdW5kZWYpCitkZWNsYXJlX292cl9oYW5kbGVyKHNhbl9wYXRoX2Vycl9y ZWNvdmVyeV90aW1lLCBzZXRfb2ZmX2ludF91bmRlZikKK2RlY2xhcmVfb3ZyX3NucHJpbnQoc2Fu X3BhdGhfZXJyX3JlY292ZXJ5X3RpbWUsIHByaW50X29mZl9pbnRfdW5kZWYpCitkZWNsYXJlX2h3 X2hhbmRsZXIoc2FuX3BhdGhfZXJyX3JlY292ZXJ5X3RpbWUsIHNldF9vZmZfaW50X3VuZGVmKQor ZGVjbGFyZV9od19zbnByaW50KHNhbl9wYXRoX2Vycl9yZWNvdmVyeV90aW1lLCBwcmludF9vZmZf aW50X3VuZGVmKQorZGVjbGFyZV9tcF9oYW5kbGVyKHNhbl9wYXRoX2Vycl9yZWNvdmVyeV90aW1l LCBzZXRfb2ZmX2ludF91bmRlZikKK2RlY2xhcmVfbXBfc25wcmludChzYW5fcGF0aF9lcnJfcmVj b3ZlcnlfdGltZSwgcHJpbnRfb2ZmX2ludF91bmRlZikKIHN0YXRpYyBpbnQKIGRlZl91eHNvY2tf dGltZW91dF9oYW5kbGVyKHN0cnVjdCBjb25maWcgKmNvbmYsIHZlY3RvciBzdHJ2ZWMpCiB7CkBA IC0xNDA0LDYgKzE0MjYsMTAgQEAgaW5pdF9rZXl3b3Jkcyh2ZWN0b3Iga2V5d29yZHMpCiAJaW5z dGFsbF9rZXl3b3JkKCJjb25maWdfZGlyIiwgJmRlZl9jb25maWdfZGlyX2hhbmRsZXIsICZzbnBy aW50X2RlZl9jb25maWdfZGlyKTsKIAlpbnN0YWxsX2tleXdvcmQoImRlbGF5X3dhdGNoX2NoZWNr cyIsICZkZWZfZGVsYXlfd2F0Y2hfY2hlY2tzX2hhbmRsZXIsICZzbnByaW50X2RlZl9kZWxheV93 YXRjaF9jaGVja3MpOwogCWluc3RhbGxfa2V5d29yZCgiZGVsYXlfd2FpdF9jaGVja3MiLCAmZGVm X2RlbGF5X3dhaXRfY2hlY2tzX2hhbmRsZXIsICZzbnByaW50X2RlZl9kZWxheV93YWl0X2NoZWNr 
cyk7CisgICAgICAgIGluc3RhbGxfa2V5d29yZCgic2FuX3BhdGhfZXJyX3RocmVzaG9sZCIsICZk ZWZfc2FuX3BhdGhfZXJyX3RocmVzaG9sZF9oYW5kbGVyLCAmc25wcmludF9kZWZfc2FuX3BhdGhf ZXJyX3RocmVzaG9sZCk7CisgICAgICAgIGluc3RhbGxfa2V5d29yZCgic2FuX3BhdGhfZXJyX2Zv cmdldF9yYXRlIiwgJmRlZl9zYW5fcGF0aF9lcnJfZm9yZ2V0X3JhdGVfaGFuZGxlciwgJnNucHJp bnRfZGVmX3Nhbl9wYXRoX2Vycl9mb3JnZXRfcmF0ZSk7CisgICAgICAgIGluc3RhbGxfa2V5d29y ZCgic2FuX3BhdGhfZXJyX3JlY292ZXJ5X3RpbWUiLCAmZGVmX3Nhbl9wYXRoX2Vycl9yZWNvdmVy eV90aW1lX2hhbmRsZXIsICZzbnByaW50X2RlZl9zYW5fcGF0aF9lcnJfcmVjb3ZlcnlfdGltZSk7 CisKIAlpbnN0YWxsX2tleXdvcmQoImZpbmRfbXVsdGlwYXRocyIsICZkZWZfZmluZF9tdWx0aXBh dGhzX2hhbmRsZXIsICZzbnByaW50X2RlZl9maW5kX211bHRpcGF0aHMpOwogCWluc3RhbGxfa2V5 d29yZCgidXhzb2NrX3RpbWVvdXQiLCAmZGVmX3V4c29ja190aW1lb3V0X2hhbmRsZXIsICZzbnBy aW50X2RlZl91eHNvY2tfdGltZW91dCk7CiAJaW5zdGFsbF9rZXl3b3JkKCJyZXRyaWdnZXJfdHJp ZXMiLCAmZGVmX3JldHJpZ2dlcl90cmllc19oYW5kbGVyLCAmc25wcmludF9kZWZfcmV0cmlnZ2Vy X3RyaWVzKTsKQEAgLTE0ODYsNiArMTUxMiw5IEBAIGluaXRfa2V5d29yZHModmVjdG9yIGtleXdv cmRzKQogCWluc3RhbGxfa2V5d29yZCgiZGVmZXJyZWRfcmVtb3ZlIiwgJmh3X2RlZmVycmVkX3Jl bW92ZV9oYW5kbGVyLCAmc25wcmludF9od19kZWZlcnJlZF9yZW1vdmUpOwogCWluc3RhbGxfa2V5 d29yZCgiZGVsYXlfd2F0Y2hfY2hlY2tzIiwgJmh3X2RlbGF5X3dhdGNoX2NoZWNrc19oYW5kbGVy LCAmc25wcmludF9od19kZWxheV93YXRjaF9jaGVja3MpOwogCWluc3RhbGxfa2V5d29yZCgiZGVs YXlfd2FpdF9jaGVja3MiLCAmaHdfZGVsYXlfd2FpdF9jaGVja3NfaGFuZGxlciwgJnNucHJpbnRf aHdfZGVsYXlfd2FpdF9jaGVja3MpOworICAgICAgICBpbnN0YWxsX2tleXdvcmQoInNhbl9wYXRo X2Vycl90aHJlc2hvbGQiLCAmaHdfc2FuX3BhdGhfZXJyX3RocmVzaG9sZF9oYW5kbGVyLCAmc25w cmludF9od19zYW5fcGF0aF9lcnJfdGhyZXNob2xkKTsKKyAgICAgICAgaW5zdGFsbF9rZXl3b3Jk KCJzYW5fcGF0aF9lcnJfZm9yZ2V0X3JhdGUiLCAmaHdfc2FuX3BhdGhfZXJyX2ZvcmdldF9yYXRl X2hhbmRsZXIsICZzbnByaW50X2h3X3Nhbl9wYXRoX2Vycl9mb3JnZXRfcmF0ZSk7CisgICAgICAg IGluc3RhbGxfa2V5d29yZCgic2FuX3BhdGhfZXJyX3JlY292ZXJ5X3RpbWUiLCAmaHdfc2FuX3Bh dGhfZXJyX3JlY292ZXJ5X3RpbWVfaGFuZGxlciwgJnNucHJpbnRfaHdfc2FuX3BhdGhfZXJyX3Jl 
Y292ZXJ5X3RpbWUpOwogCWluc3RhbGxfa2V5d29yZCgic2tpcF9rcGFydHgiLCAmaHdfc2tpcF9r cGFydHhfaGFuZGxlciwgJnNucHJpbnRfaHdfc2tpcF9rcGFydHgpOwogCWluc3RhbGxfa2V5d29y ZCgibWF4X3NlY3RvcnNfa2IiLCAmaHdfbWF4X3NlY3RvcnNfa2JfaGFuZGxlciwgJnNucHJpbnRf aHdfbWF4X3NlY3RvcnNfa2IpOwogCWluc3RhbGxfc3VibGV2ZWxfZW5kKCk7CkBAIC0xNTE1LDYg KzE1NDQsMTAgQEAgaW5pdF9rZXl3b3Jkcyh2ZWN0b3Iga2V5d29yZHMpCiAJaW5zdGFsbF9rZXl3 b3JkKCJkZWZlcnJlZF9yZW1vdmUiLCAmb3ZyX2RlZmVycmVkX3JlbW92ZV9oYW5kbGVyLCAmc25w cmludF9vdnJfZGVmZXJyZWRfcmVtb3ZlKTsKIAlpbnN0YWxsX2tleXdvcmQoImRlbGF5X3dhdGNo X2NoZWNrcyIsICZvdnJfZGVsYXlfd2F0Y2hfY2hlY2tzX2hhbmRsZXIsICZzbnByaW50X292cl9k ZWxheV93YXRjaF9jaGVja3MpOwogCWluc3RhbGxfa2V5d29yZCgiZGVsYXlfd2FpdF9jaGVja3Mi LCAmb3ZyX2RlbGF5X3dhaXRfY2hlY2tzX2hhbmRsZXIsICZzbnByaW50X292cl9kZWxheV93YWl0 X2NoZWNrcyk7CisgICAgICAgIGluc3RhbGxfa2V5d29yZCgic2FuX3BhdGhfZXJyX3RocmVzaG9s ZCIsICZvdnJfc2FuX3BhdGhfZXJyX3RocmVzaG9sZF9oYW5kbGVyLCAmc25wcmludF9vdnJfc2Fu X3BhdGhfZXJyX3RocmVzaG9sZCk7CisgICAgICAgIGluc3RhbGxfa2V5d29yZCgic2FuX3BhdGhf ZXJyX2ZvcmdldF9yYXRlIiwgJm92cl9zYW5fcGF0aF9lcnJfZm9yZ2V0X3JhdGVfaGFuZGxlciwg JnNucHJpbnRfb3ZyX3Nhbl9wYXRoX2Vycl9mb3JnZXRfcmF0ZSk7CisgICAgICAgIGluc3RhbGxf a2V5d29yZCgic2FuX3BhdGhfZXJyX3JlY292ZXJ5X3RpbWUiLCAmb3ZyX3Nhbl9wYXRoX2Vycl9y ZWNvdmVyeV90aW1lX2hhbmRsZXIsICZzbnByaW50X292cl9zYW5fcGF0aF9lcnJfcmVjb3Zlcnlf dGltZSk7CisKIAlpbnN0YWxsX2tleXdvcmQoInNraXBfa3BhcnR4IiwgJm92cl9za2lwX2twYXJ0 eF9oYW5kbGVyLCAmc25wcmludF9vdnJfc2tpcF9rcGFydHgpOwogCWluc3RhbGxfa2V5d29yZCgi bWF4X3NlY3RvcnNfa2IiLCAmb3ZyX21heF9zZWN0b3JzX2tiX2hhbmRsZXIsICZzbnByaW50X292 cl9tYXhfc2VjdG9yc19rYik7CiAKQEAgLTE1NDMsNiArMTU3Niw5IEBAIGluaXRfa2V5d29yZHMo dmVjdG9yIGtleXdvcmRzKQogCWluc3RhbGxfa2V5d29yZCgiZGVmZXJyZWRfcmVtb3ZlIiwgJm1w X2RlZmVycmVkX3JlbW92ZV9oYW5kbGVyLCAmc25wcmludF9tcF9kZWZlcnJlZF9yZW1vdmUpOwog CWluc3RhbGxfa2V5d29yZCgiZGVsYXlfd2F0Y2hfY2hlY2tzIiwgJm1wX2RlbGF5X3dhdGNoX2No ZWNrc19oYW5kbGVyLCAmc25wcmludF9tcF9kZWxheV93YXRjaF9jaGVja3MpOwogCWluc3RhbGxf 
a2V5d29yZCgiZGVsYXlfd2FpdF9jaGVja3MiLCAmbXBfZGVsYXlfd2FpdF9jaGVja3NfaGFuZGxl ciwgJnNucHJpbnRfbXBfZGVsYXlfd2FpdF9jaGVja3MpOworCWluc3RhbGxfa2V5d29yZCgic2Fu X3BhdGhfZXJyX3RocmVzaG9sZCIsICZtcF9zYW5fcGF0aF9lcnJfdGhyZXNob2xkX2hhbmRsZXIs ICZzbnByaW50X21wX3Nhbl9wYXRoX2Vycl90aHJlc2hvbGQpOworCWluc3RhbGxfa2V5d29yZCgi c2FuX3BhdGhfZXJyX2ZvcmdldF9yYXRlIiwgJm1wX3Nhbl9wYXRoX2Vycl9mb3JnZXRfcmF0ZV9o YW5kbGVyLCAmc25wcmludF9tcF9zYW5fcGF0aF9lcnJfZm9yZ2V0X3JhdGUpOworCWluc3RhbGxf a2V5d29yZCgic2FuX3BhdGhfZXJyX3JlY292ZXJ5X3RpbWUiLCAmbXBfc2FuX3BhdGhfZXJyX3Jl Y292ZXJ5X3RpbWVfaGFuZGxlciwgJnNucHJpbnRfbXBfc2FuX3BhdGhfZXJyX3JlY292ZXJ5X3Rp bWUpOwogCWluc3RhbGxfa2V5d29yZCgic2tpcF9rcGFydHgiLCAmbXBfc2tpcF9rcGFydHhfaGFu ZGxlciwgJnNucHJpbnRfbXBfc2tpcF9rcGFydHgpOwogCWluc3RhbGxfa2V5d29yZCgibWF4X3Nl Y3RvcnNfa2IiLCAmbXBfbWF4X3NlY3RvcnNfa2JfaGFuZGxlciwgJnNucHJpbnRfbXBfbWF4X3Nl Y3RvcnNfa2IpOwogCWluc3RhbGxfc3VibGV2ZWxfZW5kKCk7CmRpZmYgLS1naXQgYS9saWJtdWx0 aXBhdGgvZGljdC5oIGIvbGlibXVsdGlwYXRoL2RpY3QuaAppbmRleCA0Y2QwM2M1Li4yZDYwOTdk IDEwMDY0NAotLS0gYS9saWJtdWx0aXBhdGgvZGljdC5oCisrKyBiL2xpYm11bHRpcGF0aC9kaWN0 LmgKQEAgLTE0LDYgKzE0LDUgQEAgaW50IHByaW50X25vX3BhdGhfcmV0cnkoY2hhciAqIGJ1ZmYs IGludCBsZW4sIHZvaWQgKnB0cik7CiBpbnQgcHJpbnRfZmFzdF9pb19mYWlsKGNoYXIgKiBidWZm LCBpbnQgbGVuLCB2b2lkICpwdHIpOwogaW50IHByaW50X2Rldl9sb3NzKGNoYXIgKiBidWZmLCBp bnQgbGVuLCB2b2lkICpwdHIpOwogaW50IHByaW50X3Jlc2VydmF0aW9uX2tleShjaGFyICogYnVm ZiwgaW50IGxlbiwgdm9pZCAqIHB0cik7Ci1pbnQgcHJpbnRfZGVsYXlfY2hlY2tzKGNoYXIgKiBi dWZmLCBpbnQgbGVuLCB2b2lkICpwdHIpOwotCitpbnQgcHJpbnRfb2ZmX2ludF91bmRlZihjaGFy ICogYnVmZiwgaW50IGxlbiwgdm9pZCAqcHRyKTsKICNlbmRpZiAvKiBfRElDVF9IICovCmRpZmYg LS1naXQgYS9saWJtdWx0aXBhdGgvcHJvcHNlbC5jIGIvbGlibXVsdGlwYXRoL3Byb3BzZWwuYwpp bmRleCBjMGJjNjE2Li5lNGFmZWY3IDEwMDY0NAotLS0gYS9saWJtdWx0aXBhdGgvcHJvcHNlbC5j CisrKyBiL2xpYm11bHRpcGF0aC9wcm9wc2VsLmMKQEAgLTYyMyw3ICs2MjMsNyBAQCBpbnQgc2Vs ZWN0X2RlbGF5X3dhdGNoX2NoZWNrcyhzdHJ1Y3QgY29uZmlnICpjb25mLCBzdHJ1Y3QgbXVsdGlw 
YXRoICptcCkKIAltcF9zZXRfY29uZihkZWxheV93YXRjaF9jaGVja3MpOwogCW1wX3NldF9kZWZh dWx0KGRlbGF5X3dhdGNoX2NoZWNrcywgREVGQVVMVF9ERUxBWV9DSEVDS1MpOwogb3V0OgotCXBy aW50X2RlbGF5X2NoZWNrcyhidWZmLCAxMiwgJm1wLT5kZWxheV93YXRjaF9jaGVja3MpOworCXBy aW50X29mZl9pbnRfdW5kZWYoYnVmZiwgMTIsICZtcC0+ZGVsYXlfd2F0Y2hfY2hlY2tzKTsKIAlj b25kbG9nKDMsICIlczogZGVsYXlfd2F0Y2hfY2hlY2tzID0gJXMgJXMiLCBtcC0+YWxpYXMsIGJ1 ZmYsIG9yaWdpbik7CiAJcmV0dXJuIDA7CiB9CkBAIC02MzgsMTIgKzYzOCw1NiBAQCBpbnQgc2Vs ZWN0X2RlbGF5X3dhaXRfY2hlY2tzKHN0cnVjdCBjb25maWcgKmNvbmYsIHN0cnVjdCBtdWx0aXBh dGggKm1wKQogCW1wX3NldF9jb25mKGRlbGF5X3dhaXRfY2hlY2tzKTsKIAltcF9zZXRfZGVmYXVs dChkZWxheV93YWl0X2NoZWNrcywgREVGQVVMVF9ERUxBWV9DSEVDS1MpOwogb3V0OgotCXByaW50 X2RlbGF5X2NoZWNrcyhidWZmLCAxMiwgJm1wLT5kZWxheV93YWl0X2NoZWNrcyk7CisJcHJpbnRf b2ZmX2ludF91bmRlZihidWZmLCAxMiwgJm1wLT5kZWxheV93YWl0X2NoZWNrcyk7CiAJY29uZGxv ZygzLCAiJXM6IGRlbGF5X3dhaXRfY2hlY2tzID0gJXMgJXMiLCBtcC0+YWxpYXMsIGJ1ZmYsIG9y aWdpbik7CiAJcmV0dXJuIDA7CiAKIH0KK2ludCBzZWxlY3Rfc2FuX3BhdGhfZXJyX3RocmVzaG9s ZChzdHJ1Y3QgY29uZmlnICpjb25mLCBzdHJ1Y3QgbXVsdGlwYXRoICptcCkKK3sKKyAgICAgICAg Y2hhciAqb3JpZ2luLCBidWZmWzEyXTsKKworICAgICAgICBtcF9zZXRfbXBlKHNhbl9wYXRoX2Vy cl90aHJlc2hvbGQpOworICAgICAgICBtcF9zZXRfb3ZyKHNhbl9wYXRoX2Vycl90aHJlc2hvbGQp OworICAgICAgICBtcF9zZXRfaHdlKHNhbl9wYXRoX2Vycl90aHJlc2hvbGQpOworICAgICAgICBt cF9zZXRfY29uZihzYW5fcGF0aF9lcnJfdGhyZXNob2xkKTsKKyAgICAgICAgbXBfc2V0X2RlZmF1 bHQoc2FuX3BhdGhfZXJyX3RocmVzaG9sZCwgREVGQVVMVF9FUlJfQ0hFQ0tTKTsKK291dDoKKyAg ICAgICAgcHJpbnRfb2ZmX2ludF91bmRlZihidWZmLCAxMiwgJm1wLT5zYW5fcGF0aF9lcnJfdGhy ZXNob2xkKTsKKyAgICAgICAgY29uZGxvZygzLCAiJXM6IHNhbl9wYXRoX2Vycl90aHJlc2hvbGQg PSAlcyAlcyIsIG1wLT5hbGlhcywgYnVmZiwgb3JpZ2luKTsKKyAgICAgICAgcmV0dXJuIDA7Cit9 CisKK2ludCBzZWxlY3Rfc2FuX3BhdGhfZXJyX2ZvcmdldF9yYXRlKHN0cnVjdCBjb25maWcgKmNv bmYsIHN0cnVjdCBtdWx0aXBhdGggKm1wKQoreworICAgICAgICBjaGFyICpvcmlnaW4sIGJ1ZmZb MTJdOworCisgICAgICAgIG1wX3NldF9tcGUoc2FuX3BhdGhfZXJyX2ZvcmdldF9yYXRlKTsKKyAg 
[base64-encoded MIME part trimmed: the remainder of the original patch, adding the san_path_err_threshold, san_path_err_forget_rate and san_path_err_recovery_time options to libmultipath/propsel.c, libmultipath/propsel.h, libmultipath/structs.h, multipath/multipath.conf.5 and multipathd/main.c]
From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Benjamin Marzinski"
Subject: Re: deterministic io throughput in multipath
Date: Wed, 1 Feb 2017 19:50:01 -0600
Message-ID: <1486000201-3960-1-git-send-email-bmarzins@redhat.com>
To: Muneendra Kumar M
Cc: device-mapper development
List-Id: dm-devel.ids

This is certainly moving in the right direction. There are a couple of
things I would change.

check_path_reinstate_state() will automatically disable the path if there
are configuration problems. If things aren't configured correctly, or the
code can't get the current time, it seems like it should allow the path to
get reinstated, to avoid keeping a perfectly good path down indefinitely.
Also, if you look at the delay_*_checks code, it automatically reinstates
a problematic path if there are no other paths to use. This seems like a
good idea as well.

Also, your code increments path_failures every time the checker fails.
This means that if a device is down for a while, when it comes back up,
it will get delayed. I'm not sure if this is intentional, or if you were
trying to track the number of times the path was restored and then failed
again, instead of the total time a path was failed for.

Perhaps it would be easier to show the kind of changes I would make with
a patch. What do you think about this? I haven't done much testing on it
at all, but these are the changes I would make.
Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
---
 libmultipath/config.c |   3 +
 libmultipath/dict.c   |   2 +-
 multipathd/main.c     | 149 +++++++++++++++++++++++---------------------------
 3 files changed, 72 insertions(+), 82 deletions(-)

diff --git a/libmultipath/config.c b/libmultipath/config.c
index be384af..5837dc6 100644
--- a/libmultipath/config.c
+++ b/libmultipath/config.c
@@ -624,6 +624,9 @@ load_config (char * file)
 	conf->disable_changed_wwids = DEFAULT_DISABLE_CHANGED_WWIDS;
 	conf->remove_retries = 0;
 	conf->max_sectors_kb = DEFAULT_MAX_SECTORS_KB;
+	conf->san_path_err_threshold = DEFAULT_ERR_CHECKS;
+	conf->san_path_err_forget_rate = DEFAULT_ERR_CHECKS;
+	conf->san_path_err_recovery_time = DEFAULT_ERR_CHECKS;
 
 	/*
 	 * preload default hwtable
diff --git a/libmultipath/dict.c b/libmultipath/dict.c
index 4754572..ae94c88 100644
--- a/libmultipath/dict.c
+++ b/libmultipath/dict.c
@@ -1050,7 +1050,7 @@ print_off_int_undef(char * buff, int len, void *ptr)
 	case NU_UNDEF:
 		return 0;
 	case NU_NO:
-		return snprintf(buff, len, "\"off\"");
+		return snprintf(buff, len, "\"no\"");
 	default:
 		return snprintf(buff, len, "%i", *int_ptr);
 	}
diff --git a/multipathd/main.c b/multipathd/main.c
index d6d68a4..305e236 100644
--- a/multipathd/main.c
+++ b/multipathd/main.c
@@ -1488,69 +1488,70 @@ void repair_path(struct path * pp)
 }
 
 static int check_path_reinstate_state(struct path * pp) {
-	struct timespec start_time;
-	int disable_reinstate = 1;
-
-	if (!((pp->mpp->san_path_err_threshold > 0) &&
-	      (pp->mpp->san_path_err_forget_rate > 0) &&
-	      (pp->mpp->san_path_err_recovery_time >0))) {
-		return disable_reinstate;
-	}
-
-	if (clock_gettime(CLOCK_MONOTONIC, &start_time) != 0) {
-		return disable_reinstate;
+	struct timespec curr_time;
+
+	if (pp->disable_reinstate) {
+		/* If we don't know how much time has passed, automatically
+		 * reinstate the path, just to be safe. Also, if there are
+		 * no other usable paths, reinstate the path */
+		if (clock_gettime(CLOCK_MONOTONIC, &curr_time) != 0 ||
+		    pp->mpp->nr_active == 0) {
+			condlog(2, "%s : reinstating path early", pp->dev);
+			goto reinstate_path;
+		}
+		if ((curr_time.tv_sec - pp->dis_reinstate_time) > pp->mpp->san_path_err_recovery_time) {
+			condlog(2, "%s : reinstate the path after err recovery time", pp->dev);
+			goto reinstate_path;
+		}
+		return 1;
 	}
-	if ((start_time.tv_sec - pp->dis_reinstate_time ) > pp->mpp->san_path_err_recovery_time) {
-		disable_reinstate = 0;
-		pp->path_failures = 0;
-		pp->disable_reinstate = 0;
-		pp->san_path_err_forget_rate = pp->mpp->san_path_err_forget_rate;
-		condlog(3,"\npath %s :reinstate the path after err recovery time\n",pp->dev);
+	/* forget errors on a working path */
+	if ((pp->state == PATH_UP || pp->state == PATH_GHOST) &&
+	    pp->path_failures > 0) {
+		if (pp->san_path_err_forget_rate > 0)
+			pp->san_path_err_forget_rate--;
+		else {
+			/* for every san_path_err_forget_rate number of
+			 * successful path checks decrement path_failures by 1
+			 */
+			pp->path_failures--;
+			pp->san_path_err_forget_rate = pp->mpp->san_path_err_forget_rate;
+		}
+		return 0;
 	}
-	return disable_reinstate;
-}
-static int check_path_validity_err (struct path * pp) {
-	struct timespec start_time;
-	int disable_reinstate = 0;
+	/* If the path isn't recovering from a failed state, do nothing */
+	if (pp->state != PATH_DOWN && pp->state != PATH_SHAKY &&
+	    pp->state != PATH_TIMEOUT)
+		return 0;
 
-	if (!((pp->mpp->san_path_err_threshold > 0) &&
-	      (pp->mpp->san_path_err_forget_rate > 0) &&
-	      (pp->mpp->san_path_err_recovery_time >0))) {
-		return disable_reinstate;
-	}
+	if (pp->path_failures == 0)
+		pp->san_path_err_forget_rate = pp->mpp->san_path_err_forget_rate;
+	pp->path_failures++;
 
-	if (clock_gettime(CLOCK_MONOTONIC, &start_time) != 0) {
-		return disable_reinstate;
-	}
-	if (!pp->disable_reinstate) {
-		if (pp->path_failures) {
-			/*if the error threshold has hit hit within the san_path_err_forget_rate
-			 *cycles donot reinstante the path till the san_path_err_recovery_time
-			 *place the path in failed state till san_path_err_recovery_time so that the
-			 *cutomer can rectify the issue within this time .Once the completion of
-			 *san_path_err_recovery_time it should automatically reinstantate the path
-			 */
-			if ((pp->path_failures > pp->mpp->san_path_err_threshold) &&
-			    (pp->san_path_err_forget_rate > 0)) {
-				printf("\n%s:%d: %s hit error threshold \n",__func__,__LINE__,pp->dev);
-				pp->dis_reinstate_time = start_time.tv_sec;
-				pp->disable_reinstate = 1;
-				disable_reinstate = 1;
-			} else if ((pp->san_path_err_forget_rate > 0)) {
-				pp->san_path_err_forget_rate--;
-			} else {
-				/*for every san_path_err_forget_rate number
-				 *of successful path checks decrement path_failures by 1
-				 */
-				pp->path_failures--;
-				pp->san_path_err_forget_rate = pp->mpp->san_path_err_forget_rate;
-			}
-		}
+	/* if we don't know the current time, we don't know how long to
+	 * delay the path, so there's no point in checking if we should */
+	if (clock_gettime(CLOCK_MONOTONIC, &curr_time) != 0)
+		return 0;
+	/* when path failures has exceeded the san_path_err_threshold
+	 * place the path in delayed state till san_path_err_recovery_time
+	 * so that the customer can rectify the issue within this time. After
+	 * the completion of san_path_err_recovery_time it should
+	 * automatically reinstate the path */
+	if (pp->path_failures > pp->mpp->san_path_err_threshold) {
+		condlog(2, "%s : hit error threshold. Delaying path reinstatement", pp->dev);
+		pp->dis_reinstate_time = curr_time.tv_sec;
+		pp->disable_reinstate = 1;
+		return 1;
 	}
-	return disable_reinstate;
+	return 0;
+reinstate_path:
+	pp->path_failures = 0;
+	pp->disable_reinstate = 0;
+	return 0;
 }
+
 /*
  * Returns '1' if the path has been checked, '-1' if it was blacklisted
  * and '0' otherwise
@@ -1566,7 +1567,7 @@ check_path (struct vectors * vecs, struct path * pp, int ticks)
 	int oldchkrstate = pp->chkrstate;
 	int retrigger_tries, checkint;
 	struct config *conf;
-	int ret;
+	int ret;
 
 	if ((pp->initialized == INIT_OK ||
 	     pp->initialized == INIT_REQUESTED_UDEV) && !pp->mpp)
@@ -1664,16 +1665,15 @@ check_path (struct vectors * vecs, struct path * pp, int ticks)
 	if (!pp->mpp)
 		return 0;
 
+	/* We only need to check if the path should be delayed when
+	 * the path is actually usable and san_path_err is configured */
 	if ((newstate == PATH_UP || newstate == PATH_GHOST) &&
-	    pp->disable_reinstate) {
-		/*
-		 * check if the path is in failed state for more than san_path_err_recovery_time
-		 * if not place the path in delayed state
-		 */
-		if (check_path_reinstate_state(pp)) {
-			pp->state = PATH_DELAYED;
-			return 1;
-		}
+	    pp->mpp->san_path_err_threshold > 0 &&
+	    pp->mpp->san_path_err_forget_rate > 0 &&
+	    pp->mpp->san_path_err_recovery_time > 0 &&
+	    check_path_reinstate_state(pp)) {
+		pp->state = PATH_DELAYED;
+		return 1;
 	}
 
 	if ((newstate == PATH_UP || newstate == PATH_GHOST) &&
@@ -1685,31 +1685,18 @@ check_path (struct vectors * vecs, struct path * pp, int ticks)
 		} else
 			pp->wait_checks = 0;
 	}
-	if ((newstate == PATH_DOWN || newstate == PATH_GHOST ||
-	     pp->state == PATH_DOWN)) {
-		/*assigned  the path_err_forget_rate when we see the first failure on the path*/
-		if(pp->path_failures == 0){
-			pp->san_path_err_forget_rate = pp->mpp->san_path_err_forget_rate;
-		}
-		pp->path_failures++;
-	}
+
 	/*
 	 * don't reinstate failed path, if its in stand-by
 	 * and if target supports only implicit tpgs mode.
 	 * this will prevent unnecessary i/o by dm on stand-by
 	 * paths if there are no other active paths in map.
-	 *
-	 * when path failures has exceeded the san_path_err_threshold
-	 * within san_path_err_forget_rate then we don't reinstate
-	 * failed path for san_path_err_recovery_time
 	 */
-	disable_reinstate = ((newstate == PATH_GHOST &&
+	disable_reinstate = (newstate == PATH_GHOST &&
 			    pp->mpp->nr_active == 0 &&
-			    pp->tpgs == TPGS_IMPLICIT) ? 1 :
-			    check_path_validity_err(pp));
+			    pp->tpgs == TPGS_IMPLICIT) ? 1 : 0;
 
 	pp->chkrstate = newstate;
-
 	if (newstate != pp->state) {
 		int oldstate = pp->state;
 		pp->state = newstate;
-- 
1.8.3.1

From mboxrd@z Thu Jan  1 00:00:00 1970
From: Muneendra Kumar M
Subject: Re: deterministic io throughput in multipath
Date: Thu, 2 Feb 2017 11:48:39 +0000
Message-ID: <3ab5d07903b44e0a913b5637862c4c96@BRMWP-EXMB12.corp.brocade.com>
In-Reply-To: <1486000201-3960-1-git-send-email-bmarzins@redhat.com>
To: Benjamin Marzinski
Cc: device-mapper development
List-Id: dm-devel.ids

Hi Ben,

The changes you suggested look good, thanks. I have taken them and made a
few further changes to get the functionality working; I have tested this
on our setup, where it works fine.

We do need to increment path_failures every time the checker fails: if a
device is down for a while, then when it comes back up it will be delayed
only if the path failures exceed the error threshold.
Whether the checker fails or the kernel identifies the failure, we need to
capture it, since it tells us the state of the path and the target. The
code already takes care of this.

Could you please review the attached patch and give us your comments?
These are the files that have changed:

 libmultipath/config.c      |  6 ++++++
 libmultipath/config.h      |  9 +++++++++
 libmultipath/configure.c   |  3 +++
 libmultipath/defaults.h    |  3 ++-
 libmultipath/dict.c        | 86 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------
 libmultipath/dict.h        |  3 +--
 libmultipath/propsel.c     | 48 ++++++++++++++++++++++++++++++++++++++++++++--
 libmultipath/propsel.h     |  3 +++
 libmultipath/structs.h     | 14 ++++++++++----
 multipath/multipath.conf.5 | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 multipathd/main.c          | 83 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 11 files changed, 281 insertions(+), 34 deletions(-)

Regards,
Muneendra.

-----Original Message-----
From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
Sent: Thursday, February 02, 2017 7:20 AM
To: Muneendra Kumar M
Cc: device-mapper development
Subject: RE: [dm-devel] deterministic io throughput in multipath

This is certainly moving in the right direction. There are a couple of
things I would change.

check_path_reinstate_state() will automatically disable the path if there
are configuration problems. If things aren't configured correctly, or the
code can't get the current time, it seems like it should allow the path to
get reinstated, to avoid keeping a perfectly good path down indefinitely.
Also, if you look at the delay_*_checks code, it automatically reinstates
a problematic path if there are no other paths to use. This seems like a
good idea as well.

Also, your code increments path_failures every time the checker fails.
This means that if a device is down for a while, when it comes back up,
it will get delayed.
[...]

--_002_3ab5d07903b44e0a913b5637862c4c96BRMWPEXMB12corpbrocadec_
Content-Type: application/octet-stream; name="san_path_error.patch"
Content-Description: san_path_error.patch
Content-Disposition: attachment; filename="san_path_error.patch";
	size=22168; creation-date="Wed, 01 Feb 2017 05:58:31 GMT";
	modification-date="Thu, 02 Feb 2017 10:14:56 GMT"
Content-Transfer-Encoding: base64

[base64-encoded attachment san_path_error.patch trimmed]
--_002_3ab5d07903b44e0a913b5637862c4c96BRMWPEXMB12corpbrocadec_ Content-Type: text/plain; charset="us-ascii" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit Content-Disposition: inline --_002_3ab5d07903b44e0a913b5637862c4c96BRMWPEXMB12corpbrocadec_-- From mboxrd@z Thu Jan 1 00:00:00 1970 From: "Benjamin Marzinski" Subject: Re: deterministic io throughput in multipath Date: Thu, 2 Feb 2017 11:39:22 -0600 Message-ID: <20170202173922.GE22981@octiron.msp.redhat.com> References: <1486000201-3960-1-git-send-email-bmarzins@redhat.com> <3ab5d07903b44e0a913b5637862c4c96@BRMWP-EXMB12.corp.brocade.com> Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Return-path: Content-Disposition: inline In-Reply-To: <3ab5d07903b44e0a913b5637862c4c96@BRMWP-EXMB12.corp.brocade.com> List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: dm-devel-bounces@redhat.com Errors-To: dm-devel-bounces@redhat.com To: Muneendra Kumar M Cc: device-mapper development List-Id: dm-devel.ids This looks fine. Thanks for all your work on this. ACK -Ben On Thu, Feb 02, 2017 at 11:48:39AM +0000, Muneendra Kumar M wrote: > Hi Ben, > The below changes suggested by you are good. Thanks for them. > I have taken your changes and made a few changes to make the functionality work. > I have tested the same on the setup, which works fine. > > We need to increment path_failures every time the checker fails. > If a device is down for a while, when it comes back up, it will get delayed only if the number of path failures exceeds the error threshold. > Whether the checker fails or the kernel identifies the failures, we need to capture them, as they indicate the state of the path and target. > The below code has already taken care of this. > > Could you please review the attached patch and provide us your valuable comments.
> > Below are the files that have been changed. > > libmultipath/config.c | 6 ++++++ > libmultipath/config.h | 9 +++++++++ > libmultipath/configure.c | 3 +++ > libmultipath/defaults.h | 3 ++- > libmultipath/dict.c | 86 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------- > libmultipath/dict.h | 3 +-- > libmultipath/propsel.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++-- > libmultipath/propsel.h | 3 +++ > libmultipath/structs.h | 14 ++++++++++---- > multipath/multipath.conf.5 | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > multipathd/main.c | 83 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > 11 files changed, 281 insertions(+), 34 deletions(-) > > Regards, > Muneendra. > > > > > -----Original Message----- > From: Benjamin Marzinski [mailto:bmarzins@redhat.com] > Sent: Thursday, February 02, 2017 7:20 AM > To: Muneendra Kumar M > Cc: device-mapper development > Subject: RE: [dm-devel] deterministic io throughput in multipath > > This is certainly moving in the right direction. There are a couple of things I would change. check_path_reinstate_state() will automatically disable the path if there are configuration problems. If things aren't configured correctly, or the code can't get the current time, it seems like it should allow the path to get reinstated, to avoid keeping a perfectly good path down indefinitely. Also, if you look at the delay_*_checks code, it automatically reinstates a problematic path if there are no other paths to use. This seems like a good idea as well. > > Also, your code increments path_failures every time the checker fails. > This means that if a device is down for a while, when it comes back up, it will get delayed. I'm not sure if this is intentional, or if you were trying to track the number of times the path was restored and then failed again, instead of the total time a path was failed for.
> > Perhaps it would be easier to show the kind of changes I would make with a patch. What do you think about this? I haven't done much testing on it at all, but these are the changes I would make. > > Signed-off-by: Benjamin Marzinski > --- > libmultipath/config.c | 3 + > libmultipath/dict.c | 2 +- > multipathd/main.c | 149 +++++++++++++++++++++++--------------------------- > 3 files changed, 72 insertions(+), 82 deletions(-) > > diff --git a/libmultipath/config.c b/libmultipath/config.c index be384af..5837dc6 100644 > --- a/libmultipath/config.c > +++ b/libmultipath/config.c > @@ -624,6 +624,9 @@ load_config (char * file) > conf->disable_changed_wwids = DEFAULT_DISABLE_CHANGED_WWIDS; > conf->remove_retries = 0; > conf->max_sectors_kb = DEFAULT_MAX_SECTORS_KB; > + conf->san_path_err_threshold = DEFAULT_ERR_CHECKS; > + conf->san_path_err_forget_rate = DEFAULT_ERR_CHECKS; > + conf->san_path_err_recovery_time = DEFAULT_ERR_CHECKS; > > /* > * preload default hwtable > diff --git a/libmultipath/dict.c b/libmultipath/dict.c index 4754572..ae94c88 100644 > --- a/libmultipath/dict.c > +++ b/libmultipath/dict.c > @@ -1050,7 +1050,7 @@ print_off_int_undef(char * buff, int len, void *ptr) > case NU_UNDEF: > return 0; > case NU_NO: > - return snprintf(buff, len, "\"off\""); > + return snprintf(buff, len, "\"no\""); > default: > return snprintf(buff, len, "%i", *int_ptr); > } > diff --git a/multipathd/main.c b/multipathd/main.c index d6d68a4..305e236 100644 > --- a/multipathd/main.c > +++ b/multipathd/main.c > @@ -1488,69 +1488,70 @@ void repair_path(struct path * pp) } > > static int check_path_reinstate_state(struct path * pp) { > - struct timespec start_time; > - int disable_reinstate = 1; > - > - if (!((pp->mpp->san_path_err_threshold > 0) && > - (pp->mpp->san_path_err_forget_rate > 0) && > - (pp->mpp->san_path_err_recovery_time >0))) { > - return disable_reinstate; > - } > - > - if (clock_gettime(CLOCK_MONOTONIC, &start_time) != 0) { > - return disable_reinstate; > + 
struct timespec curr_time; > + > + if (pp->disable_reinstate) { > + /* If we don't know how much time has passed, automatically > + * reinstate the path, just to be safe. Also, if there are > + * no other usable paths, reinstate the path */ > + if (clock_gettime(CLOCK_MONOTONIC, &curr_time) != 0 || > + pp->mpp->nr_active == 0) { > + condlog(2, "%s : reinstating path early", pp->dev); > + goto reinstate_path; > + } > + if ((curr_time.tv_sec - pp->dis_reinstate_time ) > pp->mpp->san_path_err_recovery_time) { > + condlog(2,"%s : reinstate the path after err recovery time", pp->dev); > + goto reinstate_path; > + } > + return 1; > } > > - if ((start_time.tv_sec - pp->dis_reinstate_time ) > pp->mpp->san_path_err_recovery_time) { > - disable_reinstate =0; > - pp->path_failures = 0; > - pp->disable_reinstate = 0; > - pp->san_path_err_forget_rate = pp->mpp->san_path_err_forget_rate; > - condlog(3,"\npath %s :reinstate the path after err recovery time\n",pp->dev); > + /* forget errors on a working path */ > + if ((pp->state == PATH_UP || pp->state == PATH_GHOST) && > + pp->path_failures > 0) { > + if (pp->san_path_err_forget_rate > 0) > + pp->san_path_err_forget_rate--; > + else { > + /* for every san_path_err_forget_rate number of > + * successful path checks decrement path_failures by 1 > + */ > + pp->path_failures--; > + pp->san_path_err_forget_rate = pp->mpp->san_path_err_forget_rate; > + } > + return 0; > } > - return disable_reinstate; > -} > > -static int check_path_validity_err (struct path * pp) { > - struct timespec start_time; > - int disable_reinstate = 0; > + /* If the path isn't recovering from a failed state, do nothing */ > + if (pp->state != PATH_DOWN && pp->state != PATH_SHAKY && > + pp->state != PATH_TIMEOUT) > + return 0; > > - if (!((pp->mpp->san_path_err_threshold > 0) && > - (pp->mpp->san_path_err_forget_rate > 0) && > - (pp->mpp->san_path_err_recovery_time >0))) { > - return disable_reinstate; > - } > + if (pp->path_failures == 0) > + 
pp->san_path_err_forget_rate = pp->mpp->san_path_err_forget_rate; > + pp->path_failures++; > > - if (clock_gettime(CLOCK_MONOTONIC, &start_time) != 0) { > - return disable_reinstate; > - } > - if (!pp->disable_reinstate) { > - if (pp->path_failures) { > - /*if the error threshold has hit hit within the san_path_err_forget_rate > - *cycles donot reinstante the path till the san_path_err_recovery_time > - *place the path in failed state till san_path_err_recovery_time so that the > - *cutomer can rectify the issue within this time .Once the completion of > - *san_path_err_recovery_time it should automatically reinstantate the path > - */ > - if ((pp->path_failures > pp->mpp->san_path_err_threshold) && > - (pp->san_path_err_forget_rate > 0)) { > - printf("\n%s:%d: %s hit error threshold \n",__func__,__LINE__,pp->dev); > - pp->dis_reinstate_time = start_time.tv_sec ; > - pp->disable_reinstate = 1; > - disable_reinstate = 1; > - } else if ((pp->san_path_err_forget_rate > 0)) { > - pp->san_path_err_forget_rate--; > - } else { > - /*for every san_path_err_forget_rate number > - *of successful path checks decrement path_failures by 1 > - */ > - pp->path_failures --; > - pp->san_path_err_forget_rate = pp->mpp->san_path_err_forget_rate; > - } > - } > + /* if we don't know the current time, we don't know how long to > + * delay the path, so there's no point in checking if we should */ > + if (clock_gettime(CLOCK_MONOTONIC, &curr_time) != 0) > + return 0; > + /* when path failures has exceeded the san_path_err_threshold > + * place the path in delayed state till san_path_err_recovery_time > + * so that the customer can rectify the issue within this time. After > + * the completion of san_path_err_recovery_time it should > + * automatically reinstate the path */ > + if (pp->path_failures > pp->mpp->san_path_err_threshold) { > + condlog(2, "%s : hit error threshold.
Delaying path reinstatement", pp->dev); > + pp->dis_reinstate_time = curr_time.tv_sec; > + pp->disable_reinstate = 1; > + return 1; > } > - return disable_reinstate; > + return 0; > +reinstate_path: > + pp->path_failures = 0; > + pp->disable_reinstate = 0; > + return 0; > } > + > /* > * Returns '1' if the path has been checked, '-1' if it was blacklisted > * and '0' otherwise > @@ -1566,7 +1567,7 @@ check_path (struct vectors * vecs, struct path * pp, int ticks) > int oldchkrstate = pp->chkrstate; > int retrigger_tries, checkint; > struct config *conf; > - int ret; > + int ret; > > if ((pp->initialized == INIT_OK || > pp->initialized == INIT_REQUESTED_UDEV) && !pp->mpp) @@ -1664,16 +1665,15 @@ check_path (struct vectors * vecs, struct path * pp, int ticks) > if (!pp->mpp) > return 0; > > + /* We only need to check if the path should be delayed when > + * the path is actually usable and san_path_err is configured */ > if ((newstate == PATH_UP || newstate == PATH_GHOST) && > - pp->disable_reinstate) { > - /* > - * check if the path is in failed state for more than san_path_err_recovery_time > - * if not place the path in delayed state > - */ > - if (check_path_reinstate_state(pp)) { > - pp->state = PATH_DELAYED; > - return 1; > - } > + pp->mpp->san_path_err_threshold > 0 && > + pp->mpp->san_path_err_forget_rate > 0 && > + pp->mpp->san_path_err_recovery_time > 0 && > + check_path_reinstate_state(pp)) { > + pp->state = PATH_DELAYED; > + return 1; > } > > if ((newstate == PATH_UP || newstate == PATH_GHOST) && @@ -1685,31 +1685,18 @@ check_path (struct vectors * vecs, struct path * pp, int ticks) > } else > pp->wait_checks = 0; > } > - if ((newstate == PATH_DOWN || newstate == PATH_GHOST || > - pp->state == PATH_DOWN)) { > - /*assigned the path_err_forget_rate when we see the first failure on the path*/ > - if(pp->path_failures == 0){ > - pp->san_path_err_forget_rate = pp->mpp->san_path_err_forget_rate; > - } > - pp->path_failures++; > - } > + > /* > * don't
reinstate failed path, if its in stand-by > * and if target supports only implicit tpgs mode. > * this will prevent unnecessary i/o by dm on stand-by > * paths if there are no other active paths in map. > - * > - * when path failures has exceeded the san_path_err_threshold > - * within san_path_err_forget_rate then we don't reinstate > - * failed path for san_path_err_recovery_time > */ > - disable_reinstate = ((newstate == PATH_GHOST && > + disable_reinstate = (newstate == PATH_GHOST && > pp->mpp->nr_active == 0 && > - pp->tpgs == TPGS_IMPLICIT) ? 1 : > - check_path_validity_err(pp)); > + pp->tpgs == TPGS_IMPLICIT) ? 1 : 0; > > pp->chkrstate = newstate; > - > if (newstate != pp->state) { > int oldstate = pp->state; > pp->state = newstate; > -- > 1.8.3.1 > From mboxrd@z Thu Jan 1 00:00:00 1970 From: Muneendra Kumar M Subject: Re: deterministic io throughput in multipath Date: Thu, 2 Feb 2017 18:02:57 +0000 Message-ID: References: <1486000201-3960-1-git-send-email-bmarzins@redhat.com> <3ab5d07903b44e0a913b5637862c4c96@BRMWP-EXMB12.corp.brocade.com> <20170202173922.GE22981@octiron.msp.redhat.com> Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Return-path: In-Reply-To: <20170202173922.GE22981@octiron.msp.redhat.com> Content-Language: en-US List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: dm-devel-bounces@redhat.com Errors-To: dm-devel-bounces@redhat.com To: Benjamin Marzinski Cc: device-mapper development List-Id: dm-devel.ids Hi Ben, Thanks for the review. So can I push my changes as mentioned by you in the below mail using git. Regards, Muneendra. -----Original Message----- From: Benjamin Marzinski [mailto:bmarzins@redhat.com] Sent: Thursday, February 02, 2017 11:09 PM To: Muneendra Kumar M Cc: device-mapper development Subject: Re: [dm-devel] deterministic io throughput in multipath This looks fine. 
Thanks for all your work on this. ACK -Ben From mboxrd@z Thu Jan 1 00:00:00 1970 From: "Benjamin Marzinski" Subject: Re: deterministic io throughput in multipath Date: Thu, 2 Feb 2017 12:29:59 -0600 Message-ID: <20170202182959.GF22981@octiron.msp.redhat.com> References: <1486000201-3960-1-git-send-email-bmarzins@redhat.com> <3ab5d07903b44e0a913b5637862c4c96@BRMWP-EXMB12.corp.brocade.com> <20170202173922.GE22981@octiron.msp.redhat.com> Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Return-path: Content-Disposition: inline In-Reply-To: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: dm-devel-bounces@redhat.com Errors-To: dm-devel-bounces@redhat.com To: Muneendra Kumar M Cc: device-mapper development List-Id: dm-devel.ids On Thu, Feb 02, 2017 at 06:02:57PM +0000, Muneendra Kumar M wrote: > Hi Ben, > Thanks for the review. > So can I push my changes as mentioned by you in the below mail using git. Sure. -Ben > > Regards, > Muneendra.
From mboxrd@z Thu Jan 1 00:00:00 1970 From: Muneendra Kumar M Subject: Re: deterministic io throughput in multipath Date: Fri, 3 Feb 2017 11:43:52 +0000 Message-ID: <8e6b21c025d64b3cbbfa181516e242b0@BRMWP-EXMB12.corp.brocade.com> References: <1486000201-3960-1-git-send-email-bmarzins@redhat.com> <3ab5d07903b44e0a913b5637862c4c96@BRMWP-EXMB12.corp.brocade.com> <20170202173922.GE22981@octiron.msp.redhat.com> <20170202182959.GF22981@octiron.msp.redhat.com> Mime-Version: 1.0 Content-Type: multipart/mixed; boundary="===============8044040922084386134==" Return-path: In-Reply-To: <20170202182959.GF22981@octiron.msp.redhat.com> Content-Language: en-US List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: dm-devel-bounces@redhat.com Errors-To: dm-devel-bounces@redhat.com To: Benjamin Marzinski Cc: device-mapper development List-Id: dm-devel.ids --===============8044040922084386134== Content-Language: en-US Content-Type: multipart/alternative; boundary="_000_8e6b21c025d64b3cbbfa181516e242b0BRMWPEXMB12corpbrocadec_"
Hi Ben,

I did commit my patches to a branch off the head of master.
But when I used the below command I am getting the below errors. I am not sure whether the mail has been sent to dm-devel@redhat.com.

# git send-email --to "device-mapper development <dm-devel@redhat.com>" --cc "Christophe Varoqui <christophe.varoqui@opensvc.com>" --no-chain-reply-to --suppress-from <dir>

I am seeing the below error. Could you please help me with this?

Content-Description: Notification
Content-Type: text/plain; charset=us-ascii

This is the mail system at host localhost.localdomain.

I'm sorry to have to inform you that your message could not
be delivered to one or more recipients. It's attached below.

For further assistance, please send mail to postmaster.

If you do so, please include this problem report. You can
delete your own text from the attached returned message.

                   The mail system

<christophe.varoqui@opensvc.com>: host spool.mail.gandi.net[217.70.184.6] said:
    550 5.1.8 <root@localhost.localdomain>: Sender address rejected: Domain not
    found (in reply to RCPT TO command)

--E52C4C13C372.1486112814/localhost.localdomain
Content-Description: Delivery report
Content-Type: message/delivery-status

Reporting-MTA: dns; localhost.localdomain
X-Postfix-Queue-ID: E52C4C13C372
X-Postfix-Sender: rfc822; root@localhost.localdomain
Arrival-Date: Fri,  3 Feb 2017 14:36:22 +0530 (IST)

Final-Recipient: rfc822; christophe.varoqui@opensvc.com
Action: failed
Status: 5.1.8
Remote-MTA: dns; spool.mail.gandi.net
Diagnostic-Code: smtp; 550 5.1.8 <root@localhost.localdomain>: Sender address
    rejected: Domain not found

--E52C4C13C372.1486112814/localhost.localdomain
Content-Description: Undelivered Message
Content-Type: message/rfc822

Return-Path: <root@localhost.localdomain>
Received: by localhost.localdomain (Postfix, from userid 0)
        id E52C4C13C372; Fri,  3 Feb 2017 14:36:22 +0530 (IST)
From: M Muneendra Kumar <mmandala@brocade.com>
To: device-mapper development <dm-devel@redhat.com>
Cc: Christophe Varoqui <christophe.varoqui@opensvc.com>,
        Benjamin Marzinski <bmarzins@redhat.com>
Subject: [PATCH 0/1] multipathd: deterministic io throughput in multipath
Date: Fri, 3 Feb 2017 14:36:21 +0530
Message-Id: <1486112782-12706-1-git-send-email-mmandala@brocade.com>
X-Mailer: git-send-email 1.8.3.1

Regards,
Muneendra.

-----Original Message-----
From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
Sent: Friday, February 03, 2017 12:00 AM
To: Muneendra Kumar M <mmandala@Brocade.com>
Cc: device-mapper development <dm-devel@redhat.com>
Subject: Re: [dm-devel] deterministic io throughput in multipath

On Thu, Feb 02, 2017 at 06:02:57PM +0000, Muneendra Kumar M wrote:
> Hi Ben,
> Thanks for the review.
> So can I push my changes as mentioned by you in the below mail using git?

Sure.

-Ben

>
> Regards,
> Muneendra.
>
>
> -----Original Message-----
> From: Benjamin Marzinski [mailto:bmarzins@redhat.com]
> Sent: Thursday, February 02, 2017 11:09 PM
> To: Muneendra Kumar M <mmandala@Brocade.com>
> Cc: device-mapper development <dm-devel@redhat.com>
> Subject: Re: [dm-devel] deterministic io throughput in multipath
>
> This looks fine. Thanks for all your work on this.
>
> ACK
>
> -Ben
>
> On Thu, Feb 02, 2017 at 11:48:39AM +0000, Muneendra Kumar M wrote:
> > Hi Ben,
> > The below changes suggested by you are good. Thanks for them.
> > I have taken your changes and made a few changes to get the functionality working.
> > I have tested the same on the setup, which works fine.
> >
> > We need to increment path_failures every time the checker fails.
> > If a device is down for a while, when it comes back up it will get delayed only if the path failures exceed the error threshold.
> > Whether the checker fails or the kernel identifies the failures, we need to capture those, as they tell the state of the path and target.
> > The below code has already taken care of this.
> >
> > Could you please review the attached patch and provide us your valuable comments?
> >
> > Below are the files that have been changed.
> >
> > libmultipath/config.c      |  6 ++++++
> > libmultipath/config.h      |  9 +++++++++
> > libmultipath/configure.c   |  3 +++
> > libmultipath/defaults.h    |  3 ++-
> > libmultipath/dict.c        | 86 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------
> > libmultipath/dict.h        |  3 +--
> > libmultipath/propsel.c     | 48 ++++++++++++++++++++++++++++++++++++++++++++++--
> > libmultipath/propsel.h     |  3 +++
> > libmultipath/structs.h     | 14 ++++++++++----
> > multipath/multipath.conf.5 | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > multipathd/main.c          | 83 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > 11 files changed, 281 insertions(+), 34 deletions(-)
> >
> > Regards,
> > Muneendra.
> >
> >
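[Editorial note] The 550 bounce quoted above happens because the receiving relay (spool.mail.gandi.net) rejects the SMTP envelope sender root@localhost.localdomain, whose domain does not resolve. A configuration sketch of one common fix is shown below; the relay host smtp.example.com and the patch path outgoing/*.patch are placeholders, and the addresses are taken from this thread, so adapt all of them to the actual site:

```shell
# Relay through a real SMTP server instead of the local Postfix on
# localhost.localdomain, and use a resolvable address as the sender.
git config sendemail.smtpServer smtp.example.com
git config sendemail.from "M Muneendra Kumar <mmandala@brocade.com>"

# --envelope-sender overrides the SMTP MAIL FROM address, which is what
# the remote MTA checked and rejected in the bounce above.
git send-email \
    --envelope-sender="mmandala@brocade.com" \
    --to="dm-devel@redhat.com" \
    --cc="christophe.varoqui@opensvc.com" \
    --no-chain-reply-to --suppress-from \
    outgoing/*.patch
```

With a routable envelope sender, the recipient MTA's sender-domain check succeeds and the patches are delivered instead of bouncing.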