kernelnewbies.kernelnewbies.org archive mirror
From: Primoz Beltram <primoz.beltram@kate.si>
To: Muni Sekhar <munisekharrms@gmail.com>
Cc: kernelnewbies <kernelnewbies@kernelnewbies.org>
Subject: Re: read the memory mapped address - pcie - kernel hangs
Date: Fri, 10 Jan 2020 12:15:59 +0100	[thread overview]
Message-ID: <596b8223-774c-d249-2134-e143d77c9be9@kate.si> (raw)
In-Reply-To: <CAHhAz+ihy2R-2NxjLC6iCOyQ5=T7Rs_D8gH+1vxLFBSBnS8-_A@mail.gmail.com>

Hi,
I have also read the other replies in this thread.
I have seen and debugged such deadlock problems with FPGA-based PCIe 
endpoint devices (Xilinx chips), and usually (when it was not a 
signal-integrity problem) the cause was incorrect AXI master/slave bus 
handling in the FPGA design.
I guess you have the Xilinx FPGA PCIe endpoint IP core attached as an 
AXI master to the FPGA-internal AXI bus (giving it access to AXI slaves 
inside the FPGA design).
If the FPGA code in your design does not handle AXI master read/write 
requests correctly, e.g. if an FPGA AXI slave does not generate the bus 
ACK in the correct way, the PCIe bus will stay locked (no PCIe 
completion is sent back), resulting in a complete system lock-up. Some 
PCIe root chips have diagnostic LEDs that help decode PCIe problems.
From your note about doing two 32-bit reads on a 64-bit CPU, I would 
guess the problem is in the handling of the AXI transfer-size signals 
in the FPGA slave code.
I would suggest you check the code in the FPGA design. You can use an 
FPGA test-bench simulation to check the behaviour of the AXI read/write 
requests originated by the PCIe endpoint.
Xilinx provides test-bench simulation code for their PCIe IPs, 
including a PCIe root-port model, so you can simulate AXI read/write 
accesses as they would come from CPU I/O memory requests via PCIe TLPs.
WBR Primoz

On 8. 01. 20 20:00, Muni Sekhar wrote:
> Hi All,
>
> I have a module with a Xilinx FPGA. It implements UART(s), SPI(s), and
> parallel I/O, and interfaces them to the host CPU via the PCI Express bus.
> I see that my system freezes, without capturing a crash dump, for
> certain tests. I debugged this issue and tracked it down to the
> ‘readl()’ in the interrupt handler code.
>
> In the ISR, it first reads the Interrupt Status register using
> ‘readl()’ as given below:
>      status = readl(ctrl->reg + INT_STATUS);
>
> It then clears the pending interrupts using ‘writel()’ as given below:
>          writel(status, ctrl->reg + INT_STATUS);
>
> I've noticed a kernel hang if the INT_STATUS register is read again
> after clearing the pending interrupts.
>
> My system freezes only after the same ISR code has executed for
> millions of interrupts. Basically, reading the memory-mapped register
> in the ISR causes this behavior.
> If I comment out “status = readl(ctrl->reg + INT_STATUS);” after
> clearing the pending interrupts, then the system is stable.
>
> As a temporary workaround I avoided reading the INT_STATUS register
> after clearing the pending bits, and this code change works fine.
>
> Can someone clarify why the kernel hangs, without a crash dump, if I
> read the INT_STATUS register using readl() after clearing the pending
> bits with writel()?
>
> To read memory-mapped I/O the kernel provides the read{b,w,l,q}()
> APIs. If a PCIe card is not responsive, can a call to readl() from
> interrupt context make the system freeze?
>
> Thanks for any suggestions and solutions to this problem!
>
> A snippet of the ISR code is given below:
> https://pastebin.com/as2tSPwE
>
>
> static irqreturn_t pcie_isr(int irq, void *data)
> {
>          struct test_device *ctrl = (struct test_device *)data;
>          u32 status;
> …
>          status = readl(ctrl->reg + INT_STATUS);
>
>          /* Check to see if it was our interrupt */
>          if (!(status & 0x000C))
>                  return IRQ_NONE;
>
>          /* Clear the interrupt */
>          writel(status, ctrl->reg + INT_STATUS);
>
>          if (status & 0x0004) {
>                  /* Tx interrupt pending. */
>                  ....
>          }
>
>          if (status & 0x0008) {
>                  /* Rx interrupt pending */
>                  /* The system freezes if I read the INT_STATUS
>                   * register again as given below */
>                  status = readl(ctrl->reg + INT_STATUS);
>                  ....
>          }
> ..
>          return IRQ_HANDLED;
> }
>


_______________________________________________
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
https://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Thread overview: 10+ messages
2020-01-08 19:00 read the memory mapped address - pcie - kernel hangs Muni Sekhar
2020-01-08 19:45 ` Greg KH
2020-01-09 11:14   ` Muni Sekhar
2020-01-09 11:37     ` Greg KH
2020-01-09 12:20       ` Muni Sekhar
2020-01-09 18:12         ` Greg KH
2020-01-10 11:15 ` Primoz Beltram [this message]
2020-01-10 14:58   ` Muni Sekhar
2020-01-10 23:03     ` Onur Atilla
2020-01-11  3:13       ` Muni Sekhar
