From: Chun-Kuang Hu <chunkuang.hu@kernel.org>
To: Neal Liu <neal.liu@mediatek.com>
Cc: Chun-Kuang Hu <chunkuang.hu@kernel.org>,
	Rob Herring <robh+dt@kernel.org>,
	Matthias Brugger <matthias.bgg@gmail.com>,
	devicetree@vger.kernel.org,
	wsd_upstream <wsd_upstream@mediatek.com>,
	lkml <linux-kernel@vger.kernel.org>,
	"moderated list:ARM/Mediatek SoC support" 
	<linux-mediatek@lists.infradead.org>,
	Linux ARM <linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH v4 2/2] soc: mediatek: add mtk-devapc driver
Date: Tue, 4 Aug 2020 00:04:30 +0800	[thread overview]
Message-ID: <CAAOTY__VPXMGcR9w8EdnGbJyVbxbLQY+SRAqLbOcTy0D_WLM0w@mail.gmail.com> (raw)
In-Reply-To: <1596427295.22971.20.camel@mtkswgap22>

Hi, Neal:

Neal Liu <neal.liu@mediatek.com> wrote on Mon, Aug 3, 2020 at 12:01 PM:
>
> Hi Chun-Kuang,
>
> On Sat, 2020-08-01 at 08:12 +0800, Chun-Kuang Hu wrote:
> > Hi, Neal:
> >
> > This patch is for "mediatek,mt6779-devapc", so I think the commit
> > title should show the SoC ID.
>
> Okay, I'll change the title to 'soc:mediatek: add mt6779 devapc driver'.
>
> >
> > Neal Liu <neal.liu@mediatek.com> wrote on Wed, Jul 29, 2020 at 4:29 PM:
> > >
> > > MediaTek bus fabric provides TrustZone security support and data
> > > protection to prevent slaves from being accessed by unexpected
> > > masters.
> > > The security violation is logged and sent to the processor for
> > > further analysis or countermeasures.
> > >
> > > Any occurrence of a security violation raises an interrupt, which is
> > > handled by the mtk-devapc driver. The violation information is
> > > printed to help identify the offending master.
> > >
> > > Signed-off-by: Neal Liu <neal.liu@mediatek.com>
> > > ---
> >
> > [snip]
> >
> > > +
> > > +struct mtk_devapc_context {
> > > +       struct device *dev;
> > > +       u32 vio_idx_num;
> > > +       void __iomem *devapc_pd_base;
> > > +       struct mtk_devapc_vio_info *vio_info;
> > > +       const struct mtk_devapc_pd_offset *offset;
> > > +       const struct mtk_devapc_vio_dbgs *vio_dbgs;
> > > +};
> >
> > I think this structure should separate the constant part. The constant part is:
> >
> > struct mtk_devapc_data {
> >     const u32 vio_idx_num;
> >     /* I would like to remove struct mtk_devapc_pd_offset and put its
> >      * members directly into this structure. */
> >     const struct mtk_devapc_pd_offset *offset;
> >     const struct mtk_devapc_vio_dbgs *vio_dbgs; /* This may disappear. */
> > };
> >
> > And the context is:
> >
> > struct mtk_devapc_context {
> >     struct device *dev;
> >     void __iomem *devapc_pd_base;
> >     const struct mtk_devapc_data *data;
> > };
> >
> > So when you define this, you do not waste memory storing non-constant data.
> >
> > static const struct mtk_devapc_data devapc_mt6779 = {
> >  .vio_idx_num = 510,
> >  .offset = &mt6779_pd_offset,
> >  .vio_dbgs = &mt6779_vio_dbgs,
> > };
> >
>
> Sorry, I still don't understand why this refactoring avoids wasting
> memory on non-constant data. Could you explain in more detail?
> To my understanding, we still have to allocate memory to store dev
> & devapc_pd_base.

In some situations, it does. You currently make the non-constant data a
global variable, but I think the context data should be dynamically
allocated instead. If this driver is never probed, that global
non-constant data still occupies memory.
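
For illustration, a minimal sketch of how the split could be wired up at
probe time. The probe function name, the of_device_id table and the
ioremap/drvdata calls below are my own illustrative assumptions, not taken
from the actual patch; only struct mtk_devapc_data, struct
mtk_devapc_context and the mt6779 tables come from this thread.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>

/* Per-SoC constant data lives in .rodata; it costs nothing extra at runtime. */
static const struct mtk_devapc_data devapc_mt6779 = {
	.vio_idx_num = 510,
	.offset = &mt6779_pd_offset,
	.vio_dbgs = &mt6779_vio_dbgs,
};

static const struct of_device_id mtk_devapc_dt_match[] = {
	{ .compatible = "mediatek,mt6779-devapc", .data = &devapc_mt6779 },
	{}
};

static int mtk_devapc_probe(struct platform_device *pdev)
{
	struct mtk_devapc_context *ctx;

	/* Non-constant data is allocated only if the device is actually probed. */
	ctx = devm_kzalloc(&pdev->dev, sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return -ENOMEM;

	ctx->dev = &pdev->dev;
	ctx->data = of_device_get_match_data(&pdev->dev);
	ctx->devapc_pd_base = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(ctx->devapc_pd_base))
		return PTR_ERR(ctx->devapc_pd_base);

	platform_set_drvdata(pdev, ctx);
	return 0;
}

With this arrangement, only devapc_mt6779 (constant) exists before probe;
the mutable context is allocated by devm_kzalloc() only when the device is
actually present.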

Regards,
Chun-Kuang.

>
> > Regards,
> > Chun-Kuang.
> >
> > > +
> > > +#endif /* __MTK_DEVAPC_H__ */
> > > --
> > > 1.7.9.5
>
