From: Babu Moger <babu.moger@amd.com>
To: <clemens@ladisch.de>, <jdelvare@suse.com>, <linux@roeck-us.net>
Cc: <linux-hwmon@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Subject: [PATCH 1/2] hwmon: (k10temp) Move the CCD limit info inside k10temp_data structure
Date: Tue, 23 Nov 2021 15:16:28 -0600
Message-ID: <163770216907.777059.6947726637265961161.stgit@bmoger-ubuntu>
Move the CCD-specific limit information into the k10temp_data structure,
alongside the other per-model CCD parameters. This removes the extra
argument from k10temp_get_ccd_support() and keeps all the model-specific
data in one place.
Signed-off-by: Babu Moger <babu.moger@amd.com>
---
Note: Generated the patch on top of hwmon-next.
drivers/hwmon/k10temp.c | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
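Note: For readers outside the tree, below is a minimal, compilable
userspace sketch of the pattern this patch moves toward: every per-model
CCD parameter lives in one descriptor, so the probe helper needs no extra
argument. smn_read(), the register stride, and the valid bit are
simplified stand-ins for amd_smn_read() and the SMN CCD temperature
registers, not the driver's actual definitions.

/*
 * Minimal userspace sketch (not driver code) of the data-driven layout:
 * per-model parameters are bundled in one struct, and the CCD probe
 * loop reads its limit from there instead of taking an argument.
 */
#include <stdio.h>
#include <stdint.h>

#define CCD_TEMP(offset, x)	((offset) + (x) * 4)	/* assumed stride */
#define CCD_TEMP_VALID		(1u << 11)		/* assumed valid bit */

struct model_data {
	uint32_t ccd_offset;	/* offset of the first CCD temp register */
	uint32_t ccd_limit;	/* max CCDs this family can expose */
	uint32_t show_temp;	/* bitmap of CCDs found valid */
};

/* Fake register read standing in for amd_smn_read(): pretend only
 * CCDs 0 and 2 are populated. */
static uint32_t smn_read(uint32_t addr, const struct model_data *data)
{
	uint32_t ccd = (addr - data->ccd_offset) / 4;

	return (ccd == 0 || ccd == 2) ? CCD_TEMP_VALID : 0;
}

/* Analogue of k10temp_get_ccd_support(): the limit comes from data. */
static void get_ccd_support(struct model_data *data)
{
	uint32_t i;

	for (i = 0; i < data->ccd_limit; i++)
		if (smn_read(CCD_TEMP(data->ccd_offset, i), data) & CCD_TEMP_VALID)
			data->show_temp |= 1u << i;
}

int main(void)
{
	struct model_data zen2 = { .ccd_offset = 0x154, .ccd_limit = 8 };

	get_ccd_support(&zen2);
	printf("CCD presence bitmap: 0x%x\n", (unsigned)zen2.show_temp);	/* 0x5 */
	return 0;
}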
diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
index 880990fa4795..bd436b380a02 100644
--- a/drivers/hwmon/k10temp.c
+++ b/drivers/hwmon/k10temp.c
@@ -85,6 +85,7 @@ struct k10temp_data {
 	u32 show_temp;
 	bool is_zen;
 	u32 ccd_offset;
+	u32 ccd_limit;
 };
 
 #define TCTL_BIT	0
@@ -357,12 +358,12 @@ static const struct hwmon_chip_info k10temp_chip_info = {
 };
 
 static void k10temp_get_ccd_support(struct pci_dev *pdev,
-				    struct k10temp_data *data, int limit)
+				    struct k10temp_data *data)
 {
 	u32 regval;
 	int i;
 
-	for (i = 0; i < limit; i++) {
+	for (i = 0; i < data->ccd_limit; i++) {
 		amd_smn_read(amd_pci_dev_to_node_id(pdev),
 			     ZEN_CCD_TEMP(data->ccd_offset, i), &regval);
 		if (regval & ZEN_CCD_TEMP_VALID)
@@ -411,14 +412,16 @@ static int k10temp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		case 0x11:	/* Zen APU */
 		case 0x18:	/* Zen+ APU */
 			data->ccd_offset = 0x154;
-			k10temp_get_ccd_support(pdev, data, 4);
+			data->ccd_limit = 4;
+			k10temp_get_ccd_support(pdev, data);
 			break;
 		case 0x31:	/* Zen2 Threadripper */
 		case 0x60:	/* Renoir */
 		case 0x68:	/* Lucienne */
 		case 0x71:	/* Zen2 */
 			data->ccd_offset = 0x154;
-			k10temp_get_ccd_support(pdev, data, 8);
+			data->ccd_limit = 8;
+			k10temp_get_ccd_support(pdev, data);
 			break;
 		}
 	} else if (boot_cpu_data.x86 == 0x19) {
@@ -431,13 +434,15 @@ static int k10temp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		case 0x21:	/* Zen3 Ryzen Desktop */
 		case 0x50 ... 0x5f:	/* Green Sardine */
 			data->ccd_offset = 0x154;
-			k10temp_get_ccd_support(pdev, data, 8);
+			data->ccd_limit = 8;
+			k10temp_get_ccd_support(pdev, data);
 			break;
 		case 0x10 ... 0x1f:
 		case 0x40 ... 0x4f:	/* Yellow Carp */
 		case 0xa0 ... 0xaf:
 			data->ccd_offset = 0x300;
-			k10temp_get_ccd_support(pdev, data, 8);
+			data->ccd_limit = 8;
+			k10temp_get_ccd_support(pdev, data);
 			break;
 		}
 	} else {
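With the limit carried in k10temp_data, later changes only need to touch
the model table. As a purely illustrative sketch (the model range below
is invented; patch 2/2 of this series is what actually raises the limit
to 12 CCDs), a new entry would look like:

		case 0x90 ... 0x9f:	/* hypothetical future models */
			data->ccd_offset = 0x300;
			data->ccd_limit = 12;
			k10temp_get_ccd_support(pdev, data);
			break;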