From mboxrd@z Thu Jan 1 00:00:00 1970 From: Andrew Jeffery To: openipmi-developer@lists.sourceforge.net, openbmc@lists.ozlabs.org, minyard@acm.org Cc: devicetree@vger.kernel.org, tmaimon77@gmail.com, linux-aspeed@lists.ozlabs.org, avifishman70@gmail.com, venture@google.com,
linux-kernel@vger.kernel.org, tali.perry1@gmail.com, robh+dt@kernel.org, chiawei_wang@aspeedtech.com, linux-arm-kernel@lists.infradead.org, benjaminfair@google.com, arnd@arndb.de, zweiss@equinix.com, joel@jms.id.au, KWLIU@nuvoton.com Subject: [PATCH v4 04/16] ipmi: kcs_bmc: Split out kcs_bmc_cdev_ipmi Date: Tue, 8 Jun 2021 20:17:45 +0930 Message-Id: <20210608104757.582199-5-andrew@aj.id.au> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210608104757.582199-1-andrew@aj.id.au> References: <20210608104757.582199-1-andrew@aj.id.au> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Take steps towards defining a coherent API to separate the KCS device drivers from the userspace interface. Decreasing the coupling will improve the separation of concerns and enable the introduction of alternative userspace interfaces. For now, simply split the chardev logic out to a separate file. The code continues to build into the same module. Signed-off-by: Andrew Jeffery Reviewed-by: Zev Weiss --- drivers/char/ipmi/Makefile | 2 +- drivers/char/ipmi/kcs_bmc.c | 423 +------------------------ drivers/char/ipmi/kcs_bmc.h | 10 +- drivers/char/ipmi/kcs_bmc_cdev_ipmi.c | 428 ++++++++++++++++++++++++++ 4 files changed, 451 insertions(+), 412 deletions(-) create mode 100644 drivers/char/ipmi/kcs_bmc_cdev_ipmi.c diff --git a/drivers/char/ipmi/Makefile b/drivers/char/ipmi/Makefile index 0822adc2ec41..a302bc865370 100644 --- a/drivers/char/ipmi/Makefile +++ b/drivers/char/ipmi/Makefile @@ -22,7 +22,7 @@ obj-$(CONFIG_IPMI_SSIF) += ipmi_ssif.o obj-$(CONFIG_IPMI_POWERNV) += ipmi_powernv.o obj-$(CONFIG_IPMI_WATCHDOG) += ipmi_watchdog.o obj-$(CONFIG_IPMI_POWEROFF) += ipmi_poweroff.o -obj-$(CONFIG_IPMI_KCS_BMC) += kcs_bmc.o +obj-$(CONFIG_IPMI_KCS_BMC) += kcs_bmc.o kcs_bmc_cdev_ipmi.o obj-$(CONFIG_ASPEED_BT_IPMI_BMC) += bt-bmc.o obj-$(CONFIG_ASPEED_KCS_IPMI_BMC) += kcs_bmc_aspeed.o obj-$(CONFIG_NPCM7XX_KCS_IPMI_BMC) += kcs_bmc_npcm7xx.o diff --git a/drivers/char/ipmi/kcs_bmc.c b/drivers/char/ipmi/kcs_bmc.c index c4336c1f2d6d..ef5c48ffe74a 100644 --- a/drivers/char/ipmi/kcs_bmc.c +++ b/drivers/char/ipmi/kcs_bmc.c @@ -3,446 +3,51 @@ * Copyright (c) 2015-2018, Intel Corporation. 
*/ -#define pr_fmt(fmt) "kcs-bmc: " fmt - -#include -#include -#include #include -#include -#include -#include -#include #include "kcs_bmc.h" -#define DEVICE_NAME "ipmi-kcs" - -#define KCS_MSG_BUFSIZ 1000 - -#define KCS_ZERO_DATA 0 - - -/* IPMI 2.0 - Table 9-1, KCS Interface Status Register Bits */ -#define KCS_STATUS_STATE(state) (state << 6) -#define KCS_STATUS_STATE_MASK GENMASK(7, 6) -#define KCS_STATUS_CMD_DAT BIT(3) -#define KCS_STATUS_SMS_ATN BIT(2) -#define KCS_STATUS_IBF BIT(1) -#define KCS_STATUS_OBF BIT(0) - -/* IPMI 2.0 - Table 9-2, KCS Interface State Bits */ -enum kcs_states { - IDLE_STATE = 0, - READ_STATE = 1, - WRITE_STATE = 2, - ERROR_STATE = 3, -}; - -/* IPMI 2.0 - Table 9-3, KCS Interface Control Codes */ -#define KCS_CMD_GET_STATUS_ABORT 0x60 -#define KCS_CMD_WRITE_START 0x61 -#define KCS_CMD_WRITE_END 0x62 -#define KCS_CMD_READ_BYTE 0x68 - -static inline u8 kcs_bmc_read_data(struct kcs_bmc *kcs_bmc) +u8 kcs_bmc_read_data(struct kcs_bmc *kcs_bmc) { return kcs_bmc->io_inputb(kcs_bmc, kcs_bmc->ioreg.idr); } +EXPORT_SYMBOL(kcs_bmc_read_data); -static inline void kcs_bmc_write_data(struct kcs_bmc *kcs_bmc, u8 data) +void kcs_bmc_write_data(struct kcs_bmc *kcs_bmc, u8 data) { kcs_bmc->io_outputb(kcs_bmc, kcs_bmc->ioreg.odr, data); } +EXPORT_SYMBOL(kcs_bmc_write_data); -static inline u8 kcs_bmc_read_status(struct kcs_bmc *kcs_bmc) +u8 kcs_bmc_read_status(struct kcs_bmc *kcs_bmc) { return kcs_bmc->io_inputb(kcs_bmc, kcs_bmc->ioreg.str); } +EXPORT_SYMBOL(kcs_bmc_read_status); -static inline void kcs_bmc_write_status(struct kcs_bmc *kcs_bmc, u8 data) +void kcs_bmc_write_status(struct kcs_bmc *kcs_bmc, u8 data) { kcs_bmc->io_outputb(kcs_bmc, kcs_bmc->ioreg.str, data); } +EXPORT_SYMBOL(kcs_bmc_write_status); -static void kcs_bmc_update_status(struct kcs_bmc *kcs_bmc, u8 mask, u8 val) +void kcs_bmc_update_status(struct kcs_bmc *kcs_bmc, u8 mask, u8 val) { kcs_bmc->io_updateb(kcs_bmc, kcs_bmc->ioreg.str, mask, val); } +EXPORT_SYMBOL(kcs_bmc_update_status); -static inline void set_state(struct kcs_bmc *kcs_bmc, u8 state) -{ - kcs_bmc_update_status(kcs_bmc, KCS_STATUS_STATE_MASK, - KCS_STATUS_STATE(state)); -} - -static void kcs_force_abort(struct kcs_bmc *kcs_bmc) -{ - set_state(kcs_bmc, ERROR_STATE); - kcs_bmc_read_data(kcs_bmc); - kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); - - kcs_bmc->phase = KCS_PHASE_ERROR; - kcs_bmc->data_in_avail = false; - kcs_bmc->data_in_idx = 0; -} - -static void kcs_bmc_handle_data(struct kcs_bmc *kcs_bmc) -{ - u8 data; - - switch (kcs_bmc->phase) { - case KCS_PHASE_WRITE_START: - kcs_bmc->phase = KCS_PHASE_WRITE_DATA; - fallthrough; - - case KCS_PHASE_WRITE_DATA: - if (kcs_bmc->data_in_idx < KCS_MSG_BUFSIZ) { - set_state(kcs_bmc, WRITE_STATE); - kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); - kcs_bmc->data_in[kcs_bmc->data_in_idx++] = - kcs_bmc_read_data(kcs_bmc); - } else { - kcs_force_abort(kcs_bmc); - kcs_bmc->error = KCS_LENGTH_ERROR; - } - break; - - case KCS_PHASE_WRITE_END_CMD: - if (kcs_bmc->data_in_idx < KCS_MSG_BUFSIZ) { - set_state(kcs_bmc, READ_STATE); - kcs_bmc->data_in[kcs_bmc->data_in_idx++] = - kcs_bmc_read_data(kcs_bmc); - kcs_bmc->phase = KCS_PHASE_WRITE_DONE; - kcs_bmc->data_in_avail = true; - wake_up_interruptible(&kcs_bmc->queue); - } else { - kcs_force_abort(kcs_bmc); - kcs_bmc->error = KCS_LENGTH_ERROR; - } - break; - - case KCS_PHASE_READ: - if (kcs_bmc->data_out_idx == kcs_bmc->data_out_len) - set_state(kcs_bmc, IDLE_STATE); - - data = kcs_bmc_read_data(kcs_bmc); - if (data != KCS_CMD_READ_BYTE) { - set_state(kcs_bmc, 
ERROR_STATE); - kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); - break; - } - - if (kcs_bmc->data_out_idx == kcs_bmc->data_out_len) { - kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); - kcs_bmc->phase = KCS_PHASE_IDLE; - break; - } - - kcs_bmc_write_data(kcs_bmc, - kcs_bmc->data_out[kcs_bmc->data_out_idx++]); - break; - - case KCS_PHASE_ABORT_ERROR1: - set_state(kcs_bmc, READ_STATE); - kcs_bmc_read_data(kcs_bmc); - kcs_bmc_write_data(kcs_bmc, kcs_bmc->error); - kcs_bmc->phase = KCS_PHASE_ABORT_ERROR2; - break; - - case KCS_PHASE_ABORT_ERROR2: - set_state(kcs_bmc, IDLE_STATE); - kcs_bmc_read_data(kcs_bmc); - kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); - kcs_bmc->phase = KCS_PHASE_IDLE; - break; - - default: - kcs_force_abort(kcs_bmc); - break; - } -} - -static void kcs_bmc_handle_cmd(struct kcs_bmc *kcs_bmc) -{ - u8 cmd; - - set_state(kcs_bmc, WRITE_STATE); - kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); - - cmd = kcs_bmc_read_data(kcs_bmc); - switch (cmd) { - case KCS_CMD_WRITE_START: - kcs_bmc->phase = KCS_PHASE_WRITE_START; - kcs_bmc->error = KCS_NO_ERROR; - kcs_bmc->data_in_avail = false; - kcs_bmc->data_in_idx = 0; - break; - - case KCS_CMD_WRITE_END: - if (kcs_bmc->phase != KCS_PHASE_WRITE_DATA) { - kcs_force_abort(kcs_bmc); - break; - } - - kcs_bmc->phase = KCS_PHASE_WRITE_END_CMD; - break; - - case KCS_CMD_GET_STATUS_ABORT: - if (kcs_bmc->error == KCS_NO_ERROR) - kcs_bmc->error = KCS_ABORTED_BY_COMMAND; - - kcs_bmc->phase = KCS_PHASE_ABORT_ERROR1; - kcs_bmc->data_in_avail = false; - kcs_bmc->data_in_idx = 0; - break; - - default: - kcs_force_abort(kcs_bmc); - kcs_bmc->error = KCS_ILLEGAL_CONTROL_CODE; - break; - } -} - +int kcs_bmc_ipmi_event(struct kcs_bmc *kcs_bmc); int kcs_bmc_handle_event(struct kcs_bmc *kcs_bmc) { - unsigned long flags; - int ret = -ENODATA; - u8 status; - - spin_lock_irqsave(&kcs_bmc->lock, flags); - - status = kcs_bmc_read_status(kcs_bmc); - if (status & KCS_STATUS_IBF) { - if (!kcs_bmc->running) - kcs_force_abort(kcs_bmc); - else if (status & KCS_STATUS_CMD_DAT) - kcs_bmc_handle_cmd(kcs_bmc); - else - kcs_bmc_handle_data(kcs_bmc); - - ret = 0; - } - - spin_unlock_irqrestore(&kcs_bmc->lock, flags); - - return ret; + return kcs_bmc_ipmi_event(kcs_bmc); } EXPORT_SYMBOL(kcs_bmc_handle_event); -static inline struct kcs_bmc *to_kcs_bmc(struct file *filp) -{ - return container_of(filp->private_data, struct kcs_bmc, miscdev); -} - -static int kcs_bmc_open(struct inode *inode, struct file *filp) -{ - struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); - int ret = 0; - - spin_lock_irq(&kcs_bmc->lock); - if (!kcs_bmc->running) - kcs_bmc->running = 1; - else - ret = -EBUSY; - spin_unlock_irq(&kcs_bmc->lock); - - return ret; -} - -static __poll_t kcs_bmc_poll(struct file *filp, poll_table *wait) -{ - struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); - __poll_t mask = 0; - - poll_wait(filp, &kcs_bmc->queue, wait); - - spin_lock_irq(&kcs_bmc->lock); - if (kcs_bmc->data_in_avail) - mask |= EPOLLIN; - spin_unlock_irq(&kcs_bmc->lock); - - return mask; -} - -static ssize_t kcs_bmc_read(struct file *filp, char __user *buf, - size_t count, loff_t *ppos) -{ - struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); - bool data_avail; - size_t data_len; - ssize_t ret; - - if (!(filp->f_flags & O_NONBLOCK)) - wait_event_interruptible(kcs_bmc->queue, - kcs_bmc->data_in_avail); - - mutex_lock(&kcs_bmc->mutex); - - spin_lock_irq(&kcs_bmc->lock); - data_avail = kcs_bmc->data_in_avail; - if (data_avail) { - data_len = kcs_bmc->data_in_idx; - memcpy(kcs_bmc->kbuffer, kcs_bmc->data_in, data_len); - } - 
spin_unlock_irq(&kcs_bmc->lock); - - if (!data_avail) { - ret = -EAGAIN; - goto out_unlock; - } - - if (count < data_len) { - pr_err("channel=%u with too large data : %zu\n", - kcs_bmc->channel, data_len); - - spin_lock_irq(&kcs_bmc->lock); - kcs_force_abort(kcs_bmc); - spin_unlock_irq(&kcs_bmc->lock); - - ret = -EOVERFLOW; - goto out_unlock; - } - - if (copy_to_user(buf, kcs_bmc->kbuffer, data_len)) { - ret = -EFAULT; - goto out_unlock; - } - - ret = data_len; - - spin_lock_irq(&kcs_bmc->lock); - if (kcs_bmc->phase == KCS_PHASE_WRITE_DONE) { - kcs_bmc->phase = KCS_PHASE_WAIT_READ; - kcs_bmc->data_in_avail = false; - kcs_bmc->data_in_idx = 0; - } else { - ret = -EAGAIN; - } - spin_unlock_irq(&kcs_bmc->lock); - -out_unlock: - mutex_unlock(&kcs_bmc->mutex); - - return ret; -} - -static ssize_t kcs_bmc_write(struct file *filp, const char __user *buf, - size_t count, loff_t *ppos) -{ - struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); - ssize_t ret; - - /* a minimum response size '3' : netfn + cmd + ccode */ - if (count < 3 || count > KCS_MSG_BUFSIZ) - return -EINVAL; - - mutex_lock(&kcs_bmc->mutex); - - if (copy_from_user(kcs_bmc->kbuffer, buf, count)) { - ret = -EFAULT; - goto out_unlock; - } - - spin_lock_irq(&kcs_bmc->lock); - if (kcs_bmc->phase == KCS_PHASE_WAIT_READ) { - kcs_bmc->phase = KCS_PHASE_READ; - kcs_bmc->data_out_idx = 1; - kcs_bmc->data_out_len = count; - memcpy(kcs_bmc->data_out, kcs_bmc->kbuffer, count); - kcs_bmc_write_data(kcs_bmc, kcs_bmc->data_out[0]); - ret = count; - } else { - ret = -EINVAL; - } - spin_unlock_irq(&kcs_bmc->lock); - -out_unlock: - mutex_unlock(&kcs_bmc->mutex); - - return ret; -} - -static long kcs_bmc_ioctl(struct file *filp, unsigned int cmd, - unsigned long arg) -{ - struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); - long ret = 0; - - spin_lock_irq(&kcs_bmc->lock); - - switch (cmd) { - case IPMI_BMC_IOCTL_SET_SMS_ATN: - kcs_bmc_update_status(kcs_bmc, KCS_STATUS_SMS_ATN, KCS_STATUS_SMS_ATN); - break; - - case IPMI_BMC_IOCTL_CLEAR_SMS_ATN: - kcs_bmc_update_status(kcs_bmc, KCS_STATUS_SMS_ATN, 0); - break; - - case IPMI_BMC_IOCTL_FORCE_ABORT: - kcs_force_abort(kcs_bmc); - break; - - default: - ret = -EINVAL; - break; - } - - spin_unlock_irq(&kcs_bmc->lock); - - return ret; -} - -static int kcs_bmc_release(struct inode *inode, struct file *filp) -{ - struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); - - spin_lock_irq(&kcs_bmc->lock); - kcs_bmc->running = 0; - kcs_force_abort(kcs_bmc); - spin_unlock_irq(&kcs_bmc->lock); - - return 0; -} - -static const struct file_operations kcs_bmc_fops = { - .owner = THIS_MODULE, - .open = kcs_bmc_open, - .read = kcs_bmc_read, - .write = kcs_bmc_write, - .release = kcs_bmc_release, - .poll = kcs_bmc_poll, - .unlocked_ioctl = kcs_bmc_ioctl, -}; - +struct kcs_bmc *kcs_bmc_ipmi_alloc(struct device *dev, int sizeof_priv, u32 channel); struct kcs_bmc *kcs_bmc_alloc(struct device *dev, int sizeof_priv, u32 channel) { - struct kcs_bmc *kcs_bmc; - - kcs_bmc = devm_kzalloc(dev, sizeof(*kcs_bmc) + sizeof_priv, GFP_KERNEL); - if (!kcs_bmc) - return NULL; - - spin_lock_init(&kcs_bmc->lock); - kcs_bmc->channel = channel; - - mutex_init(&kcs_bmc->mutex); - init_waitqueue_head(&kcs_bmc->queue); - - kcs_bmc->data_in = devm_kmalloc(dev, KCS_MSG_BUFSIZ, GFP_KERNEL); - kcs_bmc->data_out = devm_kmalloc(dev, KCS_MSG_BUFSIZ, GFP_KERNEL); - kcs_bmc->kbuffer = devm_kmalloc(dev, KCS_MSG_BUFSIZ, GFP_KERNEL); - - kcs_bmc->miscdev.minor = MISC_DYNAMIC_MINOR; - kcs_bmc->miscdev.name = devm_kasprintf(dev, GFP_KERNEL, "%s%u", - DEVICE_NAME, channel); - if 
(!kcs_bmc->data_in || !kcs_bmc->data_out || !kcs_bmc->kbuffer || - !kcs_bmc->miscdev.name) - return NULL; - kcs_bmc->miscdev.fops = &kcs_bmc_fops; - - return kcs_bmc; + return kcs_bmc_ipmi_alloc(dev, sizeof_priv, channel); } EXPORT_SYMBOL(kcs_bmc_alloc); diff --git a/drivers/char/ipmi/kcs_bmc.h b/drivers/char/ipmi/kcs_bmc.h index 970f53892f2d..febea0c8deb4 100644 --- a/drivers/char/ipmi/kcs_bmc.h +++ b/drivers/char/ipmi/kcs_bmc.h @@ -104,6 +104,12 @@ static inline void *kcs_bmc_priv(struct kcs_bmc *kcs_bmc) } int kcs_bmc_handle_event(struct kcs_bmc *kcs_bmc); -struct kcs_bmc *kcs_bmc_alloc(struct device *dev, int sizeof_priv, - u32 channel); +struct kcs_bmc *kcs_bmc_alloc(struct device *dev, int sizeof_priv, u32 channel); + +u8 kcs_bmc_read_data(struct kcs_bmc *kcs_bmc); +void kcs_bmc_write_data(struct kcs_bmc *kcs_bmc, u8 data); +u8 kcs_bmc_read_status(struct kcs_bmc *kcs_bmc); +void kcs_bmc_write_status(struct kcs_bmc *kcs_bmc, u8 data); +void kcs_bmc_update_status(struct kcs_bmc *kcs_bmc, u8 mask, u8 val); + #endif /* __KCS_BMC_H__ */ diff --git a/drivers/char/ipmi/kcs_bmc_cdev_ipmi.c b/drivers/char/ipmi/kcs_bmc_cdev_ipmi.c new file mode 100644 index 000000000000..82c77994e481 --- /dev/null +++ b/drivers/char/ipmi/kcs_bmc_cdev_ipmi.c @@ -0,0 +1,428 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2015-2018, Intel Corporation. + */ + +#define pr_fmt(fmt) "kcs-bmc: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "kcs_bmc.h" + +#define DEVICE_NAME "ipmi-kcs" + +#define KCS_MSG_BUFSIZ 1000 + +#define KCS_ZERO_DATA 0 + + +/* IPMI 2.0 - Table 9-1, KCS Interface Status Register Bits */ +#define KCS_STATUS_STATE(state) (state << 6) +#define KCS_STATUS_STATE_MASK GENMASK(7, 6) +#define KCS_STATUS_CMD_DAT BIT(3) +#define KCS_STATUS_SMS_ATN BIT(2) +#define KCS_STATUS_IBF BIT(1) +#define KCS_STATUS_OBF BIT(0) + +/* IPMI 2.0 - Table 9-2, KCS Interface State Bits */ +enum kcs_states { + IDLE_STATE = 0, + READ_STATE = 1, + WRITE_STATE = 2, + ERROR_STATE = 3, +}; + +/* IPMI 2.0 - Table 9-3, KCS Interface Control Codes */ +#define KCS_CMD_GET_STATUS_ABORT 0x60 +#define KCS_CMD_WRITE_START 0x61 +#define KCS_CMD_WRITE_END 0x62 +#define KCS_CMD_READ_BYTE 0x68 + +static inline void set_state(struct kcs_bmc *kcs_bmc, u8 state) +{ + kcs_bmc_update_status(kcs_bmc, KCS_STATUS_STATE_MASK, + KCS_STATUS_STATE(state)); +} + +static void kcs_bmc_ipmi_force_abort(struct kcs_bmc *kcs_bmc) +{ + set_state(kcs_bmc, ERROR_STATE); + kcs_bmc_read_data(kcs_bmc); + kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); + + kcs_bmc->phase = KCS_PHASE_ERROR; + kcs_bmc->data_in_avail = false; + kcs_bmc->data_in_idx = 0; +} + +static void kcs_bmc_ipmi_handle_data(struct kcs_bmc *kcs_bmc) +{ + u8 data; + + switch (kcs_bmc->phase) { + case KCS_PHASE_WRITE_START: + kcs_bmc->phase = KCS_PHASE_WRITE_DATA; + fallthrough; + + case KCS_PHASE_WRITE_DATA: + if (kcs_bmc->data_in_idx < KCS_MSG_BUFSIZ) { + set_state(kcs_bmc, WRITE_STATE); + kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); + kcs_bmc->data_in[kcs_bmc->data_in_idx++] = + kcs_bmc_read_data(kcs_bmc); + } else { + kcs_bmc_ipmi_force_abort(kcs_bmc); + kcs_bmc->error = KCS_LENGTH_ERROR; + } + break; + + case KCS_PHASE_WRITE_END_CMD: + if (kcs_bmc->data_in_idx < KCS_MSG_BUFSIZ) { + set_state(kcs_bmc, READ_STATE); + kcs_bmc->data_in[kcs_bmc->data_in_idx++] = + kcs_bmc_read_data(kcs_bmc); + kcs_bmc->phase = KCS_PHASE_WRITE_DONE; + kcs_bmc->data_in_avail = true; + wake_up_interruptible(&kcs_bmc->queue); + } else { + 
kcs_bmc_ipmi_force_abort(kcs_bmc); + kcs_bmc->error = KCS_LENGTH_ERROR; + } + break; + + case KCS_PHASE_READ: + if (kcs_bmc->data_out_idx == kcs_bmc->data_out_len) + set_state(kcs_bmc, IDLE_STATE); + + data = kcs_bmc_read_data(kcs_bmc); + if (data != KCS_CMD_READ_BYTE) { + set_state(kcs_bmc, ERROR_STATE); + kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); + break; + } + + if (kcs_bmc->data_out_idx == kcs_bmc->data_out_len) { + kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); + kcs_bmc->phase = KCS_PHASE_IDLE; + break; + } + + kcs_bmc_write_data(kcs_bmc, + kcs_bmc->data_out[kcs_bmc->data_out_idx++]); + break; + + case KCS_PHASE_ABORT_ERROR1: + set_state(kcs_bmc, READ_STATE); + kcs_bmc_read_data(kcs_bmc); + kcs_bmc_write_data(kcs_bmc, kcs_bmc->error); + kcs_bmc->phase = KCS_PHASE_ABORT_ERROR2; + break; + + case KCS_PHASE_ABORT_ERROR2: + set_state(kcs_bmc, IDLE_STATE); + kcs_bmc_read_data(kcs_bmc); + kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); + kcs_bmc->phase = KCS_PHASE_IDLE; + break; + + default: + kcs_bmc_ipmi_force_abort(kcs_bmc); + break; + } +} + +static void kcs_bmc_ipmi_handle_cmd(struct kcs_bmc *kcs_bmc) +{ + u8 cmd; + + set_state(kcs_bmc, WRITE_STATE); + kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); + + cmd = kcs_bmc_read_data(kcs_bmc); + switch (cmd) { + case KCS_CMD_WRITE_START: + kcs_bmc->phase = KCS_PHASE_WRITE_START; + kcs_bmc->error = KCS_NO_ERROR; + kcs_bmc->data_in_avail = false; + kcs_bmc->data_in_idx = 0; + break; + + case KCS_CMD_WRITE_END: + if (kcs_bmc->phase != KCS_PHASE_WRITE_DATA) { + kcs_bmc_ipmi_force_abort(kcs_bmc); + break; + } + + kcs_bmc->phase = KCS_PHASE_WRITE_END_CMD; + break; + + case KCS_CMD_GET_STATUS_ABORT: + if (kcs_bmc->error == KCS_NO_ERROR) + kcs_bmc->error = KCS_ABORTED_BY_COMMAND; + + kcs_bmc->phase = KCS_PHASE_ABORT_ERROR1; + kcs_bmc->data_in_avail = false; + kcs_bmc->data_in_idx = 0; + break; + + default: + kcs_bmc_ipmi_force_abort(kcs_bmc); + kcs_bmc->error = KCS_ILLEGAL_CONTROL_CODE; + break; + } +} + +int kcs_bmc_ipmi_event(struct kcs_bmc *kcs_bmc); +int kcs_bmc_ipmi_event(struct kcs_bmc *kcs_bmc) +{ + unsigned long flags; + int ret = -ENODATA; + u8 status; + + spin_lock_irqsave(&kcs_bmc->lock, flags); + + status = kcs_bmc_read_status(kcs_bmc); + if (status & KCS_STATUS_IBF) { + if (!kcs_bmc->running) + kcs_bmc_ipmi_force_abort(kcs_bmc); + else if (status & KCS_STATUS_CMD_DAT) + kcs_bmc_ipmi_handle_cmd(kcs_bmc); + else + kcs_bmc_ipmi_handle_data(kcs_bmc); + + ret = 0; + } + + spin_unlock_irqrestore(&kcs_bmc->lock, flags); + + return ret; +} +EXPORT_SYMBOL(kcs_bmc_ipmi_event); + +static inline struct kcs_bmc *to_kcs_bmc(struct file *filp) +{ + return container_of(filp->private_data, struct kcs_bmc, miscdev); +} + +static int kcs_bmc_ipmi_open(struct inode *inode, struct file *filp) +{ + struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); + int ret = 0; + + spin_lock_irq(&kcs_bmc->lock); + if (!kcs_bmc->running) + kcs_bmc->running = 1; + else + ret = -EBUSY; + spin_unlock_irq(&kcs_bmc->lock); + + return ret; +} + +static __poll_t kcs_bmc_ipmi_poll(struct file *filp, poll_table *wait) +{ + struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); + __poll_t mask = 0; + + poll_wait(filp, &kcs_bmc->queue, wait); + + spin_lock_irq(&kcs_bmc->lock); + if (kcs_bmc->data_in_avail) + mask |= EPOLLIN; + spin_unlock_irq(&kcs_bmc->lock); + + return mask; +} + +static ssize_t kcs_bmc_ipmi_read(struct file *filp, char __user *buf, + size_t count, loff_t *ppos) +{ + struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); + bool data_avail; + size_t data_len; + ssize_t ret; + + if (!(filp->f_flags & 
O_NONBLOCK)) + wait_event_interruptible(kcs_bmc->queue, + kcs_bmc->data_in_avail); + + mutex_lock(&kcs_bmc->mutex); + + spin_lock_irq(&kcs_bmc->lock); + data_avail = kcs_bmc->data_in_avail; + if (data_avail) { + data_len = kcs_bmc->data_in_idx; + memcpy(kcs_bmc->kbuffer, kcs_bmc->data_in, data_len); + } + spin_unlock_irq(&kcs_bmc->lock); + + if (!data_avail) { + ret = -EAGAIN; + goto out_unlock; + } + + if (count < data_len) { + pr_err("channel=%u with too large data : %zu\n", + kcs_bmc->channel, data_len); + + spin_lock_irq(&kcs_bmc->lock); + kcs_bmc_ipmi_force_abort(kcs_bmc); + spin_unlock_irq(&kcs_bmc->lock); + + ret = -EOVERFLOW; + goto out_unlock; + } + + if (copy_to_user(buf, kcs_bmc->kbuffer, data_len)) { + ret = -EFAULT; + goto out_unlock; + } + + ret = data_len; + + spin_lock_irq(&kcs_bmc->lock); + if (kcs_bmc->phase == KCS_PHASE_WRITE_DONE) { + kcs_bmc->phase = KCS_PHASE_WAIT_READ; + kcs_bmc->data_in_avail = false; + kcs_bmc->data_in_idx = 0; + } else { + ret = -EAGAIN; + } + spin_unlock_irq(&kcs_bmc->lock); + +out_unlock: + mutex_unlock(&kcs_bmc->mutex); + + return ret; +} + +static ssize_t kcs_bmc_ipmi_write(struct file *filp, const char __user *buf, + size_t count, loff_t *ppos) +{ + struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); + ssize_t ret; + + /* a minimum response size '3' : netfn + cmd + ccode */ + if (count < 3 || count > KCS_MSG_BUFSIZ) + return -EINVAL; + + mutex_lock(&kcs_bmc->mutex); + + if (copy_from_user(kcs_bmc->kbuffer, buf, count)) { + ret = -EFAULT; + goto out_unlock; + } + + spin_lock_irq(&kcs_bmc->lock); + if (kcs_bmc->phase == KCS_PHASE_WAIT_READ) { + kcs_bmc->phase = KCS_PHASE_READ; + kcs_bmc->data_out_idx = 1; + kcs_bmc->data_out_len = count; + memcpy(kcs_bmc->data_out, kcs_bmc->kbuffer, count); + kcs_bmc_write_data(kcs_bmc, kcs_bmc->data_out[0]); + ret = count; + } else { + ret = -EINVAL; + } + spin_unlock_irq(&kcs_bmc->lock); + +out_unlock: + mutex_unlock(&kcs_bmc->mutex); + + return ret; +} + +static long kcs_bmc_ipmi_ioctl(struct file *filp, unsigned int cmd, + unsigned long arg) +{ + struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); + long ret = 0; + + spin_lock_irq(&kcs_bmc->lock); + + switch (cmd) { + case IPMI_BMC_IOCTL_SET_SMS_ATN: + kcs_bmc_update_status(kcs_bmc, KCS_STATUS_SMS_ATN, KCS_STATUS_SMS_ATN); + break; + + case IPMI_BMC_IOCTL_CLEAR_SMS_ATN: + kcs_bmc_update_status(kcs_bmc, KCS_STATUS_SMS_ATN, 0); + break; + + case IPMI_BMC_IOCTL_FORCE_ABORT: + kcs_bmc_ipmi_force_abort(kcs_bmc); + break; + + default: + ret = -EINVAL; + break; + } + + spin_unlock_irq(&kcs_bmc->lock); + + return ret; +} + +static int kcs_bmc_ipmi_release(struct inode *inode, struct file *filp) +{ + struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); + + spin_lock_irq(&kcs_bmc->lock); + kcs_bmc->running = 0; + kcs_bmc_ipmi_force_abort(kcs_bmc); + spin_unlock_irq(&kcs_bmc->lock); + + return 0; +} + +static const struct file_operations kcs_bmc_fops = { + .owner = THIS_MODULE, + .open = kcs_bmc_ipmi_open, + .read = kcs_bmc_ipmi_read, + .write = kcs_bmc_ipmi_write, + .release = kcs_bmc_ipmi_release, + .poll = kcs_bmc_ipmi_poll, + .unlocked_ioctl = kcs_bmc_ipmi_ioctl, +}; + +struct kcs_bmc *kcs_bmc_ipmi_alloc(struct device *dev, int sizeof_priv, u32 channel); +struct kcs_bmc *kcs_bmc_ipmi_alloc(struct device *dev, int sizeof_priv, u32 channel) +{ + struct kcs_bmc *kcs_bmc; + + kcs_bmc = devm_kzalloc(dev, sizeof(*kcs_bmc) + sizeof_priv, GFP_KERNEL); + if (!kcs_bmc) + return NULL; + + spin_lock_init(&kcs_bmc->lock); + kcs_bmc->channel = channel; + + mutex_init(&kcs_bmc->mutex); + 
init_waitqueue_head(&kcs_bmc->queue); + + kcs_bmc->data_in = devm_kmalloc(dev, KCS_MSG_BUFSIZ, GFP_KERNEL); + kcs_bmc->data_out = devm_kmalloc(dev, KCS_MSG_BUFSIZ, GFP_KERNEL); + kcs_bmc->kbuffer = devm_kmalloc(dev, KCS_MSG_BUFSIZ, GFP_KERNEL); + + kcs_bmc->miscdev.minor = MISC_DYNAMIC_MINOR; + kcs_bmc->miscdev.name = devm_kasprintf(dev, GFP_KERNEL, "%s%u", + DEVICE_NAME, channel); + if (!kcs_bmc->data_in || !kcs_bmc->data_out || !kcs_bmc->kbuffer || + !kcs_bmc->miscdev.name) + return NULL; + kcs_bmc->miscdev.fops = &kcs_bmc_fops; + + return kcs_bmc; +} +EXPORT_SYMBOL(kcs_bmc_ipmi_alloc); + +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Haiyue Wang "); +MODULE_DESCRIPTION("KCS BMC to handle the IPMI request from system software"); -- 2.30.2
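
For readers who want to see the userspace side of the interface this patch relocates, here is a minimal, hypothetical sketch of a BMC-side handler driving the misc device that kcs_bmc_ipmi_alloc() registers (named "ipmi-kcs<channel>"). It is not part of the patch: the device path, the channel number and the trivial "invalid command" reply are assumptions for illustration, while the poll/read/write/ioctl semantics follow the file_operations shown in the diff above, including the three-byte minimum response (netfn + cmd + completion code) enforced by kcs_bmc_ipmi_write() and the IPMI_BMC_IOCTL_FORCE_ABORT ioctl from <linux/ipmi_bmc.h>.

/*
 * Hypothetical userspace handler for the /dev/ipmi-kcs<channel> chardev
 * registered by kcs_bmc_ipmi_alloc(). Not part of the patch; the device
 * path and the "reject everything" reply are illustrative assumptions.
 */
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/ipmi_bmc.h>

int main(void)
{
	unsigned char req[1000], rsp[3];	/* KCS_MSG_BUFSIZ is 1000 in the driver */
	struct pollfd pfd;
	ssize_t len;
	int fd;

	fd = open("/dev/ipmi-kcs3", O_RDWR);	/* channel number is platform-specific */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	pfd.fd = fd;
	pfd.events = POLLIN;

	for (;;) {
		/* Block until the host completes a KCS write transaction. */
		if (poll(&pfd, 1, -1) < 0)
			break;

		len = read(fd, req, sizeof(req));	/* netfn/lun, cmd, data... */
		if (len < 2)
			continue;

		/* Minimum reply: netfn + cmd + completion code (0xc1: invalid command). */
		rsp[0] = req[0] | 0x04;		/* response NetFn = request NetFn + 1 (bit 2 of the byte) */
		rsp[1] = req[1];
		rsp[2] = 0xc1;
		if (write(fd, rsp, sizeof(rsp)) != sizeof(rsp))
			ioctl(fd, IPMI_BMC_IOCTL_FORCE_ABORT);	/* abandon the transaction */
	}

	close(fd);
	return 0;
}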
-0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute3.internal (MEProxy); Tue, 08 Jun 2021 06:48:48 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aj.id.au; h=from :to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; s=fm3; bh=zzc7gB5yDI9rO hlQvK5HVCt4JFBNmFnr0uke3sSm5CQ=; b=fXZ724+XBrELPJ4IK/tFMZBUOO8vJ Z6nK/D/NGU8EacylPpG8KzOH6ZESQ6fxbHf4eVfrZj3qwp1bRN0VH49AepEdmVeV IBD3P79iKkRaBvbBG6PkzWnh72Gt25XBW6fWXzud7u2Zd+Vg+BdGyrxyFIl0a5h/ /FrRsYcKZaqXh2m6CaceX0WDUH+cuGmEm+tihr8BfKDQHzaZEA1QCdDB64kX6bmR fGVNmoL288yzGJcCiBft6nzc8i2wbQjavwc44WUYAyQhQ8ws+vDqSdRg9OUG8qob pKrnqsRCKm9g3NkaPA+8mGAiu1dtHT3GoELkHTTIZW09GodnAIPYCKUKw== DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm3; bh=zzc7gB5yDI9rOhlQvK5HVCt4JFBNmFnr0uke3sSm5CQ=; b=rv9gU+In sqRt6Jz/RZAY4gMUyHg2POfKDbrtm6X4LtPXMF/B/VVg6p2LINBYsXoUUgyK5Mji ygx6b83+VBrg41GRu0oblH6iMCOrXMtp2xOXOM3vEwuk2TER3+9N3/+qFwvuRNz7 V1ASWXgejBSPNGeGJKKzPKD4EwlYyym8kpmAg4tR57KovDXJobUhe/NYkFD+yx7o gzm/r6zZtVvZCZbybDlUymGlWYGRAwKpz+pKycPtf8JFa1B6F+RvD59ekdEIN8dr uAmNDg9Ztf/A3CND4+ShlejRMmkMt+Zon2IlnvUkF7gZB5eHAQrttAHsDTeFdVqn WXb1Y0YE9aAM6g== X-ME-Sender: X-ME-Received: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrfedtledgudejucetufdoteggodetrfdotf fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen uceurghilhhouhhtmecufedttdenucenucfjughrpefhvffufffkofgjfhgggfestdekre dtredttdenucfhrhhomheptehnughrvgifucflvghffhgvrhihuceorghnughrvgifsegr jhdrihgurdgruheqnecuggftrfgrthhtvghrnhepjefgvdevheetkeevgeegleelgfelte etjeffleffvdduudevieffgeetleevhfetnecuvehluhhsthgvrhfuihiivgeptdenucfr rghrrghmpehmrghilhhfrhhomheprghnughrvgifsegrjhdrihgurdgruh X-ME-Proxy: Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue, 8 Jun 2021 06:48:39 -0400 (EDT) From: Andrew Jeffery To: openipmi-developer@lists.sourceforge.net, openbmc@lists.ozlabs.org, minyard@acm.org Subject: [PATCH v4 04/16] ipmi: kcs_bmc: Split out kcs_bmc_cdev_ipmi Date: Tue, 8 Jun 2021 20:17:45 +0930 Message-Id: <20210608104757.582199-5-andrew@aj.id.au> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210608104757.582199-1-andrew@aj.id.au> References: <20210608104757.582199-1-andrew@aj.id.au> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-BeenThere: openbmc@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Development list for OpenBMC List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: devicetree@vger.kernel.org, tmaimon77@gmail.com, linux-aspeed@lists.ozlabs.org, KWLIU@nuvoton.com, avifishman70@gmail.com, venture@google.com, chiawei_wang@aspeedtech.com, linux-kernel@vger.kernel.org, tali.perry1@gmail.com, robh+dt@kernel.org, arnd@arndb.de, zweiss@equinix.com, linux-arm-kernel@lists.infradead.org, benjaminfair@google.com Errors-To: openbmc-bounces+openbmc=archiver.kernel.org@lists.ozlabs.org Sender: "openbmc" Take steps towards defining a coherent API to separate the KCS device drivers from the userspace interface. Decreasing the coupling will improve the separation of concerns and enable the introduction of alternative userspace interfaces. For now, simply split the chardev logic out to a separate file. The code continues to build into the same module. 
Signed-off-by: Andrew Jeffery Reviewed-by: Zev Weiss --- drivers/char/ipmi/Makefile | 2 +- drivers/char/ipmi/kcs_bmc.c | 423 +------------------------ drivers/char/ipmi/kcs_bmc.h | 10 +- drivers/char/ipmi/kcs_bmc_cdev_ipmi.c | 428 ++++++++++++++++++++++++++ 4 files changed, 451 insertions(+), 412 deletions(-) create mode 100644 drivers/char/ipmi/kcs_bmc_cdev_ipmi.c diff --git a/drivers/char/ipmi/Makefile b/drivers/char/ipmi/Makefile index 0822adc2ec41..a302bc865370 100644 --- a/drivers/char/ipmi/Makefile +++ b/drivers/char/ipmi/Makefile @@ -22,7 +22,7 @@ obj-$(CONFIG_IPMI_SSIF) += ipmi_ssif.o obj-$(CONFIG_IPMI_POWERNV) += ipmi_powernv.o obj-$(CONFIG_IPMI_WATCHDOG) += ipmi_watchdog.o obj-$(CONFIG_IPMI_POWEROFF) += ipmi_poweroff.o -obj-$(CONFIG_IPMI_KCS_BMC) += kcs_bmc.o +obj-$(CONFIG_IPMI_KCS_BMC) += kcs_bmc.o kcs_bmc_cdev_ipmi.o obj-$(CONFIG_ASPEED_BT_IPMI_BMC) += bt-bmc.o obj-$(CONFIG_ASPEED_KCS_IPMI_BMC) += kcs_bmc_aspeed.o obj-$(CONFIG_NPCM7XX_KCS_IPMI_BMC) += kcs_bmc_npcm7xx.o diff --git a/drivers/char/ipmi/kcs_bmc.c b/drivers/char/ipmi/kcs_bmc.c index c4336c1f2d6d..ef5c48ffe74a 100644 --- a/drivers/char/ipmi/kcs_bmc.c +++ b/drivers/char/ipmi/kcs_bmc.c @@ -3,446 +3,51 @@ * Copyright (c) 2015-2018, Intel Corporation. */ -#define pr_fmt(fmt) "kcs-bmc: " fmt - -#include -#include -#include #include -#include -#include -#include -#include #include "kcs_bmc.h" -#define DEVICE_NAME "ipmi-kcs" - -#define KCS_MSG_BUFSIZ 1000 - -#define KCS_ZERO_DATA 0 - - -/* IPMI 2.0 - Table 9-1, KCS Interface Status Register Bits */ -#define KCS_STATUS_STATE(state) (state << 6) -#define KCS_STATUS_STATE_MASK GENMASK(7, 6) -#define KCS_STATUS_CMD_DAT BIT(3) -#define KCS_STATUS_SMS_ATN BIT(2) -#define KCS_STATUS_IBF BIT(1) -#define KCS_STATUS_OBF BIT(0) - -/* IPMI 2.0 - Table 9-2, KCS Interface State Bits */ -enum kcs_states { - IDLE_STATE = 0, - READ_STATE = 1, - WRITE_STATE = 2, - ERROR_STATE = 3, -}; - -/* IPMI 2.0 - Table 9-3, KCS Interface Control Codes */ -#define KCS_CMD_GET_STATUS_ABORT 0x60 -#define KCS_CMD_WRITE_START 0x61 -#define KCS_CMD_WRITE_END 0x62 -#define KCS_CMD_READ_BYTE 0x68 - -static inline u8 kcs_bmc_read_data(struct kcs_bmc *kcs_bmc) +u8 kcs_bmc_read_data(struct kcs_bmc *kcs_bmc) { return kcs_bmc->io_inputb(kcs_bmc, kcs_bmc->ioreg.idr); } +EXPORT_SYMBOL(kcs_bmc_read_data); -static inline void kcs_bmc_write_data(struct kcs_bmc *kcs_bmc, u8 data) +void kcs_bmc_write_data(struct kcs_bmc *kcs_bmc, u8 data) { kcs_bmc->io_outputb(kcs_bmc, kcs_bmc->ioreg.odr, data); } +EXPORT_SYMBOL(kcs_bmc_write_data); -static inline u8 kcs_bmc_read_status(struct kcs_bmc *kcs_bmc) +u8 kcs_bmc_read_status(struct kcs_bmc *kcs_bmc) { return kcs_bmc->io_inputb(kcs_bmc, kcs_bmc->ioreg.str); } +EXPORT_SYMBOL(kcs_bmc_read_status); -static inline void kcs_bmc_write_status(struct kcs_bmc *kcs_bmc, u8 data) +void kcs_bmc_write_status(struct kcs_bmc *kcs_bmc, u8 data) { kcs_bmc->io_outputb(kcs_bmc, kcs_bmc->ioreg.str, data); } +EXPORT_SYMBOL(kcs_bmc_write_status); -static void kcs_bmc_update_status(struct kcs_bmc *kcs_bmc, u8 mask, u8 val) +void kcs_bmc_update_status(struct kcs_bmc *kcs_bmc, u8 mask, u8 val) { kcs_bmc->io_updateb(kcs_bmc, kcs_bmc->ioreg.str, mask, val); } +EXPORT_SYMBOL(kcs_bmc_update_status); -static inline void set_state(struct kcs_bmc *kcs_bmc, u8 state) -{ - kcs_bmc_update_status(kcs_bmc, KCS_STATUS_STATE_MASK, - KCS_STATUS_STATE(state)); -} - -static void kcs_force_abort(struct kcs_bmc *kcs_bmc) -{ - set_state(kcs_bmc, ERROR_STATE); - kcs_bmc_read_data(kcs_bmc); - kcs_bmc_write_data(kcs_bmc, 
KCS_ZERO_DATA); - - kcs_bmc->phase = KCS_PHASE_ERROR; - kcs_bmc->data_in_avail = false; - kcs_bmc->data_in_idx = 0; -} - -static void kcs_bmc_handle_data(struct kcs_bmc *kcs_bmc) -{ - u8 data; - - switch (kcs_bmc->phase) { - case KCS_PHASE_WRITE_START: - kcs_bmc->phase = KCS_PHASE_WRITE_DATA; - fallthrough; - - case KCS_PHASE_WRITE_DATA: - if (kcs_bmc->data_in_idx < KCS_MSG_BUFSIZ) { - set_state(kcs_bmc, WRITE_STATE); - kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); - kcs_bmc->data_in[kcs_bmc->data_in_idx++] = - kcs_bmc_read_data(kcs_bmc); - } else { - kcs_force_abort(kcs_bmc); - kcs_bmc->error = KCS_LENGTH_ERROR; - } - break; - - case KCS_PHASE_WRITE_END_CMD: - if (kcs_bmc->data_in_idx < KCS_MSG_BUFSIZ) { - set_state(kcs_bmc, READ_STATE); - kcs_bmc->data_in[kcs_bmc->data_in_idx++] = - kcs_bmc_read_data(kcs_bmc); - kcs_bmc->phase = KCS_PHASE_WRITE_DONE; - kcs_bmc->data_in_avail = true; - wake_up_interruptible(&kcs_bmc->queue); - } else { - kcs_force_abort(kcs_bmc); - kcs_bmc->error = KCS_LENGTH_ERROR; - } - break; - - case KCS_PHASE_READ: - if (kcs_bmc->data_out_idx == kcs_bmc->data_out_len) - set_state(kcs_bmc, IDLE_STATE); - - data = kcs_bmc_read_data(kcs_bmc); - if (data != KCS_CMD_READ_BYTE) { - set_state(kcs_bmc, ERROR_STATE); - kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); - break; - } - - if (kcs_bmc->data_out_idx == kcs_bmc->data_out_len) { - kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); - kcs_bmc->phase = KCS_PHASE_IDLE; - break; - } - - kcs_bmc_write_data(kcs_bmc, - kcs_bmc->data_out[kcs_bmc->data_out_idx++]); - break; - - case KCS_PHASE_ABORT_ERROR1: - set_state(kcs_bmc, READ_STATE); - kcs_bmc_read_data(kcs_bmc); - kcs_bmc_write_data(kcs_bmc, kcs_bmc->error); - kcs_bmc->phase = KCS_PHASE_ABORT_ERROR2; - break; - - case KCS_PHASE_ABORT_ERROR2: - set_state(kcs_bmc, IDLE_STATE); - kcs_bmc_read_data(kcs_bmc); - kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); - kcs_bmc->phase = KCS_PHASE_IDLE; - break; - - default: - kcs_force_abort(kcs_bmc); - break; - } -} - -static void kcs_bmc_handle_cmd(struct kcs_bmc *kcs_bmc) -{ - u8 cmd; - - set_state(kcs_bmc, WRITE_STATE); - kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); - - cmd = kcs_bmc_read_data(kcs_bmc); - switch (cmd) { - case KCS_CMD_WRITE_START: - kcs_bmc->phase = KCS_PHASE_WRITE_START; - kcs_bmc->error = KCS_NO_ERROR; - kcs_bmc->data_in_avail = false; - kcs_bmc->data_in_idx = 0; - break; - - case KCS_CMD_WRITE_END: - if (kcs_bmc->phase != KCS_PHASE_WRITE_DATA) { - kcs_force_abort(kcs_bmc); - break; - } - - kcs_bmc->phase = KCS_PHASE_WRITE_END_CMD; - break; - - case KCS_CMD_GET_STATUS_ABORT: - if (kcs_bmc->error == KCS_NO_ERROR) - kcs_bmc->error = KCS_ABORTED_BY_COMMAND; - - kcs_bmc->phase = KCS_PHASE_ABORT_ERROR1; - kcs_bmc->data_in_avail = false; - kcs_bmc->data_in_idx = 0; - break; - - default: - kcs_force_abort(kcs_bmc); - kcs_bmc->error = KCS_ILLEGAL_CONTROL_CODE; - break; - } -} - +int kcs_bmc_ipmi_event(struct kcs_bmc *kcs_bmc); int kcs_bmc_handle_event(struct kcs_bmc *kcs_bmc) { - unsigned long flags; - int ret = -ENODATA; - u8 status; - - spin_lock_irqsave(&kcs_bmc->lock, flags); - - status = kcs_bmc_read_status(kcs_bmc); - if (status & KCS_STATUS_IBF) { - if (!kcs_bmc->running) - kcs_force_abort(kcs_bmc); - else if (status & KCS_STATUS_CMD_DAT) - kcs_bmc_handle_cmd(kcs_bmc); - else - kcs_bmc_handle_data(kcs_bmc); - - ret = 0; - } - - spin_unlock_irqrestore(&kcs_bmc->lock, flags); - - return ret; + return kcs_bmc_ipmi_event(kcs_bmc); } EXPORT_SYMBOL(kcs_bmc_handle_event); -static inline struct kcs_bmc *to_kcs_bmc(struct file *filp) -{ - 
return container_of(filp->private_data, struct kcs_bmc, miscdev); -} - -static int kcs_bmc_open(struct inode *inode, struct file *filp) -{ - struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); - int ret = 0; - - spin_lock_irq(&kcs_bmc->lock); - if (!kcs_bmc->running) - kcs_bmc->running = 1; - else - ret = -EBUSY; - spin_unlock_irq(&kcs_bmc->lock); - - return ret; -} - -static __poll_t kcs_bmc_poll(struct file *filp, poll_table *wait) -{ - struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); - __poll_t mask = 0; - - poll_wait(filp, &kcs_bmc->queue, wait); - - spin_lock_irq(&kcs_bmc->lock); - if (kcs_bmc->data_in_avail) - mask |= EPOLLIN; - spin_unlock_irq(&kcs_bmc->lock); - - return mask; -} - -static ssize_t kcs_bmc_read(struct file *filp, char __user *buf, - size_t count, loff_t *ppos) -{ - struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); - bool data_avail; - size_t data_len; - ssize_t ret; - - if (!(filp->f_flags & O_NONBLOCK)) - wait_event_interruptible(kcs_bmc->queue, - kcs_bmc->data_in_avail); - - mutex_lock(&kcs_bmc->mutex); - - spin_lock_irq(&kcs_bmc->lock); - data_avail = kcs_bmc->data_in_avail; - if (data_avail) { - data_len = kcs_bmc->data_in_idx; - memcpy(kcs_bmc->kbuffer, kcs_bmc->data_in, data_len); - } - spin_unlock_irq(&kcs_bmc->lock); - - if (!data_avail) { - ret = -EAGAIN; - goto out_unlock; - } - - if (count < data_len) { - pr_err("channel=%u with too large data : %zu\n", - kcs_bmc->channel, data_len); - - spin_lock_irq(&kcs_bmc->lock); - kcs_force_abort(kcs_bmc); - spin_unlock_irq(&kcs_bmc->lock); - - ret = -EOVERFLOW; - goto out_unlock; - } - - if (copy_to_user(buf, kcs_bmc->kbuffer, data_len)) { - ret = -EFAULT; - goto out_unlock; - } - - ret = data_len; - - spin_lock_irq(&kcs_bmc->lock); - if (kcs_bmc->phase == KCS_PHASE_WRITE_DONE) { - kcs_bmc->phase = KCS_PHASE_WAIT_READ; - kcs_bmc->data_in_avail = false; - kcs_bmc->data_in_idx = 0; - } else { - ret = -EAGAIN; - } - spin_unlock_irq(&kcs_bmc->lock); - -out_unlock: - mutex_unlock(&kcs_bmc->mutex); - - return ret; -} - -static ssize_t kcs_bmc_write(struct file *filp, const char __user *buf, - size_t count, loff_t *ppos) -{ - struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); - ssize_t ret; - - /* a minimum response size '3' : netfn + cmd + ccode */ - if (count < 3 || count > KCS_MSG_BUFSIZ) - return -EINVAL; - - mutex_lock(&kcs_bmc->mutex); - - if (copy_from_user(kcs_bmc->kbuffer, buf, count)) { - ret = -EFAULT; - goto out_unlock; - } - - spin_lock_irq(&kcs_bmc->lock); - if (kcs_bmc->phase == KCS_PHASE_WAIT_READ) { - kcs_bmc->phase = KCS_PHASE_READ; - kcs_bmc->data_out_idx = 1; - kcs_bmc->data_out_len = count; - memcpy(kcs_bmc->data_out, kcs_bmc->kbuffer, count); - kcs_bmc_write_data(kcs_bmc, kcs_bmc->data_out[0]); - ret = count; - } else { - ret = -EINVAL; - } - spin_unlock_irq(&kcs_bmc->lock); - -out_unlock: - mutex_unlock(&kcs_bmc->mutex); - - return ret; -} - -static long kcs_bmc_ioctl(struct file *filp, unsigned int cmd, - unsigned long arg) -{ - struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); - long ret = 0; - - spin_lock_irq(&kcs_bmc->lock); - - switch (cmd) { - case IPMI_BMC_IOCTL_SET_SMS_ATN: - kcs_bmc_update_status(kcs_bmc, KCS_STATUS_SMS_ATN, KCS_STATUS_SMS_ATN); - break; - - case IPMI_BMC_IOCTL_CLEAR_SMS_ATN: - kcs_bmc_update_status(kcs_bmc, KCS_STATUS_SMS_ATN, 0); - break; - - case IPMI_BMC_IOCTL_FORCE_ABORT: - kcs_force_abort(kcs_bmc); - break; - - default: - ret = -EINVAL; - break; - } - - spin_unlock_irq(&kcs_bmc->lock); - - return ret; -} - -static int kcs_bmc_release(struct inode *inode, struct file *filp) -{ - struct kcs_bmc 
*kcs_bmc = to_kcs_bmc(filp); - - spin_lock_irq(&kcs_bmc->lock); - kcs_bmc->running = 0; - kcs_force_abort(kcs_bmc); - spin_unlock_irq(&kcs_bmc->lock); - - return 0; -} - -static const struct file_operations kcs_bmc_fops = { - .owner = THIS_MODULE, - .open = kcs_bmc_open, - .read = kcs_bmc_read, - .write = kcs_bmc_write, - .release = kcs_bmc_release, - .poll = kcs_bmc_poll, - .unlocked_ioctl = kcs_bmc_ioctl, -}; - +struct kcs_bmc *kcs_bmc_ipmi_alloc(struct device *dev, int sizeof_priv, u32 channel); struct kcs_bmc *kcs_bmc_alloc(struct device *dev, int sizeof_priv, u32 channel) { - struct kcs_bmc *kcs_bmc; - - kcs_bmc = devm_kzalloc(dev, sizeof(*kcs_bmc) + sizeof_priv, GFP_KERNEL); - if (!kcs_bmc) - return NULL; - - spin_lock_init(&kcs_bmc->lock); - kcs_bmc->channel = channel; - - mutex_init(&kcs_bmc->mutex); - init_waitqueue_head(&kcs_bmc->queue); - - kcs_bmc->data_in = devm_kmalloc(dev, KCS_MSG_BUFSIZ, GFP_KERNEL); - kcs_bmc->data_out = devm_kmalloc(dev, KCS_MSG_BUFSIZ, GFP_KERNEL); - kcs_bmc->kbuffer = devm_kmalloc(dev, KCS_MSG_BUFSIZ, GFP_KERNEL); - - kcs_bmc->miscdev.minor = MISC_DYNAMIC_MINOR; - kcs_bmc->miscdev.name = devm_kasprintf(dev, GFP_KERNEL, "%s%u", - DEVICE_NAME, channel); - if (!kcs_bmc->data_in || !kcs_bmc->data_out || !kcs_bmc->kbuffer || - !kcs_bmc->miscdev.name) - return NULL; - kcs_bmc->miscdev.fops = &kcs_bmc_fops; - - return kcs_bmc; + return kcs_bmc_ipmi_alloc(dev, sizeof_priv, channel); } EXPORT_SYMBOL(kcs_bmc_alloc); diff --git a/drivers/char/ipmi/kcs_bmc.h b/drivers/char/ipmi/kcs_bmc.h index 970f53892f2d..febea0c8deb4 100644 --- a/drivers/char/ipmi/kcs_bmc.h +++ b/drivers/char/ipmi/kcs_bmc.h @@ -104,6 +104,12 @@ static inline void *kcs_bmc_priv(struct kcs_bmc *kcs_bmc) } int kcs_bmc_handle_event(struct kcs_bmc *kcs_bmc); -struct kcs_bmc *kcs_bmc_alloc(struct device *dev, int sizeof_priv, - u32 channel); +struct kcs_bmc *kcs_bmc_alloc(struct device *dev, int sizeof_priv, u32 channel); + +u8 kcs_bmc_read_data(struct kcs_bmc *kcs_bmc); +void kcs_bmc_write_data(struct kcs_bmc *kcs_bmc, u8 data); +u8 kcs_bmc_read_status(struct kcs_bmc *kcs_bmc); +void kcs_bmc_write_status(struct kcs_bmc *kcs_bmc, u8 data); +void kcs_bmc_update_status(struct kcs_bmc *kcs_bmc, u8 mask, u8 val); + #endif /* __KCS_BMC_H__ */ diff --git a/drivers/char/ipmi/kcs_bmc_cdev_ipmi.c b/drivers/char/ipmi/kcs_bmc_cdev_ipmi.c new file mode 100644 index 000000000000..82c77994e481 --- /dev/null +++ b/drivers/char/ipmi/kcs_bmc_cdev_ipmi.c @@ -0,0 +1,428 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2015-2018, Intel Corporation. 
+ */ + +#define pr_fmt(fmt) "kcs-bmc: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "kcs_bmc.h" + +#define DEVICE_NAME "ipmi-kcs" + +#define KCS_MSG_BUFSIZ 1000 + +#define KCS_ZERO_DATA 0 + + +/* IPMI 2.0 - Table 9-1, KCS Interface Status Register Bits */ +#define KCS_STATUS_STATE(state) (state << 6) +#define KCS_STATUS_STATE_MASK GENMASK(7, 6) +#define KCS_STATUS_CMD_DAT BIT(3) +#define KCS_STATUS_SMS_ATN BIT(2) +#define KCS_STATUS_IBF BIT(1) +#define KCS_STATUS_OBF BIT(0) + +/* IPMI 2.0 - Table 9-2, KCS Interface State Bits */ +enum kcs_states { + IDLE_STATE = 0, + READ_STATE = 1, + WRITE_STATE = 2, + ERROR_STATE = 3, +}; + +/* IPMI 2.0 - Table 9-3, KCS Interface Control Codes */ +#define KCS_CMD_GET_STATUS_ABORT 0x60 +#define KCS_CMD_WRITE_START 0x61 +#define KCS_CMD_WRITE_END 0x62 +#define KCS_CMD_READ_BYTE 0x68 + +static inline void set_state(struct kcs_bmc *kcs_bmc, u8 state) +{ + kcs_bmc_update_status(kcs_bmc, KCS_STATUS_STATE_MASK, + KCS_STATUS_STATE(state)); +} + +static void kcs_bmc_ipmi_force_abort(struct kcs_bmc *kcs_bmc) +{ + set_state(kcs_bmc, ERROR_STATE); + kcs_bmc_read_data(kcs_bmc); + kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); + + kcs_bmc->phase = KCS_PHASE_ERROR; + kcs_bmc->data_in_avail = false; + kcs_bmc->data_in_idx = 0; +} + +static void kcs_bmc_ipmi_handle_data(struct kcs_bmc *kcs_bmc) +{ + u8 data; + + switch (kcs_bmc->phase) { + case KCS_PHASE_WRITE_START: + kcs_bmc->phase = KCS_PHASE_WRITE_DATA; + fallthrough; + + case KCS_PHASE_WRITE_DATA: + if (kcs_bmc->data_in_idx < KCS_MSG_BUFSIZ) { + set_state(kcs_bmc, WRITE_STATE); + kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); + kcs_bmc->data_in[kcs_bmc->data_in_idx++] = + kcs_bmc_read_data(kcs_bmc); + } else { + kcs_bmc_ipmi_force_abort(kcs_bmc); + kcs_bmc->error = KCS_LENGTH_ERROR; + } + break; + + case KCS_PHASE_WRITE_END_CMD: + if (kcs_bmc->data_in_idx < KCS_MSG_BUFSIZ) { + set_state(kcs_bmc, READ_STATE); + kcs_bmc->data_in[kcs_bmc->data_in_idx++] = + kcs_bmc_read_data(kcs_bmc); + kcs_bmc->phase = KCS_PHASE_WRITE_DONE; + kcs_bmc->data_in_avail = true; + wake_up_interruptible(&kcs_bmc->queue); + } else { + kcs_bmc_ipmi_force_abort(kcs_bmc); + kcs_bmc->error = KCS_LENGTH_ERROR; + } + break; + + case KCS_PHASE_READ: + if (kcs_bmc->data_out_idx == kcs_bmc->data_out_len) + set_state(kcs_bmc, IDLE_STATE); + + data = kcs_bmc_read_data(kcs_bmc); + if (data != KCS_CMD_READ_BYTE) { + set_state(kcs_bmc, ERROR_STATE); + kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); + break; + } + + if (kcs_bmc->data_out_idx == kcs_bmc->data_out_len) { + kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); + kcs_bmc->phase = KCS_PHASE_IDLE; + break; + } + + kcs_bmc_write_data(kcs_bmc, + kcs_bmc->data_out[kcs_bmc->data_out_idx++]); + break; + + case KCS_PHASE_ABORT_ERROR1: + set_state(kcs_bmc, READ_STATE); + kcs_bmc_read_data(kcs_bmc); + kcs_bmc_write_data(kcs_bmc, kcs_bmc->error); + kcs_bmc->phase = KCS_PHASE_ABORT_ERROR2; + break; + + case KCS_PHASE_ABORT_ERROR2: + set_state(kcs_bmc, IDLE_STATE); + kcs_bmc_read_data(kcs_bmc); + kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); + kcs_bmc->phase = KCS_PHASE_IDLE; + break; + + default: + kcs_bmc_ipmi_force_abort(kcs_bmc); + break; + } +} + +static void kcs_bmc_ipmi_handle_cmd(struct kcs_bmc *kcs_bmc) +{ + u8 cmd; + + set_state(kcs_bmc, WRITE_STATE); + kcs_bmc_write_data(kcs_bmc, KCS_ZERO_DATA); + + cmd = kcs_bmc_read_data(kcs_bmc); + switch (cmd) { + case KCS_CMD_WRITE_START: + kcs_bmc->phase = KCS_PHASE_WRITE_START; + kcs_bmc->error = KCS_NO_ERROR; + 
kcs_bmc->data_in_avail = false; + kcs_bmc->data_in_idx = 0; + break; + + case KCS_CMD_WRITE_END: + if (kcs_bmc->phase != KCS_PHASE_WRITE_DATA) { + kcs_bmc_ipmi_force_abort(kcs_bmc); + break; + } + + kcs_bmc->phase = KCS_PHASE_WRITE_END_CMD; + break; + + case KCS_CMD_GET_STATUS_ABORT: + if (kcs_bmc->error == KCS_NO_ERROR) + kcs_bmc->error = KCS_ABORTED_BY_COMMAND; + + kcs_bmc->phase = KCS_PHASE_ABORT_ERROR1; + kcs_bmc->data_in_avail = false; + kcs_bmc->data_in_idx = 0; + break; + + default: + kcs_bmc_ipmi_force_abort(kcs_bmc); + kcs_bmc->error = KCS_ILLEGAL_CONTROL_CODE; + break; + } +} + +int kcs_bmc_ipmi_event(struct kcs_bmc *kcs_bmc); +int kcs_bmc_ipmi_event(struct kcs_bmc *kcs_bmc) +{ + unsigned long flags; + int ret = -ENODATA; + u8 status; + + spin_lock_irqsave(&kcs_bmc->lock, flags); + + status = kcs_bmc_read_status(kcs_bmc); + if (status & KCS_STATUS_IBF) { + if (!kcs_bmc->running) + kcs_bmc_ipmi_force_abort(kcs_bmc); + else if (status & KCS_STATUS_CMD_DAT) + kcs_bmc_ipmi_handle_cmd(kcs_bmc); + else + kcs_bmc_ipmi_handle_data(kcs_bmc); + + ret = 0; + } + + spin_unlock_irqrestore(&kcs_bmc->lock, flags); + + return ret; +} +EXPORT_SYMBOL(kcs_bmc_ipmi_event); + +static inline struct kcs_bmc *to_kcs_bmc(struct file *filp) +{ + return container_of(filp->private_data, struct kcs_bmc, miscdev); +} + +static int kcs_bmc_ipmi_open(struct inode *inode, struct file *filp) +{ + struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); + int ret = 0; + + spin_lock_irq(&kcs_bmc->lock); + if (!kcs_bmc->running) + kcs_bmc->running = 1; + else + ret = -EBUSY; + spin_unlock_irq(&kcs_bmc->lock); + + return ret; +} + +static __poll_t kcs_bmc_ipmi_poll(struct file *filp, poll_table *wait) +{ + struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); + __poll_t mask = 0; + + poll_wait(filp, &kcs_bmc->queue, wait); + + spin_lock_irq(&kcs_bmc->lock); + if (kcs_bmc->data_in_avail) + mask |= EPOLLIN; + spin_unlock_irq(&kcs_bmc->lock); + + return mask; +} + +static ssize_t kcs_bmc_ipmi_read(struct file *filp, char __user *buf, + size_t count, loff_t *ppos) +{ + struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); + bool data_avail; + size_t data_len; + ssize_t ret; + + if (!(filp->f_flags & O_NONBLOCK)) + wait_event_interruptible(kcs_bmc->queue, + kcs_bmc->data_in_avail); + + mutex_lock(&kcs_bmc->mutex); + + spin_lock_irq(&kcs_bmc->lock); + data_avail = kcs_bmc->data_in_avail; + if (data_avail) { + data_len = kcs_bmc->data_in_idx; + memcpy(kcs_bmc->kbuffer, kcs_bmc->data_in, data_len); + } + spin_unlock_irq(&kcs_bmc->lock); + + if (!data_avail) { + ret = -EAGAIN; + goto out_unlock; + } + + if (count < data_len) { + pr_err("channel=%u with too large data : %zu\n", + kcs_bmc->channel, data_len); + + spin_lock_irq(&kcs_bmc->lock); + kcs_bmc_ipmi_force_abort(kcs_bmc); + spin_unlock_irq(&kcs_bmc->lock); + + ret = -EOVERFLOW; + goto out_unlock; + } + + if (copy_to_user(buf, kcs_bmc->kbuffer, data_len)) { + ret = -EFAULT; + goto out_unlock; + } + + ret = data_len; + + spin_lock_irq(&kcs_bmc->lock); + if (kcs_bmc->phase == KCS_PHASE_WRITE_DONE) { + kcs_bmc->phase = KCS_PHASE_WAIT_READ; + kcs_bmc->data_in_avail = false; + kcs_bmc->data_in_idx = 0; + } else { + ret = -EAGAIN; + } + spin_unlock_irq(&kcs_bmc->lock); + +out_unlock: + mutex_unlock(&kcs_bmc->mutex); + + return ret; +} + +static ssize_t kcs_bmc_ipmi_write(struct file *filp, const char __user *buf, + size_t count, loff_t *ppos) +{ + struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); + ssize_t ret; + + /* a minimum response size '3' : netfn + cmd + ccode */ + if (count < 3 || count > KCS_MSG_BUFSIZ) + 
return -EINVAL; + + mutex_lock(&kcs_bmc->mutex); + + if (copy_from_user(kcs_bmc->kbuffer, buf, count)) { + ret = -EFAULT; + goto out_unlock; + } + + spin_lock_irq(&kcs_bmc->lock); + if (kcs_bmc->phase == KCS_PHASE_WAIT_READ) { + kcs_bmc->phase = KCS_PHASE_READ; + kcs_bmc->data_out_idx = 1; + kcs_bmc->data_out_len = count; + memcpy(kcs_bmc->data_out, kcs_bmc->kbuffer, count); + kcs_bmc_write_data(kcs_bmc, kcs_bmc->data_out[0]); + ret = count; + } else { + ret = -EINVAL; + } + spin_unlock_irq(&kcs_bmc->lock); + +out_unlock: + mutex_unlock(&kcs_bmc->mutex); + + return ret; +} + +static long kcs_bmc_ipmi_ioctl(struct file *filp, unsigned int cmd, + unsigned long arg) +{ + struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); + long ret = 0; + + spin_lock_irq(&kcs_bmc->lock); + + switch (cmd) { + case IPMI_BMC_IOCTL_SET_SMS_ATN: + kcs_bmc_update_status(kcs_bmc, KCS_STATUS_SMS_ATN, KCS_STATUS_SMS_ATN); + break; + + case IPMI_BMC_IOCTL_CLEAR_SMS_ATN: + kcs_bmc_update_status(kcs_bmc, KCS_STATUS_SMS_ATN, 0); + break; + + case IPMI_BMC_IOCTL_FORCE_ABORT: + kcs_bmc_ipmi_force_abort(kcs_bmc); + break; + + default: + ret = -EINVAL; + break; + } + + spin_unlock_irq(&kcs_bmc->lock); + + return ret; +} + +static int kcs_bmc_ipmi_release(struct inode *inode, struct file *filp) +{ + struct kcs_bmc *kcs_bmc = to_kcs_bmc(filp); + + spin_lock_irq(&kcs_bmc->lock); + kcs_bmc->running = 0; + kcs_bmc_ipmi_force_abort(kcs_bmc); + spin_unlock_irq(&kcs_bmc->lock); + + return 0; +} + +static const struct file_operations kcs_bmc_fops = { + .owner = THIS_MODULE, + .open = kcs_bmc_ipmi_open, + .read = kcs_bmc_ipmi_read, + .write = kcs_bmc_ipmi_write, + .release = kcs_bmc_ipmi_release, + .poll = kcs_bmc_ipmi_poll, + .unlocked_ioctl = kcs_bmc_ipmi_ioctl, +}; + +struct kcs_bmc *kcs_bmc_ipmi_alloc(struct device *dev, int sizeof_priv, u32 channel); +struct kcs_bmc *kcs_bmc_ipmi_alloc(struct device *dev, int sizeof_priv, u32 channel) +{ + struct kcs_bmc *kcs_bmc; + + kcs_bmc = devm_kzalloc(dev, sizeof(*kcs_bmc) + sizeof_priv, GFP_KERNEL); + if (!kcs_bmc) + return NULL; + + spin_lock_init(&kcs_bmc->lock); + kcs_bmc->channel = channel; + + mutex_init(&kcs_bmc->mutex); + init_waitqueue_head(&kcs_bmc->queue); + + kcs_bmc->data_in = devm_kmalloc(dev, KCS_MSG_BUFSIZ, GFP_KERNEL); + kcs_bmc->data_out = devm_kmalloc(dev, KCS_MSG_BUFSIZ, GFP_KERNEL); + kcs_bmc->kbuffer = devm_kmalloc(dev, KCS_MSG_BUFSIZ, GFP_KERNEL); + + kcs_bmc->miscdev.minor = MISC_DYNAMIC_MINOR; + kcs_bmc->miscdev.name = devm_kasprintf(dev, GFP_KERNEL, "%s%u", + DEVICE_NAME, channel); + if (!kcs_bmc->data_in || !kcs_bmc->data_out || !kcs_bmc->kbuffer || + !kcs_bmc->miscdev.name) + return NULL; + kcs_bmc->miscdev.fops = &kcs_bmc_fops; + + return kcs_bmc; +} +EXPORT_SYMBOL(kcs_bmc_ipmi_alloc); + +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Haiyue Wang "); +MODULE_DESCRIPTION("KCS BMC to handle the IPMI request from system software"); -- 2.30.2 From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-14.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, UNWANTED_LANGUAGE_BODY,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 279A2C47082 for ; Tue, 8 Jun 2021 10:52:29 +0000 
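
On the device-driver side, the contract a hardware driver programs against is unchanged by the split: allocate the interface with kcs_bmc_alloc(), fill in the ioreg offsets and the io_inputb/io_outputb/io_updateb accessors, call kcs_bmc_handle_event() from the interrupt handler, and register the miscdevice the core sets up. The sketch below only illustrates that shape; the foo_* names, register offsets and channel number are invented for the example, and kcs_bmc_aspeed.c / kcs_bmc_npcm7xx.c remain the authoritative implementations.

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/platform_device.h>

#include "kcs_bmc.h"

struct foo_kcs_bmc {
	void __iomem *base;
};

static u8 foo_io_inputb(struct kcs_bmc *kcs_bmc, u32 reg)
{
	struct foo_kcs_bmc *priv = kcs_bmc_priv(kcs_bmc);

	return readb(priv->base + reg);
}

static void foo_io_outputb(struct kcs_bmc *kcs_bmc, u32 reg, u8 data)
{
	struct foo_kcs_bmc *priv = kcs_bmc_priv(kcs_bmc);

	writeb(data, priv->base + reg);
}

static void foo_io_updateb(struct kcs_bmc *kcs_bmc, u32 reg, u8 mask, u8 val)
{
	u8 tmp = foo_io_inputb(kcs_bmc, reg);

	foo_io_outputb(kcs_bmc, reg, (tmp & ~mask) | (val & mask));
}

static irqreturn_t foo_kcs_irq(int irq, void *arg)
{
	struct kcs_bmc *kcs_bmc = arg;

	/* Delegate all KCS protocol handling to the core/chardev code. */
	return kcs_bmc_handle_event(kcs_bmc) == 0 ? IRQ_HANDLED : IRQ_NONE;
}

static int foo_kcs_probe(struct platform_device *pdev)
{
	struct foo_kcs_bmc *priv;
	struct kcs_bmc *kcs_bmc;
	int irq, rc;

	kcs_bmc = kcs_bmc_alloc(&pdev->dev, sizeof(*priv), 1 /* channel */);
	if (!kcs_bmc)
		return -ENOMEM;

	priv = kcs_bmc_priv(kcs_bmc);
	priv->base = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(priv->base))
		return PTR_ERR(priv->base);

	/* Register offsets are made up for the example. */
	kcs_bmc->ioreg.idr = 0x00;
	kcs_bmc->ioreg.odr = 0x04;
	kcs_bmc->ioreg.str = 0x08;
	kcs_bmc->io_inputb = foo_io_inputb;
	kcs_bmc->io_outputb = foo_io_outputb;
	kcs_bmc->io_updateb = foo_io_updateb;

	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;

	rc = devm_request_irq(&pdev->dev, irq, foo_kcs_irq, 0,
			      dev_name(&pdev->dev), kcs_bmc);
	if (rc)
		return rc;

	/*
	 * The chardev set up by kcs_bmc_ipmi_alloc() still has to be
	 * registered by the device driver, as before the split.
	 */
	return misc_register(&kcs_bmc->miscdev);
}

Because kcs_bmc_handle_event() and kcs_bmc_alloc() now simply forward to kcs_bmc_ipmi_event() and kcs_bmc_ipmi_alloc(), drivers written against the old interface keep building and behaving as before while the chardev logic lives in its own file.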