Subject: Re: AW: [PATCH] can: isotp: omit unintended hrtimer restart on socket release
From: Oliver Hartkopp
To: Sven Schuchmann, linux-can@vger.kernel.org
Cc: Marc Kleine-Budde
Date: Sun, 29 Aug 2021 13:17:55 +0200

Hello Sven,

On 28.08.21 15:20, Sven Schuchmann wrote:
> sorry, I'm late for the party :-)

NP ;-)

> But I found that this patch decreases the performance of ISO-TP Stack.

AFAICS the performance (aka throughput) of the ISO-TP stack is not touched, but the grace period when closing an ISO-TP socket is increased.

> I have created two testscripts where one plays the server and the
> other one is running a test and measuring the time how long
> it takes to transfer an ISO-TP Frame with 1000 Bytes.
>
> Without this patch it takes about 35ms to transfer the frame,
> with this patch it takes about 145ms over vcan0.
>
> Anyone an idea on this?

Yes. We now synchronize the removal of data structures to prevent a use-after-free issue at socket close time. The synchronize_rcu() call does this job at specific points during socket release, which is what makes the close() syscall take longer.
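To illustrate the effect (this is only a generic sketch of the usual RCU removal pattern, NOT the actual net/can/isotp.c code, and my_sock_priv is a made-up placeholder for the per-socket state): the writer unlinks the state, waits one RCU grace period so every reader that might still hold a reference has finished, and only then frees the memory:

#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct my_sock_priv {
	struct hlist_node node;
	/* timers, buffers, ... */
};

static void my_sock_release(struct my_sock_priv *priv)
{
	hlist_del_rcu(&priv->node);  /* unlink; readers may still see it */
	synchronize_rcu();           /* wait until every pre-existing RCU
	                              * reader has finished - this grace
	                              * period is the extra time spent in
	                              * close() */
	kfree(priv);                 /* now no reader can hold a reference */
}

The grace period is typically in the range of several milliseconds (depending on kernel configuration and load), so paying it once per transferred PDU adds up quickly.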
> bring up a vcan0 interface with:
> sudo modprobe vcan
> sudo ip link add dev vcan0 type vcan
> sudo ifconfig vcan0 up
>
> here are the scripts:
>
> --- isotp_server.sh ---
> #!/bin/bash
> iface=vcan0
> echo "Wait for Messages on $iface"
> while true; do
> exec 3< <(isotprecv -s 77E -d 714 -b F -p AA:AA $iface)
> rxpid=$!
> wait $rxpid
> output=$(cat <&3)
> echo "7F 01 11" | isotpsend -s 77E -d 714 -p AA:AA -L 16:8:0 $iface
> done

IMO the issue arises from the use of isotpsend and isotprecv. These tools are intended to give a hands-on impression of how the ISO-TP stack works. Using them in a script this way leads to the creation and (now delayed) *removal* of an isotp socket for *each* single PDU transfer.

The better approach would be to write a C program that creates ONE socket and simply read()s from and write()s to that socket (see the sketch at the end of this mail). This should boost your performance even more.

Is the performance a real requirement for your use case, or is the decreased socket close rate a finding that does not really affect your work?

Best regards,
Oliver
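P.S. To make the "ONE socket" idea concrete, a minimal (untested) sketch of the sending side could look like the code below. The interface name vcan0 and the 0x714/0x77E addressing are taken from your server script, the payload content and the loop count are made up, and error handling is omitted:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/isotp.h>

int main(void)
{
	struct sockaddr_can addr = { .can_family = AF_CAN };
	struct can_isotp_options opts;
	unsigned char pdu[1000] = { 0x22, 0x01, 0x02 };  /* made-up 1000 byte PDU */
	unsigned char buf[4096];
	int s, i, n;

	s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);

	/* pad with 0xAA, analogous to the -p AA:AA option of the tools */
	memset(&opts, 0, sizeof(opts));
	opts.flags = CAN_ISOTP_TX_PADDING | CAN_ISOTP_RX_PADDING;
	opts.txpad_content = 0xAA;
	opts.rxpad_content = 0xAA;
	setsockopt(s, SOL_CAN_ISOTP, CAN_ISOTP_OPTS, &opts, sizeof(opts));

	addr.can_ifindex = if_nametoindex("vcan0");
	addr.can_addr.tp.tx_id = 0x714;   /* send towards the server */
	addr.can_addr.tp.rx_id = 0x77E;   /* listen for the server's reply */
	bind(s, (struct sockaddr *)&addr, sizeof(addr));

	/* the socket is created and bound exactly once; each iteration
	 * transfers one PDU and reads the reply on the very same socket,
	 * so the close()/grace period cost is no longer paid per PDU */
	for (i = 0; i < 100; i++) {
		write(s, pdu, sizeof(pdu));
		n = read(s, buf, sizeof(buf));
		if (n < 0)
			break;
		printf("reply: %d byte PDU\n", n);
	}

	close(s);
	return 0;
}

Compared to the script this removes one socket creation and one (now slower) socket release per PDU, which should account for most of the 35ms -> 145ms difference you measured.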