From: kernel test robot <lkp@intel.com>
To: Paulo Alcantara <pc@cjr.nz>,
linux-cifs@vger.kernel.org, smfrench@gmail.com
Cc: kbuild-all@lists.01.org, Paulo Alcantara <pc@cjr.nz>
Subject: Re: [PATCH 2/3] cifs: handle multiple ip addresses per hostname
Date: Wed, 12 May 2021 02:47:50 +0800 [thread overview]
Message-ID: <202105120225.7zo2Ihbo-lkp@intel.com> (raw)
In-Reply-To: <20210511163609.11019-3-pc@cjr.nz>
Hi Paulo,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on cifs/for-next]
[also build test ERROR on v5.13-rc1 next-20210511]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Paulo-Alcantara/Support-multiple-ips-per-hostname/20210512-003751
base: git://git.samba.org/sfrench/cifs-2.6.git for-next
config: microblaze-randconfig-s032-20210511 (attached as .config)
compiler: microblaze-linux-gcc (GCC) 9.3.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# apt-get install sparse
# sparse version: v0.6.3-341-g8af24329-dirty
# https://github.com/0day-ci/linux/commit/210f8e08a6bb153136929af6da6e0a7289ba5931
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Paulo-Alcantara/Support-multiple-ips-per-hostname/20210512-003751
git checkout 210f8e08a6bb153136929af6da6e0a7289ba5931
# save the attached .config to the linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' W=1 ARCH=microblaze
If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All error/warnings (new ones prefixed by >>):
fs/cifs/connect.c: In function 'cifs_create_socket':
>> fs/cifs/connect.c:177:6: warning: variable 'slen' set but not used [-Wunused-but-set-variable]
177 | int slen, sfamily;
| ^~~~
fs/cifs/connect.c: In function 'cifs_reconnect':
>> fs/cifs/connect.c:726:46: error: 'cifs_sb' undeclared (first use in this function); did you mean 'cifs_ses'?
726 | if (IS_ENABLED(CONFIG_CIFS_DFS_UPCALL) && cifs_sb &&
| ^~~~~~~
| cifs_ses
fs/cifs/connect.c:726:46: note: each undeclared identifier is reported only once for each function it appears in
>> fs/cifs/connect.c:733:5: error: implicit declaration of function 'reconn_set_next_dfs_target' [-Werror=implicit-function-declaration]
733 | reconn_set_next_dfs_target(server, cifs_sb, &tgt_list, &tgt_it);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
>> fs/cifs/connect.c:733:50: error: 'tgt_list' undeclared (first use in this function)
733 | reconn_set_next_dfs_target(server, cifs_sb, &tgt_list, &tgt_it);
| ^~~~~~~~
>> fs/cifs/connect.c:733:61: error: 'tgt_it' undeclared (first use in this function)
733 | reconn_set_next_dfs_target(server, cifs_sb, &tgt_list, &tgt_it);
| ^~~~~~
cc1: some warnings being treated as errors
--
fs/cifs/sess.c: In function 'cifs_ses_add_channel':
>> fs/cifs/sess.c:310:1: warning: the frame size of 2608 bytes is larger than 1024 bytes [-Wframe-larger-than=]
310 | }
| ^
vim +726 fs/cifs/connect.c
577
578 /*
579 * cifs tcp session reconnection
580 *
581 * mark tcp session as reconnecting so temporarily locked
582 * mark all smb sessions as reconnecting for tcp session
583 * reconnect tcp session
584 * wake up waiters on reconnection? - (not needed currently)
585 */
586 int
587 cifs_reconnect(struct TCP_Server_Info *server)
588 {
589 int rc = 0;
590 struct list_head *tmp, *tmp2;
591 struct cifs_ses *ses;
592 struct cifs_tcon *tcon;
593 struct mid_q_entry *mid_entry;
594 struct list_head retry_list;
595 #ifdef CONFIG_CIFS_DFS_UPCALL
596 struct super_block *sb = NULL;
597 struct cifs_sb_info *cifs_sb = NULL;
598 struct dfs_cache_tgt_list tgt_list = {0};
599 struct dfs_cache_tgt_iterator *tgt_it = NULL;
600 #endif
601 struct sockaddr_storage *addrs = NULL;
602 unsigned int numaddrs;
603
604 addrs = kmalloc(sizeof(*addrs) * CIFS_MAX_ADDR_COUNT, GFP_KERNEL);
605 if (!addrs) {
606 rc = -ENOMEM;
607 goto out;
608 }
609
610 spin_lock(&GlobalMid_Lock);
611 server->nr_targets = 1;
612 #ifdef CONFIG_CIFS_DFS_UPCALL
613 spin_unlock(&GlobalMid_Lock);
614 sb = cifs_get_tcp_super(server);
615 if (IS_ERR(sb)) {
616 rc = PTR_ERR(sb);
617 cifs_dbg(FYI, "%s: will not do DFS failover: rc = %d\n",
618 __func__, rc);
619 sb = NULL;
620 } else {
621 cifs_sb = CIFS_SB(sb);
622 rc = reconn_setup_dfs_targets(cifs_sb, &tgt_list);
623 if (rc) {
624 cifs_sb = NULL;
625 if (rc != -EOPNOTSUPP) {
626 cifs_server_dbg(VFS, "%s: no target servers for DFS failover\n",
627 __func__);
628 }
629 } else {
630 server->nr_targets = dfs_cache_get_nr_tgts(&tgt_list);
631 }
632 }
633 cifs_dbg(FYI, "%s: will retry %d target(s)\n", __func__,
634 server->nr_targets);
635 spin_lock(&GlobalMid_Lock);
636 #endif
637 if (server->tcpStatus == CifsExiting) {
638 /* the demux thread will exit normally
639 next time through the loop */
640 spin_unlock(&GlobalMid_Lock);
641 #ifdef CONFIG_CIFS_DFS_UPCALL
642 dfs_cache_free_tgts(&tgt_list);
643 cifs_put_tcp_super(sb);
644 #endif
645 goto out;
646 } else
647 server->tcpStatus = CifsNeedReconnect;
648 spin_unlock(&GlobalMid_Lock);
649 server->maxBuf = 0;
650 server->max_read = 0;
651
652 cifs_dbg(FYI, "Mark tcp session as need reconnect\n");
653 trace_smb3_reconnect(server->CurrentMid, server->conn_id, server->hostname);
654
655 /* before reconnecting the tcp session, mark the smb session (uid)
656 and the tid bad so they are not used until reconnected */
657 cifs_dbg(FYI, "%s: marking sessions and tcons for reconnect\n",
658 __func__);
659 spin_lock(&cifs_tcp_ses_lock);
660 list_for_each(tmp, &server->smb_ses_list) {
661 ses = list_entry(tmp, struct cifs_ses, smb_ses_list);
662 ses->need_reconnect = true;
663 list_for_each(tmp2, &ses->tcon_list) {
664 tcon = list_entry(tmp2, struct cifs_tcon, tcon_list);
665 tcon->need_reconnect = true;
666 }
667 if (ses->tcon_ipc)
668 ses->tcon_ipc->need_reconnect = true;
669 }
670 spin_unlock(&cifs_tcp_ses_lock);
671
672 /* do not want to be sending data on a socket we are freeing */
673 cifs_dbg(FYI, "%s: tearing down socket\n", __func__);
674 mutex_lock(&server->srv_mutex);
675 if (server->ssocket) {
676 cifs_dbg(FYI, "State: 0x%x Flags: 0x%lx\n",
677 server->ssocket->state, server->ssocket->flags);
678 kernel_sock_shutdown(server->ssocket, SHUT_WR);
679 cifs_dbg(FYI, "Post shutdown state: 0x%x Flags: 0x%lx\n",
680 server->ssocket->state, server->ssocket->flags);
681 sock_release(server->ssocket);
682 server->ssocket = NULL;
683 }
684 server->sequence_number = 0;
685 server->session_estab = false;
686 kfree(server->session_key.response);
687 server->session_key.response = NULL;
688 server->session_key.len = 0;
689 server->lstrp = jiffies;
690
691 /* mark submitted MIDs for retry and issue callback */
692 INIT_LIST_HEAD(&retry_list);
693 cifs_dbg(FYI, "%s: moving mids to private list\n", __func__);
694 spin_lock(&GlobalMid_Lock);
695 list_for_each_safe(tmp, tmp2, &server->pending_mid_q) {
696 mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
697 kref_get(&mid_entry->refcount);
698 if (mid_entry->mid_state == MID_REQUEST_SUBMITTED)
699 mid_entry->mid_state = MID_RETRY_NEEDED;
700 list_move(&mid_entry->qhead, &retry_list);
701 mid_entry->mid_flags |= MID_DELETED;
702 }
703 spin_unlock(&GlobalMid_Lock);
704 mutex_unlock(&server->srv_mutex);
705
706 cifs_dbg(FYI, "%s: issuing mid callbacks\n", __func__);
707 list_for_each_safe(tmp, tmp2, &retry_list) {
708 mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
709 list_del_init(&mid_entry->qhead);
710 mid_entry->callback(mid_entry);
711 cifs_mid_q_entry_release(mid_entry);
712 }
713
714 if (cifs_rdma_enabled(server)) {
715 mutex_lock(&server->srv_mutex);
716 smbd_destroy(server);
717 mutex_unlock(&server->srv_mutex);
718 }
719
720 do {
721 try_to_freeze();
722
723 mutex_lock(&server->srv_mutex);
724
725 if (!cifs_swn_set_server_dstaddr(server)) {
> 726 if (IS_ENABLED(CONFIG_CIFS_DFS_UPCALL) && cifs_sb &&
727 cifs_sb->origin_fullpath) {
728 /*
729 * Set up next DFS target server (if any) for reconnect. If DFS
730 * feature is disabled, then we will retry last server we
731 * connected to before.
732 */
> 733 reconn_set_next_dfs_target(server, cifs_sb, &tgt_list, &tgt_it);
734 }
735 /*
736 * Resolve the hostname again to make sure that IP address is up-to-date.
737 */
738 numaddrs = CIFS_MAX_ADDR_COUNT;
739 reconn_resolve_hostname(server, addrs, &numaddrs);
740
741 if (cifs_rdma_enabled(server)) {
742 /* FIXME: handle multiple ips for RDMA */
743 server->dst_addr_list[0] = server->dstaddr = addrs[0];
744 server->dst_addr_count = 1;
745 }
746 } else {
747 addrs[0] = server->dstaddr;
748 numaddrs = 1;
749 }
750
751 if (cifs_rdma_enabled(server)) {
752 rc = smbd_reconnect(server);
753 } else {
754 struct socket **socks, *sock;
755
756 socks = connect_all_ips(server, addrs, numaddrs);
757 if (IS_ERR(socks)) {
758 rc = PTR_ERR(socks);
759 cifs_server_dbg(VFS, "%s: connect_all_ips() failed: %d\n", __func__, rc);
760 } else {
761 mutex_unlock(&server->srv_mutex);
762 sock = get_first_connected_socket(socks, addrs, numaddrs, true);
763 release_sockets(socks, numaddrs);
764 mutex_lock(&server->srv_mutex);
765
766 if (IS_ERR(sock)) {
767 rc = PTR_ERR(sock);
768 cifs_server_dbg(FYI, "%s: couldn't find a connected socket: %d\n", __func__, rc);
769 } else {
770 rc = kernel_getpeername(sock, (struct sockaddr *)&server->dstaddr);
771 if (rc < 0) {
772 cifs_server_dbg(VFS, "%s: getpeername() failed: %d\n", __func__, rc);
773 sock_release(sock);
774 } else
775 rc = 0;
776 }
777 if (!rc) {
778 memcpy(server->dst_addr_list, addrs,
779 sizeof(addrs[0]) * numaddrs);
780 server->dst_addr_count = numaddrs;
781 server->ssocket = sock;
782 }
783 }
784 }
785
786 if (rc) {
787 mutex_unlock(&server->srv_mutex);
788 cifs_server_dbg(FYI, "%s: reconnect error %d\n", __func__, rc);
789 msleep(3000);
790 } else {
791 atomic_inc(&tcpSesReconnectCount);
792 set_credits(server, 1);
793 spin_lock(&GlobalMid_Lock);
794 if (server->tcpStatus != CifsExiting)
795 server->tcpStatus = CifsNeedNegotiate;
796 spin_unlock(&GlobalMid_Lock);
797 cifs_swn_reset_server_dstaddr(server);
798 mutex_unlock(&server->srv_mutex);
799 }
800 } while (server->tcpStatus == CifsNeedReconnect);
801
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
2021-05-11 16:36 [PATCH 0/3] Support multiple ips per hostname Paulo Alcantara
2021-05-11 16:36 ` [PATCH 1/3] cifs: introduce smb3_options_for_each() helper Paulo Alcantara
2021-05-11 18:15 ` kernel test robot
2021-05-11 16:36 ` [PATCH 2/3] cifs: handle multiple ip addresses per hostname Paulo Alcantara
2021-05-11 18:47 ` kernel test robot [this message]
2021-05-11 18:47 ` kernel test robot
2021-05-11 16:36 ` [PATCH 3/3] cifs: fix find_root_ses() when refresing dfs cache Paulo Alcantara