
k8s conntrack entry timeout breaks long-lived TCP connections

Author: 分享放大价值    Updated: 2022-07-12

This problem first showed up in a production service at work. Analysis suggested it had nothing to do with the specific business logic, so I tried to reproduce it in a self-built k8s environment, and it did indeed reproduce. The topology is as follows:

[Figure: reproduction topology — client pod connecting through the service IP to two server pods]

The topology is simple: the client establishes a long-lived HTTP connection to the server. About a day later, when the client sends data to the server again, it receives an RST from the server side, so the send fails with an error (connection reset by peer) and the client closes the socket.

Reproduction steps first, then the analysis.

Reproduction steps

  1. Create the pods and the svc
    The yaml file below creates one client pod, two server pods, and a service that listens on port 2222 with the server pods as backends.
    The pod image is nginx; the image itself doesn't matter, as long as it can run the client and server binaries built below.
    Apply the configuration with kubectl apply -f service.yaml; a quick sanity check follows the yaml.

root@master:~# cat service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  selector:
    matchLabels:
      app: myapp1
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp1
    spec:
      nodeName: master

      containers:
      - name: nginx
        image: nginx

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp
    spec:
      nodeName: node1

      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 2222

---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 2222
    targetPort: 2222
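
As a quick sanity check (using the names from the yaml above), confirm that the service got a cluster IP and that both server pods show up as its endpoints:

root@master:~# kubectl get svc myservice
root@master:~# kubectl get endpoints myservice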
  2. client and server: simple C programs used to set up the TCP connection
    Client code, which connects to the server's service IP 10.108.33.37:

root@master:~/test# cat client.c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
#include <stdio.h>
#include <errno.h>
#include <arpa/inet.h>

int main(void)
{
        int fd, ret;
        struct sockaddr_in addr;

        fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (fd < 0) {
                perror("socket create failed");
                return 1;
        }

        addr.sin_family = AF_INET;
        addr.sin_port = htons(2222);
        addr.sin_addr.s_addr = inet_addr("10.108.33.37");  /* service IP */
        ret = connect(fd, (const struct sockaddr *)&addr, sizeof(addr));
        if (ret != 0) {
                perror("socket connect failed");
                return 1;
        }

        char buff[10];
        while (1) {
                printf("please input:");
                fflush(stdout);
                /* fgets() instead of the unsafe gets(); reads at most 9 chars */
                if (fgets(buff, sizeof(buff), stdin) == NULL)
                        break;
                /* always sends the whole fixed-size buffer, as in the transcript below */
                ret = send(fd, buff, sizeof(buff), 0);
                perror("send result");
                sleep(1);
        }
        return 0;
}

Server code, listening on port 2222:

root@master:~/test# cat server.c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <errno.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    int connfd;
    int ret;

    char buff[1024];
    struct sockaddr_in serveraddr;
    memset(&serveraddr, 0, sizeof(serveraddr));
    serveraddr.sin_family = AF_INET;
    serveraddr.sin_addr.s_addr = htonl(INADDR_ANY);
    serveraddr.sin_port = htons(2222);
    bind(fd, (struct sockaddr *)&serveraddr, sizeof(serveraddr));
    listen(fd, 1024);

    while (1) {
        connfd = accept(fd, (struct sockaddr *)NULL, NULL);
        if (connfd != -1) {
            while (1) {
                memset(buff, 0, sizeof(buff));
                /* read into the full buffer (the original passed strlen(buff)+1,
                 * i.e. 1 byte per call, since buff had just been zeroed) */
                ret = recv(connfd, buff, sizeof(buff), 0);
                if (ret <= 0)   /* peer closed the connection, or an error */
                    break;
            }
            close(connfd);
        }
    }

    return 0;
}

Compile the client and server:

root@master:~/test# gcc -o client client.c
root@master:~/test# gcc -o server server.c
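
One note on compilation: the binaries run inside the nginx containers, so if the build host's glibc is newer than the image's they may fail to start. An environment-dependent precaution (not required by the original steps) is to link them statically:

root@master:~/test# gcc -static -o client client.c
root@master:~/test# gcc -static -o server server.c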
  3. Copy client and server into the pods

//get the pod names
root@master:~/test# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE    IP               NODE     NOMINATED NODE   READINESS GATES
client-797b85996c-tqhhh   1/1     Running   0          7d1h   172.18.219.65    master   <none>           <none>
server-65d547c44-5mjgl    1/1     Running   0          7d1h   172.18.166.130   node1    <none>           <none>
server-65d547c44-d9p9d    1/1     Running   0          13h    172.18.166.131   node1    <none>           <none>

//copy client and server into their respective pods
root@master:~/test# kubectl cp client client-797b85996c-tqhhh:/
root@master:~/test# kubectl cp server server-65d547c44-5mjgl:/
root@master:~/test# kubectl cp server server-65d547c44-d9p9d:/
  4. Open a few terminals and start reproducing

//start the server in two terminals
//terminal 1
root@master:~# kubectl exec -it server-65d547c44-5mjgl bash
root@server-65d547c44-5mjgl:/# ./server
//terminal 2
root@master:~# kubectl exec -it server-65d547c44-d9p9d bash
root@server-65d547c44-d9p9d:/# ./server

//terminal 3: run client in the client pod to initiate the connection to the server
root@master:~# kubectl exec -it client-797b85996c-tqhhh bash
root@client-797b85996c-tqhhh:/# ./client
please input:  ---> this prompt means the connect to the server succeeded, i.e. the three-way handshake completed

root@client-797b85996c-tqhhh:/# ./client
please input:1   ---> type 1
send result: Success ---> sending 1 succeeded
                       ---> before typing 2, run conntrack -F in another terminal to flush the conntrack table
please input:2  ---> type 2
send result: Success ---> the send appears to succeed, but an RST from the server arrives at the same time
please input:3  ---> type 3
send result: Connection reset by peer ---> because of the earlier RST, this send fails

Root-cause analysis

When the client connects to the server's service IP 10.108.33.37, the traffic passes through netfilter/conntrack on the client pod's node, which translates the service IP into a server pod IP; the translation is sketched in the topology diagram above, and this article covers it as well.
In short, every new connection first hits the iptables rules, which translate the service IP into a server pod IP; if the service has multiple backend pods, new connections are load-balanced randomly across them. A conntrack entry is created at the same time, so subsequent packets match the conntrack entry directly and never re-evaluate the iptables rules. Conntrack entries, however, have a timeout, tunable via nf_conntrack_tcp_timeout_established; on my environment the default is 86400 seconds, i.e. 24 hours.

root@master:~# sysctl -n net.netfilter.nf_conntrack_tcp_timeout_established
86400
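
To reproduce faster, you can lower this value, for example (60 is an arbitrary choice, and a sysctl -w setting does not survive a reboot):

root@master:~# sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=60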

So you can shorten the reproduction time by lowering this value as just shown, or simply delete all entries with "conntrack -F". To reproduce as quickly as possible, the steps above took the latter approach. During reproduction you can also use conntrack -E to watch entry creation, update, and deletion events, as follows:

//terminal 4
root@master:~# conntrack -E | grep 10.108.33.37
//the client's first SYN: iptables translates service IP 10.108.33.37 to pod IP 172.18.166.131
[NEW] tcp      6 120 SYN_SENT src=172.18.219.65 dst=10.108.33.37 sport=59468 dport=2222 [UNREPLIED] src=172.18.166.131 dst=172.18.219.65 sport=2222 dport=59468
//the server's SYN+ACK arrives; the state moves to SYN_RECV
[UPDATE] tcp      6 60 SYN_RECV src=172.18.219.65 dst=10.108.33.37 sport=59468 dport=2222 src=172.18.166.131 dst=172.18.219.65 sport=2222 dport=59468
//after the client's ACK, the handshake is considered complete and the state moves to ESTABLISHED; subsequent data packets are forwarded based on this entry
[UPDATE] tcp      6 86400 ESTABLISHED src=172.18.219.65 dst=10.108.33.37 sport=59468 dport=2222 src=172.18.166.131 dst=172.18.219.65 sport=2222 dport=59468 [ASSURED]

//after conntrack -F, the entry is deleted
[DESTROY] tcp      6 src=172.18.219.65 dst=10.108.33.37 sport=59468 dport=2222 src=172.18.166.131 dst=172.18.219.65 sport=2222 dport=59468 [ASSURED]

//when the client sends data 2, the old entry is gone, so the iptables rules are looked up again; this time the translated pod IP is the other pod's, 172.18.166.130
[NEW] tcp      6 300 ESTABLISHED src=172.18.219.65 dst=10.108.33.37 sport=59468 dport=2222 [UNREPLIED] src=172.18.166.130 dst=172.18.219.65 sport=2222 dport=59468
//the server sees a new flow whose first packet is not a SYN, deems it invalid and replies with an RST; on seeing the RST, conntrack deletes this entry as well
[DESTROY] tcp      6 src=172.18.219.65 dst=10.108.33.37 sport=59468 dport=2222 [UNREPLIED] src=172.18.166.130 dst=172.18.219.65 sport=2222 dport=59468
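
The DNAT rules themselves can also be inspected. Assuming kube-proxy runs in iptables mode (as in this environment), dump the nat table and filter on the service IP:

root@master:~# iptables -t nat -S | grep 10.108.33.37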

Besides the conntrack events and iptables rules above, you could also confirm the behavior with a packet capture; I won't do that here.
How the kernel handles the TCP connection
For a listening server, the first packet of a new connection must be a SYN; if an ACK arrives instead, the kernel replies with an RST to the peer.

int tcp_v4_do_rcv(struct sock *sk, struct sk_buff *skb)
    //a socket in LISTEN state that receives an ACK replies with an RST to the peer
    if (tcp_rcv_state_process(sk, skb)) {
        rsk = sk;
        goto reset;
    }
    return 0;

reset:
    tcp_v4_send_reset(rsk, skb);
discard:
    kfree_skb(skb);
    /* Be careful here. If this function gets more complicated and
     * gcc suffers from register pressure on the x86, sk (in %ebx)
     * might be destroyed here. This current version compiles correctly,
     * but you have been warned.
     */
    return 0;

int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
    switch (sk->sk_state) {
    //a socket in LISTEN state, on receiving an ACK, returns 1
    case TCP_LISTEN:
        if (th->ack)
            return 1;

When the client receives the RST, the kernel sets sk->sk_err to ECONNRESET, so the next time the client calls send(), it fails with ECONNRESET.

tcp_v4_do_rcv -> tcp_rcv_established -> tcp_validate_incoming -> tcp_reset

#define ECONNRESET  54  /* Connection reset by peer */

/* When we get a reset we do this. */
void tcp_reset(struct sock *sk)
    /* We want the right error as BSD sees it (and indeed as we do). */
    switch (sk->sk_state) {
    case TCP_SYN_SENT:
        sk->sk_err = ECONNREFUSED;
        break;
    case TCP_CLOSE_WAIT:
        sk->sk_err = EPIPE;
        break;
    case TCP_CLOSE:
        return;
    default:
        sk->sk_err = ECONNRESET;
    }
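
Back in user space, one way for the client to cope is to detect the reset and reconnect. Below is a minimal sketch, not part of the original reproduction code, assuming a reconnect-and-retry policy is acceptable for the application:

#include <errno.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Re-establish the connection from the client above (same service IP/port). */
static int reconnect(void)
{
        struct sockaddr_in addr = { 0 };

        addr.sin_family = AF_INET;
        addr.sin_port = htons(2222);
        addr.sin_addr.s_addr = inet_addr("10.108.33.37");

        int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (fd < 0)
                return -1;
        if (connect(fd, (const struct sockaddr *)&addr, sizeof(addr)) != 0) {
                close(fd);
                return -1;
        }
        return fd;
}

/* send() that retries once after a peer reset; returns bytes sent or -1.
 * MSG_NOSIGNAL turns the SIGPIPE on a dead socket into an EPIPE error. */
static ssize_t send_retry(int *fd, const void *buf, size_t len)
{
        ssize_t ret = send(*fd, buf, len, MSG_NOSIGNAL);
        if (ret < 0 && (errno == ECONNRESET || errno == EPIPE)) {
                close(*fd);
                *fd = reconnect();  /* a new connection gets a fresh conntrack entry */
                if (*fd >= 0)
                        ret = send(*fd, buf, len, MSG_NOSIGNAL);
        }
        return ret;
}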

Summary
In summary, the problem occurs because the conntrack entry times out and is deleted without the application knowing. The next time the client sends data (an ACK packet), the iptables rules must be evaluated again to translate the destination IP, and the result is not necessarily the previous pod IP. If a different pod IP is chosen, the server listening on that pod sees a new connection whose first packet is an ACK rather than a SYN, treats it as invalid, and replies to the client with an RST.
Note that the problem only occurs when the service has multiple backend pods. With a single backend pod, every new lookup resolves to the same pod IP, and nothing breaks.

Solutions

a. Increase nf_conntrack_tcp_timeout_established. This only lowers the probability of hitting the problem; it does not fix it.
b. Have the application enable TCP keepalive so the conntrack entry never idles out. The keepalive timers and the conntrack timeout must be set consistently, so that a probe always fires before the entry expires.

Enable keepalive with the following call:

int flag = 1;
setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, (void*)&flag, sizeof(flag));

With keepalive enabled, three parameters also need to be set:

tcp_keepalive_time: idle time (no data packets) before the first keepalive probe is sent
tcp_keepalive_probes: number of keepalive probes to send
tcp_keepalive_intvl: interval between keepalive probes

If no response arrives after tcp_keepalive_probes probes, the connection is considered broken. With the per-socket values below, for example, a dead peer is declared roughly 60 + 3 × 3 = 69 seconds after the last data packet.

These three parameters can be set per-socket in code (in the client above, these calls would go right after socket() succeeds):

int _idle  = 60;
int _intvl = 3;
int _cnt   = 3;
setsockopt(fd, SOL_TCP, TCP_KEEPIDLE, (void*)&_idle, sizeof(_idle));
setsockopt(fd, SOL_TCP, TCP_KEEPINTVL, (void*)&_intvl, sizeof(_intvl));
setsockopt(fd, SOL_TCP, TCP_KEEPCNT, (void*)&_cnt, sizeof(_cnt));

They can also be set node-wide; the defaults are shown below. With these defaults, the first probe fires after 7200 seconds of idle time, well within the 86400-second conntrack timeout, so an otherwise idle connection keeps its conntrack entry refreshed.

root@master:~# sysctl -a | grep keepalive
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200

You can check whether keepalive is enabled with the command below; if the Timer column shows keepalive, it is enabled:

root@master:~# netstat -altpn --timers
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name     Timer
tcp6       0      0 192.168.122.20:6443     192.168.122.20:32864    ESTABLISHED 3572/kube-apiserver  keepalive (34.34/0/0)

See also: k8s conntrack 表项超时导致tcp长连接中断 - 简书 (jianshu.com)

Original article: https://blog.csdn.net/fengcai_ke/article/details/125717134
