Deploying an Elastic Stack 7 Cluster on Kubernetes

By default, Elasticsearch provides the following node types:

  • Master-eligible node: eligible to be elected as the master node that controls the cluster (setting: node.master: true)
  • Data node: holds data and performs data-related operations such as indexing and search (setting: node.data: true)
  • Ingest node: pre-processes documents before indexing (setting: node.ingest: true)
  • Machine learning node: runs jobs and handles machine learning API requests (settings: xpack.ml.enabled: true and node.ml: true)

Details: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html

Here I first deploy an Elasticsearch cluster with 3 master, 3 data, and 3 ingest nodes, then deploy Filebeat, Logstash, and Kibana, with Kafka in between as the transport layer.
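Everything below assumes the kafka namespace already exists and that a Kafka cluster is reachable inside it through a Service named bootstrap on port 9092 (these are the names the later manifests reference; the Kafka deployment itself is not covered here). A minimal pre-flight check might look like:

# kubectl create namespace kafka
# kubectl -n kafka get svc bootstrap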


Create the Elasticsearch cluster

Create ES-Master

# vim es-master-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: es-master
  namespace: kafka
  labels:
    component: elasticsearch
    role: master
data:
  elasticsearch.yml: |
    cluster.name: "es-${NAMESPACE}"
    node.name: "${POD_NAME}"
    network.host: 0.0.0.0
    http.host: 0.0.0.0
    transport.host: 0.0.0.0
    bootstrap.memory_lock: false
    discovery.seed_hosts: "es-master"
    cluster.initial_master_nodes: "es-master-0,es-master-1,es-master-2"
    node.master: true
    node.data: false
    node.ingest: false
    cluster.remote.connect: false
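discovery.seed_hosts points at the headless Service created further below, and cluster.initial_master_nodes lists the StatefulSet pod names, so bootstrapping relies on cluster DNS resolving both the Service and the per-pod records. A rough way to sanity-check that resolution from inside the cluster once the Service and StatefulSet exist (the busybox image name is just an example):

# kubectl -n kafka run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup es-master
# kubectl -n kafka run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup es-master-0.es-master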
# vim es-master.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-master
  namespace: kafka
  labels:
    k8s-app: es-master
    component: elasticsearch
    role: master
spec:
  serviceName: es-master
  replicas: 3
  selector:
    matchLabels:
      k8s-app: es-master
  template:
    metadata:
      labels:
        k8s-app: es-master
        component: elasticsearch
        role: master
    spec:
      initContainers:
      - name: es-init
        image: 192.168.100.100/library/alpine:3.9
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: es-master
        securityContext:
          privileged: true
          capabilities:
            add:
            - IPC_LOCK
            - SYS_RESOURCE
        image: 192.168.100.100/library/elasticsearch:7.1.0
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: ES_JAVA_OPTS
          value: "-Xms2g -Xmx2g"
        resources:
          limits:
            cpu: '1'
            memory: 2Gi
          requests:
            cpu: '1'
            memory: 2Gi
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-master-config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: elasticsearch.yml
      volumes:
      - name: es-master-config
        configMap:
          name: es-master
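The privileged es-init init container is one way to raise vm.max_map_count to the 262144 that Elasticsearch requires. If you would rather not run privileged init containers, the equivalent is to set the sysctl on every node yourself (a sketch; the drop-in file name is arbitrary and paths may differ per distro):

# sysctl -w vm.max_map_count=262144
# echo "vm.max_map_count=262144" > /etc/sysctl.d/99-elasticsearch.conf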
# vim es-master-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: es-master
  namespace: kafka
  labels:
    k8s-app: es-master
    component: elasticsearch
    role: master
spec:
  clusterIP: None
  ports:
  - name: transport
    port: 9300
    targetPort: 9300
  selector:
    k8s-app: es-master
    component: elasticsearch
    role: master

Create ES-Data

# vim es-data-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: es-data
  namespace: kafka
  labels:
    component: elasticsearch
    role: data
data:
  elasticsearch.yml: |
    cluster.name: "es-${NAMESPACE}"
    node.name: "${POD_NAME}"
    network.host: 0.0.0.0
    http.host: 0.0.0.0
    transport.host: 0.0.0.0
    bootstrap.memory_lock: false
    discovery.seed_hosts: "es-master"
    cluster.initial_master_nodes: "es-master-0,es-master-1,es-master-2"
    node.master: false
    node.data: true
    node.ingest: false
    cluster.remote.connect: false
# vim es-data.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-data
  namespace: kafka
  labels:
    k8s-app: es-data
    component: elasticsearch
    role: data
spec:
  serviceName: es-data
  replicas: 3
  selector:
    matchLabels:
      k8s-app: es-data
  template:
    metadata:
      labels:
        k8s-app: es-data
        component: elasticsearch
        role: data
    spec:
      securityContext:
        fsGroup: 1000
      initContainers:
      - name: es-init
        image: 192.168.100.100/library/alpine:3.9
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: es-data
        securityContext:
          privileged: true
          capabilities:
            add:
            - IPC_LOCK
            - SYS_RESOURCE
        image: 192.168.100.100/library/elasticsearch:7.1.0
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: ES_JAVA_OPTS
          value: "-Xms2g -Xmx2g"
        resources:
          limits:
            cpu: '1'
            memory: 2Gi
          requests:
            cpu: '1'
            memory: 2Gi
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-data-config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: elasticsearch.yml
        - name: es-data
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: es-data-config
        configMap:
          name: es-data
  volumeClaimTemplates:
  - metadata:
      name: es-data
      annotations:
        volume.beta.kubernetes.io/storage-class: "kafka-rbd"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
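The volumeClaimTemplates entry requests one 10Gi PVC per data pod from the kafka-rbd StorageClass (a Ceph RBD class assumed to exist already in this cluster). After the StatefulSet starts you can confirm the claims were bound, roughly:

# kubectl -n kafka get pvc | grep es-data
# kubectl get storageclass kafka-rbd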
# vim es-data-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: es-data
  namespace: kafka
  labels:
    k8s-app: es-data
    component: elasticsearch
    role: data
spec:
  clusterIP: None
  ports:
  - name: transport
    port: 9300
    targetPort: 9300
  selector:
    k8s-app: es-data
    component: elasticsearch
    role: data

Create ES-Ingest

# vim es-ingest-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: es-ingest
  namespace: kafka
  labels:
    component: elasticsearch
    role: ingest
data:
  elasticsearch.yml: |
    cluster.name: "es-${NAMESPACE}"
    node.name: "${POD_NAME}"
    network.host: 0.0.0.0
    http.host: 0.0.0.0
    transport.host: 0.0.0.0
    bootstrap.memory_lock: false
    discovery.seed_hosts: "es-master"
    cluster.initial_master_nodes: "es-master-0,es-master-1,es-master-2"
    node.master: false
    node.data: false
    node.ingest: true
    cluster.remote.connect: false
# vim es-ingest.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-ingest
  namespace: kafka
  labels:
    k8s-app: es-ingest
    component: elasticsearch
    role: ingest
spec:
  serviceName: es-ingest
  replicas: 3
  selector:
    matchLabels:
      k8s-app: es-ingest
  template:
    metadata:
      labels:
        k8s-app: es-ingest
        component: elasticsearch
        role: ingest
    spec:
      initContainers:
      - name: es-init
        image: 192.168.100.100/library/alpine:3.9
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: es-ingest
        securityContext:
          privileged: true
          capabilities:
            add:
            - IPC_LOCK
            - SYS_RESOURCE
        image: 192.168.100.100/library/elasticsearch:7.1.0
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: ES_JAVA_OPTS
          value: "-Xms2g -Xmx2g"
        resources:
          limits:
            cpu: '1'
            memory: 2Gi
          requests:
            cpu: '1'
            memory: 2Gi
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-ingest-config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: elasticsearch.yml
      volumes:
      - name: es-ingest-config
        configMap:
          name: es-ingest
# vim es-ingest-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: es-ingest
  namespace: kafka
  labels:
    k8s-app: es-ingest
    component: elasticsearch
    role: ingest
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 9200
    targetPort: 9200
  selector:
    k8s-app: es-ingest
    component: elasticsearch
    role: ingest
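On a bare-metal cluster without a LoadBalancer implementation the external IP stays <pending> (as the listings later show), so in practice the ingest HTTP endpoint is reached through the NodePort that Kubernetes allocates. A rough way to look it up and test it (the node IP and port below are from my environment; substitute your own):

# kubectl -n kafka get svc es-ingest -o jsonpath='{.spec.ports[0].nodePort}'
# curl http://192.168.100.128:32039/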

Deploy the Elasticsearch cluster

# kubectl create -f es-master,es-ingest,es-data
configmap/es-master created
service/es-master created
statefulset.apps/es-master created
configmap/es-ingest created
service/es-ingest created
statefulset.apps/es-ingest created
configmap/es-data created
service/es-data created
statefulset.apps/es-data created
# kubectl -n kafka get pod|grep es
es-data-0 1/1 Running 0 8d
es-data-1 1/1 Running 0 8d
es-data-2 1/1 Running 0 8d
es-ingest-0 1/1 Running 0 8d
es-ingest-1 1/1 Running 0 8d
es-ingest-2 1/1 Running 0 8d
es-master-0 1/1 Running 0 8d
es-master-1 1/1 Running 0 8d
es-master-2 1/1 Running 0 8d
# kubectl -n kafka get services|grep es
es-data ClusterIP None <none> 9300/TCP 8d
es-ingest LoadBalancer 10.108.81.201 <pending> 9200:32039/TCP 8d
es-master ClusterIP None <none> 9300/TCP 8d

Check cluster information

Check cluster health

# curl http://192.168.100.128:32039/_cat/health?v
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1562396264 06:57:44 es-kafka green 9 3 16 8 0 0 0 0 - 100.0%
  • status: red means the cluster is down; yellow means the cluster is usable but not fully reliable (this is what a single-node cluster reports); green means the cluster is healthy.
  • node.total: total number of nodes in the cluster
  • node.data: number of data nodes
  • shards: total number of shards
  • pri: number of primary shards

Check cluster nodes

A node marked with * is the elected master; nodes marked with - are not the master.

# curl http://192.168.100.128:32039/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.244.2.156 21 96 11 2.29 1.17 1.22 d - es-data-1
10.244.2.154 22 96 11 2.29 1.17 1.22 i - es-ingest-2
10.244.2.155 18 96 11 2.29 1.17 1.22 m * es-master-2
10.244.1.93 18 98 7 0.55 0.33 0.54 m - es-master-1
10.244.2.152 15 96 11 2.29 1.17 1.22 m - es-master-0
10.244.1.92 23 98 7 0.55 0.33 0.54 i - es-ingest-1
10.244.1.94 59 98 7 0.55 0.33 0.54 d - es-data-0
10.244.2.153 21 96 11 2.29 1.17 1.22 i - es-ingest-0
10.244.2.157 47 96 11 2.29 1.17 1.22 d - es-data-2

Check the indices

# curl http://192.168.100.128:32039/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .kibana_task_manager Tr36EqEnSDutce244ltNJA 1 1 2 0 42.8kb 21.4kb
green open pod-2019-07-01 yq7v1rCETlKpeG4--wBGIw 1 1 173919 0 131.8mb 61mb
green open .kibana_1 ypI9i51BSiaCN1OwMnMaUQ 1 1 5 0 54.4kb 27.2kb

Deploy Filebeat

# vim filebeat.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kafka
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        path: ${path.config}/inputs.d/*.yml
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false

    output.kafka:
      hosts: ["bootstrap:9092"]
      topic: '%{[fields.log_topic]}'
      enabled: true
      partition.round_robin:
        reachable_only: false
      required_acks: 1
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kafka
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
      fields:
        log_topic: pod
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kafka
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: 192.168.100.100/library/filebeat:7.1.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: KAFKA_HOST
          value: bootstrap
        - name: KAFKA_PORT
          value: "9092"
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kafka
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kafka
  labels:
    k8s-app: filebeat
# kubectl apply -f filebeat.yaml
configmap/filebeat-config configured
configmap/filebeat-inputs configured
daemonset.extensions/filebeat configured
clusterrolebinding.rbac.authorization.k8s.io/filebeat configured
clusterrole.rbac.authorization.k8s.io/filebeat configured
serviceaccount/filebeat configured
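Before moving on it is worth confirming that the DaemonSet is rolled out on every node and that Filebeat is connecting to Kafka; a quick check could be:

# kubectl -n kafka rollout status daemonset/filebeat
# kubectl -n kafka logs daemonset/filebeat --tail=20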

About the kafka output: note that topics are not created automatically here (my local pseudo-distributed Kafka cluster could auto-create topics). They can be created through kafka-manager, or from the command line as sketched below.
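If you prefer the command line over kafka-manager, the topic can also be created with the kafka-topics.sh script shipped in the Kafka distribution. This is only a sketch: the pod name kafka-0, the partition and replication counts, and whether the script takes --bootstrap-server (Kafka >= 2.2) or --zookeeper all depend on your Kafka deployment.

# kubectl -n kafka exec -it kafka-0 -- kafka-topics.sh --create --bootstrap-server bootstrap:9092 --topic pod --partitions 3 --replication-factor 3
# kubectl -n kafka exec -it kafka-0 -- kafka-topics.sh --list --bootstrap-server bootstrap:9092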

Deploy Logstash

Deploy the pipeline

# vim pipeline-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pipeline-config
  namespace: kafka
data:
  logstash.conf: |
    input {
      kafka {
        bootstrap_servers => 'bootstrap:9092'
        topics_pattern => "pod"
        consumer_threads => 3
        decorate_events => true
        codec => "json"
        auto_offset_reset => "latest"
        group_id => "logstash"
      }
    }
    output {
      elasticsearch {
        hosts => ["es-ingest:9200"]
        index => "%{[@metadata][topic]}-%{+YYYY-MM-dd}"
      }
    }
# kubectl apply -f pipeline-config.yaml
configmap/pipeline-config configured

About the Kafka input plugin

About the Elasticsearch output plugin
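With decorate_events => true the Kafka metadata (including the topic name) is exposed under [@metadata], which is what the elasticsearch output uses to build the daily pod-YYYY-MM-dd index name. Once Logstash is running you can also check that its consumer group is keeping up with the topic, for example (same caveats about the pod name and script flags as in the topic-creation sketch above):

# kubectl -n kafka exec -it kafka-0 -- kafka-consumer-groups.sh --bootstrap-server bootstrap:9092 --describe --group logstash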

Deploy logstash

# vim logstash.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: kafka
  labels:
    app: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      securityContext:
        fsGroup: 1000
      containers:
      - name: logstash
        image: 192.168.100.100/library/logstash:7.1.0
        env:
        - name: XPACK_MONITORING_ENABLED
          value: "true"
        - name: xpack.monitoring.elasticsearch.hosts
          value: "http://es-ingest:9200"
        resources:
          limits:
            cpu: '1'
            memory: 2Gi
          requests:
            cpu: '1'
            memory: 2Gi
        volumeMounts:
        - name: config
          mountPath: /usr/share/logstash/pipeline/
      volumes:
      - name: config
        configMap:
          name: pipeline-config
# kubectl apply -f logstash.yaml
deployment.apps/logstash configured
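The Logstash monitoring API on port 9600 gives a quick view of pipeline event counts without going through Kibana; a minimal check via a temporary port-forward:

# kubectl -n kafka port-forward deployment/logstash 9600:9600 &
# curl http://localhost:9600/_node/stats/pipelines?pretty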

Deploy Kibana

# vim kibana-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-config
  namespace: kafka
data:
  kibana.yml: |
    server.name: kibana
    server.port: 5601
    server.host: "0.0.0.0"
    xpack.monitoring.ui.container.elasticsearch.enabled: true
# kubectl apply -f kibana-config.yaml
configmap/kibana-config configured
# vim kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kafka
  labels:
    k8s-app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
    spec:
      containers:
      - name: kibana
        image: 192.168.100.100/library/kibana:7.1.0
        resources:
          requests:
            memory: 2Gi
            cpu: '1'
          limits:
            memory: 4Gi
            cpu: '1'
        env:
        - name: ELASTICSEARCH_HOSTS
          value: "http://es-ingest:9200"
        ports:
        - containerPort: 5601
          name: http
          protocol: TCP
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config/kibana.yml
          subPath: kibana.yml
      volumes:
      - name: config
        configMap:
          name: kibana-config
# kubectl apply -f kibana.yaml
deployment.apps/kibana configured
# vim kibana-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kafka
  labels:
    k8s-app: kibana
spec:
  type: LoadBalancer
  ports:
  - port: 5601
    protocol: TCP
    targetPort: http
  selector:
    k8s-app: kibana
# kubectl apply -f kibana-service.yaml
service/kibana configured
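As with es-ingest, the LoadBalancer stays <pending> in this environment, so Kibana is reached through its NodePort (32431 in the listing below) or a temporary port-forward:

# kubectl -n kafka get svc kibana -o jsonpath='{.spec.ports[0].nodePort}'
# kubectl -n kafka port-forward svc/kibana 5601:5601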

Verification

Check pods and services

# kubectl get pod -n kafka | egrep "filebeat|logstash|es|kibana"
es-data-0 1/1 Running 0 9d
es-data-1 1/1 Running 0 9d
es-data-2 1/1 Running 0 9d
es-ingest-0 1/1 Running 0 9d
es-ingest-1 1/1 Running 0 9d
es-ingest-2 1/1 Running 0 9d
es-master-0 1/1 Running 0 9d
es-master-1 1/1 Running 0 9d
es-master-2 1/1 Running 0 9d
filebeat-skwf8 1/1 Running 0 9d
filebeat-zmjs7 1/1 Running 0 9d
kibana-fc98f5b47-hml5g 1/1 Running 0 9d
logstash-5d86f89fdc-x5jgh 1/1 Running 0 9d
# kubectl get service -n kafka | egrep "filebeat|logstash|es|kibana"
es-data ClusterIP None <none> 9300/TCP 9d
es-ingest LoadBalancer 10.108.81.201 <pending> 9200:32039/TCP 9d
es-master ClusterIP None <none> 9300/TCP 9d
kibana LoadBalancer 10.105.150.55 <pending> 5601:32431/TCP 9d

Check whether Elasticsearch has picked up the indices

# curl http://192.168.100.128:32039/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open pod-2019-07-05 eG9vVDWlRGqVJBxIqznoHg 1 1 357001 0 196.1mb 97.5mb
green open pod-2019-07-03 RIGZT8VpQVCBAieZU0mPbw 1 1 266771 125 199.9mb 99.9mb
green open pod-2019-07-06 sLSU4y6YSoO8rc1dJFKhmw 1 1 338327 0 229.9mb 114.9mb
green open .kibana_task_manager Tr36EqEnSDutce244ltNJA 1 1 2 0 42.8kb 21.4kb
green open pod-2019-07-04 vOm3X34xR6CxOVz2YpHzww 1 1 333639 0 226.3mb 106.2mb
green open pod-2019-07-07 GaUrBGjkR_asjOl8xnv74Q 1 1 19 0 27mb 13.5mb
green open pod-2019-07-02 menu1_B6TdGv4rHhYY2DDA 1 1 505044 0 262.2mb 131.1mb
green open pod-2019-07-01 yq7v1rCETlKpeG4--wBGIw 1 1 173919 0 131.8mb 61mb
green open .kibana_1 ypI9i51BSiaCN1OwMnMaUQ 1 1 5 0 54.4kb 27.2kb

Log in to Kibana

Create an index pattern

(screenshots omitted)
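If you prefer to script this step instead of clicking through the Kibana UI, the saved objects API can create the same index pattern. A rough equivalent, assuming the pod-* pattern and @timestamp time field match the indices shown above:

# curl -X POST "http://192.168.100.128:32431/api/saved_objects/index-pattern" -H "kbn-xsrf: true" -H "Content-Type: application/json" -d '{"attributes":{"title":"pod-*","timeFieldName":"@timestamp"}}'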

View the data

(screenshots omitted)
