Deploying an Elasticsearch and Kibana Cluster on Kubernetes with ECK

Published: December 28, 2023

1. Installing ECK
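
ECK itself is installed by applying Elastic's official CRD and operator manifests; a typical sequence looks like the following (the 2.10.0 release here is only an example, pick an ECK version compatible with your Kubernetes and Elasticsearch versions):

$ kubectl create -f https://download.elastic.co/downloads/eck/2.10.0/crds.yaml
$ kubectl apply -f https://download.elastic.co/downloads/eck/2.10.0/operator.yaml
$ kubectl -n elastic-system logs -f statefulset.apps/elastic-operator    # watch the operator come up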

2. Deploying the Elasticsearch Cluster

  • The first deployment attempt failed: there was not enough memory and the PersistentVolumeClaims were never bound.
  • The fix was to add memory and CPU and bind the PVCs. The scheduler reported events like the ones below (see how to view them right after this list):
    • Warning FailedScheduling 14s default-scheduler 0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
    • Warning FailedScheduling 12s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod..
    • Warning FailedScheduling 48s default-scheduler 0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
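These events come from the Pod stuck in Pending; they can be viewed with kubectl describe against that Pod (the Pod name below is an example):

$ kubectl describe pod quickstart-es-default-0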
$ vim nfs-pvc.yaml
-------------------------------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-sc
-------------------------------------
$ kubectl apply -f nfs-pvc.yaml
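
The claim above assumes an nfs-sc StorageClass, backed by an NFS provisioner, already exists in the cluster; it can be verified before applying:

$ kubectl get storageclass nfs-sc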

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.10.0
  image: yourharbor:3xxxx/elastic/elasticsearch:7.10.0   # when using a private registry, set the image and pin the version
  nodeSets:
  - name: default
    count: 3   # number of nodes; adjust as needed
    config:
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms512m -Xmx512m
          resources:
            requests:
              cpu: 1
              memory: 1Gi
            limits:
              cpu: 1
              memory: 1Gi
  http:
    tls:
      selfSignedCertificate:
        disabled: true
EOF
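
Note that ECK expands volumeClaimTemplates into one PVC per Pod (for example elasticsearch-data-quickstart-es-default-0), and those claims also need a StorageClass that can provision volumes. If nfs-sc is not the cluster's default class, it can be set explicitly in the template; a sketch, reusing the nfs-sc class from earlier (the storage size is only an example):

-------------------------------------
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: nfs-sc   # explicit class so the claims can be dynamically provisioned
        resources:
          requests:
            storage: 10Gi
-------------------------------------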
  • After it is applied, just wait for the Pods to become ready.

[root@k8s-master01 ~]# kubectl get pods -owide
NAME                            READY   STATUS    RESTARTS      AGE     IP             NODE         NOMINATED NODE   READINESS GATES
quickstart-es-default-0         1/1     Running   0             28m     10.244.1.121   k8s-node02   <none>           <none>
quickstart-es-default-1         1/1     Running   0             28m     10.244.2.115   k8s-node03   <none>           <none>
quickstart-es-default-2         1/1     Running   0             28m     10.244.1.122   k8s-node02   <none>           <none>
quickstart-kb-5bd78dcb9-8rlfq   1/1     Running   0             14m     10.244.2.116   k8s-node03   <none>           <none>




[root@k8s-master01 ~]# kubectl get secret
NAME                                       TYPE     DATA   AGE
default-quickstart-kibana-user             Opaque   3      22m
quickstart-es-default-es-config            Opaque   1      36m
quickstart-es-default-es-transport-certs   Opaque   7      36m
quickstart-es-elastic-user                 Opaque   1      36m
quickstart-es-http-ca-internal             Opaque   2      36m
quickstart-es-http-certs-internal          Opaque   3      36m
quickstart-es-http-certs-public            Opaque   2      36m
quickstart-es-internal-users               Opaque   4      36m
quickstart-es-remote-ca                    Opaque   1      36m
quickstart-es-transport-ca-internal        Opaque   2      36m
quickstart-es-transport-certs-public       Opaque   1      36m
quickstart-es-xpack-file-realm             Opaque   4      36m
quickstart-kb-config                       Opaque   1      22m
quickstart-kb-es-ca                        Opaque   2      22m
quickstart-kibana-user                     Opaque   1      22m


[root@k8s-master01 ~]# kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'
yG197EPv635nf60IcvsIh35

[root@k8s-master01 ~]# curl -u "elastic:yG197EPv635nf60IcvsIh35X" -k "http://10.244.1.121:9200"
{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "mMEuTMd0QQCeQKsFvsTU2w",
  "version" : {
    "number" : "7.10.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
    "build_date" : "2020-11-09T21:30:33.964949Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
[root@k8s-master01 ~]#
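
The curl above goes straight to a Pod IP. ECK also creates a ClusterIP Service named quickstart-es-http on port 9200, so the same check works without looking up Pod IPs; a small sketch using kubectl port-forward ($PASSWORD is assumed to hold the elastic password retrieved above):

$ PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
$ kubectl port-forward service/quickstart-es-http 9200 &
$ curl -u "elastic:$PASSWORD" "http://localhost:9200/_cluster/health?pretty"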

3. Deploying the Kibana Cluster

cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.10.0
  image: yourharbor:3xxxx/elastic/kibana:7.10.0   # when using a private registry, set the image and pin the version
  count: 1
  elasticsearchRef:
    name: quickstart
    namespace: default
  http:
    tls:
      selfSignedCertificate:
        disabled: true
EOF
  • After it is applied, just wait for the Pod to become ready (a quick check is shown below).
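
Readiness can also be followed through the Kibana custom resource itself; its HEALTH column turns green once the instance is up:

$ kubectl get kibana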

4. Access Test

  • Edit the quickstart-kb-http Service for external access; I did this directly in KubeSphere (a plain kubectl alternative is sketched after this list).
  • Browse to a node IP plus the exposed port; in my case that is http://192.168.221.131:3xxxx/.
  • Log in as elastic with the password.
  • Kibana starts up successfully.
  • How to get the password:
  • kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'
  • The string it prints is the password; the default username is elastic.
  • That completes the deployment. ECK really does make this convenient.
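
For clusters without KubeSphere, the same external access can be set up with plain kubectl by switching the Service ECK created for Kibana to NodePort and reading back the assigned port (a minimal sketch; Kibana listens on 5601):

$ kubectl patch service quickstart-kb-http -p '{"spec":{"type":"NodePort"}}'
$ kubectl get service quickstart-kb-http    # note the node port mapped to 5601

Then open http://<node-ip>:<node-port>/ and log in as elastic with the password above.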
Article source: https://blog.csdn.net/alksjdfp32r/article/details/135272717