- prevExist: whether the key existed before this write
- prevValue: the key's value before this write
- prevIndex: the key's index before this write
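These conditions correspond to the compare-and-swap query parameters of the etcd v2 HTTP API; a minimal sketch (endpoint, key, and values are illustrative):

# Succeeds only if /foo does not exist yet (atomic create)
$ curl -s 'http://127.0.0.1:2379/v2/keys/foo?prevExist=false' -XPUT -d value=new
# Succeeds only if the current value of /foo is "old" (compare-and-swap)
$ curl -s 'http://127.0.0.1:2379/v2/keys/foo?prevValue=old' -XPUT -d value=new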
The etcd v3 watch mechanism supports watching a single fixed key as well as watching a key range.
Each WatchableStore contains two watcherGroups: synced and unsynced. Watchers in the synced
group have fully caught up with the store and are waiting for new changes; watchers in the
unsynced group lag behind the latest revision and are still catching up.
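With etcdctl (v3), both forms can be tried; the key names here are illustrative:

# Watch a single key
$ etcdctl watch /services/a
# Watch a key range [/services/a, /services/z)
$ etcdctl watch /services/a /services/z
# Watch all keys under a prefix
$ etcdctl watch --prefix /services/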
Member flags (see the example invocation after this list)
--name 'default'
Human-readable name for this member.
--data-dir '${name}.etcd'
Path to the data directory.
--listen-peer-urls 'http://localhost:2380'
List of URLs to listen on for peer traffic.
--listen-client-urls 'http://localhost:2379'
List of URLs to listen on for client traffic.
--initial-advertise-peer-urls 'http://localhost:2380'
List of this member's peer URLs to advertise to the rest of the cluster.
--initial-cluster 'default=http://localhost:2380'
Initial cluster configuration for bootstrapping.
--initial-cluster-state 'new'
Initial cluster state ('new' or 'existing').
--initial-cluster-token 'etcd-cluster'
Initial cluster token for the etcd cluster during bootstrap.
--advertise-client-urls 'http://localhost:2379'
List of this member's client URLs to advertise to the public.
--cert-file ''
Path to the client server TLS cert file.
--key-file ''
Path to the client server TLS key file.
--client-crl-file ''
Path to the client certificate revocation list file.
--trusted-ca-file ''
Path to the client server TLS trusted CA cert file.
--peer-cert-file ''
Path to the peer server TLS cert file.
--peer-key-file ''
Path to the peer server TLS key file.
--peer-trusted-ca-file ''
Path to the peer server TLS trusted CA file.
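Putting the member flags together, a single-node bootstrap could look like the following (values mirror the defaults listed above):

etcd --name 'default' \
  --data-dir default.etcd \
  --listen-peer-urls http://localhost:2380 \
  --listen-client-urls http://localhost:2379 \
  --initial-advertise-peer-urls http://localhost:2380 \
  --advertise-client-urls http://localhost:2379 \
  --initial-cluster 'default=http://localhost:2380' \
  --initial-cluster-state new \
  --initial-cluster-token etcd-cluster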
- Create a snapshot
etcdctl --endpoints https://127.0.0.1:3379 \
  --cert /tmp/etcd-certs/certs/127.0.0.1.pem \
  --key /tmp/etcd-certs/certs/127.0.0.1-key.pem \
  --cacert /tmp/etcd-certs/certs/ca.pem \
  snapshot save snapshot.db
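Before restoring, the saved snapshot can be sanity-checked with snapshot status:

etcdctl --write-out=table snapshot status snapshot.db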
- Restore data
etcdctl snapshot restore snapshot.db \
  --name infra2 \
  --data-dir=/tmp/etcd/infra2 \
  --initial-cluster infra0=http://127.0.0.1:3380,infra1=http://127.0.0.1:4380,infra2=http://127.0.0.1:5380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-advertise-peer-urls http://127.0.0.1:5380
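A multi-member cluster runs the same restore once per member from the same snapshot, varying only --name, --data-dir, and --initial-advertise-peer-urls; for example, for infra0:

etcdctl snapshot restore snapshot.db \
  --name infra0 \
  --data-dir=/tmp/etcd/infra0 \
  --initial-cluster infra0=http://127.0.0.1:3380,infra1=http://127.0.0.1:4380,infra2=http://127.0.0.1:5380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-advertise-peer-urls http://127.0.0.1:3380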
- Set the etcd storage quota
$ etcd --quota-backend-bytes=$((16*1024*1024))   # 16 MB, deliberately small for this demo
- Write until the quota is exhausted
$ while [ 1 ]; do dd if=/dev/urandom bs=1024 count=1024 | ETCDCTL_API=3 etcdctl put key || break; done
- Check endpoint status
$ ETCDCTL_API=3 etcdctl --write-out=table endpoint status
- List active alarms
$ ETCDCTL_API=3 etcdctl alarm list
- Defragment
$ ETCDCTL_API=3 etcdctl defrag
- Disarm alarms
$ ETCDCTL_API=3 etcdctl alarm disarm
- Compaction and defragmentation
$ etcd --auto-compaction-retention=1
$ etcdctl compact 3
$ etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
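In practice the compaction revision is taken from the cluster's current revision rather than hard-coded; a sketch along the lines of the etcd maintenance docs:

# Read the current revision from endpoint status, then compact up to it
$ rev=$(ETCDCTL_API=3 etcdctl endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9].*')
$ ETCDCTL_API=3 etcdctl compact $rev
$ ETCDCTL_API=3 etcdctl defrag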
etcd-operator: open-sourced by CoreOS; manages etcd cluster configuration via Kubernetes CRDs. Archived.
https://github.com/coreos/etcd-operator
etcd StatefulSet Helm chart: Bitnami (powered by VMware)
https://bitnami.com/stack/etcd/helm
https://github.com/bitnami/charts/blob/master/bitnami
- Install Helm
https://github.com/helm/helm/releases
- Install etcd via Helm
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/etcd
- Interact with the server via a client
kubectl run my-release-etcd-client --restart='Never' \
  --image docker.io/bitnami/etcd:3.5.0-debian-10-r94 \
  --env ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-etcd -o jsonpath="{.data.etcd-root-password}" | base64 --decode) \
  --env ETCDCTL_ENDPOINTS="my-release-etcd.default.svc.cluster.local:2379" \
  --namespace default --command -- sleep infinity
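With the client pod running, etcdctl can be exercised from inside it; a minimal sketch assuming the ROOT_PASSWORD env var injected by the kubectl run command above:

$ kubectl exec -it my-release-etcd-client --namespace default -- sh -c \
  'etcdctl --user root:$ROOT_PASSWORD put /message Hello && etcdctl --user root:$ROOT_PASSWORD get /message'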
etcd is the backend store for Kubernetes.
For every Kubernetes object type there is a corresponding storage.go that implements that
object's storage operations.
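Since every object is persisted under etcd's /registry prefix, the stored data can be inspected directly with etcdctl; a sketch (auth flags omitted, pod name illustrative; values are protobuf-encoded, so the output is not plain text):

# Browse the key layout for pods, then fetch one object
$ ETCDCTL_API=3 etcdctl get --prefix --keys-only /registry/pods
$ ETCDCTL_API=3 etcdctl get /registry/pods/default/my-pod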
The etcd server cluster is specified in the API server's startup flags:
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.34.2
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
- apiserver and etcd are deployed on the same node
- Communication between apiserver and etcd is based on gRPC
- For each object type, apiserver and etcd share one connection, multiplexed into streams
- This relies on HTTP/2 features
- Streams are subject to HTTP/2's stream quota
- The resulting problem: in a large cluster, the shared link can become congested
- With 10,000 pods, a single list operation may need to return more than 100 MB of data (see the paginated list sketch below)
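One client-side mitigation is paginated list requests, so no single response has to carry the entire result set; for example:

# Fetch pods in chunks of 500 objects per request instead of one huge response
$ kubectl get pods --all-namespaces --chunk-size=500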
- Encrypt peer-to-peer traffic
- Encrypt data at rest
- Store events separately from the main data (see the --etcd-servers-overrides sketch below)
- Raise etcd's disk I/O priority:
$ ionice -c2 -n0 -p $(pgrep etcd)
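In kube-apiserver, event separation is configured with the --etcd-servers-overrides flag, which routes a high-churn resource to a dedicated etcd cluster; the endpoints below are illustrative:

kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --etcd-servers-overrides=/events#https://127.0.0.1:2382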