Rancher is a container-management platform built for companies that run containers. It simplifies working with Kubernetes: developers can run Kubernetes anywhere ("Run Kubernetes Everywhere"), satisfy IT requirements, and empower DevOps teams. Rancher 2.x is built entirely around Kubernetes and can deploy and manage Kubernetes clusters running anywhere.
There are currently two ways to deploy Elasticsearch on Kubernetes: the All-in-One operator manifest (https://github.com/elastic/cloud-on-k8s) and Helm.
This article does not cover the basics of Rancher, Kubernetes, or Elastic Cloud on Kubernetes (ECK). If you need the background, see the following links:
Rancher: https://rancher.com/
ECK: https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html
Kubernetes: https://kubernetes.io/

Install Rancher
Following the official documentation, install Rancher (with its embedded k3s cluster):
$ sudo docker run --privileged -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
The Docker flag "--privileged" must be included here; it grants the container extended privileges. I recommend upgrading to version 2.5.5 so you can try the latest Cluster Explorer interface.
Professionally speaking, the operations that follow are executed with kubectl. I recommend working from the CLI; clicking through the web UI is simply too slow.
Then log in:
$ ./rancher login https://localhost/v3 --token token-vvc4g:jf46m79l2ml49n8qpkq7cdnd8wlfwndjvnwvjds4rcvq5cb8z4gn6b
The authenticity of server 'https://localhost' can't be established.
Cert chain is : [Certificate:
Data:
    Version: 3 (0x2)
    Serial Number: 3145037638851565576 (0x2ba56b11c4fb1808)
    ( output trimmed for brevity ...... )
Signature Algorithm: ECDSA-SHA256
    30:45:02:20:61:9d:15:bc:aa:18:88:9e:54:7b:06:33:62:15:]
Do you want to continue connecting (yes/no)? yes
INFO[0001] Saving config to /home/billy/.rancher/cli2.json
Install ECK
ECK itself can also be installed two ways: All-in-One or Helm.
All-in-One:

$ kubectl apply -f https://download.elastic.co/downloads/eck/1.4.0/all-in-one.yaml
Helm:
$ helm repo add elastic https://helm.elastic.co
$ helm repo update
Run the default installation:
$ helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace
After the installation finishes (given typical network speeds it may take a while), check the operator logs:
$ kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
Install the ES cluster nodes
In this example we will create 3 master nodes and 3 data nodes. If you follow the official ES documentation to install the cluster nodes, you will run into many PersistentVolume (PV) and PersistentVolumeClaim (PVC) problems. This is because Rancher on Docker is not a real Kubernetes cluster but a "fake" one, so the PVs have to be created by hand. Longhorn, the storage solution Rancher recommends, cannot be installed on this "fake" cluster either, so local storage is used here instead.
Create a Storage Class

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Create the PVs (via a yaml file):
$ kubectl apply -f pvdemo.yaml
pvdemo.yaml is as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/elasticsearch1/"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch2
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/elasticsearch2/"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch3
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/elasticsearch3/"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch4
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/elasticsearch4/"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch5
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/elasticsearch5/"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch6
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/elasticsearch6/"
  storageClassName: local-storage
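Kubernetes does not validate hostPath directories for you with the default hostPath type, and directories auto-created by the container runtime end up owned by root, which can cause permission problems. It is safest to pre-create the six directories on the node. A minimal sketch (ES_DATA_ROOT is a variable invented for this sketch; on the real node set it to /data and run with sufficient privileges):

```shell
# Pre-create the six hostPath directories the PVs above point at.
# ES_DATA_ROOT defaults to a throwaway temp dir so the loop can be
# dry-run safely; on the actual node run it as:
#   sudo ES_DATA_ROOT=/data sh thisscript.sh
ES_DATA_ROOT="${ES_DATA_ROOT:-$(mktemp -d)}"
for i in 1 2 3 4 5 6; do
  mkdir -p "${ES_DATA_ROOT}/elasticsearch${i}"
done
ls "${ES_DATA_ROOT}"
```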
With the PVs in place, create the yaml file for the ES nodes:
$ kubectl apply -f esdemo.yaml
esdemo.yaml looks like this:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es-demo-cluster
spec:
  version: 7.11.1
  nodeSets:
  - name: masters
    count: 3
    config:
      node.roles: ["master"]
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: local-storage
  - name: data
    count: 3
    config:
      node.roles: ["data"]
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: local-storage
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-demo
spec:
  version: 7.11.1
  count: 1
  elasticsearchRef:
    name: "es-demo-cluster"
    namespace: default
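The nodeSets above take ECK's defaults for container memory and JVM heap. If you need to pin them, each nodeSet accepts a podTemplate. A hedged sketch; the numbers below are illustrative values, not taken from the demo cluster:

```yaml
# Illustrative only: per-nodeSet resource settings via a podTemplate.
# Drop this in under spec.nodeSets in place of the bare "data" entry.
- name: data
  count: 3
  config:
    node.roles: ["data"]
  podTemplate:
    spec:
      containers:
      - name: elasticsearch
        env:
        - name: ES_JAVA_OPTS
          value: "-Xms1g -Xmx1g"
        resources:
          requests:
            memory: 2Gi
            cpu: 1
          limits:
            memory: 2Gi
```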
After a short wait you should see results like these:
$ kubectl get elasticsearch
NAME              HEALTH   NODES   VERSION   PHASE   AGE
es-demo-cluster   green    6       7.11.1    Ready   2d4h
$ kubectl get svc
NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
es-demo-cluster-es-data        ClusterIP   None            <none>        9200/TCP   2d4h
es-demo-cluster-es-http        ClusterIP   10.43.182.49    <none>        9200/TCP   2d4h
es-demo-cluster-es-masters     ClusterIP   None            <none>        9200/TCP   2d4h
es-demo-cluster-es-transport   ClusterIP   None            <none>        9300/TCP   2d4h
kibana-demo-kb-http            ClusterIP   10.43.149.151   <none>        5601/TCP   2d5h
kubernetes                     ClusterIP   10.43.0.1       <none>        443/TCP    3d
Done.
Access the ES service
The most important step remains: accessing the ES cluster service. By default, ECK secures all ES traffic with certificates issued through the Kubernetes certificate machinery; custom certificates can also be configured (see the official ECK documentation for details). The demo below uses the default certificates. Since this is a single-machine demo, we port-forward the service to a local port:
$ kubectl port-forward --address 0.0.0.0 service/es-demo-cluster-es-http 9200:9200
Look up the ES cluster user and password (token):
$ kubectl get secret
NAME                                            TYPE                                  DATA   AGE
default-kibana-demo-kibana-user                 Opaque                                3      2d4h
default-token-5xs96                             kubernetes.io/service-account-token   3      3d
es-demo-cluster-es-data-es-config               Opaque                                1      2d4h
es-demo-cluster-es-data-es-transport-certs      Opaque                                7      2d4h
es-demo-cluster-es-elastic-user                 Opaque                                1      2d4h
es-demo-cluster-es-http-ca-internal             Opaque                                2      2d4h
es-demo-cluster-es-http-certs-internal          Opaque                                3      2d4h
es-demo-cluster-es-http-certs-public            Opaque                                2      2d4h
es-demo-cluster-es-internal-users               Opaque                                2      2d4h
es-demo-cluster-es-masters-es-config            Opaque                                1      2d4h
es-demo-cluster-es-masters-es-transport-certs   Opaque                                7      2d4h
es-demo-cluster-es-remote-ca                    Opaque                                1      2d4h
es-demo-cluster-es-transport-ca-internal        Opaque                                2      2d4h
es-demo-cluster-es-transport-certs-public       Opaque                                1      2d4h
es-demo-cluster-es-xpack-file-realm             Opaque                                3      2d4h
kibana-demo-kb-config                           Opaque                                2      2d5h
kibana-demo-kb-es-ca                            Opaque                                2      2d5h
kibana-demo-kb-http-ca-internal                 Opaque                                2      2d5h
kibana-demo-kb-http-certs-internal              Opaque                                3      2d5h
kibana-demo-kb-http-certs-public                Opaque                                2      2d5h
kibana-demo-kibana-user                         Opaque                                1      2d5h
The list shows the cluster's certificate and credential secrets. We will use the built-in "elastic" user to access the service:
$ kubectl get secret es-demo-cluster-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
z0ioASEfZ883Vqq46ci562g9
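The jsonpath expression pulls the elastic key out of the secret's data, which Kubernetes always stores base64-encoded; the trailing base64 --decode is plain base64, nothing cluster-specific. A local round-trip with the demo password illustrates this:

```shell
# Kubernetes Secret values are base64-encoded at rest; decoding them needs
# nothing but the base64 tool. Round-trip the demo password from above:
password='z0ioASEfZ883Vqq46ci562g9'
encoded=$(printf '%s' "$password" | base64)
echo "$encoded"                                # the form stored in .data.elastic
printf '%s' "$encoded" | base64 --decode; echo # back to the original password
```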
With the password (z0ioASEfZ883Vqq46ci562g9) in hand, we can call the ES service:
$ curl https://localhost:9200 -u elastic:z0ioASEfZ883Vqq46ci562g9 -k
{
  "name" : "es-demo-cluster-es-masters-1",
  "cluster_name" : "es-demo-cluster",
  "cluster_uuid" : "5i3kiPMURgKfa1H4mm9bmg",
  "version" : {
    "number" : "7.11.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ff17057114c2199c9c1bbecc727003a907c0db7a",
    "build_date" : "2021-02-15T13:44:09.394032Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
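The -k flag tells curl to skip verification of ECK's self-signed certificate. For a throwaway environment you can instead disable the self-signed certificate in the Elasticsearch spec, so the service answers on plain HTTP. A sketch against the cluster defined earlier (not for production):

```yaml
# Disable ECK's self-signed HTTP certificate for es-demo-cluster;
# the service will then listen on plain http://. Demo use only.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es-demo-cluster
spec:
  http:
    tls:
      selfSignedCertificate:
        disabled: true
```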
Likewise, forward port 5601 (e.g. $ kubectl port-forward --address 0.0.0.0 service/kibana-demo-kb-http 5601:5601) to reach Kibana in the browser. I won't belabor it; just remember it is served over HTTPS (https://localhost:5601).
Conclusion
There are a few small pitfalls in the ES installation process, but with some understanding of Kubernetes you can step over them easily. I hope this article has been of some help.