elasticsearch
elasticsearch manifest
This uses the PVs created previously; more PVs would have to be added to cover more nodes.
This example creates multi-purpose nodes instead of dedicated master nodes. If I had more of a lab I'd consider adding dedicated nodes.
The service definition is still somewhat experimental; hopefully it works out.
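For reference, one of those local PVs might look roughly like this sketch (the node name and mount path are placeholders from my lab, not taken from this manifest); one PV like this is needed per Elasticsearch node:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data-0
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-data
  local:
    path: /mnt/elasticsearch  # placeholder; use wherever your disk is mounted
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-worker-1  # placeholder node name

With those in place, the full manifest: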
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 8.16.1
  nodeSets:
  - name: default
    config:
      node.roles: [ "master", "data", "ingest", "transform", "ml" ]
      node.store.allow_mmap: false
    podTemplate:
      metadata:
        labels:
          deployment: prod
      spec:
        containers:
        - name: elasticsearch
          resources:
            limits:
              memory: 8Gi
              cpu: 1
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms3g -Xmx3g"
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: local-data
  http:
    service:
      spec:
        type: LoadBalancer
        loadBalancerIP: 192.168.8.80
    tls:
      selfSignedCertificate:
        subjectAltNames:
        - ip: 192.168.8.80
        - dns: elasticsearch.k8s.wafflelab.online
manifest metadata
This first section defines that we're using the Elastic APIs and that this resource is an Elasticsearch cluster managed by the ECK operator. Naming it elasticsearch is an easy choice, and this name will be referenced in other resource definitions, so make sure the name is unique enough for your k8s installation.
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
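For example, a Kibana resource points back at this cluster by that name. A minimal sketch (the Kibana name, version, and count here are assumptions, not part of this manifest):

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 8.16.1
  count: 1
  elasticsearchRef:
    name: elasticsearch  # must match metadata.name of the Elasticsearch resource above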
container spec
This section contains a bit of information about the cluster. version is the version of Elasticsearch to run; it should be kept in sync with Kibana and any agents. Under config are configuration options passed to Elasticsearch itself. In this instance I've included node.roles so each node knows what it should be doing, and node.store.allow_mmap: false so the hosts don't need their vm.max_map_count limit raised.
spec:
  version: 8.16.1
  nodeSets:
  - name: default
    config:
      node.roles: [ "master", "data", "ingest", "transform", "ml" ]
      node.store.allow_mmap: false
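With more hardware, the roles could be split into dedicated nodeSets instead. A sketch of what separate master and data tiers might look like (names and counts are illustrative):

nodeSets:
- name: master
  count: 3
  config:
    node.roles: [ "master" ]
    node.store.allow_mmap: false
- name: data
  count: 3
  config:
    node.roles: [ "data", "ingest", "transform", "ml" ]
    node.store.allow_mmap: false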
podTemplate
The next section includes more configuration, this time for the pods themselves. The labels under metadata can contain whatever information you want. Under spec we name the container. Here the name must be elasticsearch: that's the container the operator creates, and using any other name would add a sidecar container instead of customizing it. resources sets the resources allotted to each container, in this case limits of 8Gi of RAM and 1 CPU. These values should be adjusted depending on the resources available (and I have very few). Finally, env sets an environment variable for Elasticsearch: ES_JAVA_OPTS pins the JVM heap at 3g (initial and maximum), keeping it well under the container's memory limit.
podTemplate:
  metadata:
    labels:
      deployment: prod
  spec:
    containers:
    - name: elasticsearch
      resources:
        limits:
          memory: 8Gi
          cpu: 1
      env:
      - name: ES_JAVA_OPTS
        value: "-Xms3g -Xmx3g"
number of nodes and storage volumes
count is the number of these nodes to run; in this case three, all acting as hot nodes. This section also defines the PersistentVolumeClaims. As noted in the comment, the elasticsearch-data name should not change. I'm using local PVs, and since a local volume is bound to a single node, ReadWriteOnce is the right access mode. Each claim requests 100Gi of storage from the local-data StorageClass (sketched below).
count: 3
volumeClaimTemplates:
- metadata:
    name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 100Gi
    storageClassName: local-data
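The local-data StorageClass was set up earlier along with the PVs; roughly, it's a no-provisioner class, something like this sketch:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-data
provisioner: kubernetes.io/no-provisioner  # local PVs are created by hand, not dynamically
volumeBindingMode: WaitForFirstConsumer    # don't bind a PV until a pod is scheduled to a node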
http service
Elasticsearch runs an HTTP service, and this section defines some of its options. I'm using a LoadBalancer (MetalLB) to handle connections, and I've reserved the IP 192.168.8.80. For TLS I'm using the operator's self-signed certificate, with subjectAltNames added for the load balancer IP and the DNS name so clients can validate either one. This cert only covers incoming client connections; node-to-node transport TLS is handled separately by the operator.
http:
  service:
    spec:
      type: LoadBalancer
      loadBalancerIP: 192.168.8.80
  tls:
    selfSignedCertificate:
      subjectAltNames:
      - ip: 192.168.8.80
      - dns: elasticsearch.k8s.wafflelab.online
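Alternatively, your own cert/key pair can be supplied in a k8s secret (containing tls.crt and tls.key; here the secret is named elasticsearch-tls) in place of the self-signed certificate:

http:
  tls:
    certificate:
      secretName: elasticsearch-tls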