...
Overall
⭐ EPS = Events Per Second
EPS (Requirement) | ES Configuration | Hardware per Node (vCPU, RAM) | Elasticsearch JVM Heap | Shards | Replicas |
---|---|---|---|---|---|
Up to 1K (no replica) | All-in-one | 8 vCPU, 16 GB | 8 GB | 5 | 0 |
Up to 1K (with replica) | 3-node cluster | 8 vCPU, 16 GB | 8 GB | 5 | 1 |
1K–5K (with replica) | 3-node cluster | 8 vCPU, 64 GB | 30 GB | 5 | 1 |
5K–10K (with replica) | Coordinating and Master Node | 8 vCPU, 32 GB | 16 GB | | |
 | 3 Data Nodes | 8 vCPU, 64 GB | 30 GB | 5 | 1 |
10K–15K (with replica) | Coordinating Node | 16 vCPU, 32 GB | 16 GB | | |
 | Master Node | 8 vCPU, 16 GB | 8 GB | | |
 | 3 Data Nodes | 16 vCPU, 64 GB | 30 GB | 10 | 1 |
... | | | | | |
35K–45K (with replica) | Coordinating Node | 16 vCPU, 64 GB | 30 GB | | |
 | Master Node | 8 vCPU, 16 GB | 8 GB | | |
 | 9 Data Nodes | 16 vCPU, 64 GB | 30 GB | 25 | 1 |
Add 5K EPS (with replica) | Add 1 Data Node | 16 vCPU, 64 GB | 30 GB | Add 3 shards | 1 |
https://docs.fortinet.com/document/fortisiem/6.1.0/sizing-guide/307212/fortisiem-sizing-information
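The last row of the table is an incremental scaling rule. A minimal sketch (my own helper, not a FortiSIEM tool) of extrapolating data-node and shard counts past the 35K–45K tier using that rule:

```python
import math

def scale_out(eps):
    """Estimate data nodes and shards for EPS above the 35K-45K tier.

    Baseline from the sizing table: 45K EPS -> 9 data nodes, 25 shards.
    Each additional 5K EPS adds 1 data node and 3 shards.
    """
    extra_steps = max(0, math.ceil((eps - 45_000) / 5_000))
    data_nodes = 9 + extra_steps
    shards = 25 + 3 * extra_steps
    return data_nodes, shards

print(scale_out(45_000))  # (9, 25)  -- baseline tier
print(scale_out(60_000))  # (12, 34) -- three 5K increments
```

This is a lower-bound estimate; the guide's tiers below 45K do not grow strictly linearly, so treat the extrapolation as a starting point for capacity planning only.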
Storage per Day
Suppose:
- R: number of replicas (at least 1 is recommended)
- E: EPS
- D: retention (days) on Hot nodes
Data per day = E * #seconds in a day (86,400) * 500 bytes * (R + 1)
Storage per day = 1.25 * Data per day (25% overhead)
Total Hot storage = Storage per day * D
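The formulas above can be sketched as follows (the TB conversion and the `hot_storage_tb` helper name are my additions):

```python
SECONDS_PER_DAY = 86_400
BYTES_PER_EVENT = 500  # average event size assumed by the sizing guide

def storage_per_day_tb(eps, replicas=1):
    """Daily storage in decimal TB: raw events, times (R+1) copies, plus 25% overhead."""
    data = eps * SECONDS_PER_DAY * BYTES_PER_EVENT * (replicas + 1)
    return 1.25 * data / 1e12

def hot_storage_tb(eps, retention_days, replicas=1):
    """Total Hot-tier storage over the retention window D."""
    return storage_per_day_tb(eps, replicas) * retention_days

print(round(storage_per_day_tb(10_000), 2))  # 1.08 (TB/day at 10K EPS, 1 replica)
```

At 10K EPS with one replica this gives roughly 1 TB/day, which matches the "Storage per Day" column in the node-count tables below.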
Recommended Elasticsearch Configuration
...
- In Elasticsearch 6.x, Fortinet has observed that Elasticsearch CLI performance degrades when the total number of shards in the cluster (including Hot and Warm nodes) is more than 15K. Newer versions may have a higher upper limit.
Dedicated master nodes in Amazon OpenSearch Service
https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-dedicatedmasternodes.html
Instance count | Master node RAM size | Maximum supported shard count | Recommended minimum dedicated master instance type |
---|---|---|---|
1–10 | 8 GiB | 10K | m5.large.search or m6g.large.search |
11–30 | 16 GiB | 30K | c5.2xlarge.search or c6g.2xlarge.search |
31–75 | 32 GiB | 40K | r5.xlarge.search or r6g.xlarge.search |
76–125 | 64 GiB | 75K | r5.2xlarge.search or r6g.2xlarge.search |
126–200 | 128 GiB | 75K | r5.4xlarge.search or r6g.4xlarge.search |
Best Practices for Elasticsearch on AWS
https://www.elastic.co/guide/en/elasticsearch/plugins/current/cloud-aws-best-practices.html
...
30K EPS configuration on AWS EC2
Type | AWS Instance Type | Hardware Spec | Num of Instances | Note |
---|---|---|---|---|
Collector | c4.xlarge | 4vCPU, 7 GB RAM | ||
Worker | c4.2xlarge | 8vCPU, 15 GB RAM | 3 | logstash |
Super | m4.4xlarge | 16vCPU, 64 GB RAM, CMDB Disk 10K IOPS | 1 | kibana |
Elasticsearch Master Node | c3.2xlarge (previous-generation instance) | 8 vCPU, 16 GB RAM, 8 GB JVM heap, 2 x 80 GB instance store | 1 | |
Elasticsearch Coordinating Node | m5.4xlarge | 16 vCPU, 64 GB RAM, 30 GB JVM heap | 1 | |
Elasticsearch Data Node | i3.4xlarge | 16 vCPU, 122 GB RAM, 2 x 1900 GB NVMe SSD, 30 GB JVM heap | 5 | hot, warm |
EPS | Storage per Day | Retention (Days) | Hot Data Node Count (32 vCPU, 64 GB RAM, SSD) | Disk Size per Node |
---|---|---|---|---|
10K | 1TB | 7 | 4 | 2TB |
10K | 1TB | 30 | 16 | 2TB |
EPS | Storage per Day | Retention (Days) | Warm Data Node Count (32 vCPU, 64 GB RAM, ~100 Gbps Disk I/O) | Disk Size per Node |
---|---|---|---|---|
10K | 1TB | 30 | 3 | 10TB |
10K | 1TB | 60 | 6 | 10TB |
10K | 1TB | 90 | 9 | 10TB |
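A rough cross-check of the node-count rows above, under my own simplifying assumption (not stated in the guide) that node count is total storage divided by per-node disk, rounded up:

```python
import math

def nodes_needed(storage_per_day_tb, retention_days, disk_per_node_tb):
    """Naive estimate: total storage over the retention window / disk per node."""
    total_tb = storage_per_day_tb * retention_days
    return math.ceil(total_tb / disk_per_node_tb)

# Hot tier at 10K EPS (~1 TB/day), 2 TB disks per node:
print(nodes_needed(1, 7, 2))    # 4, matches the 7-day Hot row
# Warm tier at 10K EPS, 10 TB disks per node:
print(nodes_needed(1, 30, 10))  # 3
print(nodes_needed(1, 90, 10))  # 9
```

The 30-day Hot row lists 16 nodes where this ratio gives 15, so the guide appears to reserve some headroom; treat the formula as a lower bound rather than a sizing rule.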
...