The machine that runs your Elasticsearch instance exposes the vital signs of its performance. Keeping an eye on CPU, memory usage, and disk I/O is the foundation of keeping an Elasticsearch node healthy in production.
You may notice that Elasticsearch can easily eat up CPU. Occasional peaks are expected, but sustained spikes often mean an underlying issue is lurking, such as heavy garbage collection, expensive queries, or merge activity. Because Elasticsearch runs on the Java Virtual Machine (JVM), JVM indicators like heap usage and garbage-collection counts will likely coincide with the CPU spikes you see on the node. Matching the JVM metrics against the CPU spikes is the fastest way to uncover the underlying cause.
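As a quick way to line those signals up, here is a minimal sketch that pulls both numbers from the node stats API. It assumes a node reachable at http://localhost:9200 with security disabled; adjust the URL and add authentication for your own deployment:

```python
import json
from urllib.request import urlopen

# Assumption: a node at localhost:9200 with no auth required.
# The os and jvm metric groups carry the CPU and heap figures.
STATS_URL = "http://localhost:9200/_nodes/stats/os,jvm"

with urlopen(STATS_URL) as resp:
    stats = json.load(resp)

for node_id, node in stats["nodes"].items():
    cpu = node["os"]["cpu"]["percent"]              # whole-machine CPU %
    heap = node["jvm"]["mem"]["heap_used_percent"]  # JVM heap in use %
    old_gc = node["jvm"]["gc"]["collectors"]["old"]["collection_count"]
    print(f"{node['name']}: cpu={cpu}% heap={heap}% old_gc_count={old_gc}")
```

Polling this on an interval and graphing CPU next to heap usage and old-generation GC counts makes the correlation obvious: if every CPU spike lands on a GC burst, the heap is your problem, not the queries.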
Seeing little or no free memory on the machine running Elasticsearch is entirely normal and no reason to panic: the operating system puts otherwise-idle RAM to work as a filesystem cache, which Elasticsearch (via Lucene) relies on heavily. The figure to watch is cached memory. If the cache is shrinking, processes are claiming that RAM, and truly available memory is running low.
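On Linux you can read the cache figure straight from /proc/meminfo. A minimal, Linux-only sketch:

```python
# Linux-only sketch: parse /proc/meminfo to watch cached vs. available RAM.
def meminfo_kb():
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.strip().split()[0])  # figures are in kB
    return values

mem = meminfo_kb()
cached_mib = mem["Cached"] / 1024
# MemAvailable exists on kernels 3.14+; fall back to a rough estimate.
available_mib = mem.get("MemAvailable", mem["MemFree"] + mem["Cached"]) / 1024
print(f"cached={cached_mib:.0f} MiB  available={available_mib:.0f} MiB")
```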
When Elasticsearch is deployed as a search engine, disk I/O will be put to the test. A sudden drop in disk I/O on a machine that should be busy is itself a warning sign that something is wrong upstream, so treat it as a prompt to troubleshoot the underlying cause.
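Elasticsearch reports per-node disk operation counters under fs.io_stats in the node stats API (Linux only). The sketch below, again assuming an unauthenticated node at localhost:9200, samples the cumulative counters twice to get a rough operations-per-second rate:

```python
import json
import time
from urllib.request import urlopen

# Assumption: localhost:9200, no auth. fs.io_stats is Linux-only.
URL = "http://localhost:9200/_nodes/stats/fs"

def io_totals():
    with urlopen(URL) as resp:
        stats = json.load(resp)
    node = next(iter(stats["nodes"].values()))  # single-node cluster assumed
    total = node["fs"]["io_stats"]["total"]
    return total["read_operations"], total["write_operations"]

INTERVAL = 10  # seconds between samples
r1, w1 = io_totals()
time.sleep(INTERVAL)
r2, w2 = io_totals()
print(f"reads/s={(r2 - r1) / INTERVAL:.1f}  writes/s={(w2 - w1) / INTERVAL:.1f}")
```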
The ratio between read and write operations will vary with how your deployment is used: a logging cluster is write-heavy, while a search front end is read-heavy. Knowing the ratio on each node tells you whether indexing or query performance is the better target for optimization.
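The node stats API exposes cumulative indexing and query counters that make this ratio easy to compute. A sketch, with the same localhost:9200 assumption as above; note that index_total and query_total are counters since node start, so the ratio reflects the lifetime workload mix rather than the current moment:

```python
import json
from urllib.request import urlopen

# Assumption: localhost:9200, no auth.
URL = "http://localhost:9200/_nodes/stats/indices"

with urlopen(URL) as resp:
    stats = json.load(resp)

for node in stats["nodes"].values():
    indexed = node["indices"]["indexing"]["index_total"]  # docs indexed
    queried = node["indices"]["search"]["query_total"]    # queries served
    ratio = indexed / queried if queried else float("inf")
    print(f"{node['name']}: index_total={indexed} query_total={queried} "
          f"write/read={ratio:.2f}")
```

Sampling the counters twice and diffing them, as in the disk I/O sketch above, gives you the current ratio instead of the lifetime one.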