Part 1. Wazuh Indexer — SIEM Backend

SOCFortress · Oct 3, 2022 · 11 min read

Let’s install the backend service that will store our collected security logs, Wazuh-Indexer.

Wazuh Documentation: https://documentation.wazuh.com/current/getting-started/components/wazuh-indexer.html

PART ONE: Backend Storage

PART TWO: Log Ingestion

Video Walkthrough

Intro

We need a backend system that will store all of our security logs. This will allow our security analysts to search for security events within their desired timeframe. We need a solution that is fast, reliable, and can easily be scaled out to keep up with the load of logs we are collecting.

The Wazuh team has forked OpenSearch, which itself was a fork of Elasticsearch 7.10.2 (created after Elastic changed its licensing). Wazuh's fork is called Wazuh-Indexer, so those already familiar with OpenSearch will have no problem adjusting to the Wazuh-Indexer.

Advanced

By no means is this a Wazuh-Indexer deep dive, but I will detail some key elements that need to be understood by those working with the wazuh-indexer.

Master Node

Master nodes are in charge of cluster-wide settings and changes: deleting or creating indices and fields, adding or removing nodes, and allocating shards to nodes. Each cluster has a single master node that is elected from the master-eligible nodes using a distributed consensus algorithm and is re-elected if the current master node fails. AT LEAST ONE NODE IN THE CLUSTER REQUIRES THE MASTER ROLE.

Data Node

Stores and searches data. Performs all data-related operations (indexing, searching, aggregating) on local shards. These are the worker nodes of your cluster and need more disk space than any other node type.

Ingest

Pre-processes data before storing it in the cluster. Runs an ingest pipeline that transforms your data before adding it to an index.

Other node types can be found here: https://opensearch.org/docs/latest/opensearch/cluster/
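For reference, a node's roles can be declared in /etc/wazuh-indexer/opensearch.yml. A minimal sketch for a node that should be master-eligible, a data node, and an ingest node might look like this (the exact role names vary between OpenSearch versions, so treat the values below as an assumption to verify against your release):

node.roles: [ master, data, ingest ]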

High Availability

Besides its performance capabilities, wazuh-indexer nodes can be clustered together to provide high availability. For example, if we have a three-node cluster and node 1 is the master node and goes offline, nodes 2 and 3 will rebalance, and data can still be written to and read from the cluster. When node 1 comes back online, it will rejoin the cluster and data will again be rebalanced amongst the nodes.
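A quick way to see the cluster status, the number of nodes, and whether all shards are assigned is the cluster health API. A minimal sketch, assuming the default admin credentials used later in this post:

curl -k -u admin:admin "https://<WAZUH_INDEXER_IP>:9200/_cluster/health?pretty"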

Indices and Shards

Data within the wazuh-indexer is organized into indices, and each wazuh-indexer index is composed of one or more shards. By default, wazuh-indexer indices are configured with one primary shard.

Important: A replica shard is a copy of a primary shard. It provides redundant copies of your data to increase the capacity for requests such as queries and to protect against hardware failures. If you have a multi-node cluster, ensure you have at least one replica to ensure high availability. Let's say no replica shards were configured in my three-node cluster and node 1 stored all my data for the timeframe of 09–29–2022 up to 10–01–2022. If node 1 went offline, I would not be able to view the data within the 09–29 to 10–01 timeframe because neither node 2 nor node 3 would have a replica of that data. When node 1 comes back online, that data will be restored. Keep in mind that a replica is expensive in terms of disk space: one replica means the data is copied, so if one day of ingested data took up 100 GB of disk and we have one replica configured, that would turn into 200 GB of disk consumed.

As the wazuh-indexer cluster grows or contracts, the wazuh-indexer automatically migrates shards to rebalance the cluster based on shard count, with the primary goal of keeping each node under the high watermark. For more information about indices and shards, see the Elastic documentation Scalability and resilience.
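As an illustration, the replica count for existing indices can be changed through the index settings API. A minimal sketch, where the wazuh-alerts-* index pattern and the admin credentials are assumptions to adapt to your environment:

curl -k -u admin:admin -X PUT "https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-*/_settings" -H 'Content-Type: application/json' -d '{ "index": { "number_of_replicas": 1 } }'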

Disk

Sizing the disk space needed for your wazuh-indexer cluster is not an easy task and is more than likely impossible to get right the first time. The amount of disk space required will vary significantly based on your data retention requirements. Another thing to consider when allocating disk is the type of logs you are ingesting. For example, firewalls sending syslog messages will likely take up more disk space than endpoint logs. Thankfully, we can always grow the disk assigned to each wazuh-indexer node with no downtime.
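Once the cluster is running, per-node disk usage and shard counts can be checked at any time with the _cat allocation API (a sketch, again assuming the default admin credentials):

curl -k -u admin:admin "https://<WAZUH_INDEXER_IP>:9200/_cat/allocation?v"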

Wazuh provides a chart in its installation guide to assist in calculating disk space requirements:

https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/index.html

ALWAYS USE SSDs FOR ANY PRODUCTION BUILDS.

Varun Subramanian has put together Capacity Planning for Elasticsearch to help with correctly sizing your stack.
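As a back-of-the-envelope example with entirely made-up numbers: 100 endpoints producing roughly 0.1 GB of indexed data per day is about 10 GB/day; 90 days of retention brings that to roughly 900 GB, and one replica doubles it to about 1.8 TB, before adding headroom so the nodes stay below the disk watermarks at which the indexer starts relocating shards.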

Image: 1 node vs 3 node cluster
Image: Focus point for this blog post
Image: Required ports

Installation

Let's now install our wazuh-indexer. For this demonstration I will be deploying a single-node wazuh-indexer whose base OS is Debian 11. Other supported operating systems include Ubuntu 16.x–22.x, CentOS 7–8, Red Hat 7–9, and Amazon Linux 2.

Certificates Creation

Download the wazuh-certs-tool.sh script and the config.yml configuration file. This creates the certificates that encrypt communications between the Wazuh central components.

You can also use your own internal PKI if desired. Wazuh provides an easy-to-use bash script that we will use to generate our own internal certs to encrypt logs being sent to the wazuh-indexer.

curl -sO https://packages.wazuh.com/4.3/wazuh-certs-tool.sh
curl -sO https://packages.wazuh.com/4.3/config.yml

Edit ./config.yml and replace the node names and IP values with the corresponding names and IP addresses. You need to do this for the Wazuh indexer, the Wazuh dashboard nodes, and any servers that will be sending logs to the wazuh-indexer such as Graylog (if you are following our World’s Best Open Source SOC series).

nodes:
  # Wazuh indexer nodes
  indexer:
    - name: node-1
      ip: <indexer-node-ip>
    # - name: node-2
    #   ip: <indexer-node-ip>
    # - name: node-3
    #   ip: <indexer-node-ip>

  # Wazuh server nodes
  # Use node_type only with more than one Wazuh manager
  server:
    - name: wazuh-1
      ip: <wazuh-manager-ip>
    #   node_type: master
    # - name: wazuh-2
    #   ip: <wazuh-manager-ip>
    #   node_type: worker

  # Wazuh dashboard node
  dashboard:
    - name: dashboard
      ip: <dashboard-node-ip>

Run the ./wazuh-certs-tool.sh to create the certificates. For a multi-node cluster, these certificates need to be later deployed to all Wazuh instances in your cluster.

bash ./wazuh-certs-tool.sh -A
tar -cvf ./wazuh-certificates.tar -C ./wazuh-certificates/ .
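To sanity-check the result before copying the archive to other nodes, you can list its contents; the entries should match the node names defined in config.yml:

tar -tvf ./wazuh-certificates.tar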

Install

Install the following packages if missing.

apt-get install debconf adduser procps

Install the following packages if missing.

apt-get install gnupg apt-transport-https

Install the GPG key.

curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg

Add the repository.

echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list

Update the packages information.

apt-get update

Install the Wazuh indexer package.

apt-get -y install wazuh-indexer

Edit /etc/wazuh-indexer/opensearch.yml and replace the following values:

  1. network.host: Sets the address of this node for both HTTP and transport traffic. The node will bind to this address and will also use it as its publish address. Accepts an IP address or a hostname. Use the same node address set in config.yml to create the SSL certificates.
  2. node.name: Name of the Wazuh indexer node as defined in the config.yml file. For example, node-1.
  3. cluster.initial_master_nodes: List of the names of the master-eligible nodes. These names are defined in the config.yml file. Uncomment the node-2 and node-3 lines, change the names, or add more lines, according to your config.yml definitions.
cluster.initial_master_nodes:
- "node-1"
- "node-2"
- "node-3"

4. discovery.seed_hosts: List of the addresses of the master-eligible nodes. Each element can be either an IP address or a hostname. You may leave this setting commented if you are configuring the Wazuh indexer as a single-node. For multi-node configurations, uncomment this setting and set your master-eligible nodes addresses.

discovery.seed_hosts:
- "10.0.0.1"
- "10.0.0.2"
- "10.0.0.3"

5. plugins.security.nodes_dn: List of the Distinguished Names of the certificates of all the Wazuh indexer cluster nodes. Uncomment the lines for node-2 and node-3 and change the common names (CN) and values according to your settings and your config.yml definitions.

plugins.security.nodes_dn:
- "CN=node-1,OU=Wazuh,O=Wazuh,L=California,C=US"
- "CN=node-2,OU=Wazuh,O=Wazuh,L=California,C=US"
- "CN=node-3,OU=Wazuh,O=Wazuh,L=California,C=US"

Run the command below to view a certificate's CN:

openssl x509 -subject -nameopt RFC2253 -noout -in hostname.pem

Run the following commands replacing <indexer-node-name> with the name of the Wazuh indexer node you are configuring as defined in config.yml. For example node-1. This deploys the SSL certificates to encrypt communications between the Wazuh central components.

NODE_NAME=<indexer-node-name>
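The remaining deployment commands are not reproduced above. Based on the Wazuh 4.3 installation guide they look roughly like the following sketch; double-check the paths and file names against the guide for your version:

mkdir /etc/wazuh-indexer/certs
tar -xf ./wazuh-certificates.tar -C /etc/wazuh-indexer/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./admin.pem ./admin-key.pem ./root-ca.pem
mv -n /etc/wazuh-indexer/certs/$NODE_NAME.pem /etc/wazuh-indexer/certs/indexer.pem
mv -n /etc/wazuh-indexer/certs/$NODE_NAME-key.pem /etc/wazuh-indexer/certs/indexer-key.pem
chmod 500 /etc/wazuh-indexer/certs
chmod 400 /etc/wazuh-indexer/certs/*
chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/certs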

Recommended action — If no other Wazuh components are going to be installed on this node, remove wazuh-certificates.tar running rm -f ./wazuh-certificates.tar to increase security.

6. compatibility.override_main_response_version — Comment this value out if present:

#compatibility.override_main_response_version: true

See this Graylog community thread for background on why this setting should be commented out: https://community.graylog.org/t/elasticsearch-exception-reason-key-types-is-not-supported-in-the-metadata-section/27468/6
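Pulling the items above together for the single-node deployment in this post, the changed values in /etc/wazuh-indexer/opensearch.yml would look roughly like the following sketch; the IP address and certificate DN are placeholders for your own config.yml values, and everything else in the packaged file stays as shipped:

network.host: "<indexer-node-ip>"
node.name: "node-1"
cluster.initial_master_nodes:
- "node-1"
# discovery.seed_hosts stays commented out for a single-node deployment
plugins.security.nodes_dn:
- "CN=node-1,OU=Wazuh,O=Wazuh,L=California,C=US"
# compatibility.override_main_response_version: true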

Memory Locking

The wazuh-indexer malfunctions when the system is swapping memory. It is crucial for the health of the node that none of the JVM is ever swapped out to disk. The following steps show how to set the bootstrap.memory_lock setting to true so the wazuh-indexer will lock the process address space into RAM. This prevents any wazuh-indexer memory from being swapped out.

  1. Set bootstrap.memory_lock:

Uncomment or add this line to the /etc/wazuh-indexer/opensearch.yml file:

bootstrap.memory_lock: true

2. Edit the limit of system resources:

nano /usr/lib/systemd/system/wazuh-indexer.service

[Service]
LimitMEMLOCK=infinity

3. Limit memory

The previous configuration might cause node instability or even node death with an OutOfMemory exception if the wazuh-indexer tries to allocate more memory than is available. JVM heap limits will help limit memory usage and prevent this situation. Two rules must be applied when setting the wazuh-indexer's heap size:

  1. Use no more than 50% of available RAM.
  2. Use no more than 32 GB.

It is also important to consider the memory usage of the operating system, services, and software running on the host. By default, the wazuh-indexer is configured with a heap of 1 GB. It can be changed via JVM flags in the /etc/wazuh-indexer/jvm.options file:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms4g
-Xmx4g

Warning: The min (Xms) and max (Xmx) heap sizes must be the same to prevent JVM heap resizing at runtime, as this is a very costly process.

Let’s now start the service:

systemctl daemon-reload
systemctl enable wazuh-indexer
systemctl start wazuh-indexer

Cluster Initialization

Run the Wazuh indexer indexer-security-init.sh script on any Wazuh indexer node to load the new certificates information and start the single-node or multi-node cluster.

/usr/share/wazuh-indexer/bin/indexer-security-init.sh

NOTE: You only have to initialize the cluster once; there is no need to run this command on every node.

Replace <WAZUH_INDEXER_IP> and run the following command to check if the single-node or multi-node cluster is working correctly.

curl -k -u admin:admin https://<WAZUH_INDEXER_IP>:9200/_cat/nodes?v
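If you enabled bootstrap.memory_lock earlier, you can also confirm the lock took effect; the nodes info API reports it per node (a sketch, assuming the default admin credentials):

curl -k -u admin:admin "https://<WAZUH_INDEXER_IP>:9200/_nodes?filter_path=**.mlockall&pretty"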

Now let's install the Wazuh-Dashboard, which provides a WebUI to interact with our Wazuh-Indexer cluster.

Wazuh-Dashboard

The Wazuh dashboard is a flexible and intuitive web user interface for mining, analyzing, and visualizing security events and alerts data. It is also used for the management and monitoring of the Wazuh platform. Additionally, it provides features for role-based access control (RBAC) and single sign on (SSO).

Documentation: https://documentation.wazuh.com/current/getting-started/components/wazuh-dashboard.html

This also provides a WebUI that allows us to interact with the wazuh-indexer nodes in an easier way.


Architecture

The wazuh-dashboard service will need to communicate with our previously deployed wazuh-indexer cluster. It can either be installed onto a node already running the wazuh-indexer or deployed onto a dedicated server.

Installation

  1. Install the following packages if missing.

apt-get install debhelper tar curl libcap2-bin

2. Install the GPG key.

curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg

3. Add the repository.

echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list

4. Update the packages information.

apt-get update

5. Install the Wazuh dashboard package.

apt-get -y install wazuh-dashboard

Configuration

  1. Edit the /etc/wazuh-dashboard/opensearch_dashboards.yml file and replace the following values:
  • server.host: This setting specifies the host of the back end server. To allow remote users to connect, set the value to the IP address or DNS name of the Wazuh dashboard server. The value 0.0.0.0 will accept all the available IP addresses of the host.
  • opensearch.hosts: The URLs of the Wazuh indexer instances to use for all your queries. Wazuh dashboard can be configured to connect to multiple Wazuh indexer nodes in the same cluster. The addresses of the nodes can be separated by commas. For example, ["https://10.0.0.2:9200","https://10.0.0.3:9200","https://10.0.0.4:9200"]
server.host: 0.0.0.0
server.port: 443
opensearch.hosts: https://localhost:9200
opensearch.ssl.verificationMode: certificate

Deploying Certificates

Replace <dashboard-node-name> with your Wazuh dashboard node name, the same used in config.yml to create the certificates, and move the certificates to their corresponding location.

NODE_NAME=<dashboard-node-name>
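As with the indexer, the remaining deployment commands are not reproduced here. Based on the Wazuh 4.3 installation guide they look roughly like the following sketch; verify the paths against the guide for your version:

mkdir /etc/wazuh-dashboard/certs
tar -xf ./wazuh-certificates.tar -C /etc/wazuh-dashboard/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./root-ca.pem
mv -n /etc/wazuh-dashboard/certs/$NODE_NAME.pem /etc/wazuh-dashboard/certs/dashboard.pem
mv -n /etc/wazuh-dashboard/certs/$NODE_NAME-key.pem /etc/wazuh-dashboard/certs/dashboard-key.pem
chmod 500 /etc/wazuh-dashboard/certs
chmod 400 /etc/wazuh-dashboard/certs/*
chown -R wazuh-dashboard:wazuh-dashboard /etc/wazuh-dashboard/certs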

Start the service:

systemctl daemon-reload
systemctl enable wazuh-dashboard
systemctl start wazuh-dashboard

Edit the file /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml and replace the url value with the IP address or hostname of the Wazuh server master node.

hosts:
  - default:
      url: https://localhost
      port: 55000
      username: wazuh-wui
      password: wazuh-wui
      run_as: false

Securing the Installation

On any Wazuh indexer node, use the Wazuh passwords tool to change the passwords of the Wazuh indexer users.

/usr/share/wazuh-indexer/plugins/opensearch-security/tools/wazuh-passwords-tool.sh --change-all

On your Wazuh dashboard node, run the following command to update the kibanaserver password in the Wazuh dashboard keystore. Replace <kibanaserver-password> with the kibanaserver password generated by the passwords tool in the previous step.

echo <kibanaserver-password> | /usr/share/wazuh-dashboard/bin/opensearch-dashboards-keystore --allow-root add -f --stdin opensearch.password

Restart the wazuh-dashboard.

systemctl restart wazuh-dashboard

Conclusion

Throughout this post we discussed the components of our backend storage, architecture design, and installation steps. Our backend storage is a crucial link in our SIEM stack (arguably the biggest) as it allows us to store and view all of our collected security events.

It is imperative that we ensure a highly available cluster and proper system resource monitoring (CPU, RAM, Disk) when deploying in a production environment.

The next post in this series will detail Graylog (https://docs.graylog.org/docs/installing). Graylog will be the tool we use to normalize and enrich our logs before writing them to the wazuh-indexer. Stay tuned, and happy defending :).

Need Help?

The functionality discussed in this post, and so much more, are available via the SOCFortress platform. Let SOCFortress help you and your team keep your infrastructure secure.

Website: https://www.socfortress.co/

Platform Demo: https://www.socfortress.co/demo_access.html

Written by SOCFortress

SOCFortress is a SaaS company that unifies Observability, Security Monitoring, Threat Intelligence and Security Orchestration, Automation, and Response (SOAR).