Part 5. Intelligent SIEM Logging
Seamlessly parse your security logs with Graylog!
PART ONE: Backend Storage
PART TWO: Log Ingestion
PART THREE: Log Analysis
PART FOUR: Wazuh Agent Install
In PART THREE we started our Graylog input and configured Fluent Bit on our Wazuh Manager to forward the
/var/ossec/logs/alerts/alerts.json file to Graylog. Without this step, Graylog would not receive our security logs nor would any data be stored within our Wazuh-Indexer.
Please follow PART THREE before progressing with this tutorial.
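As a quick recap of PART THREE, Fluent Bit's `json_lines` output sends each alert to Graylog as one newline-delimited JSON object over TCP. A minimal Python sketch of that framing (the alert fields below are illustrative, not a real Wazuh alert):

```python
import json

# Illustrative alert; real Wazuh alerts.json entries carry many more fields.
alert = {
    "timestamp": "2023-01-01T12:00:00.000+0000",
    "rule": {"level": 5, "description": "sshd: authentication failed."},
    "agent": {"name": "web-01"},
}

# json_lines framing: one compact JSON object per line, newline-terminated.
payload = json.dumps(alert, separators=(",", ":")) + "\n"
print(payload, end="")
```

Each line Graylog's TCP input receives is therefore a complete, parseable JSON document — which is exactly what makes the extractor step below possible.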
Taking a look at our Graylog Input, we see that it is receiving data from our Wazuh Manager.
Let’s view our received messages and look at the data coming in.
Select a message to expand it out and view all of the metadata for that specific event:
Gross! You can see that our message is not parsed into key-value pairs; all of the data is written to the
message field. This makes it difficult to build dashboards, create alerts, and slice and dice our data. Our log ingestion engine must be able to parse keys and their values, or we are going to have a difficult time detecting, visualizing, and responding to security events.
Wouldn’t it be nice to search for all blocked packets from a given source IP, or to get a quick terms analysis of recently failed SSH login usernames? That’s hard to do when all you have is a single long text message.
WELCOME GRAYLOG EXTRACTORS
Extractors allow you to instruct Graylog nodes on how to extract data from any text in a received message (regardless of the format, and even from an already extracted field) into message fields. Full-text search provides a great deal of possibilities for analysis, but the real power of log analytics is unveiled when you can run queries like
http_response_code:>=500 AND user_id:9001 to get all internal server errors triggered by a specific user.
More on Graylog’s extractors can be found here: Graylog Docs
The JSON extractor is the perfect extractor for our Wazuh logs. By default, the Wazuh Manager writes to the
alerts.json file in single-line JSON format, and we also set our
OUTPUT format to
json_lines in our Fluent Bit configuration back in PART THREE.
The beauty of the JSON extractor is that it will do the heavy lifting for us. We simply need to provide it a few details and Graylog will handle the rest!
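Under the hood, the extractor turns one JSON message into flat key-value fields. A rough Python sketch of that behavior (the `flatten` helper and `_` key separator are illustrative assumptions, not Graylog's actual implementation):

```python
import json

def flatten(obj, parent="", sep="_"):
    """Flatten nested JSON into top-level key-value pairs,
    roughly what Graylog's JSON extractor does with flattening enabled."""
    fields = {}
    for key, value in obj.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            fields.update(flatten(value, name, sep))
        else:
            fields[name] = value
    return fields

# A trimmed-down, single-line alert like the ones in our message field.
message = '{"rule":{"level":5,"description":"sshd: authentication failed."},"agent":{"name":"web-01"}}'
parsed = flatten(json.loads(message))
print(parsed)  # keys like rule_level, rule_description, agent_name
```

Those flattened keys (`rule_level`, `agent_name`, …) are what make queries such as `rule_level:>=10` possible.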
Configuring the JSON Extractor
Let’s now instruct Graylog to parse through our received messages with the JSON extractor.
1. Select anywhere in the
message field (not on the title but the data itself) and select
Create extractor.
2. Choose
JSON as the extractor type.
3. Set the below configuration settings:
Select
Try to see some magic :)
Isn’t it beautiful? Graylog will now parse through our received messages and write out our key-value pairs.
Give your extractor a name and select
Create extractor.
Head back over to your ingested messages and see them now being parsed correctly!
Creating the Index
An index is how our Wazuh-Indexer stores our ingested logs. We need to create an index set that will store all of our Wazuh alerts. Graylog also gives us the ability to configure index settings such as the number of shards, replicas, etc.
More on indices can be found in PART ONE.
1. Navigate to
System / Indices.
2. Select
Create index set.
3. Configure your index settings. Below is just an example; you should customize it to fit your needs.
- Index Prefix — name of the index that is used to store the data in our Wazuh-Indexer.
- Rotation Strategy — how often the index will rotate. For example, our first index created will be named
wazuh-alerts-socfortress_0. Once that index hits a size of 10GB, Graylog will rotate to the next index,
wazuh-alerts-socfortress_1.
- Retention Strategy — how long an index will remain in our Wazuh-Indexer. For example, I have a
rotation strategy of 10GB and a
retention strategy of 10 indices. This means that I will hold at most 100GB (10 x 10) of total Wazuh alert data. Once 10 indices are reached, Graylog will delete the oldest index,
wazuh-alerts-socfortress_0, to make room for
wazuh-alerts-socfortress_10. Keep in mind that this data is permanently deleted.
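The rotation/retention math above can be sanity-checked with a short sketch (the `retained` helper is illustrative, not a Graylog API):

```python
# Rotation: each index rotates at 10 GB; retention: keep at most 10 indices.
rotation_gb = 10
max_indices = 10
print(rotation_gb * max_indices)  # maximum retained alert data, in GB -> 100

def retained(indices, max_indices):
    """Keep only the newest indices; the oldest are deleted once the
    count exceeds max_indices (Graylog's delete retention strategy)."""
    return indices[-max_indices:]

names = [f"wazuh-alerts-socfortress_{i}" for i in range(11)]  # _0 .. _10
print(retained(names, max_indices)[0])  # _0 was deleted; oldest is now _1
```

So with these settings, disk usage for Wazuh alerts is capped at roughly 100GB, at the cost of permanently losing the oldest index on each rotation past the limit.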
Creating the Stream
Graylog streams are a mechanism that route messages into categories in real time while they are being processed. We can define rules in Graylog to route messages into certain streams.
Streams allow us to route received messages to the correct index. Creating multiple indices and multiple streams gives us the ability to provide a multi-tenant solution!
1. Select
Streams on the top menu and select
Create stream.
2. Give your stream a name and set it to your newly created index set. Select
Remove matches from 'All messages' stream; we only want the data to go to our one index.
3. Select Save.
4. Head over to our
Inputs, select
Add static field on our Wazuh input, and set the field name to
log_type with a value of
wazuh. Or use whatever field name you’d like.
This will add the key-value pair
log_type:wazuh to every log ingested by our
Wazuh Events Fluent Bit — TCP Input. We can now use this field as a rule for our stream to route all Wazuh alerts to the correct index.
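The routing logic of a stream rule can be sketched in a few lines of Python (the `route` helper and stream names are illustrative; Graylog evaluates its rules internally):

```python
def route(message, rules):
    """Return the stream whose rule matches, mimicking a Graylog
    'match exactly' stream rule on a static field."""
    for stream, (field, value) in rules.items():
        if message.get(field) == value:
            return stream
    return "All messages"  # default stream when no rule matches

# One rule: anything tagged log_type=wazuh goes to our Wazuh stream.
rules = {"Wazuh Alerts": ("log_type", "wazuh")}
print(route({"log_type": "wazuh", "rule_level": 5}, rules))  # -> Wazuh Alerts
print(route({"log_type": "firewall"}, rules))                # -> All messages
```

Because the stream is bound to our index set, a matching `log_type` is all it takes to land a message in the right index.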
5. Head back to your stream, select
Manage Rules, and then
Add stream rule. Set the Field to
log_type, the Type to
match exactly, and the Value to
wazuh.
6. Start the stream.
Select the Stream to view the ingested messages:
Select your message and view the Index that our Wazuh logs are now being written to:
Throughout this post we learned how to parse our received Wazuh alerts, create unique indices, and route our logs to the correct index. The ability to slice, dice, and route our logs to fit our needs is crucial for any SOC team, MSP, etc. And it doesn’t stop here! You can apply this same approach to ingest firewall logs, AWS/GCP/Azure logs, and other third-party logs. You now have the power, so go take back control of your logs! Happy Defending 😄.
The functionality discussed in this post, and so much more, are available via SOCFortress’s Professional Services. Let SOCFortress help you and your team keep your infrastructure secure.
Professional Services: https://www.socfortress.co/ps.html