Part 12. SIGMA rules for the OpenSource SIEM
Integrate SIGMA rule detection into your SIEM for advanced detection capabilities
Intro
Throughout this series we have been relying on Wazuh rules to serve as our detection engine when it comes to spotting malicious activity occurring on our endpoints. While Wazuh gives us the ability to create complex rules for detection, there are other mechanisms we can add onto the stack, such as Sigma rules and Praeco.
Throughout this post we explore what Sigma rules are and why they are beneficial. We then deploy Praeco and integrate it into our SIEM stack as a means for us to provide Sigma level rules to our detection capabilities.
Why SIGMA Rules?
In the past, SIEM detections existed in vendor / platform specific silos. Partners wishing to share detection content often had to translate a query from one vendor's query language into another's. This is not sustainable; the defensive cyber security community must improve how it shares detections to keep pace with our ever-evolving adversaries.
Much like YARA, SIGMA is a tool for the open sharing of detections, except focused on SIEM log events instead of files or network traffic. SIGMA allows defenders to share detections (alerts, use cases) in a common language.
SIGMA has gained a lot of popularity throughout the community and is backed by strong repositories such as SigmaHQ.
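To make this concrete, here is a minimal sketch of what a Sigma rule looks like. This is a hypothetical rule (not from SigmaHQ) that flags scheduled-task creation from the command line; the field names follow Sigma's generic Windows process_creation log source:

```yaml
# Hypothetical Sigma rule, for illustration only
title: Scheduled Task Creation Via Schtasks
status: experimental
description: Detects scheduled task creation from the command line
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    Image|endswith: '\schtasks.exe'
    CommandLine|contains: '/create'
  condition: selection
level: medium
```

Because the logsource and detection blocks are vendor neutral, the same rule can be converted into the query syntax of many different SIEM backends.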
Praeco
Deploying Praeco within the stack allows us to build complex search queries that run against our logs stored within the Wazuh-Indexer to detect matches. SIGMA rules are a great candidate for Praeco!
Praeco is made up of two pieces:
- ElastAlert 2
- Praeco
ElastAlert 2 is the software that does the heavy lifting: it takes our configured search query and makes an API call to the Wazuh-Indexer to return matching results.
It works by combining Wazuh-Indexer with two types of components, rule types and alerts. Wazuh-Indexer is periodically queried and the data is passed to the rule type, which determines when a match is found. When a match occurs, it is given to one or more alerts, which take action based on the match.
This is configured by a set of rules, each of which defines a query, a rule type, and a set of alerts.
Several rule types with common monitoring paradigms are included with ElastAlert 2:
- “Match where there are X events in Y time” (frequency type)
- “Match when the rate of events increases or decreases” (spike type)
- “Match when there are less than X events in Y time” (flatline type)
- “Match when a certain field matches a blacklist/whitelist” (blacklist and whitelist type)
- “Match on any event matching a given filter” (any type)
- “Match when a field has two different values within some time” (change type)
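To see how a query, a rule type, and an alert combine into a single rule file, here is a hypothetical ElastAlert 2 frequency rule. The index pattern, event ID, threshold, and email address are illustrative assumptions, not values from this post:

```yaml
# Hypothetical ElastAlert 2 rule: alert when 50+ failed logons occur within 1 hour
name: excessive-failed-logons
type: frequency        # the rule type: "X events in Y time"
index: wazuh-alerts-*  # assumed index pattern
num_events: 50
timeframe:
  hours: 1
filter:                # the query run against the indexer
  - query:
      query_string:
        query: "data_win_system_eventID: 4625"
alert:                 # the action taken on a match
  - email
email:
  - "soc@example.com"
```

Each .yaml file in the rules folder follows this same structure, which is exactly what Praeco generates for us behind its UI.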
Install Praeco
1. Install Docker and Docker Compose (Docker Install Docs)
2. Clone the Praeco repo
git clone https://github.com/johnsusek/praeco.git
cd praeco
3. Create the rule and rule template directories
mkdir -p rules rule_templates
chmod -R 777 rules rule_templates
4. Edit config/api.config.json
{
  "appName": "elastalert-server",
  "port": 3030,
  "wsport": 3333,
  "elastalertPath": "/opt/elastalert",
  "verbose": false,
  "es_debug": false,
  "debug": false,
  "rulesPath": {
    "relative": true,
    "path": "/rules"
  },
  "templatesPath": {
    "relative": true,
    "path": "/rule_templates"
  },
  "dataPath": {
    "relative": true,
    "path": "/server_data"
  },
  "es_host": "*WAZUH-INDEXER*",
  "es_port": 9200,
  "es_username": "praeco",
  "es_password": "*YOUR_PASSWORD*",
  "es_ssl": true,
  "ea_verify_certs": false,
  "opensearch_flg": true,
  "writeback_index": "praeco_elastalert_status"
}
5. Edit config/elastalert.yaml
# The elasticsearch hostname for metadata writeback
# Note that every rule can have its own elasticsearch host
es_host: *WAZUH-INDEXER*
# The elasticsearch port
es_port: 9200
# This is the folder that contains the rule yaml files
# Any .yaml file will be loaded as a rule
rules_folder: rules
# How often ElastAlert 2 will query elasticsearch
# The unit can be anything from weeks to seconds
run_every:
  seconds: 60
# ElastAlert 2 will buffer results from the most recent
# period of time, in case some log sources are not in real time
buffer_time:
  minutes: 1
# Optional URL prefix for elasticsearch
#es_url_prefix: elasticsearch
# Connect with TLS to elasticsearch
use_ssl: True
# Verify TLS certificates
verify_certs: False
# GET request with body is the default option for Elasticsearch.
# If it fails for some reason, you can pass 'GET', 'POST' or 'source'.
# See http://elasticsearch-py.readthedocs.io/en/master/connection.html?highlight=send_get_body_as#transport
# for details
#es_send_get_body_as: GET
# Option basic-auth username and password for elasticsearch
es_username: praeco
es_password: *YOUR_PASSWORD*
# The index on es_host which is used for metadata storage
# This can be a unmapped index, but it is recommended that you run
# elastalert-create-index to set a mapping
writeback_index: praeco_elastalert_status
# If an alert fails for some reason, ElastAlert will retry
# sending the alert until this time period has elapsed
alert_time_limit:
  days: 2
skip_invalid: True
6. Start Praeco
export PRAECO_ELASTICSEARCH=<your elasticsearch ip>
docker compose up -d
Graylog Pipeline (If Needed)
If your logs are routed through Graylog and the event timestamp arrives in a msg_timestamp field, the pipeline rule below copies that value into a timestamp_utc field and removes the original, so queries run against a consistent UTC timestamp field:
rule "Timestamp - UTC"
when
  has_field("msg_timestamp")
then
  let msg_timestamp = $message.msg_timestamp;
  set_field("timestamp_utc", msg_timestamp);
  remove_field("msg_timestamp");
end
Using Praeco
Praeco provides a detection layer that operates independently of the Wazuh rules. SOCFortress implements Praeco rules to build more complex/advanced search queries on our logs that are difficult or unsupported within Wazuh.
Let's build a search filter to detect logs that match our desired search query. In this example we will build a rule that detects when a scheduled task is created via the command line and given a username to run as (/RU). I will simulate this action by running the command below:
SCHTASKS /CREATE /RU "NT AUTHORITY\SYSTEM" /SC DAILY /TN "MyTasks\Notepad task" /TR "C:\Windows\System32\notepad.exe" /ST 11:00
We can first look at Grafana to decide what field names we want to build our search on.
The above shows two good field names. data_win_eventdata_commandLine is the field that stores the command that was run; in our example it contains our simulated command, SCHTASKS /CREATE /RU "NT AUTHORITY\SYSTEM" /SC DAILY /TN "MyTasks\Notepad task" /TR "C:\Windows\System32\notepad.exe" /ST 11:00. data_win_eventdata_parentImage details the process that ran the command; since we used a command prompt to run it, cmd.exe is shown as our parentImage.
We now have everything to build our query.
(data_win_eventdata_parentImage:(*\\cmd.exe) AND data_win_eventdata_commandLine:(*SCHTASKS*\/CREATE*\/RU*))
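For reference, if you were to manage this detection as a raw rule file rather than through the Praeco UI, the equivalent ElastAlert 2 rule might look roughly like the sketch below. The rule name and index pattern are assumptions; the debug alerter simply logs matches to the ElastAlert 2 console:

```yaml
# Illustrative sketch of the schtasks detection as a raw ElastAlert 2 rule
name: schtasks-create-with-runas-user   # assumed name
type: any                               # match every event the filter returns
index: wazuh-*                          # assumed index pattern
filter:
  - query:
      query_string:
        query: '(data_win_eventdata_parentImage:(*\\cmd.exe) AND data_win_eventdata_commandLine:(*SCHTASKS*\/CREATE*\/RU*))'
alert:
  - debug                               # log matches instead of sending an alert
```

Building the same filter in Praeco produces a comparable rule file under the hood, with the alert section populated from the output channels you configure in the UI.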
Praeco Alerting
Praeco's built-in alerting is another great feature that allows us to configure alerting output channels when Praeco matches on a rule.
Using the HTTP POST type, we can post the JSON of the alert to any URL of our choice. Below I am using a free online service for my webhook, but in the future we will integrate with Shuffle.
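Under the hood, the HTTP POST type maps to ElastAlert 2's HTTP POST alerter. A hand-written equivalent might look like the fragment below; the URL is a placeholder, and http_post_static_payload is an optional way to attach extra JSON keys to the posted alert body:

```yaml
# Sketch of an ElastAlert 2 HTTP POST alert section (URL is a placeholder)
alert:
  - post
http_post_url: "https://webhook.example.com/alerts"
http_post_static_payload:
  rule_source: praeco
```

The alerter serializes the matched document as JSON and POSTs it to the configured URL, which is what makes the Shuffle integration mentioned above straightforward later on.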
Run a test to ensure POSTing to your web listener is properly working:
Conclusion
Throughout this post we briefly discussed SIGMA rules and how they can benefit our detection capabilities. We then used Praeco to build search queries that match on SIGMA rules. While it is true you can also build Wazuh rules for SIGMA rule detection, users may find Praeco more user friendly and human readable than Wazuh's rule syntax. Praeco's built-in alerting options are also a nice touch. Happy Defending 😄.
Need Help?
The functionality discussed in this post, and so much more, are available via SOCFortress’s Professional Services. Let SOCFortress help you and your team keep your infrastructure secure.
Website: https://www.socfortress.co/
Professional Services: https://www.socfortress.co/ps.html