Open Source SIEM: [Integrating the Wazuh open source security platform into my homelab]
Introduction
Wazuh is an open source security platform that provides unified Extended Detection and Response (XDR) and Security Information and Event Management (SIEM) capabilities for networked endpoints and cloud workloads. As an engineer and IT professional, I know the necessity of having a secure network, especially when dealing with sensitive materials. To expand my knowledge, I’ve decided to improve my local appliance network by enhancing its security posture and improving my threat detection capabilities.
Initial Processes
Planning and Research
- Conducted research on Wazuh’s deployment methods, hardware, and operating system requirements
- Determined that running Wazuh in a Docker container will work for my needs
- Identified any encryption and certificates that would be necessary for the installation of Wazuh
- Decided to utilize a single-node system, rather than a multi-node stack
Tools and Resources
- [Nginx Proxy Manager]: Reverse proxy tool used to put the web interface behind an SSL-encrypted address and to block external addresses from connecting to the service
- [MobaXterm]: A versatile toolbox and SSH client designed for connecting to machines using a variety of protocols
- [Docker Compose]: A tool for defining and running multi-container applications
- [Ubuntu 24.04 Server]: A distribution of Ubuntu 24.04 with no graphical interface
- [Visual Studio Code]: A standalone, customizable integrated development environment developed by Microsoft
- [TrueNAS SCALE]: An open source network attached storage / infrastructure solution created by iXsystems
Execution
Step 1: [Configuring the Linux environment to run Docker]
When I was doing my initial research into dockerizing Wazuh, I wanted to run it on my Dockge server. Dockge is an easy-to-use Docker Compose interface that lets me keep track of all my Docker containers in one place via a clean graphical interface, rather than running everything on the command line. While it’s convenient for deploying simple services that don’t demand much of the hardware, Dockge itself runs as a Docker container within my TrueNAS SCALE setup.
Because it’s directly linked back to my NAS, any Docker container I deploy has to have a mount configured within my NAS’s Apps directory, meaning I’d have to individually create directories and modify any compose file to play nicely with my NAS. When I read the documentation for Wazuh, I immediately ruled out Dockge due to the number of volumes in the compose file. Additionally, some of the ports the containers require were already in use by other Docker containers. Rather than reassigning the external port for each service, I figured it would be easier to move the deployment to a separate machine running Linux, as I already had a virtual machine configured within my local cloud.
To begin prepping my Linux server, I wanted to ensure that it was up to date, as well as installing Docker. To do this, I ran the following commands:
sudo apt update -y && sudo apt upgrade -y
sudo apt install docker.io docker-compose -y
With these commands, I initiated an update/upgrade of all packages on my system, followed by the installation of Docker and Docker Compose.
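Two follow-up steps are worth doing while prepping the server (a hedged sketch; adjust for your own setup). Wazuh’s indexer is OpenSearch-based, and the Wazuh Docker documentation calls for raising the kernel’s vm.max_map_count before launching the stack:

```shell
# Raise the mmap count the Wazuh indexer (OpenSearch) requires,
# per the Wazuh Docker documentation, and persist it across reboots.
sudo sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf

# Optional: allow running docker without sudo (takes effect on re-login).
sudo usermod -aG docker "$USER"
```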
Step 2: [Creating the Wazuh container and certificate signing service]
Now that I have an up-to-date Linux machine with Docker installed, I can move on to pulling the latest version of Wazuh. Since I decided to utilize the single-node deployment method of Wazuh, I’ll have to pull the repository from GitHub and configure the install from there. I’ll be using Wazuh v4.11.1, so I’ll clone that version into its own folder:
mkdir Wazuh && cd Wazuh
git clone https://github.com/wazuh/wazuh-docker.git -b v4.11.1
Since the repository comes with both the single-node and multi-node compose files, I’ll make sure to use the single-node files by entering their directory with:
cd wazuh-docker/single-node/
Now that I have the Docker Compose file, I’ll have to edit it to fit my needs. Because I don’t want to use a command line editor such as Nano, I’ll just connect my Visual Studio Code instance to my Linux server via SSH. I can do this by going to [Remote Explorer] in the left-hand pane and connecting to my server. To add a server, click the plus icon next to SSH and enter the IP address of the machine you are trying to reach. In my case, it was already configured, so I just connected to it.
With the compose file now open in my editor, I’ll have to edit a few key environment variables. Since I don’t want my indexer keeping the default username of ‘admin’ and password of ‘SecretPassword’, I took note of these entries so that I could change the defaults to my own administrative credentials for Wazuh later on.
In addition to changing the indexer credentials, I also noted the API password to be changed, making sure to use the same API password in both the manager and dashboard services. Since this is a multi-container Docker stack, the credentials used to reach the API need to match everywhere.
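For context, the credential defaults look roughly like the fragment below in the single-node compose file (this is a trimmed stand-in for illustration, not the full file), and grep is a quick way to find every occurrence before editing:

```shell
# Trimmed stand-in for the single-node docker-compose.yml, showing
# the default credentials that need changing.
cat > docker-compose-sample.yml <<'EOF'
services:
  wazuh.manager:
    environment:
      - INDEXER_USERNAME=admin
      - INDEXER_PASSWORD=SecretPassword
      - API_USERNAME=wazuh-wui
      - API_PASSWORD=MyS3cr37P450r.*-
  wazuh.dashboard:
    environment:
      - INDEXER_PASSWORD=SecretPassword
      - API_USERNAME=wazuh-wui
      - API_PASSWORD=MyS3cr37P450r.*-
EOF
# Locate every password default (with line numbers) before editing.
grep -n 'PASSWORD' docker-compose-sample.yml
```

On the real server, running the same grep against docker-compose.yml shows every line that needs a new value.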
After noting down these credentials, I needed to generate certificates for the stack. Since I plan to put the dashboard behind my reverse proxy, which already uses a wildcard certificate, I just used the self-signed certificate generator that ships with Wazuh. To do this, I ran the following Docker command:
docker-compose -f generate-indexer-certs.yml run --rm generator
After running this, the certificates necessary for Wazuh were generated and placed in my config/wazuh_indexer_ssl_certs directory. With my certificates now generated, it’s time to launch Wazuh and begin the configuration process. To launch the stack, I ran the following:
docker-compose up -d
Running this command with the -d flag launches the stack in the background. Since this is a rather large Docker stack (a few gigabytes), I let it run in the background while I worked on other tasks. After the containers were launched, I verified their operational status by running docker stats, which yielded:
This showed me that the dashboard, indexer, and manager for Wazuh were operational.
Step 3: [Enhancing security of the dashboard]
Now that Wazuh is up and running, it’s time to enhance the security of the dashboard. I decided to do this prior to configuring the dashboard in order to prevent any issues with the agents later on. The first thing on my list is to configure my reverse proxy to point to the server’s local IP address and place it behind my wildcard certificate. For this, I utilize my Nginx Proxy Manager instance.
With this added, I’m now able to securely access the Wazuh page via wazuh.retconshop.com. This address is local-only, meaning I have to be connected via my VPN or on the same network to access it.
Since the indexer and API are already behind a self-signed certificate, I set the scheme to HTTPS. Doing this only places the dashboard behind my wildcard certificate, not any internal communications within the stack. I don’t have to worry too much about that, though, since internal traffic is already encrypted (using the self-signed certificates), and I won’t be connecting directly to any other Wazuh service. In order to start changing the credentials, I’ll have to shut down the stack via:
docker-compose down
Now that my Wazuh stack is offline, I can take advantage of the password hash generator provided by Wazuh to create encrypted passwords for the built-in users:
docker run --rm -ti wazuh/wazuh-indexer:4.11.1 bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/hash.sh
That being said, I ran into a slight issue when trying to run this container. Wazuh has specific CPU requirements, and since I had configured my Linux virtual machine with a custom CPU mode, the container was failing to start properly. To fix this, I went into my KVM virtualization settings and changed the CPU mode to Host Model. This set my virtual machine to present the same CPU architecture as the host (Zen 3, Cezanne), which supports Process Context Identifiers (PCID). After a quick restart, I was able to run the command without error.
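If you hit the same problem, a quick way to confirm whether the guest CPU exposes the PCID flag is to check what Linux reports in /proc/cpuinfo (a small sketch; it only inspects the flags, it doesn’t change anything):

```shell
# Check whether the VM's CPU reports the pcid flag that the indexer
# relies on; prints "present" or "missing".
if grep -qw pcid /proc/cpuinfo; then
    echo "pcid: present"
else
    echo "pcid: missing"
fi
```

If it prints "missing" inside a KVM guest whose host CPU does support PCID, the CPU mode in the VM definition is the likely culprit.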
I then repeated this process of generating the password hash for both the admin user account and the kibanaserver account. All other accounts I left as defaults, as they are not meant to be changed according to the documentation. With these hash values, I updated the account entries in config/wazuh_indexer/internal_users.yml.
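To illustrate the edit, here’s a sketch of the hash swap on a trimmed stand-in fragment of internal_users.yml (the layout is simplified and the hash values are placeholders, not real bcrypt hashes; in practice I pasted the hash.sh output into config/wazuh_indexer/internal_users.yml by hand):

```shell
# Trimmed stand-in for config/wazuh_indexer/internal_users.yml.
cat > internal_users_sample.yml <<'EOF'
admin:
  hash: "$2y$12$OLD_ADMIN_HASH"
  reserved: true
kibanaserver:
  hash: "$2y$12$OLD_KIBANASERVER_HASH"
  reserved: true
EOF
# NEW_HASH stands in for the output of hash.sh.
NEW_HASH='$2y$12$EXAMPLE_GENERATED_HASH'
# Replace only the hash line inside the admin block.
sed -i "/^admin:/,/reserved/ s|hash: \".*\"|hash: \"${NEW_HASH}\"|" internal_users_sample.yml
grep -A1 '^admin:' internal_users_sample.yml
```

The same substitution, scoped to the kibanaserver block, handles the second account.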
Before I can launch the server again, I also need to update the passwords in the Docker Compose file. The passwords there are stored in plain text, so I used the plaintext passwords rather than the hashes I just generated. For the admin account, I updated both instances of INDEXER_PASSWORD, and for kibanaserver, I updated the one instance of DASHBOARD_PASSWORD.
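The compose-file side of the swap can be sketched like this, again on a trimmed stand-in file with placeholder passwords (on the real server I edited docker-compose.yml in VS Code instead):

```shell
# Trimmed stand-in for the password lines in docker-compose.yml.
cat > compose_sample.yml <<'EOF'
      - INDEXER_PASSWORD=SecretPassword
      - DASHBOARD_PASSWORD=kibanaserver
      - INDEXER_PASSWORD=SecretPassword
EOF
# Both INDEXER_PASSWORD entries get the new admin password; the one
# DASHBOARD_PASSWORD entry gets the new kibanaserver password.
sed -i 's/INDEXER_PASSWORD=SecretPassword/INDEXER_PASSWORD=MyNewAdminPassword/g' compose_sample.yml
sed -i 's/DASHBOARD_PASSWORD=kibanaserver/DASHBOARD_PASSWORD=MyNewKibanaPassword/' compose_sample.yml
cat compose_sample.yml
```

Whatever plaintext values go here must match the hashes written into internal_users.yml, or the stack will fail to authenticate.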
With my changes now in place, I restarted the stack with docker-compose up -d to run it in the background. After relaunching, I had to enter the indexer container to run some commands. To do this, I ran the following:
docker exec -it single-node_wazuh.indexer_1 bash
This command drops my shell into the indexer container. Now that my command line is inside the container, I have to set a few variables. These variables are for the security admin tool, allowing it to manipulate certain files within the Wazuh configuration:
export INSTALLATION_DIR=/usr/share/wazuh-indexer
CACERT=$INSTALLATION_DIR/certs/root-ca.pem
KEY=$INSTALLATION_DIR/certs/admin-key.pem
CERT=$INSTALLATION_DIR/certs/admin.pem
export JAVA_HOME=/usr/share/wazuh-indexer/jdk
After pasting these variables in, all I had to do was run the security manager command and let it update the Wazuh install with the new credentials:
bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/securityadmin.sh -cd /usr/share/wazuh-indexer/opensearch-security/ -nhnv -cacert $CACERT -cert $CERT -key $KEY -p 9200 -icl
This password change process had to be repeated once more, as you are only able to change one account password at a time. I tried to change both passwords at once, but attempting to do so breaks the install. If you are recreating this setup on your own and accidentally try this, you can grab the default hashes and passwords from the Wazuh GitHub repository.
Step 4: [Configuring the Wazuh dashboard]
Now that Wazuh is up and running, it’s time to configure the dashboard and add my agents. On my first login, I was greeted with a very nice landing page, showing me some initial metrics for my network:
In order to verify that everything works properly before deploying the agent on all my computers, I began the agent install process on the Ubuntu virtual machine running Wazuh itself. That way, if something goes wrong, I only lose a virtual machine rather than a production computer. Before clicking the Deploy new agent button, I added two new groups for machines: Servers and Workstations. That way, I can separate these two categories of machines from each other, rather than using the default Default group.
With my new groups added, I clicked the Deploy new agent button and followed the instructions. Since the machine receiving the first agent is a Debian-based amd64 machine, I selected DEB amd64 for the system. For my server address, I used the IP address of the machine (rather than the FQDN) and opted to remember the address for future agents. Lastly, I set the name of the agent to the computer’s name (DFW-RT-VLIN04) and the group to Servers. With all of these configuration values set, it generated the command for me to download and install the agent, where all I had to do was paste and wait.
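For reference, the generated command looks roughly like the following; the manager IP here is an illustrative placeholder, and the dashboard produces the exact command (with your own values) for you:

```shell
# Download the agent package and install it pointed at the Wazuh
# server; the IP, agent name, and group are placeholders for my setup.
wget https://packages.wazuh.com/4.x/apt/pool/main/w/wazuh-agent/wazuh-agent_4.11.1-1_amd64.deb
sudo WAZUH_MANAGER='192.168.1.50' WAZUH_AGENT_GROUP='Servers' \
     WAZUH_AGENT_NAME='DFW-RT-VLIN04' dpkg -i ./wazuh-agent_4.11.1-1_amd64.deb

# Then start the agent service so it enrolls with the manager.
sudo systemctl daemon-reload
sudo systemctl enable --now wazuh-agent
```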
The reason I opted to use the IP address instead of the FQDN is that the FQDN resolves through a reverse proxy running in a Docker container. Rather than stressing that container and risking all my agents going offline during a container issue or maintenance, using the IP address of the Wazuh server bypasses Nginx Proxy Manager and connects straight to the Wazuh service. In doing so, I am not opening up any potential vulnerabilities, since communications between the agent and server are already encrypted.
After adding my Ubuntu server, I also added my personal workstation, and was provided a list of my agents currently connected to Wazuh:
Now that I’ve confirmed my agents connect to my Wazuh server successfully, all that’s left is to install the Wazuh agent on the rest of my networked machines and wait. Each agent will begin to index any issues within my network, allowing me to explore vulnerabilities and mitigate them as they appear, ensuring my network is locked down as tight as possible.
Recap
During this project, I utilized Ubuntu 24.04 Server and Docker Compose to deploy a Wazuh SIEM/XDR management service on my network. This system directly improves my network by providing a list of CVEs (Common Vulnerabilities and Exposures) along with mitigation techniques for each entry, allowing me to monitor issues and execute mitigations to lock down my network.
What I Learned
Throughout this project, I encountered a few issues during the install of agents, the Wazuh server, and bcrypt hashing. Troubleshooting these issues deepened my understanding of Linux-based servers and Docker through the configuration of a complex, multi-container stack on a dedicated virtual machine. Additionally, I expanded my knowledge of KVM virtualization, specifically CPU architecture emulation and what the host needs to provide for it.
Conclusion
Overall, I am really happy with my Wazuh setup, and I am very excited to see what vulnerabilities it is able to identify. A lot of the interface is very similar to what I used back at Canyon AeroConnect’s IT department, where I would receive a list of vulnerabilities and come up with mitigation techniques based on their threat levels.
Have you worked on similar projects or faced comparable challenges? I’d love to hear about your experiences! Send me an email at [email protected] so we can discuss!