In a previous blog post, I argued that tuning, not SOAR, is the key to reducing alert fatigue. You can find that article here: Alert Fatigue. Tuning isn’t easy, and this post will only cover half of the process.
When you look up the word tuning, you will find the definition “to adjust for precise functioning.” A SIEM that provides only high-quality alerts, with real insight into genuinely malicious activity, can significantly improve an organization’s security and help combat analyst fatigue.
Tuning is an art, though. Just like a doctor performing surgery on a patient, much time and effort goes into research before the operation. MRIs, X-rays, communication with the patient, and discussions of alternative options must happen before determining whether surgery is the best course of action. Once surgery is decided on, a plan must be devised to minimize risk to the patient and maximize the effectiveness of the procedure. Comparing SIEM tuning to surgery might seem like an over-the-top analogy, but I assure you it isn’t if tuning is done correctly.
So what’s a good tuning request, and what’s a bad one? My definition of a good tuning request is one where a process happens repeatedly over an extended period of time (typically weeks or indefinitely) and where the activity can be accounted for and approved. Lastly, the tuning, if performed, doesn’t put the company at any additional risk and simply removes known, approved activity.
A bad tuning request is a process that only happens once or very seldom (typically once or twice a month or even less frequently). It could also be a process that is not fully understood or is firing due to a system misconfiguration. If visibility decreases and risk increases to the organization because of the tuning, it’s not a reasonable tuning request.
Like doctors before surgery, we need to determine whether tuning an alert is the best course of action. Even though an alert might be noisy, tuning isn’t always the best option. In the following sections, I will provide examples of good and bad candidates for tuning.
Example 1. Every Monday, you receive several alerts for a system doing enumeration in a specific subnet. Researching the system, you identify it as a vulnerability scanner. Talking to the owner of the vulnerability scanner, you find that every Monday there is a weekly scan of that subnet, and at the end of the month that system runs a more in-depth scan.
This is an example of a good alert to tune. It meets all my criteria. We have an alert that fires every Monday, generating several alerts. We discover the system is a vulnerability scanner, so we know this activity isn’t going away. By talking to the owner, we learn that there may be more alerts from this system at the end of the month. Since the system is a vulnerability scanner, we know this approved system/software exists in our environment to improve the organization’s security posture. Lastly, does suppressing these alerts put the company at any risk? No; this is expected and even documented activity.
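The key to tuning an alert like this safely is scoping the exclusion as tightly as possible. Here is a minimal sketch of what that scoping logic might look like; the IP addresses and function name are illustrative assumptions, not real configuration, and a production SIEM would express this in its own rule syntax.

```python
# Hypothetical sketch: a tightly scoped suppression for an approved
# vulnerability scanner. Addresses below are made up for illustration.
from ipaddress import ip_address, ip_network

SCANNER_IP = ip_address("10.20.30.40")       # the approved scanner host
SCANNED_SUBNET = ip_network("10.50.0.0/24")  # subnet it is authorized to scan

def suppress_enumeration_alert(src_ip: str, dst_ip: str) -> bool:
    """Return True only when the alert matches the documented scan:
    this one source, scanning this one subnet. Enumeration from any
    other host, or of any other subnet, still alerts."""
    return (ip_address(src_ip) == SCANNER_IP
            and ip_address(dst_ip) in SCANNED_SUBNET)
```

The narrow match is the point: a different host scanning the same subnet, or the scanner probing a subnet it isn’t authorized for, both still fire.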
Example 2. You get multiple daily alerts about brute force attempts on an internet-facing web server. You notice that multiple IPs worldwide are making login attempts, and it’s pretty consistent throughout the day. You talk to the system owner and are told that no users are set up on the site, so no one can ever successfully log in, and MFA is also not set up since there are no users; this is all because the system is in development. The system owner determines there’s no risk to the organization from the traffic you see.
You might be tempted to tune out any brute force attempts for this server. In this instance, tuning doesn’t need to be done; instead, a fix or a risk assessment of the situation should be. The system owner has told you it is a development system not ready for production. Still, it’s on the internet, which could also mean that the necessary hardening precautions haven’t taken place. We know that multiple IPs are attempting usernames and passwords, but are attackers looking for web vulnerabilities as well? Probably. If there are any vulnerabilities, they could give an attacker an avenue to pivot into your network. Tuning this out could put the organization at more risk. This system must be evaluated and possibly taken off the internet until it’s ready for production.
Example 3. You get alerts daily for “Suspicious data transfer” from one server to another. As you dig deeper, you discover that one server sends data to a backup server. You find the processes that initiate data transfer and determine that it is, in fact, the backup software.
In this example, this is something we can and should tune. We understand that this server sends data to the backup server daily, and we can see that the software initiating the transfer is the backup software. This is a normal business process, and suppressing these alerts won’t put the company at risk.
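Notice that the approval rests on three facts together: which server is sending, which server is receiving, and which process initiates the transfer. A tuning filter should require all three, so the same server sending data anywhere else, or any other process sending data to the backup server, still alerts. A minimal sketch, with made-up host and process names:

```python
# Hypothetical sketch: suppress "Suspicious data transfer" only when the
# source host, destination host, AND initiating process all match the
# documented backup job. All names below are illustrative.
APPROVED_BACKUP = {
    "src_host": "app-server-01",
    "dst_host": "backup-server-01",
    "process": "backupagent.exe",
}

def is_approved_backup(alert: dict) -> bool:
    """Every field must match; a transfer from the same server by any
    other process, or to any other destination, still fires."""
    return all(alert.get(k) == v for k, v in APPROVED_BACKUP.items())
```

Requiring the full triple is what keeps the exclusion from hiding, say, an attacker exfiltrating data from that same server with a different tool.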
Example 4. You get an alert of “Mass Download & copy to USB device by single user”. You find out that this user is given a monthly task of going to a web portal and downloading information off a site. This information must go to a standalone machine where data is updated. To do this, they transfer the files via USB. You find authorization for this user to use the USB to transfer the files. You also learn the process only happens once a month.
In this case, even though the user is authorized and the process is approved, tuning the rule for this user is not advised. The rule should fire at most once a month; anything more than that might indicate improper use of a USB drive and could be an indicator of data exfiltration. The risk of taking the user out of the rule outweighs the benefit of never seeing this approved activity. The risk of missing this alert for the user, combined with how infrequently it fires, drives the decision not to tune.
Tuning isn’t easy. Sometimes there’s a straightforward way to tune an alert, and sometimes an alert indicates a risk to the environment that needs to be fixed instead. I didn’t cover all the different ways an alert can be tuned, just what makes a good candidate for tuning. The skill of actually tuning is an extension of identifying good opportunities. So the next time you’re evaluating whether something needs tuning, ask yourself: Does the alert happen several times a day or week? Can you identify the process that is occurring? Is the process approved or part of normal business operations? And lastly, can this be tuned in a way that won’t put the company at more risk? If you can answer yes to all of those questions, you might have a good candidate for tuning.
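The four questions above can be sketched as a simple decision helper. This is just the checklist from this post expressed in code, with field names I made up; it isn’t a real tool, but it makes the point that a single “no” disqualifies the request.

```python
# Hypothetical sketch: the four tuning questions as a decision helper.
from dataclasses import dataclass

@dataclass
class TuningRequest:
    recurs_frequently: bool    # fires several times a day/week, not once a month
    process_identified: bool   # you can name the process causing the alert
    process_approved: bool     # approved / normal business activity
    no_added_risk: bool        # suppressing it doesn't increase risk

def is_good_tuning_candidate(req: TuningRequest) -> bool:
    """All four answers must be yes; any single 'no' disqualifies it."""
    return all((req.recurs_frequently, req.process_identified,
                req.process_approved, req.no_added_risk))
```

Run the examples from this post through it: the vulnerability scanner answers yes on all four, while the monthly USB transfer fails on frequency (and the missed-alert risk), so it stays untuned.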