Vulnerability Management with FleetDM and Splunk

Dmitrii Pushkarev
4 min read · Feb 27, 2024

Hi everyone! I won’t describe what FleetDM is or how to install it; I’ll just say it’s a great piece of software, and you can find plenty of material about it on the Internet. I’m going to show you how to get a list of vulnerabilities from FleetDM and process it in Splunk.

FleetDM API

FleetDM has API methods for many things, including listing the software installed on each host — that is the one we will use. I’ve prepared a small script that collects vulnerabilities into a file in JSON format, which is then sent to Splunk. The output looks similar to this (but each record is written on a single line).

{
  "cve_id": "CVE-2020-26570",
  "host_name": "test-21",
  "os_version": "Ubuntu 20.04.6 LTS",
  "pkg_name": "opensc-pkcs11",
  "platform": "ubuntu",
  "primary_ip": "10.0.0.121",
  "primary_mac": "00:00:00:00:00:00",
  "public_ip": "8.8.8.8",
  "source": "deb_packages",
  "timestamp": "2024-02-26T15:49:42.180043",
  "version": "0.20.0-3"
}

My script

#!/usr/bin/env python3
import requests
import json
import configparser
from datetime import datetime

def fetch_hosts(base_url, headers):
    # Collect the IDs of all hosts known to FleetDM
    hosts_list = []
    hosts_response = requests.get(f'{base_url}/fleet/hosts', headers=headers)
    hosts_data = hosts_response.json()
    for host in hosts_data.get('hosts', []):
        hosts_list.append(host['id'])
    return hosts_list

def log_vulnerabilities(pkgs, log_file):
    # Append each record as a single JSON line
    with open(log_file, 'a') as log:
        log.write(json.dumps(pkgs, sort_keys=True) + '\n')

def main():
    config = configparser.ConfigParser()
    config.read('config')
    api_token = config['fleet']['api_token']
    base_url = config['fleet']['base_url']
    headers = {'Authorization': f'Bearer {api_token}'}
    hosts_list = fetch_hosts(base_url, headers)

    for host_id in hosts_list:
        # Per-host details include the installed software and its vulnerabilities
        host_response = requests.get(f'{base_url}/fleet/hosts/{host_id}', headers=headers)
        host_data = host_response.json()
        for software_item in host_data.get('host', {}).get('software', []):
            if software_item.get('vulnerabilities'):
                for vulnerability in software_item['vulnerabilities']:
                    my_date = datetime.now().isoformat()
                    pkgs = {
                        'timestamp': my_date,
                        'cve_id': vulnerability['cve'],
                        'host_name': host_data['host']['hostname'],
                        'pkg_name': software_item['name'],
                        'pkg_source': software_item['source'],
                        'version': software_item['version'],
                        'os_version': host_data['host']['os_version'],
                        'platform': host_data['host']['platform'],
                        'public_ip': host_data['host']['public_ip'],
                        'primary_ip': host_data['host']['primary_ip'],
                        'primary_mac': host_data['host']['primary_mac']
                    }
                    log_vulnerabilities(pkgs, '/tmp/vulners.log')

if __name__ == "__main__":
    main()

Config example

[fleet]
api_token = tokentoken
base_url = https://fleet.tututu.com/api/v1

Now we have all the vulnerabilities in /tmp/vulners.log. I sent this file to Splunk.
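Any of the usual ingestion methods will do. Here is a minimal sketch using a file monitor input, assuming a Universal Forwarder runs on the host with the script; the index and sourcetype names are my own choices, not anything fixed by FleetDM:

# inputs.conf on the forwarder — index and sourcetype are assumptions
[monitor:///tmp/vulners.log]
index = main
sourcetype = fleetdm:vulns
disabled = false

# props.conf on the search head — parse each JSON line into fields at search time
[fleetdm:vulns]
KV_MODE = json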

Let’s Splunk

Here is how it looks in Splunk.
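With the props.conf setting above, the JSON fields are extracted at search time, so a simple search like this is enough to eyeball the data (index and sourcetype as assumed above):

index=main sourcetype=fleetdm:vulns
| table _time, host_name, cve_id, pkg_name, version, os_version, platform, primary_ip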

It looks nice, but let’s enrich our data by matching each CVE ID against the National Vulnerability Database (NVD). NVD provides yearly archives with JSON data feeds (see the download sketch after the list):

  • https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-2017.json.zip
  • https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-{your year}.json.zip
  • https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-2024.json.zip
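A minimal download sketch for the yearly archives (the year range is up to you; nothing here is specific to the original setup):

#!/usr/bin/env python3
# Download the yearly NVD 1.1 JSON feeds for later processing
import requests

BASE_URL = 'https://nvd.nist.gov/feeds/json/cve/1.1'

for year in range(2017, 2025):
    name = f'nvdcve-1.1-{year}.json.zip'
    resp = requests.get(f'{BASE_URL}/{name}', timeout=60)
    resp.raise_for_status()
    with open(name, 'wb') as fh:
        fh.write(resp.content)
    print(f'downloaded {name}')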

We will have additional fields like these:

{
  "baseMetricV2": {
    "cvssV2": {
      "accessComplexity": "LOW",
      "accessVector": "LOCAL",
      "authentication": "NONE",
      "availabilityImpact": "COMPLETE",
      "baseScore": 7.2,
      "confidentialityImpact": "COMPLETE",
      "integrityImpact": "COMPLETE",
      "vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
      "version": "2.0"
    },
    "exploitabilityScore": 3.9,
    "impactScore": 10.0,
    "obtainAllPrivilege": false,
    "obtainOtherPrivilege": false,
    "obtainUserPrivilege": false,
    "severity": "HIGH",
    "userInteractionRequired": false
  },
  "baseMetricV3": {
    "cvssV3": {
      "attackComplexity": "LOW",
      "attackVector": "LOCAL",
      "availabilityImpact": "HIGH",
      "baseScore": 7.8,
      "baseSeverity": "HIGH",
      "confidentialityImpact": "HIGH",
      "integrityImpact": "HIGH",
      "privilegesRequired": "LOW",
      "scope": "UNCHANGED",
      "userInteraction": "NONE",
      "vectorString": "CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
      "version": "3.0"
    },
    "exploitabilityScore": 1.8,
    "impactScore": 5.9
  }
}

Now we need to convert this JSON to CSV, because Splunk uses CSV files as lookups. I’ve prepared a script for this; it’s not beautiful, but it works — a rough sketch of the idea follows. I added the resulting CSV to Splunk as a lookup and then added the lookup to our search.
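A sketch of that conversion, assuming the yearly archives sit in the current directory; the output columns (cve_id, cvss_v2_score, cvss_v2_severity, cvss_v3_score, cvss_v3_severity) are my naming, so adjust to taste:

#!/usr/bin/env python3
# Flatten the NVD 1.1 yearly feeds into a CSV suitable for a Splunk lookup
import csv
import glob
import json
import zipfile

OUT_FILE = 'nvd_cvss.csv'

with open(OUT_FILE, 'w', newline='') as out:
    writer = csv.writer(out)
    writer.writerow(['cve_id', 'cvss_v2_score', 'cvss_v2_severity',
                     'cvss_v3_score', 'cvss_v3_severity'])
    for archive in sorted(glob.glob('nvdcve-1.1-*.json.zip')):
        with zipfile.ZipFile(archive) as zf:
            with zf.open(zf.namelist()[0]) as fh:
                feed = json.load(fh)
        for item in feed.get('CVE_Items', []):
            cve_id = item['cve']['CVE_data_meta']['ID']
            impact = item.get('impact', {})
            v2 = impact.get('baseMetricV2', {})
            v3 = impact.get('baseMetricV3', {})
            writer.writerow([
                cve_id,
                v2.get('cvssV2', {}).get('baseScore', ''),
                v2.get('severity', ''),
                v3.get('cvssV3', {}).get('baseScore', ''),
                v3.get('cvssV3', {}).get('baseSeverity', ''),
            ])

Upload nvd_cvss.csv as a lookup table file and create a lookup definition for it (I will call it nvd_cvss below).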

We now have additional information to help us make decisions and prioritize fixes. Not every vulnerability has a public exploit, so let’s add information about public exploits to our search. The Exploit-DB project publishes such a CSV table.
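Assuming the Exploit-DB data has been reduced to a lookup keyed by cve_id with an exploit_id column (I’ll call the lookup definition exploitdb), and the NVD lookup definition is named nvd_cvss, the enriched search could look like this:

index=main sourcetype=fleetdm:vulns
| lookup nvd_cvss cve_id OUTPUT cvss_v3_score cvss_v3_severity
| lookup exploitdb cve_id OUTPUT exploit_id
| eval has_public_exploit=if(isnotnull(exploit_id), "yes", "no")
| table host_name, cve_id, pkg_name, version, cvss_v3_score, cvss_v3_severity, has_public_exploit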

Hooray! We can now find all critical vulnerabilities that also have public exploits. I think this is a really useful signal for prioritizing fixes.

Let’s imagine that we have a CMDB or asset inventory with a special parameter that shows the criticality (business value) of each asset on a scale from 0 to 10. Someone (a business-oriented person) calculated it earlier using some metrics, and we decided that assets with a value of 5 or higher are critical for us. I imported this data from the CMDB into Splunk in CSV format.

# our CSV asset.csv
host_name,business_value
host1.app,5
k8s32.ml.local,8
mysql-db-123-star3,3
antifraud.test,2
samba-pro.local,3
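Assuming asset.csv is uploaded as a lookup table file (and exposed through a lookup definition I’ll call assets), a quick sanity check lists the critical hosts:

| inputlookup asset.csv
| where tonumber(business_value) >= 5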

Let’s use this data in our search to find vulnerabilities with a CVSS score above 7, with a public exploit, on critical assets. These are the vulnerabilities we decided to address first.
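A sketch of that final search, using the lookup names assumed above (nvd_cvss, exploitdb, assets):

index=main sourcetype=fleetdm:vulns
| lookup nvd_cvss cve_id OUTPUT cvss_v3_score
| lookup exploitdb cve_id OUTPUT exploit_id
| lookup assets host_name OUTPUT business_value
| where tonumber(cvss_v3_score) > 7 AND isnotnull(exploit_id) AND tonumber(business_value) >= 5
| table host_name, business_value, cve_id, pkg_name, version, cvss_v3_score
| sort - cvss_v3_score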

The End.

Thanks to Osquery, FleetDM, Python, Splunk, NVD and ExploitDB.
