Elasticsearch Recon Ingestion Scripts (ERIS) 🔎

A utility for ingesting large-scale reconnaissance data logs into Elasticsearch

This is a suite of tools to aid in the ingestion of recon data from various sources (httpx, masscan, zone files, etc.) into an Elasticsearch cluster. The entire codebase is designed for asynchronous processing and load-balances ingestion across all of the nodes in your cluster. Additionally, live ingestion is supported for many of the sources, meaning data can be processed and indexed into your Elasticsearch cluster as it arrives. The structure allows for the development of "modules" (or "plugins", if you will) to quickly create custom ingestion helpers for any data source.
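A custom ingestion module along the lines described above might look like this minimal sketch. The function names and document shape here are hypothetical illustrations, not taken from the actual ingestors:

```python
import asyncio

# Hypothetical sketch of an ERIS-style ingestion module: an async
# generator that reads a log file line by line and yields documents
# ready for bulk indexing. The function names and document shape are
# illustrative only, not taken from the actual ingestors.
async def process_file(path: str):
    with open(path, 'r') as handle:
        for line in handle:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            yield {'raw': line}  # one Elasticsearch document per record

async def collect(path: str):
    # Drain the generator into a list; a real ingestor would stream
    # these documents into a bulk indexing helper instead.
    return [doc async for doc in process_file(path)]
```

A real module would parse each line into structured fields (ports, hostnames, certificates, and so on) before yielding it.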

Prerequisites

Usage

python eris.py [options] <input>

Note: The <input> can be a file or a directory of files, depending on the ingestion script.

Options

General arguments

| Argument     | Description                                   |
| ------------ | --------------------------------------------- |
| `input_path` | Path to the input file or directory           |
| `--watch`    | Create or watch a FIFO for real-time indexing |
Elasticsearch arguments

| Argument        | Description                                           | Default             |
| --------------- | ----------------------------------------------------- | ------------------- |
| `--host`        | Elasticsearch host                                    | `http://localhost/` |
| `--port`        | Elasticsearch port                                    | `9200`              |
| `--user`        | Elasticsearch username                                | `elastic`           |
| `--password`    | Elasticsearch password                                | `$ES_PASSWORD`      |
| `--api-key`     | Elasticsearch API key for authentication              | `$ES_APIKEY`        |
| `--self-signed` | Elasticsearch connection with a self-signed certificate |                   |
Elasticsearch indexing arguments

| Argument     | Description                          | Default                 |
| ------------ | ------------------------------------ | ----------------------- |
| `--index`    | Elasticsearch index name             | Depends on the ingestor |
| `--pipeline` | Use an ingest pipeline for the index |                         |
| `--replicas` | Number of replicas for the index     | `1`                     |
| `--shards`   | Number of shards for the index       | `1`                     |
Performance arguments

| Argument       | Description                                              | Default |
| -------------- | -------------------------------------------------------- | ------- |
| `--chunk-max`  | Maximum size of a chunk in MB                            | `100`   |
| `--chunk-size` | Number of records to index per chunk                     | `50000` |
| `--retries`    | Number of times to retry indexing a chunk before failing | `100`   |
| `--timeout`    | Number of seconds to wait before retrying a chunk        | `60`    |
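As a rough illustration of how `--chunk-size` and `--chunk-max` interact, a chunk is flushed when it reaches either the record-count limit or the size limit in MB, whichever comes first. This is a simplified sketch of that batching logic, not the actual ERIS implementation:

```python
import json

def chunk_records(records, chunk_size=50000, chunk_max_mb=100):
    '''Group records into chunks, flushing on either the record-count
    limit (--chunk-size) or the size limit in MB (--chunk-max).
    Simplified sketch; not the actual ERIS implementation.'''
    max_bytes = chunk_max_mb * 1024 * 1024
    chunk, chunk_bytes = [], 0
    for record in records:
        record_bytes = len(json.dumps(record).encode())
        # Flush the current chunk if adding this record would exceed
        # either limit
        if chunk and (len(chunk) >= chunk_size or chunk_bytes + record_bytes > max_bytes):
            yield chunk
            chunk, chunk_bytes = [], 0
        chunk.append(record)
        chunk_bytes += record_bytes
    if chunk:
        yield chunk  # flush any trailing partial chunk
```

Tuning these two limits against your cluster's bulk queue capacity is what the performance arguments are for.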
Ingestion arguments

| Argument    | Description                 |
| ----------- | --------------------------- |
| `--certs`   | Index Certstream records    |
| `--httpx`   | Index HTTPX records         |
| `--masscan` | Index Masscan records       |
| `--massdns` | Index massdns records       |
| `--zone`    | Index DNS zone file records |
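For example, a Masscan log could be ingested with something like the following (the host, credentials, index name, and file path are placeholders):

```shell
python eris.py --host https://node1.example.org --user elastic \
               --password hunter2 --index masscan-logs \
               --masscan masscan_output.json
```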

This ingestion suite uses the built-in node sniffer, so by connecting to a single node you can load balance across the entire cluster. It helps to know how many nodes are in your cluster when fine-tuning these arguments for the best performance in your environment.

GeoIP Pipeline

Create & add a geoip pipeline and use the following in your index mappings:

"geoip": {
    "city_name": "City",
    "continent_name": "Continent",
    "country_iso_code": "CC",
    "country_name": "Country",
    "location": {
        "lat": 0.0000,
        "lon": 0.0000
    },
    "region_iso_code": "RR",
    "region_name": "Region"
}

Changelog

  • Added ingestion script for certificate transparency logs in real time using websockets.
  • --dry-run removed as this nears production level
  • Implemented async Elasticsearch into the codebase & refactored some of the logic to accommodate it.
  • The --watch feature now uses a FIFO to do live ingestion.
  • Isolated eris.py into its own file and separated the ingestion agents into their own modules.

Roadmap

  • Fix ingest_certs.py so it does not require a file argument.
  • Create a module for RIR database ingestion (WHOIS, delegations, transfer, ASN mapping, peering, etc)
  • Dynamically update the batch metrics when the sniffer adds or removes nodes.

Mirrors for this repository: acid.vegas • SuperNETs • GitHub • GitLab • Codeberg