BLACKSHIELD

Network Runtime

Network Sensor Ingestion

Forward Suricata, Zeek, and eBPF telemetry through the SIEM ingestion API to enrich layered detection coverage.

Supported providers

  • suricata: Live AF_PACKET capture or EVE JSON/NDJSON stream.
  • zeek: Live interface capture or JSON/NDJSON conn/DNS/HTTP telemetry.
  • ebpf: Runtime event telemetry from eBPF-based sensors (file ingestion).

Quick start — live packet capture

Deploy the container anywhere in your environment, configure packet mirroring to deliver traffic to the host's network interface, and set your API key. Findings stream automatically — no PCAP files, no pre-existing sensor infrastructure.

Use BLACKSHIELD_API_URL for the API origin, such as https://api.blackshield.chaplau.com. Do not point the sensor at the browser/dashboard URL.

docker run -d \
  --network host \
  --cap-add NET_ADMIN \
  --cap-add NET_RAW \
  -e BLACKSHIELD_API_URL=https://api.blackshield.chaplau.com \
  -e BLACKSHIELD_API_KEY=sp_your_key \
  -e MIN_SEVERITY=high \
  -e SENSOR_TYPE=suricata \
  public.ecr.aws/blackshield-security/network-sensor:1.0.0

Stable tags are the default rollout path. Switch to public.ecr.aws/blackshield-security/network-sensor:latest only if you want the preview channel.

Then point your cloud packet mirroring at the instance:

Cloud      Feature
AWS        VPC Traffic Mirroring → ENI of the instance
GCP        Packet Mirroring → VM network interface
Azure      Virtual Network TAP → NIC of the VM
On-prem    SPAN port / port mirror → host running the container

Findings begin streaming within the first SCAN_INTERVAL_SECONDS (default: 300 s). Use SENSOR_INTERFACE to override the capture interface (default: eth0). Use MIN_SEVERITY (default: high) to keep only high-signal alerts.
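The MIN_SEVERITY cutoff behaves like an ordered threshold comparison. A minimal Python sketch, assuming a low < medium < high < critical scale (the sensor's actual internal ranking is an assumption here):

```python
# Illustrative model of a MIN_SEVERITY cutoff.
# The severity scale is an assumption, not the sensor's actual internals.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def keep_alert(alert: dict, min_severity: str = "high") -> bool:
    """Return True if the alert meets or exceeds the MIN_SEVERITY threshold."""
    rank = SEVERITY_ORDER.get(alert.get("severity", "low"), 0)
    return rank >= SEVERITY_ORDER[min_severity]

alerts = [
    {"alert_id": "a1", "severity": "medium"},
    {"alert_id": "a2", "severity": "critical"},
]
filtered = [a for a in alerts if keep_alert(a, "high")]
```

With MIN_SEVERITY=high, medium-severity alerts are dropped at the sensor before they ever reach the ingestion API.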

Ingestion contract

The sensor client forwards normalized alerts to /api/v1/ingest/siem-alerts. If finding_type is omitted, all three providers map to runtime_threat by default.

{
  "provider": "suricata",
  "connector_instance": "suricata-edge-a",
  "alerts": [
    {
      "alert_id": "suricata-20260328-001",
      "title": "ET POLICY Outbound Connection",
      "severity": "medium",
      "status": "new",
      "resource_type": "network_flow",
      "resource_id": "10.0.0.10:54545->10.0.1.12:443"
    }
  ]
}
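Custom integrations can exercise the same contract directly, without the sensor client. A standard-library sketch of posting the payload above; the Bearer authorization header is an assumption, so confirm the exact auth scheme against the API reference before relying on it:

```python
import json
import os
import urllib.request

API_URL = os.environ.get("BLACKSHIELD_API_URL", "https://api.blackshield.chaplau.com")

payload = {
    "provider": "suricata",
    "connector_instance": "suricata-edge-a",
    "alerts": [
        {
            "alert_id": "suricata-20260328-001",
            "title": "ET POLICY Outbound Connection",
            "severity": "medium",
            "status": "new",
            "resource_type": "network_flow",
            "resource_id": "10.0.0.10:54545->10.0.1.12:443",
        }
    ],
}

req = urllib.request.Request(
    f"{API_URL}/api/v1/ingest/siem-alerts",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        # Auth header name is an assumption; check the API reference.
        "Authorization": f"Bearer {os.environ.get('BLACKSHIELD_API_KEY', '')}",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to send against a live deployment
```

Omitted finding_type fields fall back to runtime_threat server-side, so the minimal alert shape shown here is sufficient.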

eBPF file ingestion (continuous)

For eBPF sensors (Falco, Tetragon, etc.) that write events to a file, use file mode and mount the NDJSON output read-only.

docker run -d \
  -e BLACKSHIELD_API_URL=https://api.blackshield.chaplau.com \
  -e BLACKSHIELD_API_KEY=sp_your_key \
  -e SENSOR_TYPE=ebpf \
  -e COLLECTOR_MODE=file \
  -e SENSOR_INPUT_FILE=/var/log/ebpf-events.jsonl \
  -e SCAN_INTERVAL_SECONDS=60 \
  -v /var/log/ebpf-events.jsonl:/var/log/ebpf-events.jsonl:ro \
  public.ecr.aws/blackshield-security/network-sensor:1.0.0

Bundled eBPF runtime capture

If you want the image to run its own runtime collector, bundled eBPF mode uses Falco's modern eBPF engine and forwards the resulting runtime alerts.

docker run --rm \
  --privileged \
  -v /proc:/host/proc:ro \
  -v /etc:/host/etc:ro \
  -v /sys/kernel/tracing:/sys/kernel/tracing:ro \
  -e BLACKSHIELD_API_URL=https://api.blackshield.chaplau.com \
  -e BLACKSHIELD_API_KEY=sp_your_key \
  -e SENSOR_TYPE=ebpf \
  -e COLLECTOR_MODE=bundled \
  -e SCAN_INTERVAL_SECONDS=0 \
  public.ecr.aws/blackshield-security/network-sensor:1.0.0

Minimum host requirements are /proc mounted to /host/proc and /etc mounted to /host/etc. Mounting /sys/kernel/tracing is recommended for modern eBPF.

Advanced — PCAP replay

For offline analysis or air-gapped environments, replay a pre-recorded PCAP file. Suricata and Zeek are bundled in the image.

docker run --rm \
  -e BLACKSHIELD_API_URL=https://api.blackshield.chaplau.com \
  -e BLACKSHIELD_API_KEY=sp_your_key \
  -e SENSOR_TYPE=suricata \
  -e COLLECTOR_MODE=bundled \
  -e SENSOR_CAPTURE_FILE=/captures/traffic.pcap \
  -e SCAN_INTERVAL_SECONDS=0 \
  -v /path/to/traffic.pcap:/captures/traffic.pcap:ro \
  public.ecr.aws/blackshield-security/network-sensor:1.0.0

Collector modes

COLLECTOR_MODE   Behaviour                                                        Requirements
auto (default)   live for suricata/zeek; file for ebpf                            NET_ADMIN + NET_RAW for live
live             Always-on capture on SENSOR_INTERFACE                            --cap-add NET_ADMIN --cap-add NET_RAW
bundled          Runs the bundled collector: PCAP replay for suricata/zeek,       PCAP file for suricata/zeek; host mounts and
                 or Falco runtime capture for ebpf                                runtime privileges for ebpf
file             Reads existing JSON/NDJSON file incrementally                    Sensor output file mounted read-only
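The auto default resolves to a concrete mode per sensor type, as the first table row describes. An illustrative sketch of that selection logic (the actual implementation is internal to the container):

```python
def resolve_collector_mode(sensor_type: str, collector_mode: str = "auto") -> str:
    """Map COLLECTOR_MODE=auto to a concrete mode, per the table above."""
    if collector_mode != "auto":
        return collector_mode
    # auto: live capture for packet sensors, file ingestion for eBPF
    return "file" if sensor_type == "ebpf" else "live"
```

This is why a plain `SENSOR_TYPE=ebpf` deployment needs no capture capabilities, while suricata/zeek deployments default to requiring NET_ADMIN and NET_RAW.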

Per-cloud deployment guides

Comprehensive step-by-step guides for deploying sensors in your cloud environment:

  • AWS VPC Traffic Mirroring: Set up traffic mirror sessions, IAM roles, security groups, and multi-account deployments with CloudFormation StackSets. Full Guide →
  • GCP Packet Mirroring: Configure packet mirroring policies, backend services, firewall rules, and multi-project Terraform rollout. Full Guide →
  • Azure Virtual Network TAP: Set up VNet TAPs, managed identities, Key Vault integration, and multi-subscription Azure Bicep deployments. Full Guide →

Operational guides & best practices

Advanced topics for production deployments:

  • Troubleshooting Runbook: Diagnosis and fixes for container startup, traffic capture, API connectivity, health checks, resource usage, and ingestion gaps. Troubleshooting →
  • Scaling & Performance Tuning: Sizing by traffic volume, configuration parameters, sensor type comparison, monitoring, and cost optimization. Scaling Guide →
  • High-Availability & Architecture Patterns: Single-sensor baseline, active-passive failover, active-active load distribution, multi-region deployments, HA checklist, and disaster recovery. HA Patterns →

Infrastructure as Code templates

Ready-to-deploy templates for each cloud environment:

  • AWS CDK (Python): EC2 instance stack with security groups, IAM roles, CloudWatch Logs, and auto-scaling. Deploy with: cdk deploy
  • GCP Terraform: Compute instance, backend service, health checks, packet mirroring policy. Deploy with: terraform apply
  • Azure Bicep: VM with managed identity, Key Vault, Virtual Network TAP integration. Deploy with: az deployment group create

Find templates in the developer_guide/aws-network-sensor/, developer_guide/gcp-network-sensor/, and developer_guide/azure-network-sensor/ directories.

The container exits with code 2 on startup if the required sensor binary, capture file, or network interface is missing, surfacing misconfiguration immediately.

Rollout guidance

  • Deploy one connector instance per network boundary for clearer ownership and alert triage.
  • For high-volume captures, tune MAX_EVENTS_PER_SCAN (default: 5000) and SCAN_INTERVAL_SECONDS.
  • State is persisted in SENSOR_STATE_FILE for incremental forwarding across container restarts.
  • In Kubernetes, use a DaemonSet with hostNetwork: true and NET_ADMIN/NET_RAW capabilities for cluster-wide capture.
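The incremental forwarding noted above can be pictured as an offset-tracking NDJSON reader. This is an illustrative model, not the sensor's actual implementation; in particular, the state format (a plain byte offset in SENSOR_STATE_FILE) is an assumption:

```python
import json
import os

def read_new_events(input_file: str, state_file: str, max_events: int = 5000):
    """Read NDJSON events appended since the last scan, persisting a byte offset."""
    offset = 0
    if os.path.exists(state_file):
        with open(state_file) as f:
            offset = int(f.read() or 0)

    events = []
    with open(input_file) as f:
        f.seek(offset)
        while len(events) < max_events:  # honour MAX_EVENTS_PER_SCAN
            line = f.readline()
            if not line:
                break
            line = line.strip()
            if line:
                events.append(json.loads(line))
        new_offset = f.tell()

    # Persist the offset so a restarted container resumes where it left off.
    with open(state_file, "w") as f:
        f.write(str(new_offset))
    return events
```

Because the offset survives restarts, each scan forwards only events written since the previous one, capped at MAX_EVENTS_PER_SCAN per pass.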