You scanned your images with Trivy. You enforced policies with Kyverno. Your workloads have cryptographic identity via SPIFFE.

But what happens after deployment? What if a container gets compromised at runtime? What if an attacker exploits a zero-day?

Prevention isn’t enough. You need detection.

Falco is a runtime security tool that monitors system calls in your cluster. It sees everything containers do — file access, network connections, process execution — and alerts when something looks wrong.

Why Runtime Security?

Security has layers:

  1. Build time — Scan images for known vulnerabilities (Trivy)
  2. Deploy time — Enforce policies (Kyverno, admission controllers)
  3. Runtime — Detect anomalous behavior (Falco)

Most teams focus on layers 1 and 2. But attackers don’t care about your pipeline — they exploit what’s running.

Runtime examples that other tools miss:

  • Container executing /bin/bash (shouldn’t happen in production)
  • Process reading /etc/shadow
  • Unexpected outbound connection to crypto mining pool
  • Container writing to /etc/ directories
  • Kubernetes secrets accessed from unusual pods

Falco catches these because it monitors actual behavior, not just configuration.

How Falco Works

Falco uses eBPF (or a kernel module) to intercept system calls:

flowchart TD
    subgraph userspace["User Space"]
        A["Container A"]
        B["Container B"]
        C["Container C"]
    end

    subgraph kernel["Kernel Space"]
        EBPF["eBPF / Kernel Module<br/>(intercepts syscalls)"]
    end

    A --> EBPF
    B --> EBPF
    C --> EBPF

    EBPF --> FALCO["Falco<br/>(rules engine)"]
    FALCO --> ALERTS["Alerts<br/>(stdout, webhook, Kafka...)"]

Every syscall is evaluated against rules. Matching syscalls generate alerts.
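The evaluation loop can be made concrete with a toy Python sketch (illustrative only — real Falco compiles its filter language and evaluates events in C++; the event shape and function names here are hypothetical):

```python
# Toy model of Falco's evaluation loop: every captured syscall event is
# checked against each rule's condition; matches become alerts.
# (Hypothetical event shape -- real Falco evaluates compiled filter
# expressions over raw syscall data.)

def shell_in_container(evt):
    # Rough analogue of: spawned_process and container and shell_procs
    return (
        evt["type"] == "execve"
        and evt["container_id"] != "host"
        and evt["proc_name"] in ("bash", "sh", "zsh", "dash", "ksh")
    )

RULES = [("Terminal shell in container", shell_in_container, "WARNING")]

def evaluate(evt):
    return [
        {"rule": name, "priority": priority}
        for name, condition, priority in RULES
        if condition(evt)
    ]

# A bash exec inside a container matches; the same exec on the host does not.
evaluate({"type": "execve", "container_id": "3fa85f64", "proc_name": "bash"})
evaluate({"type": "execve", "container_id": "host", "proc_name": "bash"})
```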

Installing Falco

Using Helm with eBPF driver (recommended):

helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

helm install falco falcosecurity/falco \
  --namespace falco \
  --create-namespace \
  --set driver.kind=ebpf \
  --set falcosidekick.enabled=true

For GitOps with ArgoCD:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: falco
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://falcosecurity.github.io/charts
    chart: falco
    targetRevision: 4.0.0
    helm:
      values: |
        driver:
          kind: ebpf
        falcosidekick:
          enabled: true
          config:
            slack:
              webhookurl: "https://hooks.slack.com/services/xxx"
        tty: true
  destination:
    server: https://kubernetes.default.svc
    namespace: falco
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Understanding Falco Rules

Rules define what behavior to detect. The syntax:

- rule: Terminal shell in container
  desc: Detect shell being spawned in a container
  condition: >
    spawned_process and
    container and
    shell_procs and
    proc.tty != 0
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name shell=%proc.name
     parent=%proc.pname cmdline=%proc.cmdline)
  priority: WARNING
  tags: [container, shell, mitre_execution]

Key parts:

  • condition — When to trigger (using Falco’s filter language)
  • output — What to include in the alert
  • priority — Severity level (EMERGENCY, ALERT, CRITICAL, ERROR, WARNING, NOTICE, INFORMATIONAL, DEBUG)
  • tags — For categorization and filtering

Built-in Rules

Falco ships with comprehensive default rules:

# View the name and description of loaded rules
kubectl exec -n falco -it falco-xxx -- falco -L

Categories include:

  • Container escape attempts
  • Privilege escalation
  • Sensitive file access
  • Suspicious network activity
  • Kubernetes API abuse

Example built-in rules:

# Detect container escape via mount
- rule: Launch Sensitive Mount Container
  condition: >
    spawned_process and container and
    sensitive_mount

# Detect reading sensitive files
- rule: Read sensitive file untrusted
  condition: >
    open_read and sensitive_files and
    not trusted_containers

# Detect crypto mining
- rule: Detect outbound connections to crypto mining pools
  condition: >
    outbound and
    fd.sip.name in (cryptomining_pool_domains)

Custom Rules

Add your own rules for application-specific detection:

# custom-rules.yaml
customRules:
  rules-custom.yaml: |-
    # Alert when our API server spawns unexpected processes
    - rule: Unexpected process in api-server
      desc: Detect processes other than the main app in api-server containers
      condition: >
        spawned_process and
        container.image.repository contains "api-server" and
        not proc.name in (api-server, node, npm)
      output: >
        Unexpected process in api-server
        (command=%proc.cmdline container=%container.name image=%container.image.repository)
      priority: WARNING

    # Alert on database connection from unexpected pods
    - rule: Database connection from non-backend pod
      desc: Detect connections to PostgreSQL from pods that shouldn't connect
      condition: >
        outbound and
        fd.sport = 5432 and
        not k8s.pod.label.app in (api-server, worker, migration-job)
      output: >
        Unexpected database connection
        (pod=%k8s.pod.name namespace=%k8s.ns.name dest=%fd.sip)
      priority: ERROR

Deploy custom rules via Helm values:

# values.yaml
customRules:
  rules-custom.yaml: |-
    - rule: My custom rule
      ...

Falco’s Filter Language

Understanding the filter language is key to writing effective rules.

Classes and Fields

# Process fields
proc.name         # Process name
proc.cmdline      # Full command line
proc.pname        # Parent process name
proc.exepath      # Executable path

# Container fields
container.name    # Container name
container.id      # Container ID
container.image.repository  # Image name

# File fields
fd.name           # File descriptor name (file path)
fd.directory      # Directory of the file

# Network fields
fd.sip            # Server IP
fd.sport          # Server port
fd.cip            # Client IP

# Kubernetes fields
k8s.pod.name      # Pod name
k8s.ns.name       # Namespace
k8s.pod.label.app # Pod label

Macros

Reusable condition fragments:

- macro: container
  condition: container.id != host

- macro: shell_procs
  condition: proc.name in (bash, sh, zsh, dash, ksh)

- macro: spawned_process
  condition: evt.type = execve and evt.dir = <

- macro: open_read
  condition: evt.type in (open, openat) and evt.is_open_read = true

Lists

Named sets of values:

- list: sensitive_files
  items: [/etc/shadow, /etc/sudoers, /etc/pam.d, /root/.ssh]

- list: package_managers
  items: [apt, apt-get, yum, dnf, apk, pip, npm]

- list: shell_binaries
  items: [bash, sh, zsh, csh, tcsh, ksh, dash]
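At load time, Falco substitutes macro and list names into rule conditions before anything is evaluated. A rough Python sketch of that expansion (the substitution mechanics here are a simplified guess, not Falco's actual loader):

```python
import re

# Simplified model of Falco's rule loading: macro and list names are
# textually expanded into rule conditions before evaluation.
LISTS = {"shell_binaries": "bash, sh, zsh, csh, tcsh, ksh, dash"}
MACROS = {
    "container": "container.id != host",
    "shell_procs": "proc.name in (shell_binaries)",
}

def expand(condition):
    # Substitute whole-word macro names (but not field prefixes like
    # "container.id"), then splice list items into place.
    for name, body in MACROS.items():
        condition = re.sub(rf"\b{name}\b(?!\.)", f"({body})", condition)
    for name, items in LISTS.items():
        condition = re.sub(rf"\b{name}\b", items, condition)
    return condition

expand("spawned_process and container and shell_procs")
# -> spawned_process and (container.id != host)
#    and (proc.name in (bash, sh, zsh, csh, tcsh, ksh, dash))
```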

Real-World Detection Examples

Detect Reverse Shells

- rule: Reverse shell detected
  desc: Detect reverse shells via common patterns
  condition: >
    spawned_process and
    container and
    ((proc.name = bash and proc.args contains ">&" and proc.args contains "/dev/tcp") or
     (proc.name = nc and proc.args contains "-e") or
     (proc.name = python and proc.args contains "socket" and proc.args contains "subprocess"))
  output: >
    Reverse shell detected (user=%user.name command=%proc.cmdline container=%container.name)
  priority: CRITICAL
  tags: [mitre_execution, reverse_shell]

Detect Kubernetes Secret Access

- rule: K8s secret accessed from unexpected namespace
  desc: Detect when secrets are read from non-standard locations
  condition: >
    open_read and
    fd.name startswith "/var/run/secrets/kubernetes.io" and
    not k8s.ns.name in (kube-system, monitoring, vault)
  output: >
    K8s secret accessed (file=%fd.name pod=%k8s.pod.name namespace=%k8s.ns.name)
  priority: WARNING

Detect Package Installation at Runtime

- rule: Package manager in container
  desc: Detect package installations in running containers
  condition: >
    spawned_process and
    container and
    proc.name in (apt, apt-get, yum, dnf, apk, pip, npm) and
    not container.image.repository in (allowed_builder_images)
  output: >
    Package manager run in container
    (command=%proc.cmdline container=%container.name image=%container.image.repository)
  priority: ERROR
  tags: [mitre_persistence, package_install]

Falcosidekick: Alert Routing

Falco outputs alerts to stdout by default. Falcosidekick routes alerts to various destinations:

falcosidekick:
  enabled: true
  config:
    slack:
      webhookurl: "https://hooks.slack.com/services/xxx"
      minimumpriority: warning

    prometheus:
      enabled: true

    elasticsearch:
      hostport: "elasticsearch.logging:9200"
      index: "falco"

    alertmanager:
      hostport: "http://alertmanager.monitoring:9093"
      minimumpriority: critical

    webhook:
      address: "http://security-responder.security:8080/falco"

This sends:

  • Warnings and above to Slack
  • All alerts as Prometheus metrics
  • All alerts to Elasticsearch for searching
  • Critical alerts to Alertmanager for paging
  • Everything to a custom webhook for automated response
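The tiering comes from each destination's minimumpriority: an alert fans out to every destination whose threshold it meets. A small sketch of that logic (the thresholds follow the bullets above; the function is a hypothetical illustration, not Falcosidekick's actual code):

```python
# Falco priorities from lowest to highest severity.
PRIORITIES = ["debug", "informational", "notice", "warning",
              "error", "critical", "alert", "emergency"]

# minimumpriority per destination ("debug" means "send everything").
DESTINATIONS = {
    "slack": "warning",
    "prometheus": "debug",
    "elasticsearch": "debug",
    "alertmanager": "critical",
    "webhook": "debug",
}

def route(alert_priority):
    """Destinations that should receive an alert of this priority."""
    level = PRIORITIES.index(alert_priority)
    return sorted(dest for dest, minimum in DESTINATIONS.items()
                  if level >= PRIORITIES.index(minimum))

route("error")     # everything except alertmanager
route("critical")  # everything, including the pager
```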

Integration with Prometheus

Falcosidekick exposes Prometheus metrics:

# ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: falco-sidekick
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: falcosidekick
  endpoints:
    - port: http

Create alerts based on Falco detections:

# PrometheusRule
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: falco-alerts
spec:
  groups:
    - name: falco
      rules:
        - alert: FalcoCriticalAlert
          expr: increase(falco_events{priority="Critical"}[5m]) > 0
          for: 0m
          labels:
            severity: critical
          annotations:
            summary: "Falco critical security event detected"
            description: "{{ $labels.rule }} triggered in {{ $labels.k8s_ns_name }}/{{ $labels.k8s_pod_name }}"

Reducing Noise

Default rules can be noisy. Tune them:

Disable Rules

# values.yaml
falco:
  rules_file:
    - /etc/falco/falco_rules.yaml
    - /etc/falco/falco_rules.local.yaml  # Overrides
    - /etc/falco/rules.d  # Custom rules

customRules:
  falco_rules.local.yaml: |-
    # Disable noisy rule
    - rule: Terminal shell in container
      enabled: false

    # Modify existing rule to exclude certain containers
    - rule: Read sensitive file untrusted
      condition: >
        open_read and sensitive_files and
        not trusted_containers and
        not container.image.repository in (my-trusted-app)

Tune Macros

customRules:
  falco_rules.local.yaml: |-
    # Add to trusted containers
    - macro: trusted_containers
      condition: >
        (container.image.repository in (
          my-company/trusted-app,
          my-company/another-app
        ))

    # Exclude namespaces from monitoring
    # Exclude namespaces from monitoring
    - macro: user_namespace
      condition: >
        not k8s.ns.name in (kube-system, falco, monitoring)

My Production Configuration

driver:
  kind: ebpf
  ebpf:
    hostNetwork: true

falco:
  grpc:
    enabled: true
  grpc_output:
    enabled: true
  json_output: true
  log_level: info

  rules_file:
    - /etc/falco/falco_rules.yaml
    - /etc/falco/falco_rules.local.yaml
    - /etc/falco/rules.d

falcosidekick:
  enabled: true
  replicaCount: 2
  config:
    slack:
      webhookurl: "${SLACK_WEBHOOK}"
      minimumpriority: error

    prometheus:
      enabled: true

    alertmanager:
      hostport: "http://alertmanager.monitoring:9093"
      minimumpriority: critical

customRules:
  rules-custom.yaml: |-
    # Our application-specific rules
    - rule: Unexpected outbound connection from backend
      desc: Backend pods should only connect to known services
      condition: >
        outbound and
        k8s.pod.label.app = "backend" and
        not fd.sip.name in (postgres.db, redis.cache, api.internal)
      output: >
        Backend made unexpected outbound connection
        (pod=%k8s.pod.name dest=%fd.sip:%fd.sport)
      priority: WARNING

  falco_rules.local.yaml: |-
    # Tune default rules for our environment
    - macro: user_known_write_etc_conditions
      condition: >
        (container.image.repository = "my-company/config-manager")

Key decisions:

  • eBPF driver — Better performance than kernel module
  • JSON output — Easier parsing for downstream tools
  • Tiered alerting — Errors to Slack, Critical to PagerDuty
  • Custom rules — Application-aware detection

Response Automation

Falco detects; you decide what to do. Example automated responses:

# security-responder deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: security-responder
spec:
  template:
    spec:
      containers:
        - name: responder
          image: my-company/security-responder
          env:
            - name: SLACK_WEBHOOK
              valueFrom:
                secretKeyRef:
                  name: security-responder
                  key: slack-webhook

Response script example:

import os

import requests
from flask import Flask, request
from kubernetes import client, config

app = Flask(__name__)
config.load_incluster_config()  # the responder runs inside the cluster
v1 = client.CoreV1Api()

def send_slack(text):
    requests.post(os.environ['SLACK_WEBHOOK'], json={'text': text})

@app.route('/falco', methods=['POST'])
def handle_falco_alert():
    alert = request.json

    if alert['priority'] == 'Critical':
        # Kill the pod immediately
        pod = alert['output_fields']['k8s.pod.name']
        namespace = alert['output_fields']['k8s.ns.name']

        # Delete the pod (its Deployment will recreate it)
        v1.delete_namespaced_pod(pod, namespace)

        # Notify
        send_slack(f"Killed pod {namespace}/{pod} due to: {alert['rule']}")

    return 'ok'
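For reference, a representative (abbreviated, hand-written) Falco JSON alert as the webhook receives it — the handler depends on the rule, priority, and output_fields keys:

```python
# Hand-crafted example of Falco's JSON output shape (the pod and namespace
# names are made up); these are the keys the responder reads.
sample_alert = {
    "rule": "Reverse shell detected",
    "priority": "Critical",
    "time": "2024-05-01T12:34:56.789012345Z",
    "output": "Reverse shell detected (user=root command=bash -i container=api)",
    "output_fields": {
        "k8s.pod.name": "api-server-7d9f8",
        "k8s.ns.name": "production",
        "proc.cmdline": "bash -i",
    },
}

# The same lookups the handler performs before deleting the pod:
pod = sample_alert["output_fields"]["k8s.pod.name"]
namespace = sample_alert["output_fields"]["k8s.ns.name"]
is_critical = sample_alert["priority"] == "Critical"
```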

Why This Matters

You can’t secure what you can’t see.

Static analysis catches known vulnerabilities. Admission control enforces policies. But runtime security sees actual behavior — what’s really happening in your cluster, right now.

When something unexpected happens:

  • Without Falco: You don’t know until the damage is done
  • With Falco: You see it as it happens

This is understanding applied to security. Not hoping your prevention works, but knowing what’s actually happening and responding immediately.


Prevention is important, but detection is essential. Falco gives you eyes inside your containers — seeing what they do, not just what they should do.