libCRS — CRS Communication Library

Overview

libCRS is a Python CLI library installed in every CRS container. It provides a uniform interface for CRS containers to interact with the OSS-CRS infrastructure — submitting artifacts (seeds, PoVs, bug candidates), sharing files between containers within a CRS, managing build outputs, and resolving service endpoints.

By abstracting these operations behind a single CLI, libCRS allows CRS developers to write infrastructure-agnostic code: the same libCRS commands work regardless of whether the CRS runs locally via Docker Compose or (in the future) on a cloud deployment.

Installation

libCRS is installed inside CRS container images during the Docker build phase. The provided install.sh script handles the full setup:

# Inside a Dockerfile
COPY libCRS /opt/libCRS
RUN /opt/libCRS/install.sh

install.sh accepts flags to control what gets installed:

Flag        Installs
(no flag)   CLI + Python library (same as --all)
--all       CLI + Python library
--cli       CLI entry point only (/usr/local/bin/libCRS)
--python    Importable Python library only (import libcrs)

Flags can be combined (e.g., --cli --python is equivalent to --all).

# CLI only (e.g. containers that only shell out to libCRS)
RUN /opt/libCRS/install.sh --cli

# Python library only (e.g. containers that import libcrs)
RUN /opt/libCRS/install.sh --python

This installs:

  • uv (Python package manager)
  • rsync (used for file operations)
  • The libCRS CLI tool, available at /usr/local/bin/libCRS (when --cli or --all)
  • The libcrs Python package into the system Python (when --python or --all)

Dependencies

  • Python >= 3.10
  • watchdog >= 6.0.0 (filesystem event monitoring for register-submit-dir)
  • requests >= 2.28.0 (HTTP client for builder sidecar communication)
  • rsync (installed automatically by install.sh)

Environment Variables

libCRS relies on several environment variables injected by CRS Compose at container startup:

Variable                  Description
OSS_CRS_RUN_ENV_TYPE      Execution environment type (local)
OSS_CRS_NAME              Name of the CRS (used for network domain resolution and metadata)
OSS_CRS_BUILD_OUT_DIR     Shared filesystem path for build outputs
OSS_CRS_SUBMIT_DIR        Shared filesystem path for submitted artifacts (seeds, PoVs, etc.)
OSS_CRS_SHARED_DIR        Shared filesystem path for inter-container file sharing within a CRS
OSS_CRS_LOG_DIR           Writable filesystem path for persisting CRS agent/internal logs to the host
OSS_CRS_FETCH_DIR         Read-only filesystem path for fetching inter-CRS data and bootup data (set on run containers, and on build-target builder containers when directed inputs are provided)
OSS_CRS_REBUILD_OUT_DIR   Shared filesystem path for rebuild artifacts (written by builder sidecar, read by CRS modules)
BUILDER_MODULE            Builder sidecar service name for DNS resolution (framework-injected, defaults to builder-sidecar)
OSS_CRS_FUZZ_PROJ         Read-only mount containing the fuzz project directory (set on all CRS containers)
OSS_CRS_TARGET_SOURCE     Read-only mount containing the target source directory (set on all CRS containers)
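As a minimal sketch (not part of the libCRS API), CRS code can read this environment at startup; the choice of which variables are treated as required, and the helper name load_crs_env, are assumptions of this sketch. Only the BUILDER_MODULE default of builder-sidecar comes from the table above.

```python
import os

def load_crs_env(env=os.environ):
    """Sketch: collect the CRS Compose environment into one dict.

    Treating these four OSS_CRS_* paths as required is an assumption of
    this sketch; BUILDER_MODULE's documented default is "builder-sidecar".
    """
    required = [
        "OSS_CRS_NAME",
        "OSS_CRS_BUILD_OUT_DIR",
        "OSS_CRS_SUBMIT_DIR",
        "OSS_CRS_SHARED_DIR",
    ]
    missing = [k for k in required if k not in env]
    if missing:
        raise RuntimeError(f"missing CRS env vars: {missing}")
    return {
        "name": env["OSS_CRS_NAME"],
        "build_out": env["OSS_CRS_BUILD_OUT_DIR"],
        "submit": env["OSS_CRS_SUBMIT_DIR"],
        "shared": env["OSS_CRS_SHARED_DIR"],
        "builder_module": env.get("BUILDER_MODULE", "builder-sidecar"),
    }
```

Failing fast on missing variables makes misconfigured containers visible at startup rather than at first use.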

Architecture

┌──────────────────────────────────────────────────┐
│ CRS Container                                    │
│                                                  │
│   CRS Code ──▶ libCRS CLI ──▶ libCRS Library     │
│                                     │            │
│                                     ▼            │
│                               ┌──────────┐       │
│                               │ CRSUtils │       │
│                               └────┬─────┘       │
│                                    │             │
└────────────────────────────────────┼─────────────┘
                                     │
            ┌────────────┬───────────┼────────────┬────────────┐
            ▼            ▼           ▼            ▼            ▼
      ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
      │ Build    │ │ Submit / │ │ Shared   │ │ Log      │ │ Network  │
      │ Output   │ │ Fetch    │ │ FS       │ │ Dir      │ │ (DNS)    │
      └──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘

libCRS uses the strategy pattern via the abstract CRSUtils base class. Currently, LocalCRSUtils implements all operations for local Docker Compose deployments. An AzureCRSUtils implementation is planned to support Azure-based deployments (e.g., using Azure Blob Storage for shared filesystems and Azure Container Instances for CRS execution). New deployment backends can be added by implementing the CRSUtils interface without changing the CLI or any CRS code.

Data Exchange Flow

CRS containers never communicate directly. All inter-CRS data flows through a two-tier filesystem managed by the exchange sidecar:

CRS-A Container                                 CRS-B Container
┌──────────────────┐                            ┌──────────────────┐
│                  │                            │                  │
│  SubmitHelper    │                            │  FetchHelper     │
│  (watchdog +     │                            │  (poll every 5s) │
│   MD5 dedup +    │                            │                  │
│   batch flush)   │                            │  InfraClient     │
│        │         │                            │   .fetch_new()   │
│        ▼         │                            │        ▲         │
│  SUBMIT_DIR/     │     Exchange Sidecar       │  FETCH_DIR/      │
│  ├─ povs/        │     (oss-crs-infra)        │  ├─ povs/        │
│  ├─ seeds/       │   ┌──────────────────┐     │  ├─ seeds/       │
│  ├─ patches/     │──▶│  EXCHANGE_DIR/   │────▶│  ├─ patches/     │
│  └─ bug-         │   │  (shared global) │     │  └─ bug-         │
│     candidates/  │   └──────────────────┘     │     candidates/  │
│                  │                            │                  │
│  (per-CRS,       │                            │  (per-CRS,       │
│   write-only)    │                            │   read-only)     │
└──────────────────┘                            └──────────────────┘
  • SUBMIT_DIR — Per-CRS write area. SubmitHelper watches a local directory for new files, deduplicates by MD5 hash, batches them (100 files or 10 seconds), and copies to SUBMIT_DIR/<type>/ with hash-based filenames.
  • EXCHANGE_DIR — Global shared area managed by the exchange sidecar. The sidecar copies files from each CRS's SUBMIT_DIR into EXCHANGE_DIR and distributes them to other CRSs' FETCH_DIRs.
  • FETCH_DIR — Per-CRS read-only area. FetchHelper polls every 5 seconds via InfraClient, deduplicates by filename (hash-based names from submit provide natural content dedup), and copies new files to the local directory.
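The hash-based naming that turns the fetch side's filename dedup into content dedup can be sketched as follows; submit_file is a hypothetical helper for illustration, not the actual SubmitHelper API:

```python
import hashlib
from pathlib import Path

def submit_file(src: Path, submit_dir: Path, data_type: str) -> Path:
    """Sketch: store a file under its MD5 digest in SUBMIT_DIR/<type>/.

    Two CRSs submitting identical content produce the same filename, so
    FetchHelper's name-based dedup skips duplicate content for free.
    """
    data = src.read_bytes()
    digest = hashlib.md5(data).hexdigest()
    dst_dir = submit_dir / data_type
    dst_dir.mkdir(parents=True, exist_ok=True)
    dst = dst_dir / digest
    if not dst.exists():  # duplicate content -> same name -> already there
        dst.write_bytes(data)
    return dst
```

Because the name is derived from the content, no cross-CRS coordination is needed to deduplicate.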

Data Types

All submission and fetching operations work with one of the following data types:

Type            Description
pov             Proof-of-vulnerability inputs that trigger bugs
seed            Fuzzing seed inputs
bug-candidate   Potential bug reports for verification
patch           Patches to fix discovered bugs
diff            Reference diffs for delta-mode analysis

CLI Reference

Build Output Commands

submit-build-output

Submit a build artifact from the container to the shared build output filesystem.

$ libCRS submit-build-output <src_path> <dst_path>

Argument   Description
src_path   Source file/directory path inside the container
dst_path   Destination path on the build output filesystem

Example — Submit a compiled binary after target build:

$ libCRS submit-build-output /out/fuzzer /fuzzer

download-build-output

Download a build artifact from the shared build output filesystem into the container.

$ libCRS download-build-output <src_path> <dst_path>

Argument   Description
src_path   Source path on the build output filesystem
dst_path   Destination path inside the container

Example — Retrieve a compiled binary during the run phase:

$ libCRS download-build-output /fuzzer /opt/fuzzer

download-source

Download source tree into the container from mount paths.

$ libCRS download-source <fuzz-proj|target-source> <dst_path>

Argument        Description
fuzz-proj       Copy from /OSS_CRS_FUZZ_PROJ (fuzz project mount)
target-source   Copy from /OSS_CRS_TARGET_SOURCE (target source mount)
dst_path        Destination path inside the container

Example — Copy fuzz project files for analysis:

$ libCRS download-source fuzz-proj /work/project

Example — Copy target source tree for patch generation:

$ libCRS download-source target-source /work/src

skip-build-output

Mark a build output path as intentionally skipped (creates a .skip sentinel file).

$ libCRS skip-build-output <dst_path>

Argument   Description
dst_path   Path on the build output filesystem to skip

Example — Skip an optional build output:

$ libCRS skip-build-output /optional-sanitizer-build

Directory Registration Commands

These commands wire container directories into the shared infrastructure. register-submit-dir and register-fetch-dir fork a background daemon (watchdog filesystem events for submit, periodic polling for fetch); register-shared-dir and register-log-dir simply create symlinks and return immediately.

register-submit-dir

Register a local directory for automatic submission to oss-crs-infra. A background daemon watches for new files and submits them in batches.

$ libCRS register-submit-dir [--log <log_path>] <type> <path>

Argument   Description
type       Data type: pov, seed, bug-candidate, or patch
path       Local directory to watch
--log      (Optional) Log file path for the daemon

How it works:

  1. A daemon process is forked into the background.
  2. The daemon uses watchdog to monitor the directory for new data files (dotfiles are ignored).
  3. New files are deduplicated by MD5 hash and queued for submission.
  4. Queued files are flushed in batches (every 10 seconds or when 100 files accumulate).
  5. Flushed files are copied to SUBMIT_DIR/<type_dir>/ (host-visible, per-CRS). The exchange sidecar handles copying to EXCHANGE_DIR.
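The flush policy in steps 3–4 can be sketched as a small queue. BatchQueue below is illustrative, not the real SubmitHelper; the 100-file and 10-second thresholds are the documented defaults:

```python
import time

class BatchQueue:
    """Sketch of the batch-flush policy: flush when max_files accumulate
    or max_age seconds have passed since the last flush."""

    def __init__(self, max_files=100, max_age=10.0, now=time.monotonic):
        self.max_files, self.max_age, self.now = max_files, max_age, now
        self.items = []
        self.last_flush = now()

    def add(self, item):
        self.items.append(item)

    def due(self):
        # Size-based or age-based trigger, whichever comes first.
        return (len(self.items) >= self.max_files
                or self.now() - self.last_flush >= self.max_age)

    def flush(self):
        batch, self.items = self.items, []
        self.last_flush = self.now()
        return batch
```

Batching keeps the per-file copy overhead bounded under heavy fuzzing output while capping submission latency at the age threshold.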

Example:

$ libCRS register-submit-dir seed /output/seeds
$ libCRS register-submit-dir --log /var/log/pov-submit.log pov /output/povs

register-shared-dir

Create a symlink from a local path to a shared filesystem path, enabling file sharing between containers within the same CRS.

$ libCRS register-shared-dir <local_path> <shared_fs_path>

Argument         Description
local_path       Local directory path inside the container (must not already exist)
shared_fs_path   Path on the shared filesystem visible to all containers in the CRS

How it works:

  1. Creates the shared directory on the shared filesystem if it doesn't exist.
  2. Creates a symlink from local_path to $OSS_CRS_SHARED_DIR/<shared_fs_path>.
  3. Any container in the CRS that registers the same shared_fs_path will see the same files.

Example — Share a corpus between a fuzzer and an analyzer container:

# In the fuzzer container:
$ libCRS register-shared-dir /shared-corpus corpus

# In the analyzer container:
$ libCRS register-shared-dir /shared-corpus corpus

register-log-dir

Create a symlink from a local path to a subdirectory under LOG_DIR, so that any files written to the local path are persisted on the host and available via oss-crs artifacts.

$ libCRS register-log-dir <local_path>

Argument     Description
local_path   Local directory path inside the container (must not already exist)

How it works:

  1. Creates a subdirectory named after local_path's basename under $OSS_CRS_LOG_DIR.
  2. Creates a symlink from local_path to $OSS_CRS_LOG_DIR/<basename>.
  3. Any files written to local_path are persisted on the host and visible via oss-crs artifacts.

Example — Persist agent logs from a patcher module:

# In the patcher container:
$ libCRS register-log-dir /var/log/agent
# Now writing to /var/log/agent/trace.log persists to the host

register-fetch-dir

Register a local directory for automatic fetching of shared data from other CRSs. A background daemon polls the fetch directory periodically for new files and copies them to the registered path.

$ libCRS register-fetch-dir [--log <log_path>] <type> <path>

Argument   Description
type       Data type: pov, seed, bug-candidate, patch, or diff
path       Local directory path to receive fetched data
--log      (Optional) Log file path for the daemon

How it works:

  1. A daemon process is forked into the background.
  2. The daemon performs an initial sync: copies existing files from FETCH_DIR/<type_dir>/ (bootup data + inter-CRS data) into the local path.
  3. The daemon periodically polls FETCH_DIR/<type_dir>/ for new files via InfraClient.fetch_new().
  4. Files are deduplicated by name (hash-based names from submit provide natural content dedup).
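The daemon's loop can be sketched as below. fetch_loop is not the real FetchHelper: the max_polls and injectable sleep parameters exist only so the sketch terminates; the 5-second interval is the documented default:

```python
import shutil
import time
from pathlib import Path

def fetch_loop(fetch_dir: Path, local_dir: Path,
               interval=5.0, max_polls=None, sleep=time.sleep):
    """Sketch of the fetch daemon: initial sync, then periodic polls that
    copy only filenames not seen before (name-based dedup)."""
    local_dir.mkdir(parents=True, exist_ok=True)
    seen = {p.name for p in local_dir.iterdir()}  # initial sync baseline
    polls = 0
    while max_polls is None or polls < max_polls:
        for src in sorted(fetch_dir.iterdir()):
            if src.is_file() and src.name not in seen:
                shutil.copy2(src, local_dir / src.name)
                seen.add(src.name)
        polls += 1
        if max_polls is None or polls < max_polls:
            sleep(interval)
    return seen
```

Because submit-side filenames are content hashes, skipping already-seen names is equivalent to skipping already-seen content.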

Example:

$ libCRS register-fetch-dir pov /shared-povs
$ libCRS register-fetch-dir --log /var/log/fetch-seeds.log seed /shared-seeds

Manual Data Operations

submit

Manually submit a single file to oss-crs-infra.

$ libCRS submit <type> <file_path>

Argument    Description
type        Data type: pov, seed, bug-candidate, or patch
file_path   Path to the file to submit

Example:

$ libCRS submit pov /tmp/crash-input
$ libCRS submit seed /tmp/interesting-input
$ libCRS submit bug-candidate /tmp/bug-report

fetch

Fetch shared data from other CRSs (and bootup data) to a local directory. Returns a list of newly downloaded file names (one per line). Files already present in the destination are skipped.

$ libCRS fetch <type> <dst_dir_path>

Argument       Description
type           Data type: pov, seed, bug-candidate, patch, or diff
dst_dir_path   Local directory to download files into

How it works:

  1. Scans FETCH_DIR/<type_dir>/ for all available data (bootup data + inter-CRS data).
  2. Copies only files not already present in the destination directory.
  3. Returns the list of newly copied file names.

Example:

$ libCRS fetch seed /tmp/shared-seeds
$ libCRS fetch pov /tmp/shared-povs
$ libCRS fetch diff /tmp/ref-diffs

Network Commands

get-service-domain

Resolve the Docker network domain name for a service within the CRS. Returns the domain string and verifies it via DNS resolution.

$ libCRS get-service-domain <service_name>

Argument       Description
service_name   Name of the service (as defined in crs.yaml modules)

The returned domain follows the pattern <service_name>.<crs_name>.

Example:

$ libCRS get-service-domain my-analyzer
# Output: my-analyzer.my-crs
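A sketch of the documented behavior (construct <service_name>.<crs_name>, then verify it resolves); the verify flag is an addition of this sketch, not a real option:

```python
import socket

def get_service_domain(service_name: str, crs_name: str,
                       verify: bool = True) -> str:
    """Sketch of get-service-domain: build the domain and check DNS.

    Raises socket.gaierror if the name does not resolve.
    """
    domain = f"{service_name}.{crs_name}"
    if verify:
        socket.gethostbyname(domain)  # DNS verification step
    return domain
```

The verification step catches typos in service names early, before a long-running connection attempt fails.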

Builder Sidecar Commands

These commands communicate with the framework-injected builder and runner sidecars to apply patches, rebuild, run PoVs, and run tests. The sidecars are always available during the run phase — CRS developers do not need to declare them in crs.yaml.

The BUILDER_MODULE environment variable (framework-injected as builder-sidecar) is used for DNS resolution. The --builder-name flag is optional and only needed to override the auto-detected builder configuration.

apply-patch-build

Apply a unified diff patch and rebuild in an ephemeral container. Sends the patch to the builder sidecar's /build endpoint, polls until completion, and writes results to the response directory.

$ libCRS apply-patch-build <patch_path> <response_dir> [--builder-name <name>] [--rebuild-id <id>]

Argument         Description
patch_path       Path to the unified diff file
response_dir     Directory to receive build results
--builder-name   (Optional) Builder config name (e.g., default-build). Auto-detected if omitted.
--rebuild-id     (Optional) Rebuild ID (auto-increments if omitted)

The command exits with the build's exit code (0 = success, non-zero = failure). The response directory contains:

  • retcode — Exit code (0 = success)
  • rebuild_id — The rebuild identifier for use with run-pov
  • stdout.log / stderr.log — Build output
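A caller can consume the response directory with a few file reads. read_build_result below is a hypothetical helper built only on the documented file names:

```python
from pathlib import Path

def read_build_result(response_dir: str) -> dict:
    """Sketch: parse the apply-patch-build response directory
    (retcode, rebuild_id, stdout.log, stderr.log)."""
    d = Path(response_dir)
    return {
        "retcode": int((d / "retcode").read_text().strip()),
        "rebuild_id": (d / "rebuild_id").read_text().strip(),
        "stdout": (d / "stdout.log").read_text(),
        "stderr": (d / "stderr.log").read_text(),
    }
```

The rebuild_id field is what a patcher passes to run-pov to test the patched build.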

Example:

$ libCRS apply-patch-build /tmp/fix.diff /tmp/build-result
$ cat /tmp/build-result/rebuild_id
2

run-pov

Run a PoV (proof-of-vulnerability) binary against a specific rebuild's output. Sends the PoV to the runner sidecar's /run-pov endpoint.

$ libCRS run-pov <pov_path> <response_dir> --harness <name> --rebuild-id <id>

Argument       Description
pov_path       Path to the PoV binary file
response_dir   Directory to receive PoV results
--harness      (Required) Harness binary name in /out/
--rebuild-id   (Required) Rebuild ID from a prior apply-patch-build call

The response directory contains:

  • retcode — Exit code (0 = no crash / fix works, non-zero = still crashes)
  • stdout.log / stderr.log — PoV execution output

Example:

$ libCRS run-pov /tmp/crash-input /tmp/pov-result \
    --harness fuzz_target --rebuild-id 2

apply-patch-test

Apply a unified diff patch and run the project's bundled test.sh in an ephemeral container. Sends the patch to the builder sidecar's /test endpoint.

$ libCRS apply-patch-test <patch_path> <response_dir>

Argument       Description
patch_path     Path to the unified diff file
response_dir   Directory to receive test results

The response directory contains:

  • retcode — Exit code (0 = tests pass, non-zero = failure)
  • stdout.log / stderr.log — Test output

Example:

$ libCRS apply-patch-test /tmp/fix.diff /tmp/test-result

apply-patch-test contract notes:

  • test.sh is resolved by the builder sidecar (checked at /src/run_tests.sh, /src/test.sh, $OSS_CRS_PROJ_PATH/test.sh).
  • If no test script is found, the sidecar returns a skipped-success result (retcode=0) by contract.

Typical Usage in a CRS

During Target Build Phase

#!/bin/bash
# build.sh — executed during oss-crs build-target

# Compile the target with custom instrumentation
cd /src && make CC=afl-clang-fast

# Submit the compiled binary
libCRS submit-build-output /src/target /target

# If an optional build is not needed, skip it
libCRS skip-build-output /optional-target

During Run Phase

#!/bin/bash
# run.sh — executed during oss-crs run

# Retrieve build outputs
libCRS download-build-output /target /opt/target

# Set up shared directories for inter-container communication
libCRS register-shared-dir /shared-corpus corpus

# Register directories for automatic submission to infra
libCRS register-submit-dir seed /output/seeds &
libCRS register-submit-dir pov /output/povs &
libCRS register-submit-dir bug-candidate /output/bugs &

# Resolve service endpoints
ANALYZER_HOST=$(libCRS get-service-domain analyzer)

# Start the fuzzer
/opt/fuzzer --target /opt/target --output /output --seeds /shared-corpus

During Run Phase (Builder Sidecar / Patcher)

#!/bin/bash
# run-patcher.sh — executed in a patcher module
# Builder/runner sidecars are injected by the framework (BUILDER_MODULE env var is set automatically)

# Generate a patch (your CRS logic)
generate_patch > /tmp/patch.diff

# Apply the patch and rebuild in an ephemeral container
libCRS apply-patch-build /tmp/patch.diff /tmp/build-result
REBUILD_ID=$(cat /tmp/build-result/rebuild_id)

# Run PoV against the patched build
libCRS run-pov /tmp/crash-input /tmp/pov-result \
    --harness fuzz_target --rebuild-id "$REBUILD_ID"

# Run the project's test suite
libCRS apply-patch-test /tmp/patch.diff /tmp/test-result

# If all pass, submit the patch
libCRS submit patch /tmp/patch.diff

Implementation Status

Feature                   Status          Notes
submit-build-output       ✅ Implemented  Uses rsync for file copying
download-build-output     ✅ Implemented  Uses rsync for file copying
download-source           ✅ Implemented  Copies from /OSS_CRS_FUZZ_PROJ or /OSS_CRS_TARGET_SOURCE mounts
skip-build-output         ✅ Implemented  Creates .skip sentinel file
register-submit-dir       ✅ Implemented  Daemon with watchdog + batch submission
register-shared-dir       ✅ Implemented  Symlink-based sharing
register-log-dir          ✅ Implemented  Symlink-based log persistence to host
submit                    ✅ Implemented  Single-file submission
get-service-domain        ✅ Implemented  DNS-verified domain resolution
register-fetch-dir        ✅ Implemented  Daemon with periodic polling of FETCH_DIR via InfraClient
apply-patch-build         ✅ Implemented  Ephemeral rebuild via builder sidecar /build endpoint
run-pov                   ✅ Implemented  PoV reproduction via runner sidecar /run-pov endpoint
apply-patch-test          ✅ Implemented  Patch + test.sh via builder sidecar /test endpoint
fetch                     ✅ Implemented  One-shot fetch from FETCH_DIR via InfraClient
AzureCRSUtils             📝 Planned      Azure deployment backend for CRSUtils
InfraClient integration   ✅ Implemented  Exchange sidecar copies from SUBMIT_DIR to EXCHANGE_DIR; InfraClient fetches from FETCH_DIR