Sinks

Sinks are plugins that receive Records and push them to destination systems. Each Record contains an Entity (with a flat properties map) and a list of Edges representing relationships such as ownership and lineage.
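
The shape of a Record can be pictured roughly as follows. This is a minimal illustrative sketch; the field names (`urn`, `type`, `relation`, etc.) are assumptions, not the exact schema the tool uses internally:

```python
from dataclasses import dataclass

# Hypothetical sketch of the Record shape described above;
# the actual field names and types may differ.
@dataclass
class Entity:
    urn: str
    type: str
    properties: dict  # flat key/value properties map

@dataclass
class Edge:
    source: str
    target: str
    relation: str  # e.g. "ownership" or "lineage"

@dataclass
class Record:
    entity: Entity
    edges: list

record = Record(
    entity=Entity(urn="urn:table:orders", type="table", properties={"rows": 100}),
    edges=[Edge(source="urn:table:orders", target="urn:user:jane", relation="ownership")],
)
```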

To use a sink, add a `sinks` block to your recipe:

```yaml
sinks:
  - name: compass
    config:
      host: https://compass.example.com
```

Supported Sinks

| Sink | Description | Output |
|------|-------------|--------|
| `compass` | Send entities and edges to Compass | Compass API (HTTP) |
| `kafka` | Publish entity as protobuf to a Kafka topic | Kafka topic |
| `file` | Write records to a local file | NDJSON or YAML file |
| `console` | Print records to stdout | Standard output |
| `http` | Send entity JSON to any HTTP endpoint | HTTP API |
| `stencil` | Register table schemas in Stencil | Stencil API (HTTP) |
| `gcs` | Write records as NDJSON to Google Cloud Storage | GCS bucket |
| `s3` | Write records as NDJSON to Amazon S3 or S3-compatible storage | S3 bucket |
| `azure_blob` | Write records as NDJSON to Azure Blob Storage | Azure container |

Compass

Sends each Record to Compass via HTTP. The entity is upserted with its properties, and all edges are upserted uniformly via the UpsertEdge endpoint.

```yaml
sinks:
  - name: compass
    config:
      host: https://compass.example.com
      headers:
        Compass-User-UUID: meteor@raystack.io
```

| Key | Description | Required |
|-----|-------------|----------|
| `host` | Compass service hostname | Yes |
| `headers` | Additional HTTP headers (comma-separated values for multiple) | No |
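
Conceptually, the sink issues one upsert for the entity followed by one per edge. The sequence can be sketched as below; the endpoint paths and payload shapes here are illustrative assumptions, not Compass's actual API:

```python
def build_upsert_calls(host: str, entity: dict, edges: list) -> list:
    """Illustrative only: list the HTTP calls the sink would make.
    Paths ("/assets", "/edges") and body shapes are hypothetical."""
    calls = [("PUT", f"{host}/assets", {"asset": entity})]
    for edge in edges:
        # each edge goes through the edge-upsert endpoint uniformly
        calls.append(("PUT", f"{host}/edges", edge))
    return calls

calls = build_upsert_calls(
    "https://compass.example.com",
    {"urn": "urn:table:orders", "type": "table"},
    [{"source": "urn:table:orders", "target": "urn:user:jane", "relation": "ownership"}],
)
```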

Kafka

Serializes the entity as a protobuf message and publishes it to a Kafka topic. The optional key_path extracts a field from the entity to use as the Kafka message key.

```yaml
sinks:
  - name: kafka
    config:
      brokers: localhost:9092
      topic: metadata-topic
      key_path: .Urn
```

| Key | Description | Required |
|-----|-------------|----------|
| `brokers` | Comma-separated list of Kafka broker addresses | Yes |
| `topic` | Kafka topic to publish messages to | Yes |
| `key_path` | Field path on the entity proto to use as the message key (e.g. `.Urn`) | No |
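
The `key_path` lookup can be pictured as a simple dotted-path traversal over the entity. A minimal sketch, using a dict as a stand-in for the protobuf message:

```python
def extract_key(entity: dict, key_path: str) -> str:
    """Walk a dotted path like ".Urn" over a dict-shaped entity.
    Illustrative stand-in for the proto field lookup."""
    value = entity
    for part in key_path.strip(".").split("."):
        value = value[part]
    return str(value)

entity = {"Urn": "urn:table:orders", "Name": "orders"}
extract_key(entity, ".Urn")  # → "urn:table:orders"
```

Records that share the same key land on the same Kafka partition, so keying by URN keeps updates for one entity ordered.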

File

Writes records to a local file in NDJSON or YAML format. Each record is serialized as JSON (entity + edges).

```yaml
sinks:
  - name: file
    config:
      path: ./output.ndjson
      format: ndjson
      overwrite: true
```

| Key | Description | Required |
|-----|-------------|----------|
| `path` | Output file path | Yes |
| `format` | Output format: `ndjson` or `yaml` | Yes |
| `overwrite` | Overwrite existing file (default `true`) | No |
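
NDJSON output is simply one JSON document per line. A minimal sketch of what the sink writes, with the record shape simplified:

```python
import json

records = [
    {"entity": {"urn": "urn:table:orders"}, "edges": []},
    {"entity": {"urn": "urn:table:users"}, "edges": []},
]

# Each record becomes one JSON line; opening in "w" mode mirrors
# the overwrite-by-default behaviour.
with open("output.ndjson", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```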

Console

Prints each record as JSON to stdout. Useful for debugging recipes.

```yaml
sinks:
  - name: console
```

No configuration required.

HTTP

Sends the entity as JSON to an arbitrary HTTP endpoint. The URL supports Go template variables from the entity (e.g. {{ .Type }}, {{ .Urn }}). An optional Tengo script can transform the payload before sending.

```yaml
sinks:
  - name: http
    config:
      url: https://example.com/metadata/{{ .Type }}
      method: PUT
      success_code: 200
      headers:
        Authorization: Bearer token
```

| Key | Description | Required |
|-----|-------------|----------|
| `url` | Target URL (supports Go template variables) | Yes |
| `method` | HTTP method (`GET`, `POST`, `PUT`, `PATCH`, etc.) | Yes |
| `success_code` | Expected HTTP status code for success (default `200`) | No |
| `headers` | Additional HTTP headers | No |
| `script.engine` | Script engine for payload transformation (`tengo`) | No |
| `script.source` | Tengo script source code | No |
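
The template expansion in `url` can be mimicked with a small substitution. The sink itself uses Go templates; this Python stand-in is only to illustrate how entity fields flow into the URL:

```python
import re

def render_url(template: str, entity: dict) -> str:
    """Replace {{ .Field }} placeholders with entity values —
    an illustrative stand-in for Go template rendering."""
    return re.sub(
        r"\{\{\s*\.(\w+)\s*\}\}",
        lambda m: str(entity[m.group(1)]),
        template,
    )

render_url("https://example.com/metadata/{{ .Type }}", {"Type": "table"})
# → "https://example.com/metadata/table"
```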

Stencil

Registers table column schemas in Stencil as JSON Schema or Avro. Only entities with a columns field in their properties are processed. Column types from BigQuery and PostgreSQL are automatically mapped to the target schema format.

```yaml
sinks:
  - name: stencil
    config:
      host: https://stencil.example.com
      namespace_id: myNamespace
      format: json
```

| Key | Description | Required |
|-----|-------------|----------|
| `host` | Stencil service hostname | Yes |
| `namespace_id` | Stencil namespace to register schemas under | Yes |
| `format` | Schema format: `json` or `avro` (default `json`) | No |
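
The conversion can be pictured as mapping each column's source type onto a JSON Schema type. A hedged sketch — the type table below is a plausible subset for illustration, not the sink's full BigQuery/PostgreSQL mapping:

```python
# Hypothetical subset of the source-type → JSON Schema type mapping.
TYPE_MAP = {
    "STRING": "string", "varchar": "string",
    "INT64": "number", "bigint": "number",
    "BOOL": "boolean", "boolean": "boolean",
}

def columns_to_json_schema(table_urn: str, columns: list) -> dict:
    """Build a JSON Schema document from a table's columns field."""
    return {
        "$id": table_urn,
        "type": "object",
        "properties": {
            c["name"]: {"type": TYPE_MAP.get(c["data_type"], "string")}
            for c in columns
        },
    }

schema = columns_to_json_schema(
    "urn:table:orders",
    [{"name": "id", "data_type": "INT64"}, {"name": "status", "data_type": "STRING"}],
)
```

Entities without a `columns` property would produce no schema and are skipped.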

GCS

Writes records as NDJSON to a Google Cloud Storage bucket. Each record is serialized as a JSON line. The output object is named with an optional prefix and a timestamp.

```yaml
sinks:
  - name: gcs
    config:
      project_id: my-gcp-project
      url: gcs://bucket_name/target_folder
      object_prefix: github-users
      service_account_base64: <base64-encoded-service-account-key>
```

| Key | Description | Required |
|-----|-------------|----------|
| `project_id` | GCP project ID | Yes |
| `url` | GCS destination in the form `gcs://bucket/path` | Yes |
| `object_prefix` | Prefix for the output object name | No |
| `service_account_base64` | Base64-encoded service account JSON key | No* |
| `service_account_json` | Service account JSON key as a string | No* |

*One of service_account_base64 or service_account_json is required.
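
Object naming follows the "optional prefix plus timestamp" pattern described above. A sketch of a plausible naming scheme — the exact format the sink produces may differ:

```python
from datetime import datetime, timezone

def object_name(prefix: str = "") -> str:
    """Compose an output object name from an optional prefix and a
    UTC timestamp. Illustrative; the real name format may differ."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    return f"{prefix}-{stamp}.ndjson" if prefix else f"{stamp}.ndjson"

object_name("github-users")  # e.g. "github-users-20240101-120000.ndjson"
```

The timestamp keeps successive runs from overwriting each other; the same pattern applies to the S3 and Azure Blob sinks below.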

S3

Writes records as NDJSON to an Amazon S3 bucket or any S3-compatible storage (MinIO, DigitalOcean Spaces, etc.). Each record is serialized as a JSON line. The output object is named with an optional prefix and a timestamp.

```yaml
sinks:
  - name: s3
    config:
      bucket_url: s3://my-bucket/metadata
      region: us-east-1
      object_prefix: github-users
      access_key_id: AKIAIOSFODNN7EXAMPLE
      secret_access_key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

| Key | Description | Required |
|-----|-------------|----------|
| `bucket_url` | S3 destination in the form `s3://bucket/path` | Yes |
| `region` | AWS region | Yes |
| `object_prefix` | Prefix for the output object name | No |
| `access_key_id` | AWS access key ID | No* |
| `secret_access_key` | AWS secret access key | No* |
| `endpoint` | Custom S3 endpoint for S3-compatible stores | No |

*If credentials are omitted, the default AWS credential chain is used (env vars, instance profile, etc.).

Azure Blob

Writes records as NDJSON to an Azure Blob Storage container. Each record is serialized as a JSON line. The output blob is named with an optional prefix and a timestamp.

```yaml
sinks:
  - name: azure_blob
    config:
      storage_account_url: https://myaccount.blob.core.windows.net
      container_name: my-container
      object_prefix: github-users
      account_key: <account-key>
```

| Key | Description | Required |
|-----|-------------|----------|
| `storage_account_url` | Azure storage account URL | Yes |
| `container_name` | Blob container name | Yes |
| `object_prefix` | Prefix for the output blob name | No |
| `account_key` | Azure storage account key | No* |
| `connection_string` | Azure storage connection string | No* |

*One of account_key or connection_string is required.