
The .raftt File


What is a .raftt File

A .raftt file is the file in which your Raftt environment is defined. It is written in Starlark - a Python-like scripting language open-sourced by Google.

The .raftt file is expected to be committed to your project’s repo and shared among all devs. Different repo branches may contain different .raftt files, which allows the Raftt env definition to differ between branches.

Configuring the .raftt File in raftt.yml

The envDefinition field in the raftt.yml is used to configure the path to the .raftt file that defines the environment spawned by Raftt.

# An example raftt.yml that spawns an env defined in acme.raftt
envDefinition: acme.raftt

When is the .raftt File Evaluated

The output of the interpretation of a .raftt file is a set of resource definitions to-be-deployed.

On raftt up

What exactly raftt up does depends on the status of the environment for the current branch (the status of all your envs can be viewed using raftt list). The .raftt file is interpreted on raftt up only if the branch doesn’t have an existing env. If the env is running, raftt up connects to it and marks it connected, without modifying it. If the env is hibernated, raftt up wakes it up, again without re-interpreting the .raftt file.

On raftt rebuild

The .raftt file is interpreted on every raftt rebuild, and the interpretation result is a set of resources. What happens with this set depends on whether any resources were specified in the raftt rebuild command. If you didn’t specify resources, all of the existing resources are taken down and the output of the interpretation is deployed. If one or more resources were specified, any changes to these resources are applied and the rest are left untouched, even if the result of the interpretation differs from their current state.
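A typical sequence on a branch, sketched with the commands above ("web" stands in for one of your resource names):

```shell
raftt list            # view the status of your envs
raftt up              # no env for this branch yet: interpret the .raftt file and deploy
raftt rebuild         # re-interpret the .raftt file; take everything down and redeploy
raftt rebuild web     # re-interpret, but apply changes only to the "web" resource
```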

The deploy() Function

The deploy() function receives a Resources object and adds the resources to the set of resources to-be-deployed when running raftt up or rebuild.

deploy() can be called multiple times; each call adds the input resources to the previously added set. Trying to deploy a resource that shares both type and name with a previously deployed resource will result in an error. Resources of different types can share a name.


resources = load_docker_compose("./docker-compose.yml")
print(type(resources)) # Will print "Resources"
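For instance, a minimal .raftt file might import two sets of resources and deploy both (the paths are illustrative):

```python
# Sketch: deploy() calls accumulate resources from multiple sources
frontend = load_docker_compose("./frontend/docker-compose.yml")
backend = k8s_manifests("./backend/manifests")

deploy(frontend)
deploy(backend) # Errors only if a resource shares both type and name with one already deployed
```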

Data Types

Starlark Data Types

As the script in .raftt files is written in Starlark, all Starlark data types can be used in .raftt files.
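As a quick illustration (plain Starlark, no Raftt builtins), dicts, lists, comprehensions, and functions behave as they do in Python:

```python
# Core Starlark types behave like their Python counterparts
ports = {"web": 80, "metrics": 9090}

def is_privileged(port):
    return port < 1024

names = sorted([name for name in ports])
privileged = [p for p in ports.values() if is_privileged(p)]

print(names)       # ['metrics', 'web']
print(privileged)  # [80]
```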


The Resources Object

A Resources object contains several dicts, one for each supported resource type. The object types that are currently supported are pods, deployments, ingresses, services, and secrets (these are also the names of the Resources object attributes). If you need Raftt to support additional object types, join our community or contact us, and we'll be glad to look into it.

The keys in the dicts are the names of the resources, and the values are the objects that represent the resources themselves, as represented by Raftt.

When importing resource definitions (from a docker-compose file, Helm charts, or K8s manifests), the result is a Resources object.

resources = k8s_manifests("./k8s_manifests") # Can also use Docker Compose and Helm
print(type(resources)) # Will print "Resources"

# You can modify the imported resources
nginx = resources.pods["nginx"]
nginx.map_port(local=8080, remote=80)

# These are the rest of the supported object types
deployments = resources.deployments
services = resources.services
ingresses = resources.ingresses
secrets = resources.secrets

Merging Resources Objects

Multiple Resources objects can be merged using the + operator. Note that names must be unique - merging Resources objects that both contain a resource of the same type and the same name will result in an error.

k8s_resources = k8s_manifests("./k8s_manifests")
compose_resources = load_docker_compose("./path/to/compose.yml", ".")
all_resources = k8s_resources + compose_resources


The RepoWorkingTree Object

In multi-repo scenarios, you may need to access or reference a file that’s in a different repo from the current .raftt file. The handle to the other repo is an object called RepoWorkingTree.

To get this handle, call either clone_repo_branch() or get_cloned_repo(). See the Multi-repo Support section below for more details.

A subpath can be added to the RepoWorkingTree object using the + operator, to access specific files. The result can be used as the file/folder path in functions that receive one as an argument.

# Clone the repo
secondary_repo = clone_repo_branch("", "main")

# Mount the source code dir to one of the workloads
secondary_repo_mount = repo_volume(secondary_repo + "/src")
resources.deployments["web"].mount(secondary_repo_mount, dst="/app")

# Use it to build an image
resources.deployments["web"].image_builder = build_image("web", secondary_repo, dockerfile="Dockerfile")

Importing Resource Definitions

The resource definitions of the env can be imported into the .raftt file using standard formats - Docker Compose, K8s manifests, and Helm charts. Kustomize support is coming soon - if you need it now, join our community or contact us, and we'll be glad to look into it.

Import a Docker-compose File

To import resources defined in a docker-compose file, use the load_docker_compose() function. It receives the docker-compose file path and, optionally, the workdir and the docker-compose profiles (passed as a list of profiles to be used). The function returns a Resources object. Docker-compose services are converted to Kubernetes pods.

services = load_docker_compose("./path/to/compose.yml", ".", profiles=["default", "gui"])
print(type(services)) # Will print "Resources"

Import Kubernetes Manifests

To import resources defined using raw Kubernetes manifests, use the k8s_manifests() function.
It receives the path of a K8s manifest, or a directory of manifests. Note that only .yml and .yaml files are parsed.

The function returns a Resources object containing all the resources defined in the manifests.

The object types that are currently supported are pods, deployments, ingresses, services, configmaps, and secrets (these are also the names of the Resources object attributes). If you need Raftt to support additional object types, join our community or contact us, and we'll be glad to look into it.

resources_from_directory = k8s_manifests("./k8s_manifests") # Get manifests from a directory
resources_from_file = k8s_manifests("./k8s_manifests/pod.yml") # Or a specific file
print(type(resources_from_directory)) # Will print "Resources"
print(type(resources_from_file)) # Will print "Resources"

Import Helm Charts

Raftt lets you use environment definitions defined in local Helm charts using the helm_local_chart function. Raftt will run helm with the template command and then load and return the generated resources identically to the k8s_manifests function.


helm_local_chart() receives:

  • Helm release name.

  • Path to chart (relative to repo root).

  • values_files (optional) - string / array of strings of the values files to use. These can be paths to files in the repo / URLs. Same as when using helm template, when passing multiple values files, the last one will override the previous files.

  • values (optional) - a dictionary of values to be passed to Helm. Will override values defined in the files.

  • version (optional) - the version of Helm to use. Currently, 3.10.1 (default), 3.8.2, and 2.13.0 are supported. Supporting additional versions is easy - contact us if you need a different one.

Returns: A Resources object containing all the resources defined in the manifests.

resources = helm_local_chart("blah", "./helm/blah", values_files="./helm/", version="3.8.2")

resources = helm_local_chart("blah", "./helm/blah", values_files=["", "./helm/"], values={"serviceName": "serv", "replicaCount": 1, "ingressPort": 80}, version="3.8.2")

print(type(resources)) # Will print "Resources"

Image Building

The docker images used in your environment can be defined in the .raftt file and built by Raftt. To define an image you wish to build, use the build_image() function.

The return value of build_image() can be assigned to a workload to set the image as the workload’s image. This will overwrite any previous definition of the workload image.

build_image() receives:

  • The ID to be used when referring to the image, e.g. in other Dockerfiles
  • The context for building the Dockerfile.
  • dockerfile (optional, default is CONTEXT/Dockerfile) - The path of the Dockerfile. Equivalent to --file when using docker build.
  • args (optional) - the arguments passed to Dockerfile. Similar to --build-arg when using docker build.
  • target (optional) - Set the target build stage to build. Equivalent to --target when using docker build.
  • prebuild (optional) - a command to run in the dev container before the image is built. It is passed as a string or an iterable of strings, e.g.
    • prebuild="/path/to/"
    • prebuild=["python3", "-m", "compileall", "-f", ""]
workload.image_builder = build_image('web', "./docker/web/", dockerfile="./docker/web/Dockerfile", args={"VERSION_ARG": "latest"}, target="builder", prebuild=["python3", "-m", "compileall", "-f", ""])

Using Base Images

Sometimes you may wish to build images that aren’t intended to be used directly as the image of a workload, but instead as a base image for other images. To do that, call build_image() and use the ID to refer to the image in other Dockerfiles (in the FROM command).

build_image('raftt/base-python', './docker/python')
build_image('raftt/base-web', "./docker/web/base-web/", dockerfile='./docker/web/base-web/Dockerfile', args={"CONFIG_TYPE_ARG": "config.DevelopmentConfig", "USERNAME_ARG": "nobody"})

Where we could use these images like this (note the references to the image in the FROM and the COPY --from):

FROM raftt/base-python

COPY --from=raftt/base-web /some_file /some_file

Enhancing and Modifying the Resources

Setting Workload Environment Variables

The set_env_vars() function sets a Starlark dictionary as the environment variables of a workload (a Pod or a Deployment).


💡 The function overwrites the previously defined environment variables, rather than appending/overriding the new values.

pod.set_env_vars({"KEY": "VALUE"})

Setting Secrets

There are a few ways to define secrets for Raftt envs:

  • Fetch secrets from the local machine. The commands to fetch these secrets are defined in the raftt.yml (see our docs). This is because the commands should run on the local machine and the .raftt file is interpreted remotely, on the env controller.
  • Cluster secrets accessible from all envs running on the private cluster. For more information, see our docs.
  • Defining K8s secret objects by importing from a Helm chart or a K8s manifest.

In the .raftt file, the secrets can be mounted to the workloads, using volume mounts, assuming the secret is defined with the attribute outputvolume: true.

Volumes and Mounts

In the .raftt file, you can mount different kinds of volumes into the workloads. Defining these mounts provides various capabilities -

  • Mounting a repo dir (synced with the local machine)
  • Mounting a volume (named or unnamed)
  • Mounting a secret, defined in raftt.yml or as a cluster secret
    • Mounting a secret defined using a Kubernetes manifest using the .raftt file is currently not supported.

All mounts can be read-write (the default) or read-only, except for secrets, which are always read-only. Use the init_on_rebuild argument, available for volumes, to configure the volume to be re-initialized on every call to raftt rebuild <service>. This is useful, for example, if the image being run already contains node_modules and after a rebuild we want to set the volume contents to the updated modules.

You first need to create a volume of any kind (if it’s a secret from raftt.yml, it already exists) and then mount it to one or more workloads.

# Repo volume - used commonly for syncing the source code
repo_root = repo_volume()
pod_a.mount(repo_root, dst="/code", read_only=True)
deployment_b.mount(repo_root.subpath("./test_dir"), dst="/tmp/test_dir")

# Anonymous volume
anon_volume = volume() # Anonymous, since no name was provided

# Named volume
vol_foo = volume(name="foo")
deployment_b.mount(vol_foo, dst="/etc/foo", read_only=True, init_on_rebuild=True)
pod_a.mount(vol_foo, dst="/tmp/etc", read_only=False, init_on_rebuild=True)

# Assuming `test-credentials` is defined in raftt.yml with `outputvolume: true`
pod_a.mount(secret_volume("test-credentials"), dst="/tmp/test_creds", read_only=True)

Port Mapping

The map_port() method maps workload ports to the local machine. It currently supports only specific ports - no random ports or port ranges.

nginx.map_port(local=8080, remote=80)

Initializing Databases

Raftt lets you define database initializers, which are used for database seeding and by the raftt data commands (seed/dump/save/load).

Raftt currently supports three types of initializers: two native ones, for PostgreSQL and MongoDB, and a custom initializer that runs a script. For more information on data seeding, see our docs.

resources = ... # Load from docker-compose, helm, or k8s manifests
db_storage_vol = resources.named_volumes["db_storage"]
db_pod = resources.pods["db"]

db_pod.mount(db_storage_vol, dst="/data")

# Use a native PostgreSQL initializer
# API: postgres_volume_initializer(workload, dump_file_path, user?, key_provider?)
db_storage_vol.initializer = postgres_volume_initializer(workload=db_pod, dump_file_path="dev_container/dump.sql", user="postgres")

# Use a native MongoDB initializer
# API: mongodb_volume_initializer(workload, dump_file_path, key_provider?)
db_storage_vol.initializer = mongodb_volume_initializer(workload=db_pod, dump_file_path="dev_container/dump.archive")

# Use a custom initializer
# API: script_volume_initializer(workload, script, key_provider?)
db_storage_vol.initializer = script_volume_initializer(workload=db_pod, script="bash")

Directly Modifying Resources

You can modify some fields of loaded resources, as described in the example below.

More attributes will be editable in the future.

resources = ...  # Load from docker-compose, helm, or k8s manifests

# Editing metadata
## For pods
resources.pods["nginx"].metadata.annotations["foo"] = "bar"
resources.pods["nginx"].metadata.labels["baz"] = "qux"

## For deployments - both the deployment and the template
resources.deployments["web"].metadata.annotations["deployment-annotation"] = "foo"
resources.deployments["web"].metadata.labels["deployment-label"] = "bar"
resources.deployments["web"].spec.template.metadata.annotations["template-annotation"] = "baz"
resources.deployments["web"].spec.template.metadata.labels["template-label"] = "qux"

## For other resources - services, ingresses, secrets
resources.ingresses["minimal-ingress"].metadata.labels["ingress-label"] = "one"
resources.ingresses["minimal-ingress"].metadata.annotations["ingress-annotation"] = "two"
resources.services["my-service"].metadata.labels["service-label"] = "three"
resources.services["my-service"].metadata.annotations["service-annotation"] = "four"
resources.secrets["top-secret"].metadata.labels["secret-label"] = "five"
resources.secrets["top-secret"].metadata.annotations["secret-annotation"] = "six"

# Editing PodSpec and DeploymentSpec
resources.pods["nginx"].spec.hostname = "nginy"
resources.deployments["web"].spec.template.spec.hostname = "nginy-dep"
resources.deployments["web"].spec.replicas = 1 # Not supporting deploying multiple replicas, for now.
resources.pods["nginx"].spec.containers[0].name = "foo"
resources.pods["nginx"].spec.containers[0].working_dir = "/path/to/code"
resources.pods["nginx"].spec.containers[0].command = ["/bin/echo"]
resources.pods["nginx"].spec.containers[0].args = ["hello", "world"]
resources.deployments["web"].spec.template.spec.containers[0].name = "foo"
resources.deployments["web"].spec.template.spec.containers[0].working_dir = "/path/to"
resources.deployments["web"].spec.template.spec.containers[0].command = ["/bin/echo"]
resources.deployments["web"].spec.template.spec.containers[0].args = ["hello", "world"]

Defining the dev container

To define the dev container (see docs), call the function deploy_dev_container(). The dev container is a pod that is imported using any of the supported resource definition methods (docker-compose / K8s manifest / Helm).


dev = load_docker_compose('./dev-container/dev_compose.yml')
deploy_dev_container(dev.pods["dev"]) # NOTE: argument shape assumed; "dev" is a placeholder for the service name in dev_compose.yml

A dev container must be defined exactly once. When deploying another .raftt file, in multi-repo scenarios, make sure that only one repo actually defines the dev container.

For backwards compatibility, the dev container can still be defined in raftt.yml. Defining a dev container both in raftt.yml and in the .raftt file will result in an error.

Troubleshooting the .raftt File

When you write your .raftt file, as with any code, the result may differ from what you expected. To help you understand why that happens, use the raftt config debug command. Note that since the .raftt file interpretation happens in the remote env, for the raftt config debug command to work, you must have an active and connected env. To view your env status, use raftt list.

Running it gives you the dry-run results of interpreting the .raftt file: the file is fully interpreted, but instead of deploying the changes to the env, it outputs the results of all the print() calls in the file. You can use print() to inspect any of the objects in the script.

resources = k8s_manifests("./k8s_manifests/pod.yml")
print(resources.pods) # Shown when running raftt config debug

Viewing the Resources Expected to be Deployed

To view the “final result” of the .raftt file interpretation - the list of the resources that are expected to be deployed, run raftt config debug with the flag --to-be-deployed.
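For example, both forms are run with an active and connected env:

```shell
raftt config debug                   # dry-run interpretation; shows all print() output
raftt config debug --to-be-deployed  # view the list of resources expected to be deployed
```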

Multi-repo Support

Raftt lets you deploy environments whose code and definitions are held in multiple repositories.
In this context, we have two kinds of repos -

  1. The main repo - the repo from which the user runs raftt up. The .raftt file of this repo is the one interpreted. This repo is automatically live-synced to the remote env.
  2. Secondary repos - repos that are loaded when explicitly requested in the .raftt file of a previously loaded repo. Such repos don’t automatically sync to the env, and don’t even have to be cloned locally.

Whether a repo is considered main or secondary depends on the context of the specific raftt up executed - it’s not a characteristic of the repo itself.

Cloning Secondary Repositories

For all multi-repo scenarios, you must get access to the secondary repo(s) from the context of the .raftt file. To do that, first clone it using the clone_repo_branch() function. This function clones the repo into the environment, receiving a Git URL and a branch name. It returns a RepoWorkingTree object that can be used to access the repo files or to deploy the resources defined in it.

A subpath may be added to a RepoWorkingTree object using the + operator, in order to retrieve a sub-path inside the side repository. The operation results in a new RepoWorkingTree.

A repo can’t be cloned twice in the same environment with different branches, so make sure you don’t have contradicting branches in different .raftt files (see also the next section).

# Clone the repo
secondary_repo = clone_repo_branch("", "main")

# Mount the source code dir to one of the workloads
secondary_repo_mount = repo_volume(secondary_repo + "src")
resources.deployments["web"].mount(secondary_repo_mount, dst="/app")

# Use it to build an image
resources.deployments["web"].image_builder = build_image("web", secondary_repo, dockerfile="Dockerfile")

Access an Already-cloned Repo

The get_cloned_repo() function is used to retrieve a RepoWorkingTree object of a previously cloned Git repository. The function receives only the Git URL. It helps you access the same repo from different .raftt files without worrying about matching branches: the first .raftt file using the repo has to define the branch, and the others can get a handle to the cloned repo using this function.

# Get a handle to the repo, assuming it was already cloned in another .raftt file
secondary_repo = get_cloned_repo("")

# Mount the source code dir to one of the workloads
secondary_repo_mount = repo_volume(secondary_repo + "src")
resources.deployments["web"].mount(secondary_repo_mount, dst="/app")

# Use it to build an image
resources.deployments["web"].image_builder = build_image("web", secondary_repo, dockerfile="Dockerfile")

Deploy the .raftt File of a Secondary Repo

In some cases, instead of accessing the secondary repo’s files and/or folders, as described in the previous sections, you may want to deploy all the resources defined in its .raftt file. For that purpose, you can use RepoWorkingTree’s deploy() method.

This method only receives config_args as an input argument and does not return a value. It triggers the interpretation of the .raftt file defined in the secondary repo (as defined in its raftt.yml). The outcome of the interpretation is additional resources to be deployed to the env.

Note that different .raftt files are executed separately and have no access to one another’s objects.

secondary_repo = clone_repo_branch("", "main")
secondary_repo.deploy(config_args='{"profile": "default"}')

raftt sync command

As mentioned above, secondary repos do not have file syncing / hot reloading enabled by default, and they don’t even have to be cloned locally. If you want to start syncing the local state of a cloned secondary repo to the remote env, use the raftt sync command.

Branch Switching in Synced Repos

Impact on raftt rebuild commands

Switching branches in the main repo creates a new Raftt env for the new branch, or switches to one if it already exists. When switching branches in a synced secondary repo, the file changes happen in place, without changing the env. The file changes are synced to the remote repo, and depending on the changes, you might want to run raftt restart or raftt rebuild.

When running raftt rebuild, the main and secondary .raftt files are re-interpreted. When rebuilding, the branches specified in clone_repo_branch() are ignored for synced repos - the current repo state is used.

Running Raftt Commands From the Context of Synced Repo

Currently, Raftt commands (sh, logs, status, ...) may only be run from the main repo from which the original raftt up was performed.

Local Configuration

In some cases, users may want to customize the environment created by raftt up or raftt rebuild without changing the .raftt file that’s committed to the repo. This can be done either by referring to the local environment variables from the .raftt file, or by running these commands with arguments.

Accessing the Local Environment Variables

Env variables from the host are available as a dict in local.env.

env_var = local.env['MY_ENV_VAR']


Passing Arguments to Raftt Commands

You can pass arguments to the raftt up and raftt rebuild commands and access them from the .raftt file. The args can be given in the CLI using the --config-args option, or read from a file using the --config-args-file option, which receives the file path. The string value passed in the CLI or contained in the file can be accessed from the .raftt file using the local.config_args variable.

The input can be formatted in any format of your choice, but some formats have builtin parsers (see JSON/YAML Encoding/Decoding below).

A common use-case for arguments is stating which docker-compose profiles are to be deployed:

load("encoding/", "json")
# This assumes args are a "serialized" json string
input_args = json.decode(local.config_args)
profiles = input_args['compose_profiles']
deploy(load_docker_compose("./path/to/compose.yml", ".", profiles=profiles))
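On the CLI side, the matching invocation might look like this (the JSON string and file name are illustrative):

```shell
raftt up --config-args '{"compose_profiles": ["default", "gui"]}'
# or read the same string from a file:
raftt rebuild --config-args-file ./config-args.json
```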

Persistency of the Config Arguments

The .raftt file is interpreted on every raftt rebuild. Every interpretation potentially uses the config_args variable. To save the user from having to remember the config_args used for the running env, Raftt stores the last config_args used and uses the same string for future raftt rebuilds. If you want to change the config_args used, run raftt rebuild with --config-args or --config-args-file. The new args will overwrite the existing args, and will be used for the current rebuild and for future ones.

JSON/YAML Encoding/Decoding

If you wish to encode and/or decode JSON or YAML inside your .raftt file, you can do so by loading external modules using the load() function. The load function receives the module and the symbol name you wish to load. The code snippet below shows the exact syntax for loading the JSON and YAML libraries and using them.

Handling JSON and YAML can be very handy in conjunction with local.config_args.

load("encoding/", "json")
load("encoding/", "yaml")
print(json.encode({"a":1, "b": 2})) # Prints: {"a":1,"b":2}
yaml_dict = """apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
"""
print(yaml.loads(yaml_dict)) # Prints: {"apiVersion": "apps/v1", "kind": "Deployment", "metadata": {"name": "web"}}

For this functionality, Raftt uses Starlib - a community-driven project to bring a standard library to Starlark. It has additional features besides handling JSON and YAML. You can read about them and see usage documentation in their repo.

File Watching

This mechanism allows registering hooks that trigger actions when files matching given glob patterns change. The patterns must be absolute paths; relative paths will raise an error.

When registering a hook you need to specify the following:

  • One or more workloads, with glob patterns to watch for changed files - the on argument
  • One or more actions to perform when a file changes matching one of the glob patterns - the do argument

Hooks registered with more than one action will execute the commands in order of definition.

The actions.CMD constructor creates a command type that receives a command to execute (as a string or an iterable of strings) and a workload to execute the command on.

Common use-cases -

  • Auto-install dependencies by watching files like package.json or requirements.txt
  • Auto-build the code on file changes by watching the source code files, e.g. all the *.ts files in the relevant folders.

Example file

resources = k8s_manifests("./test-manifests.yml")

nginx = resources.pods["nginx"]
nginx_dep = resources.deployments["nginx-dep"]

# Trigger npm install when package files change.
# NOTE: the hook-registration call below is reconstructed - `add_hook` is a
# placeholder for the actual registration function, which was lost from this page.
package_json_glob = "/app/**/package.json"
add_hook(on=[(nginx, package_json_glob)],
         do=[actions.CMD(workload=nginx, cmd=("npm", "install"))])

# Trigger pip install when requirements.txt changes, on both workloads
pip_install = ["pip", "install", "-r", "requirements.txt"]
add_hook(on=[(nginx_dep, "/app/requirements.txt")],
         do=[actions.CMD(workload=nginx, cmd=pip_install),
             actions.CMD(workload=nginx_dep, cmd=pip_install)])

When the requirements.txt file changes on the nginx_dep workload, the command pip install -r requirements.txt is executed on the nginx pod and nginx_dep deployment.

Changing the Hooks

The hook is “controlled” by the workload that watches the glob patterns. This means that in order to apply changes to a hook, whether a change in the watchers (the on clause) or in the actions (the do clause), the “watching” workload must be rebuilt. The “affected” workload (the one on which the action is performed) only needs to be rebuilt if the action is modified, not if only the watcher was modified.

Viewing the Hook Mechanism Logs

The logs for the hooks, including the results of the actions, are not part of the regular workload logs. To view them, run raftt logs --hooks, with or without specifying a service. If you don’t specify a service, you will see the full log of the hook mechanism.

Accessing the filesystem

The fs module lets you access the filesystem from the .raftt file. This can be useful, for example, for dynamically configuring volume mounts for all the subdirectories of node_modules in your project.

The module can be imported like so:

load('', 'fs')

File object attributes

The fs.File object attributes:

  • is_dir: True if the file is a directory
  • exists: True if the file exists
  • name: The last element of the file path

File object methods

The fs.File object methods:

  • list_dir(): Returns a list of fs.File objects under the current directory; raises an error if the file is not a directory
  • read_text(): Returns the content of the file, decoded as UTF-8

Listing files under a directory

Example: print all the files under a directory

load('', 'fs')

def iter_dir(path):
    for f in fs.File(path).list_dir():
        print(f.name)

def files_in_dir(path):
    return [f for f in fs.File(path).list_dir() if not f.is_dir]

iter_dir(clone_repo_branch("", "trunk") + "./node_modules") # Accessing a dir in another repository

Reading files

Example for reading a file

load('', 'fs')

# Read file as text
print(fs.File("requirements.txt").read_text()) # NOTE: file path assumed; the original example's path was lost

# OUT:
> fire==0.4.0