The `.raftt` File

Overview

What is a `.raftt` File

A `.raftt` file is a file in which your Raftt environment is defined. It is written in Starlark - a Python-like scripting language open-sourced by Google.
The `.raftt` file is expected to be committed to your project’s repo and shared between all devs. Different repo branches may contain different `.raftt` files, which allows the Raftt env definition to differ between branches.
Configuring the `.raftt` File in `raftt.yml`

The `envDefinition` field in the `raftt.yml` is used to configure the path to the `.raftt` file that defines the environment spawned by Raftt.
# An example of a raftt.yml that spawns an env defined in acme.raftt
envDefinition: acme.raftt
host: admiral.acme.raftt.io
secrets:
...
...
When is the `.raftt` File Evaluated

The output of interpreting a `.raftt` file is a set of resource definitions to be deployed.
On `raftt up`

What exactly `raftt up` does depends on the status of the environment for the current branch. The status of all your envs can be viewed using `raftt list`.
The `.raftt` file will be interpreted on `raftt up` only if the branch doesn’t have any existing env.
If the current branch has a `connected` env, `raftt up` connects to the env. If the env is `running`, `raftt up` connects to it and makes it `connected`, without modifying the env. If the env is `hibernated`, `raftt up` will wake it up (without re-interpreting the `.raftt` file).
On `raftt rebuild`

The `.raftt` file is interpreted on every `raftt rebuild`. The interpretation result is a set of resources. What happens with this set depends on whether any resources were specified in the `raftt rebuild` command.
If you didn’t specify resources, all of the existing resources are taken down, and the output of the interpretation is deployed. If one or more resources were specified, any changes to these resources will be applied, and the rest will not change, even if the result of the interpretation differs from their current state.
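For example (a sketch of the two invocation styles; the resource name `backend` is hypothetical):

```shell
# Re-interpret the .raftt file, take down all existing resources, and deploy the result
raftt rebuild

# Re-interpret, but only apply changes to the "backend" resource; other resources are left as-is
raftt rebuild backend
```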
The `deploy()` Function

The `deploy()` function receives a `Resources` object and adds the resources to the set of resources to be deployed when running `raftt up` or `raftt rebuild`.
`deploy()` can be called multiple times; each call adds the input resources to the previously added set. Trying to deploy a resource that shares the same type and name as a previously-deployed resource will result in an error. Resources of different types can share a name.
deploy(k8s_manifests("./k8s_manifests"))
resources = load_docker_compose("./docker-compose.yml")
print(type(resources)) # Will print "Resources"
deploy(resources)
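Note that the uniqueness rule is per resource type. A sketch (the resource names and file paths here are hypothetical):

```python
# Assume the compose file defines a pod named "web"
resources = load_docker_compose("./docker-compose.yml")
deploy(resources)

# OK - a *service* named "web" can coexist with a *pod* named "web"
deploy(k8s_manifests("./k8s_manifests/web-service.yml"))

# Error - a pod named "web" is already in the to-be-deployed set
# deploy(resources)
```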
Data Types

Starlark Data Types

As the script in `.raftt` files is written in Starlark, all Starlark data types can be used in `.raftt` files.
Resources

A `Resources` object contains several dicts, one for each supported resource type. The object types that are currently supported are pods, deployments, ingresses, services, and secrets (these are also the names of the `Resources` object attributes). If you need Raftt to support additional object types, join our community or contact us, and we'll be glad to look into it.
The keys in the dicts are the names of the resources, and the values are the objects that represent the resources themselves, as represented by Raftt.
When importing resource definitions (from a docker-compose file, Helm charts, or K8s manifests), the result is a `Resources` object.
resources = k8s_manifests("./k8s_manifests") # Can also use Docker Compose and Helm
print(type(resources)) # Will print "Resources"
# You can modify the imported resources
nginx = resources.pods["nginx"]
nginx.map_port(local=8080, remote=80)
# These are the rest of the supported object types
resources.deployments
resources.ingresses
resources.services
resources.secrets
resources.configmaps
resources.statefulsets
resources.namedvolumes
resources.crds
resources.roles
resources.rolebindings
resources.serviceaccounts
Merging `Resources` Objects

Multiple `Resources` objects can be merged using the `+` operator. Note that the names must be unique - merging `Resources` objects that both have a resource of the same type and the same name will result in an error.
k8s_resources = k8s_manifests("./k8s_manifests")
compose_resources = load_docker_compose("./path/to/compose.yml", ".")
all_resources = k8s_resources + compose_resources
Defining dependencies between Resources

A deployed resource can have dependencies on any other deployed workload, defined using the workload's `add_dependency()` method (shown below).
It receives:
- A workload (must be either `Pod`, `Deployment` or `StatefulSet`)
- Condition (optional) - the condition to wait on. Defaults to `service_started`, and can accept any of: `service_started`, `service_healthy`, `service_completed_successfully`
k8s_resources = k8s_manifests("./k8s-manifests")
db_pod = k8s_resources.pods["db"]
backend_pod = k8s_resources.pods["backend"]
backend_pod.add_dependency(db_pod)
backend_pod.add_dependency(db_pod, condition="service_healthy")
RepoWorkingTree

In multi-repo scenarios, you may need to access or reference a file that’s in a different repo from the current `.raftt` file. The handle to the other repo is an object called `RepoWorkingTree`.
To get this handle you need to either call `clone_repo_branch()` or `get_cloned_repo()`. See here for more details.
A subpath can be added to the `RepoWorkingTree` object using the `+` operator, to access specific files. The result can be used as the file/folder path in functions that receive one as an argument.
# Clone the repo
secondary_repo = clone_repo_branch("https://github.com/rafttio/frontend", "main")
# Mount the source code dir to one of the workloads
secondary_repo_mount = repo_volume(secondary_repo + "/src")
resources.deployments["web"].mount(secondary_repo_mount, dst="/app")
# Use it to build an image
resources.deployments["web"].get_container().image = build_image("web", secondary_repo, dockerfile="Dockerfile")
Importing Resource Definitions

The resource definitions of the env can be imported in the `.raftt` file using standard formats - Docker Compose, K8s manifests, Helm charts or Kustomize. It is easy to support other formats or sources! Contact us and we'll be glad to discuss.
Import a Docker-compose File

To import resources defined in a docker-compose file, use the `load_docker_compose()` function.
It receives the docker-compose file path and, optionally, the workdir and the docker-compose profiles. The profiles are passed as a list of profiles to be used.
The function returns a `Resources` object.
Docker-compose services are converted to Kubernetes pods.
services = load_docker_compose("./path/to/compose.yml", ".", profiles=["default", "gui"])
print(type(services)) # Will print "Resources"
deploy(services)
It is possible to change or add to the local environment variables dictionary passed to `load_docker_compose()` using the `env` parameter:
env = local.env
env["MY_ENV_KEY"] = "MY_ENV_VALUE"
deploy(load_docker_compose('./docker-compose.yml', env=env))
Import Kubernetes Manifests

To import resources defined using raw Kubernetes manifests, use the `k8s_manifests()` function.
It receives the path of a K8s manifest, or a directory of manifests. Note that only `.yml` and `.yaml` files are parsed.
The function returns a `Resources` object containing all the resources defined in the manifests.
The object types that are currently supported are pods, deployments, ingresses, services, configmaps, and secrets (these are also the names of the `Resources` object attributes). If you need Raftt to support additional object types, join our community or contact us, and we'll be glad to look into it.
resources_from_directory = k8s_manifests("./k8s_manifests") # Get manifests from a directory
resources_from_file = k8s_manifests("./k8s_manifests/pod.yml") # Or a specific file
print(type(resources_from_directory)) # Will print "Resources"
print(type(resources_from_file)) # Will print "Resources"
Import Helm Charts

Raftt lets you use environment definitions defined in local Helm charts using the `helm_local_chart()` function. Raftt will run Helm with the `template` command and then load and return the generated resources identically to the `k8s_manifests()` function.
Receives:
- Helm release name.
- Path to chart (relative to repo root).
- values_files (optional) - a string / array of strings of the values files to use. These can be paths to files in the repo or URLs. As when using `helm template`, when passing multiple values files, the last one overrides the previous files.
- values (optional) - a dictionary of values to be passed to Helm. Will override values defined in the files.
- version (optional) - the version of Helm to use. Currently, `3.10.1` (default), `3.8.2` and `2.13.0` are supported. Supporting additional versions is easy - contact us if you need a different one.
Returns: a `Resources` object containing all the resources defined in the manifests.
resources = helm_local_chart("chart-name", "./helm/chart-path", values_files="./helm/values.dev.yml", version="3.8.2")
resources = helm_local_chart("chart-name", "./helm/chart-path", values_files=["https://example.io/values.yml", "./helm/values.dev.yml"], values={"serviceName": "serv", "replicaCount": 1, "ingressPort": 80}, version="3.8.2")
print(type(resources)) # Will print "Resources"
Import Kustomize Files

To import resources defined with Kustomize, use the `kustomize_build()` function. Raftt will run Kustomize with the `build` command and then load and return the generated resources identically to the `k8s_manifests()` function.
Receives:
- Path to the kustomization directory
- version (optional) - the version of Kustomize to use. Currently, `4.5.7` (default) is supported. Supporting additional versions is easy - contact us if you need a different one.
- helm_version (optional) - the version of Helm to use.
Returns: a `Resources` object containing all the resources defined in the manifests.
resources = kustomize_build("./kustomize")
resources = kustomize_build("./kustomize", version="4.5.7", helm_version="3.10.1")
print(type(resources)) # Will print "Resources"
Image Building

The docker images used in your environment can be defined in the `.raftt` file and built by Raftt. To define an image you wish to build, use the `build_image()` function.
The return value of `build_image()` can be assigned to a workload's container. This will overwrite any previous image definition.
`build_image()` receives:
- The ID to be used when referring to the image, e.g. in other Dockerfiles
- The context for building the Dockerfile.
- dockerfile (optional, default is `CONTEXT/Dockerfile`) - the path of the Dockerfile. Equivalent to `--file` when using `docker build`.
- args (optional) - the arguments passed to the Dockerfile. Similar to `--build-arg` when using `docker build`.
- target (optional) - set the target build stage to build. Equivalent to `--target` when using `docker build`.
- prebuild (optional) - a command to run in the `dev` container before the image is built. It is passed as a string or an iterable of strings, e.g. `prebuild="/path/to/script.sh"` or `prebuild=["python3", "-m", "compileall", "-f", "app.py"]`
- secrets (optional) - a dictionary mapping secret ID to a value to pass to the docker build, e.g. `secrets={'SECRET_ID': 'SECRET_VALUE'}`
workload.get_container().image = build_image('web', "./docker/web/", dockerfile="./docker/web/Dockerfile", args={"VERSION_ARG": "latest"}, target="builder", prebuild=["python3", "-m", "compileall", "-f", "app.py"])
Using Base Images

Sometimes you may wish to build images that aren’t intended to be directly used as the image of a workload, but instead to be used as a base image for other images. To do that you can call `build_image()` and use the ID to refer to the images in other Dockerfiles (in the `FROM` command).
build_image('raftt/base-python', './docker/python')
build_image('raftt/base-web', "./docker/web/base-web/", dockerfile='./docker/web/base-web/Dockerfile', args={"CONFIG_TYPE_ARG": "config.DevelopmentConfig", "USERNAME_ARG": "nobody"})
We could use these images like this (note the references to the image in the `FROM` and the `COPY --from`):
FROM raftt/base-python
WORKDIR /app
COPY --from=raftt/base-web /some_file /some_file
CMD bash
Enhancing and Modifying the Resources

Modifying the Environment Variables of a Workload

There are two methods of modifying the environment variables of a workload (a Pod or a Deployment).
`set_env_vars`: sets a Starlark dictionary as the environment variables.
💡 The function overwrites ALL previously defined environment variables.
pod.set_env_vars({"KEY": "VALUE"})
deploy(pod)
`add_env_vars`: updates the environment variables with a Starlark dictionary.
💡 The function overwrites previously defined environment variables if any exist with the same name.
pod.add_env_vars({"KEY": "VALUE"})
deploy(pod)
Setting Secrets

There are a few ways to define secrets for Raftt envs:
- Fetch secrets from the local machine. The commands to fetch these secrets are defined in the `raftt.yml` (see our docs). This is because the commands should run on the local machine, while the `.raftt` file is interpreted remotely, on the env controller.
- Cluster secrets accessible from all envs running on the private cluster. For more information, see our docs.
- Defining K8s `secret` objects by importing from a Helm chart or a K8s manifest.
In the `.raftt` file, the secrets can be used as variables or mounted to the workloads.
Local and cluster secrets can be fetched using the `get_secret()` function. The input for the function is the key used for the secret definition in the `raftt.yml` file, or the name of the cluster secret as seen in `raftt cluster secrets list`, and the output is the secret value, as a string.
aws_creds = get_secret("aws-credentials") # will return "AKIAEXAMPLEAWSCREDS"
Mounting a secret to a workload is done using volume mounts, which is possible for all cluster secrets and for local secrets.
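Putting both usages together - a sketch assuming a pod named `backend` and a secret defined under the key `aws-credentials`:

```python
backend = resources.pods["backend"]

# Use the secret value as an environment variable
backend.add_env_vars({"AWS_CREDENTIALS": get_secret("aws-credentials")})

# Mount the secret into the workload (secret mounts are always read-only)
backend.mount(secret_volume("aws-credentials"), dst="/etc/creds", read_only=True)
```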
Volumes and Mounts

In the `.raftt` file, you can mount different kinds of volumes into the workloads. Defining these mounts provides various capabilities -
- Mounting a repo dir (synced with the local machine)
- Mounting a volume (named or unnamed)
- Mounting a secret, defined in `raftt.yml` or as a cluster secret
  - Mounting a secret defined using a Kubernetes manifest via the `.raftt` file is currently not supported.
- All mounts can be read-write (the default) or read-only, except for secrets, which are always read-only.
- Specify a container by name using the `container` argument.
- Use the `init_on_rebuild` argument, available for volumes, to configure the volume to be re-initialized on every call to `raftt rebuild <service>`.
  - This is useful, for example, if the image being run already has node_modules and after rebuild we want to set the volume contents with the updated modules.
- Use the `no_copy` argument when you don't want the volume's contents to be initialized from the image. This defaults to `False`.
You first need to create a volume of any kind (if it’s a secret from `raftt.yml`, it’s already mounted) and then mount it to one or more workloads:
# Repo volume - used commonly for syncing the source code
repo_root = repo_volume()
pod_a.mount(repo_root, dst="/code", read_only=True)
deployment_b.mount(repo_root.subpath("./test_dir"), dst="/tmp/test_dir")
# Anonymous volume
anon_volume = volume() # Anonymous, since no name was provided
# Named volume
vol_foo = volume(name="foo")
deployment_b.mount(vol_foo, dst="/etc/foo", container="web", read_only=True, init_on_rebuild=True)
pod_a.mount(vol_foo, dst="/tmp/etc", read_only=False, init_on_rebuild=True)
# Secrets
pod_a.mount(secret_volume("test-credentials"), dst="/tmp/test_creds", read_only=True)
replace_volume

If your workload already has volumes defined and you'd like to replace them so their data can be managed by Raftt, use the `replace_volume()` method.
For example, if we have shared data between the `web` pod and the `api` deployment that is usually present in a PVC, host path, or any other volume type, use:
shared_vol = volume("shared")
resources.pods["web"].replace_volume(name="my-volume", source=shared_vol)
resources.deployments["api"].replace_volume("my-volume", source=shared_vol)
Volume List Access

To enable easy access to workload volumes, we've supplied the `volumes` helper attribute on the PodSpec object. It returns the list of volumes defined in the PodSpec.
Note: the returned list is not editable. It is intended to be used with the `replace_volume()` method above. For example:
def replace_deployment_volumes(resources):
    shared_vol = volume("shared")
    for deployment in resources.deployments:
        for vol in deployment.spec.template.spec.volumes:
            if vol.name == "shared":
                deployment.replace_volume(vol.name, source=shared_vol)
                break
resources = k8s_manifests("my-manifest-file.yaml")
replace_deployment_volumes(resources)
Port Mapping

Allows mapping workload ports to the local machine. Currently only specific ports are supported; no random ports or port ranges.
nginx.map_port(local=8080, remote=80)
Initializing Databases

Allows defining database initialization, used for database seeding and by the `raftt data` commands (`seed`/`dump`/`save`/`load`).
Raftt currently supports three types of initializers - two native initializers, for PostgreSQL and MongoDB, and a custom initializer using a script. For more information on data seeding, see our docs.
db_storage_vol = volume("db_storage")
resources = ... # Load from docker-compose, helm, or k8s manifests
db_pod = resources.pods["db"]
# Use a native PostgreSQL initializer
# API: postgres_volume_initializer(workload, dump_file_path, user?, key_provider?)
db_storage_vol.initializer = postgres_volume_initializer(workload=db_pod, dump_file_path="dev_container/dump.sql", user="postgres")
# Use a native MongoDB initializer
# API: mongodb_volume_initializer(workload, dump_file_path, key_provider?)
db_storage_vol.initializer = mongodb_volume_initializer(workload=db_pod, dump_file_path="dev_container/dump.archive")
# Use a custom initializer
# API: script_volume_initializer(workload, script, key_provider?)
db_storage_vol.initializer = script_volume_initializer(workload=db_pod, script="bash seed_db.sh")
db_pod.mount(db_storage_vol, dst="/data")
Directly Modifying Resources

You can modify some fields of loaded resources, as described in the example below. More attributes will be editable in the future.
resources = ... # Load from docker-compose, helm, or k8s manifests
# Editing metadata
## For pods
resources.pods["nginx"].metadata.annotations["foo"] = "bar"
resources.pods["nginx"].metadata.labels["baz"] = "qux"
## For deployments - both the deployment and the template
resources.deployments["web"].metadata.annotations["deployment-annotation"] = "foo"
resources.deployments["web"].metadata.labels["deployment-label"] = "bar"
resources.deployments["web"].spec.template.metadata.annotations["template-annotation"] = "baz"
resources.deployments["web"].spec.template.metadata.labels["template-label"] = "qux"
## For other resources - services, ingresses, secrets
resources.ingresses["minimal-ingress"].metadata.labels["ingress-label"] = "one"
resources.ingresses["minimal-ingress"].metadata.annotations["ingress-annotation"] = "two"
resources.services["my-service"].metadata.labels["service-label"] = "three"
resources.services["my-service"].metadata.annotations["service-annotation"] = "four"
resources.secrets["top-secret"].metadata.labels["secret-label"] = "five"
resources.secrets["top-secret"].metadata.annotations["secret-annotation"] = "six"
# Editing PodSpec and DeploymentSpec
resources.pods["nginx"].spec.hostname = "nginy"
resources.deployments["web"].spec.template.spec.hostname = "nginy-dep"
resources.deployments["web"].spec.replicas = 1 # Not supporting deploying multiple replicas, for now.
resources.pods["nginx"].spec.containers[0].name = "foo"
resources.pods["nginx"].spec.containers[0].working_dir = "/path/to/code"
resources.pods["nginx"].spec.containers[0].command = ["/bin/echo"]
resources.pods["nginx"].spec.containers[0].args = ["hello", "world"]
resources.deployments["web"].spec.template.spec.containers[0].name = "foo"
resources.deployments["web"].spec.template.spec.containers[0].working_dir = "/path/to"
resources.deployments["web"].spec.template.spec.containers[0].command = ["/bin/echo"]
resources.deployments["web"].spec.template.spec.containers[0].args = ["hello", "world"]
The `get_container()` Function

To fetch a specific container, you may use the workload utility method `get_container()`. It can receive the container name; if none is given, it uses the default container, or the first one if no container is annotated as default.
Usage examples:
resources = k8s_manifests("./test-manifests.yml")
default_container = resources.pods["nginx"].get_container()
fluentbit_container = resources.pods["nginx"].get_container(name="fluentbit")
# Override a pod side-container with a deployment's default container
resources.pods["nginx"].get_container(name="fluentbit").image = resources.deployments["web"].get_container().image
Defining the `dev` container

To define the dev container (see docs), call the function `deploy_dev_container()`.
The dev container is a workload that is imported using any of the supported resource definition methods (docker-compose / K8s manifest / Helm). The imported definitions must include a single workload.
Example:
dev = load_docker_compose('./dev-container/dev_compose.yml')
deploy_dev_container(dev)
A dev container must be defined exactly once. When deploying another `.raftt` file, in multi-repo scenarios, make sure that only one repo actually defines the dev container.
For backwards compatibility, the dev container can still be defined in `raftt.yml`. Defining a dev container both in `raftt.yml` and in the `.raftt` file will result in an error.
Troubleshooting the `.raftt` File

When you write your `.raftt` file, like any other case of writing code, the result may differ from what you expected. To help you understand why that happens, use the `raftt config debug` command. Note that since the `.raftt` file interpretation happens in the remote env, for the `raftt config debug` command to work you must have an active and connected env. To view your env status, use `raftt list`.
When you do that, you get the dry-run results of interpreting the `.raftt` file. The file is fully interpreted, but instead of deploying the changes to the env, it outputs the results of all the `print()` calls in the file. You can use this function to print any of the objects in the script.
resources = k8s_manifests("./k8s_manifests/pod.yml")
print(resources.pods['nginx'])
Viewing the Resources Expected to be Deployed

To view the “final result” of the `.raftt` file interpretation - the list of the resources that are expected to be deployed - run `raftt config debug` with the flag `--to-be-deployed`.
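For example:

```shell
# Dry-run the .raftt file and show the print() outputs
raftt config debug

# Also list the resources that are expected to be deployed
raftt config debug --to-be-deployed
```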
Multi-repo Support

Raftt lets you deploy environments whose code and definitions are held in multiple repositories. In this context, we have two kinds of repos -
- The main repo - the repo from which the user runs `raftt up`. The `.raftt` file of this repo is the one interpreted. This repo is automatically live-synced to the remote env.
- Secondary repos - repos that are loaded when explicitly requested in the `.raftt` file of a previously loaded repo. Such repos don’t automatically sync to the env, and don’t even have to be cloned locally.
Whether a repo is considered main or secondary depends on the context of the specific `raftt up` executed - it’s not a characteristic of the repo itself.
Cloning Secondary Repositories

For all multi-repo scenarios, you must get access to the secondary repo(s) from the context of the `.raftt` file. To do that you must first clone it using the `clone_repo_branch()` function. This function clones the repo into the environment, receiving a Git URL and a branch name. The command returns a `RepoWorkingTree` object that can be used to access the repo files or to deploy the resources defined in it.
A subpath may be added to a `RepoWorkingTree` object using the `+` operator, in order to retrieve a sub-path inside the side repository. The operation results in a new `RepoWorkingTree`.
A repo can’t be cloned twice in the same environment with different branches, so make sure you don’t have contradicting branches in different `.raftt` files (see also the next section).
# Clone the repo
secondary_repo = clone_repo_branch("https://github.com/rafttio/frontend", "main")
# Mount the source code dir to one of the workloads
secondary_repo_mount = repo_volume(secondary_repo + "src")
resources.deployments["web"].mount(secondary_repo_mount, dst="/app")
# Use it to build an image
resources.deployments["web"].get_container().image = build_image("web", secondary_repo, dockerfile="Dockerfile")
Access an Already-cloned Repo

The `get_cloned_repo()` function is used to retrieve a `RepoWorkingTree` object of a previously cloned Git repository. The function receives only the Git URL. This function helps you access the same repo from different `.raftt` files without worrying about having matching branches to prevent an error. The first `.raftt` file using the repo has to define the branch, and the others can get a handle to the cloned repo using this function.
# Get a handle to the repo, assuming it was already cloned in another .raftt file
secondary_repo = get_cloned_repo("https://github.com/rafttio/frontend")
# Mount the source code dir to one of the workloads
secondary_repo_mount = repo_volume(secondary_repo + "src")
resources.deployments["web"].mount(secondary_repo_mount, dst="/app")
# Use it to build an image
resources.deployments["web"].get_container().image = build_image("web", secondary_repo, dockerfile="Dockerfile")
Deploy the `.raftt` File of a Secondary Repo

In some cases, instead of accessing the secondary repo’s files and/or folders, as described in the previous sections, you may want to deploy all the resources defined in its `.raftt` file. For that purpose, you can use `RepoWorkingTree`’s `deploy()` method.
This method only receives `config_args` as an input argument and does not return a value. It triggers the interpretation of the `.raftt` file defined in the secondary repo (as defined in its `raftt.yml`). The outcome of the interpretation is additional resources to be deployed to the env.
Note that different `.raftt` files are executed separately and have no access to one another’s objects.
secondary_repo = clone_repo_branch("https://github.com/rafttio/frontend", "main")
secondary_repo.deploy(config_args='{"profile": "default"}')
The `raftt sync` command

As mentioned above, secondary repos do not have file syncing / hot reloading enabled by default, and they don’t even have to be cloned locally. If you want to start syncing the local state of a cloned secondary repo to the remote env, use the `raftt sync` command.
Impact on `raftt rebuild` commands

When running `raftt rebuild`, the main and secondary `.raftt` files are re-interpreted. When rebuilding, the branches specified in `clone_repo_branch()` are ignored for synced repos - the current repo state is used.
Running Raftt Commands From the Context of a Synced Repo

Currently, Raftt commands (`sh`, `logs`, `status`, ...) may only be run from the main repo from which the original `raftt up` was performed.
Local Configuration

In some cases, users may want to customize the environment created by `raftt up` or `raftt rebuild` without changing the `.raftt` file that’s committed to the repo. This can be done either by referring to the local environment variables from the `.raftt` file, or by running it with arguments.
Accessing the Local Environment Variables

Env variables from the host are available as a dict in `local.env`.
env_var = local.env['MY_ENV_VAR']
Arguments

You can pass arguments to the `raftt up` and `raftt rebuild` commands and access them from the `.raftt` file. The args can be passed in the CLI using the `--config-args` option, or read from a file using the `--config-args-file` option, which receives the file path. The string value passed in the CLI or in the file contents can be accessed from the `.raftt` file using the `local.config_args` variable.
The input can be formatted in any format of your choice, but some formats have builtin parsers (see here).
A common use-case for arguments is stating which docker-compose profiles are to be deployed:
load("encoding/json.star", "json")
# This assumes args are a "serialized" json string
input_args = json.decode(local.config_args)
profiles = input_args['compose_profiles']
deploy(load_docker_compose("./path/to/compose.yml", ".", profiles=profiles))
Persistency of the Config Arguments

The `.raftt` file is interpreted on every `raftt rebuild`, and every interpretation potentially uses the `config_args` variable. To save the user from having to remember the `config_args` used for the running env, Raftt stores the last `config_args` used and reuses the same string for future `raftt rebuild`s. If you want to change the `config_args`, run `raftt rebuild` with `--config-args` or `--config-args-file`. The new args will overwrite the existing args, and will be used for the current rebuild and for future ones.
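For example (the JSON payload and file name are illustrative):

```shell
# Rebuild with explicit args; Raftt stores them for this env
raftt rebuild --config-args '{"compose_profiles": ["default", "gui"]}'

# Subsequent rebuilds reuse the stored args
raftt rebuild

# Overwrite the stored args with the contents of a file
raftt rebuild --config-args-file ./config-args.json
```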
JSON/YAML Encoding/Decoding

If you wish to encode and/or decode JSON or YAML inside your `.raftt` file, you can do it by loading external modules using the `load()` function. The load function gets the module and the symbol you wish to load. In the code snippet below you can see the exact syntax for loading the JSON and YAML libraries and using them.
Handling JSON and YAML can be very handy in conjunction with `local.config_args`.
load("encoding/json.star", "json")
load("encoding/yaml.star", "yaml")
print(json.encode({"a":1, "b": 2})) # Prints: {"a":1,"b":2}
yaml_dict = """apiVersion: apps/v1
kind: Deployment
metadata:
name: web
"""
print(yaml.loads(yaml_dict)) # Prints: {"apiVersion": "apps/v1", "kind": "Deployment", "metadata": {"name": "web"}}
For this functionality, Raftt uses Starlib - a community-driven project to bring a standard library to Starlark. It has additional features besides handling JSON and YAML. You can read about them and see usage documentation in their repo.
File Watching

This mechanism allows registering hooks that trigger actions when files matching given glob patterns change. The patterns must contain absolute paths; relative paths will raise an error.
When registering a hook you need to specify the following:
- One or more workloads and glob patterns to watch for changed files - the `on` argument
- One or more actions to perform when a changed file matches one of the glob patterns - the `do` argument
Hooks registered with more than one action will execute the commands in order of definition.
The `actions.CMD` constructor creates a command type that receives a command to execute, as a string or an iterable of strings, and a workload to execute the command on.
Common use-cases -
- Auto-install dependencies by watching files like `package.json` or `requirements.txt`
- Auto-build the code on file changes by watching the source code files, e.g. all the `*.ts` files in the relevant folders.
- Restart the container process when code changes
Example file:
resources = k8s_manifests("./test-manifests.yml")
nginx = resources.pods["nginx"]
nginx_dep = resources.deployments["nginx-dep"]
py_backend = resources.deployments["py-backend"]
dev = resources.deployments["dev"]
# Trigger npm install when package files change
package_json_glob = "/app/**/package.json"
register_hook(
on=[
events.OnFileChanged(
workload=nginx,
patterns=package_json_glob),
events.OnFileChanged(
workload=nginx_dep,
patterns=package_json_glob)
],
do=actions.CMD(
workload=nginx,
cmd=("npm", "install")))
# Trigger pip install when requirements.txt changes
pip_install = ["pip", "install", "-r", "requirements.txt"]
register_hook(
on=events.OnFileChanged(
workload=nginx_dep,
patterns="/root/requirements.txt"),
do=(
actions.CMD(
workload=nginx,
cmd=pip_install),
actions.CMD(
workload=nginx_dep,
cmd=pip_install)))
deploy(resources)
# Restart py-backend main process when python file changes under /app
register_hook(
on=events.OnFileChanged(
workload=py_backend,
patterns="/app/**/*.py"),
do=actions.CMD(
workload=dev,
cmd=["raftt", "restart", "py-backend"]))
When the `requirements.txt` file changes on the `nginx_dep` workload, the command `pip install -r requirements.txt` is executed on the `nginx` pod and the `nginx_dep` deployment.
Changing the Hooks
The hook is “controlled” by the workload that watches the glob patterns. This means that in order to apply changes to a hook, whether the change is in the watchers (the on clause) or in the actions (the do clause), the “watching” workload must be rebuilt. The “affected” workload (the one in which the action is performed) only needs to be rebuilt if the action itself is modified, not when only the watcher was modified.
Viewing the Hook Mechanism Logs
The logs for the hooks, including the results of the actions, are not part of the regular workload logs. To view them, run raftt logs --hooks, with or without specifying a service. If you don’t specify a service, you will see the full log of the hook mechanism.
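As a sketch (the service name and the positional syntax are assumptions for illustration):

```shell
# Full log of the hook mechanism
raftt logs --hooks

# Hook logs for a single service
raftt logs --hooks nginx
```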
Accessing the filesystem
The filesystem module allows reading files from the local filesystem while the .raftt file is interpreted. This can be useful for dynamically configuring volume mounts for all the subdirectories of node_modules in your project.
The module can be imported like so:
load('filesystem.star', 'fs')
File object attributes
The fs.File object attributes:
- is_dir: True if the file is a directory
- exists: True if the file exists
- name: The last element of the file path
File object methods
The fs.File object methods:
- list_dir(): Returns a list of fs.File objects under the current directory; raises an error if the file is not a directory
- read_text(): Returns the content of the file, decoded as UTF-8
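Putting the attributes and methods together, a file can be read defensively like so (a sketch; the path is illustrative):

```python
load('filesystem.star', 'fs')

f = fs.File("./requirements.txt")
# Only read existing files that are not directories
if f.exists and not f.is_dir:
    print(f.name + ":\n" + f.read_text())
```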
Listing files under a directory
Example: print all the files under a directory
load('filesystem.star', 'fs')

def iter_dir(path):
    for f in fs.File(path).list_dir():
        print(f)

def files_in_dir(path):
    return [f for f in fs.File(path).list_dir() if not f.is_dir]
iter_dir("./node_modules")
iter_dir(clone_repo_branch("git@github.com:rafttio/another", "trunk") + "/node_modules") # Accessing a dir in another repository
Reading files
Example for reading a file
load('filesystem.star', 'fs')
# Read file as text
print(fs.File("requirements.txt").read_text())
# OUT:
# fire==0.4.0
# Mako==1.2.3
# boto3==1.24.96
# botocore==1.27.96
# semver==2.13.0
# ...
Configuring credential helpers
Raftt allows configuring credential helpers to enable the deployment of workloads whose images come from private registries.
Raftt uses the credential helpers to obtain the required permissions from the cluster node, without requiring extra permissions.
The credential helpers are configured using the configure_cred_helper
function.
Supported credential helpers are "gcp" and "ecr" (AWS).
There is a more thorough discussion on the configuration of image registries here.
ECR (AWS)
For clusters running on AWS the credential helper can be configured like so:
configure_cred_helper(
    provider="ecr",
    registries=["ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com", "MIRROR.INTERNAL.com"])
The registries argument is one or more image registry hostnames for which the ECR credential helper is needed. When using the ECR credential helper, at least one registry must be supplied.
GCP
For clusters running on Google Cloud, the credential helper can be configured like so:
configure_cred_helper("gcp")
Embedding the raftt CLI
You can add the Raftt CLI to any container to access CLI commands like status
or restart
. See the full CLI reference here.
To add the CLI to a container use the add_raftt_cli
function like so:
resources = k8s_manifests("./manifests.yml")
nginx = resources.deployments["nginx"]
nginx.add_raftt_cli(container="nginx")
If no container argument is specified, the default container is assumed.
Setting default container annotation for a workload
You can set the workload's default container in .raftt files. The default container is the one used in Raftt commands that refer to a specific container in a workload, e.g., cp, logs, restart, sh, and stop.
Notes:
- Rebuild and dev commands will keep operating on the workload as a whole, including all of its containers.
- This function overwrites an existing default container annotation if one exists.
See Kubernetes docs for more info.
resources = k8s_manifests("./manifests.yml")
resources.pods["nginx"].set_default_container("nginx")