.raftt File API Reference#
The API module
- build_image(name: str, context: str | repoworkingtree.RepoWorkingTree, dockerfile: str | repoworkingtree.RepoWorkingTree = 'Dockerfile', args: dict[str, str] = {}, target: str = '', prebuild: str | collections.abc.Iterable[str] = '', secrets: dict[str, str] = {})[source]#
Create an image builder object. The returned value can be assigned to a workload to set the built image as the workload's image.
- Parameters:
name (str) – The ID to be used when referring to the image, e.g. in other Dockerfiles.
context (str | RepoWorkingTree) – The context for building the Dockerfile.
dockerfile (str | RepoWorkingTree, optional) – The path of the Dockerfile. Equivalent to --file when using docker build. Defaults to "<context>/Dockerfile".
args (dict[str, str], optional) – The arguments passed to the Dockerfile. Equivalent to --build-arg when using docker build. Defaults to {}.
target (str, optional) – Set the target build stage to build. Equivalent to --target when using docker build. Defaults to "".
prebuild (str | Iterable[str], optional) – A command to run in the dev container before the image is built. Passed as a string or an iterable of strings. Defaults to "".
secrets (dict[str, str], optional) – A dictionary mapping secret IDs to values to pass to docker build. Defaults to {}.
- Returns:
The name of the image (as given in the name arg). Can be assigned as the image attribute of workloads.
- Return type:
str
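Example
A minimal sketch, assuming a workload named "api" loaded from manifests and a Dockerfile at the repo root (both placeholders); the result is assigned as the workload's image attribute, as described above:
resources = k8s_manifests("./k8s")
api_image = build_image(
    name="api-dev",
    context=".",
    args={"BUILD_ENV": "dev"},  # hypothetical build argument
)
resources.deployments["api"].image = api_image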
- clone_repo_branch(repo: str, branch: str)[source]#
Create a remote clone of the given repo, at a specific branch, in the env controller. This repo and its files can then be accessed in the .raftt file for different needs.
- Parameters:
repo (str) – URL of the git repository.
branch (str) – The branch to checkout.
- Returns:
A handle to the working tree.
- Return type:
RepoWorkingTree
- configure_cred_helper(provider: str, registries: str | collections.abc.Iterable[str] | None = None)[source]#
Raftt allows configuring credential helpers so that workloads whose images are in private registries can be deployed. Raftt uses the credential helpers to get the permissions from the cluster node without requiring permissions to be provided explicitly.
Supported credential helpers are "gcp" and "ecr" (AWS).
- Parameters:
provider (str) – "gcp" or "ecr" (AWS).
registries (str | Iterable[str] | None, optional) – The URL(s) of the registry; only required if the provider is ecr. Defaults to None.
Examples
Configuring ECR
configure_cred_helper(
    provider="ecr",
    registries=["ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com", "MIRROR.INTERNAL.com"],
)
Configuring GCP
configure_cred_helper("gcp")
- configure_overridden_workload_annotations(annotations: dict[str, str])[source]#
Configure custom annotations to be added to workloads that are scaled-down in dev-mode. This is useful to prevent GitOps tools from reverting the changes made by Raftt when they sync the env state.
- Parameters:
annotations (dict[str, str]) – A dict of annotations to add to the workloads.
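Example
A hedged sketch; the annotation shown is an Argo CD-style example and should be adapted to whatever your GitOps tooling expects:
configure_overridden_workload_annotations({
    "argocd.argoproj.io/compare-options": "IgnoreExtraneous",  # example annotation, adjust as needed
})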
- deploy(resources: collections.abc.Iterable[workload.Resources | workload.Pod | workload.Deployment | workload.StatefulSet | k8s.Ingress | k8s.Service | k8s.Secret | k8s.ServiceAccount | k8s.Role | k8s.RoleBinding | k8s.ConfigMap] | workload.Resources | workload.Pod | workload.Deployment | workload.StatefulSet | k8s.Ingress | k8s.Service | k8s.Secret | k8s.ServiceAccount | k8s.Role | k8s.RoleBinding | k8s.ConfigMap)[source]#
Mark the resources to be deployed when the .raftt file interpretation is completed, after the up, rebuild, and dev commands.
- Parameters:
resources (Resources | Pod | Deployment | StatefulSet | Ingress | Service | Secret | ServiceAccount | Role | RoleBinding | ConfigMap) – The K8s objects to be deployed. Can be an iterable or a single instance of the Resources type, or a resource of any supported type.
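Example
A sketch deploying resources loaded from manifests; the manifest path and service name are placeholders:
resources = k8s_manifests("./k8s_manifests")
deploy(resources)
# A single supported resource can also be deployed on its own
deploy(resources.services["nginx"])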
- deploy_dev_container(resources: workload.Resources)[source]#
Mark a workload to be deployed as a dev container. Resources that are marked to be deployed using this function don't need to be deployed using the deploy function.
- Parameters:
resources (Resources) – The Resources object containing the workload defining the dev container.
- deploy_on_connect(resources: workload.Resources)[source]#
Mark the Kubernetes resources to be deployed to the env when running raftt connect. Resources that are marked to be deployed using this function don't need to be deployed using the deploy function.
Only relevant for Raftt in connect-mode.
- Parameters:
resources (Resources) – The resources to be deployed.
- get_cloned_repo(repo: str)[source]#
Get a handle to a working tree of a secondary repo, by repo URL. This assumes that the repo was cloned earlier in the .raftt file interpretation using the clone_repo_branch() function.
- Parameters:
repo (str) – URL of the git repository.
- Returns:
A handle to the working tree.
- Return type:
RepoWorkingTree
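Example
A sketch combining clone_repo_branch() and get_cloned_repo(); the repository URL, branch, and use of the working tree are placeholders:
# Clone a secondary repo earlier in the .raftt file interpretation
clone_repo_branch("git@github.com:example-org/shared-configs.git", "main")
# Later, retrieve a handle to the same working tree by URL
shared = get_cloned_repo("git@github.com:example-org/shared-configs.git")
resources = k8s_manifests(shared)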
- get_env_id(full: bool = False)[source]#
Get an identifier of the current environment.
- Parameters:
full (bool, optional) – Get the full-length version of the identifier (GUID). Defaults to False.
- Returns:
The identifier.
- Return type:
str
- get_secret(name: str)[source]#
Fetch a secret defined in the raftt.yaml file.
- Parameters:
name (str) – The secret name, as defined in the raftt.yaml file.
- Returns:
The secret value
- Return type:
str
Examples
aws_creds = get_secret("aws-credentials") # will return "AKIAEXAMPLEAWSCREDS"
- helm_local_chart(name: str, path: str | repoworkingtree.RepoWorkingTree, values: dict[str, Any] = {}, version: str = '3.10.1', values_files: str | repoworkingtree.RepoWorkingTree | collections.abc.Iterable[str | repoworkingtree.RepoWorkingTree] = [])[source]#
Load resources from the definitions in Helm charts. The values are resolved using values files or a values dict. Object kinds that aren't supported are ignored.
- Parameters:
name (str) – Helm release name.
path (str | RepoWorkingTree) – Path to the chart, relative to the repo root, or from a different repository.
values (dict[str, Any], optional) – A dictionary of values to be passed to Helm. Overrides values defined in the files. Defaults to {}.
version (str, optional) – The version of Helm to use; currently 3.10.1, 3.8.2 and 2.13.0 are supported. Defaults to "3.10.1".
values_files (str | RepoWorkingTree | Iterable[str | RepoWorkingTree], optional) – Paths to files in the repo, or URLs. As with helm template, when passing multiple values files, the last one overrides the previous ones. Defaults to [].
- Returns:
A Resources object containing all the loaded K8s resources.
- Return type:
Resources
Examples
resources = helm_local_chart(
    "chart-name",
    "./helm/chart-path",
    values_files="./helm/values.dev.yml",
    version="3.8.2")

resources = helm_local_chart(
    "chart-name",
    "./helm/chart-path",
    values_files=["https://example.io/values.yml", "./helm/values.dev.yml"],
    values={"serviceName": "serv", "replicaCount": 1, "ingressPort": 80},
    version="3.8.2")

print(type(resources))  # Will print "Resources"
- k8s_manifests(path: str | repoworkingtree.RepoWorkingTree)[source]#
Load resources from the definitions in raw Kubernetes manifests. Can receive a path to a single YAML manifest or a directory of manifests. Object kinds that aren't supported are ignored.
- Parameters:
path (str | RepoWorkingTree) – Path to the manifests file or directory.
- Returns:
A Resources object containing all the loaded K8s resources.
- Return type:
Resources
- kustomize_build(dir: str | repoworkingtree.RepoWorkingTree, helm_version: str = '3.10.1')[source]#
Load resources from the definitions in Kustomize. Object kinds that aren’t supported are ignored.
- Parameters:
dir (str | RepoWorkingTree) – The path of the dir containing the Kustomization file(s).
helm_version (str, optional) – The version of Helm to use; currently 3.10.1, 3.8.2 and 2.13.0 are supported. Defaults to "3.10.1".
- Returns:
A Resources object containing all the loaded K8s resources.
- Return type:
Resources
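Example
A minimal sketch; the overlay path is a placeholder:
deploy(kustomize_build("./deploy/overlays/dev"))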
- load_docker_compose(path: str | repoworkingtree.RepoWorkingTree, workdir: str | repoworkingtree.RepoWorkingTree = '', profiles: str | collections.abc.Iterable[str] = [], skip_consistency_check: bool = False, env: dict[str, str] = {})[source]#
Load resources from a docker-compose file.
- Parameters:
path (str | RepoWorkingTree) – Path of the docker-compose file.
workdir (str | RepoWorkingTree, optional) – Specify an alternate working directory. Defaults to the directory containing the docker-compose file.
profiles (str | Iterable[str], optional) – Specify the profile(s) to enable. Defaults to [].
skip_consistency_check (bool, optional) – Whether to skip the consistency check on the docker-compose file. Defaults to False.
env (dict[str, str], optional) – Environment variables to add or override when evaluating the docker compose file. Defaults to {}.
- Returns:
A Resources object containing all the loaded K8s resources.
- Return type:
Resources
- mongodb_volume_initializer(workload: workload.Workload, dump_file_path: str | repoworkingtree.RepoWorkingTree, key_provider: str = '')[source]#
Define how to initialize a volume that contains a MongoDB from a dump file. The dump will be loaded to the volume on env creation. This also allows the volume to be saved and loaded using raftt data commands.
Only relevant to Raftt in orchestration-mode.
- Parameters:
workload (workload.Workload) – The workload running the seeded MongoDB instance.
dump_file_path (str | RepoWorkingTree) – The path of the dump file to be loaded.
key_provider (str, optional) – A script that creates a cache of the created seed. Used to cache seeding script outputs and reduce seeding time. Defaults to "".
- Returns:
A volume initializer object that can be assigned as the volumes.Volume.initializer of a volumes.Volume object.
- Return type:
volumes.VolumeInitializer
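Example
A sketch, assuming a MongoDB StatefulSet named "mongo" and a dump file under ./seed (both placeholders):
resources = k8s_manifests("./k8s")
mongo = resources.statefulsets["mongo"]
data = volume("mongo-data")
data.initializer = mongodb_volume_initializer(mongo, "./seed/mongo.dump")
mongo.mount(data, "/data/db")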
- namespace()[source]#
Returns the namespace where the current environment is deployed.
- Returns:
the name of the namespace where the current environment is deployed.
- Return type:
str
- namespace_resources()[source]#
Load resources currently running in the connected namespace. Object kinds that aren’t supported are ignored.
- Returns:
A Resources object containing all the loaded K8s resources.
- Return type:
Resources
- on_hibernation(path: str)[source]#
The command to be executed in the setup container every time an env is hibernated. The output can be viewed in the lifecycle hooks log using the raftt lifecycle hooks command.
This function can't be used in the env definition file, only in the file that is configured as the envLifecycleHooks attribute in raftt.yml. Only relevant for orchestration mode.
- Parameters:
path (str) – The command to be executed.
- on_init(path: str)[source]#
The command to be executed in the setup container every time an env is created. The output can be viewed in the lifecycle hooks log using the raftt lifecycle hooks command.
This function can't be used in the env definition file, only in the file that is configured as the envLifecycleHooks attribute in raftt.yml. Only relevant for orchestration mode.
- Parameters:
path (str) – The command to be executed.
- on_start(path: str)[source]#
The command to be executed in the setup container every time an env is started - either created or waking from hibernation. The output can be viewed in the lifecycle hooks log using the raftt lifecycle hooks command.
This function can't be used in the env definition file, only in the file that is configured as the envLifecycleHooks attribute in raftt.yml. Only relevant for orchestration mode.
- Parameters:
path (str) – The command to be executed.
- on_termination(path: str)[source]#
The command to be executed in the setup container every time an env is terminated. The output can be viewed in the lifecycle hooks log using the raftt lifecycle hooks command.
This function can't be used in the env definition file, only in the file that is configured as the envLifecycleHooks attribute in raftt.yml. Only relevant for orchestration mode.
- Parameters:
path (str) – The command to be executed.
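Example
A sketch of a lifecycle-hooks file (the one referenced by the envLifecycleHooks attribute in raftt.yml); the manifest and script paths are placeholders:
setup = k8s_manifests("./k8s/setup-pod.yml")
setup_container(setup)  # see setup_container() below
on_init("./scripts/seed-db.sh")
on_start("./scripts/run-migrations.sh")
on_hibernation("./scripts/flush-caches.sh")
on_termination("./scripts/cleanup.sh")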
- pod_volume(volume_source: k8s.EmptyDirVolumeSource | k8s.PersistentVolumeClaimVolumeSource | k8s.AWSElasticBlockStoreVolumeSource | k8s.AzureDiskVolumeSource | k8s.AzureFileVolumeSource | k8s.CSIVolumeSource | k8s.CephFSVolumeSource | k8s.CinderVolumeSource | k8s.ConfigMapVolumeSource | k8s.DownwardAPIVolumeSource | k8s.EphemeralVolumeSource | k8s.FCVolumeSource | k8s.FlexVolumeSource | k8s.FlockerVolumeSource | k8s.GCEPersistentDiskVolumeSource | k8s.GitRepoVolumeSource | k8s.GlusterfsVolumeSource | k8s.HostPathVolumeSource | k8s.ISCSIVolumeSource | k8s.NFSVolumeSource | k8s.PhotonPersistentDiskVolumeSource | k8s.PortworxVolumeSource | k8s.ProjectedVolumeSource | k8s.QuobyteVolumeSource | k8s.RBDVolumeSource | k8s.ScaleIOVolumeSource | k8s.SecretVolumeSource | k8s.StorageOSVolumeSource | k8s.VsphereVirtualDiskVolumeSource | None = <api.k8s.EmptyDirVolumeSource object>)[source]#
Creates a volume that can be mounted to a single workload.Workload, and can't be shared between different pods. This is required when the env is deployed across multiple nodes.
- Parameters:
volume_source (VolumeSource) – The underlying Kubernetes volume source to be used for the volume. Defaults to EmptyDirVolumeSource().
- Returns:
The Volume object that can be mounted into workloads.
- Return type:
volumes.PodVolume
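Example
A sketch using the default EmptyDirVolumeSource; the workload name and mount path are placeholders:
resources = k8s_manifests("./k8s")
scratch = pod_volume()
resources.pods["worker"].mount(scratch, "/tmp/scratch")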
- postgres_volume_initializer(workload: workload.Workload, dump_file_path: str | repoworkingtree.RepoWorkingTree, user: str, key_provider: str = '')[source]#
Define how to initialize a volume that contains a Postgres DB from a dump file. The dump will be loaded to the volume on env creation. This also allows the volume to be saved and loaded using raftt data commands.
Only relevant to Raftt in orchestration-mode.
- Parameters:
workload (workload.Workload) – The workload running the seeded Postgres instance.
dump_file_path (str | RepoWorkingTree) – The path of the dump file to be loaded.
user (str) – The user to be used to connect to the DB.
key_provider (str, optional) – A script that creates a cache of the created seed. Used to cache seeding script outputs and reduce seeding time. Defaults to “”.
- Returns:
A volume initializer object that can be assigned as the volumes.Volume.initializer of a volumes.Volume object.
- Return type:
volumes.VolumeInitializer
- register_hook(on: events.Event | collections.abc.Iterable[events.Event], do: actions.Action | collections.abc.Iterable[actions.Action])[source]#
Register a hook which runs one or more actions when any of the events is triggered. The actions are executed sequentially.
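Example
A sketch wiring an events.OnFileChanged event to an actions.CMD action, assuming those classes are available under the names used in their references below; the manifest path, workload name, pattern, and command are placeholders:
resources = k8s_manifests("./k8s")
app = resources.deployments["app"]
register_hook(
    on=events.OnFileChanged(app, "src/**/*.py"),
    do=actions.CMD(app, "touch /tmp/.reload"),
)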
- repo_volume(repo_working_tree: repoworkingtree.RepoWorkingTree | None = None)[source]#
Returns a handle to the volume that contains a working tree of a repo. It can be either the "current" repo - the one of this .raftt file - or another repo, if a different RepoWorkingTree object is provided. Can be used to mount the code into application containers and/or builder containers.
- Parameters:
repo_working_tree (RepoWorkingTree, optional) – The repo working tree that will be in the created volume. Defaults to None, which means the repo of the current .raftt file.
The RepoVolume object that contains the repo and can be mounted into workloads.
- Return type:
volumes.RepoVolume
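Example
A sketch mounting the current repo's working tree into a workload; the manifest path, workload name, and mount path are placeholders:
resources = k8s_manifests("./k8s")
code = repo_volume()
resources.deployments["app"].mount(code, "/app")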
- script_volume_initializer(workload: workload.Workload, script: str, key_provider: str = '')[source]#
Create a volumes.VolumeInitializer object that initializes a volume using a custom script. The returned object can be assigned as the volumes.Volume.initializer of a volumes.Volume object.
- Parameters:
workload (workload.Workload) – The workload that Raftt will enforce to be up while the script is running. This is done to guarantee that the seeded app (e.g. a DB) is running during the script execution.
script (str) – The path of the script in the workload.
key_provider (str, optional) – A script that creates a cache of the created seed. Used to cache seeding script outputs and reduce seeding time. Defaults to “”.
- Returns:
A volume initializer object that can be assigned as the volumes.Volume.initializer of a volumes.Volume object.
- Return type:
volumes.VolumeInitializer
- secret_volume(name: str)[source]#
Creates a volume containing a Raftt-managed secret that can be mounted using Workload.mount().
- Parameters:
name (str) – The name of the secret
- Returns:
The SecretVolume object that can be mounted into workloads.
- Return type:
volumes.SecretVolume
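Example
A sketch reusing the "aws-credentials" secret name from the get_secret() example above; the manifest path, workload name, and mount path are placeholders:
resources = k8s_manifests("./k8s")
creds = secret_volume("aws-credentials")
# Mount the secret read-only into the workload's default container
resources.deployments["app"].mount(creds, "/var/run/secrets/aws", read_only=True)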
- set_rescale_workloads(rescale: bool = True)[source]#
Configure the behavior of scalable workloads entering dev-mode. Setting this to False deletes the original workload and re-deploys it when exiting dev-mode. Leaving it as True (the default) scales the original workload down to 0 and rescales it back to 1 when it exits dev-mode.
- Parameters:
rescale (bool) – the chosen rescale behavior. Defaults to True.
- setup_container(resources: workload.Resources)[source]#
Define the workload.Workload to be used for the execution of lifecycle hooks.
This function can't be used in the env definition file, only in the file that is configured as the envLifecycleHooks attribute in raftt.yml. Only relevant for orchestration mode.
- Parameters:
resources (Resources) – The Resources object containing the workload to be used. Must contain a single workload.Workload.
- volume(name: str)[source]#
Creates a volume that can be mounted to one or more workload.Workload objects.
- Parameters:
name (str) – The name of the created volume.
- Returns:
The Volume object that can be mounted into workloads.
- Return type:
volumes.Volume
- local.config_args: str = ''#
A string value sent in the --config-args option that lets you provide additional information to the .raftt file execution. For example, this can be useful when there are different configurations for spawning an env and you want to select in the raftt up command which one is spawned.
Examples
load("encoding/json.star", "json") # This assumes args are a "serialized" json string input_args = json.decode(local.config_args) profiles = input_args['compose_profiles'] deploy(load_docker_compose("./path/to/compose.yml", ".", profiles=profiles))
- local.env: dict[str, str]#
A dictionary containing the local environment variables. This information is sent from the client to the env every time the .raftt file is interpreted.
The returned value is a dict. For example, local.env['PATH'] will return the system's search path for executable files.
- class workload.Resources[source]#
A Resources object contains several dicts, one for each supported resource type.
Example
resources = k8s_manifests("./k8s_manifests")  # Can also use Docker Compose and Helm
print(type(resources))  # Will print "Resources"

# You can modify the imported resources
nginx = resources.pods["nginx"]
nginx.map_port(local=8080, remote=80)
- property configmaps: dict[str, ConfigMap]#
A dict of Kubernetes ConfigMaps in the Resources object.
- Returns:
A dict that maps the ConfigMap name to the K8s resource itself
- Return type:
dict[str, ConfigMap]
- property crds: dict[str, CommonK8sResource]#
A dict of Kubernetes CRDs in the Resources object.
- Returns:
A dict that maps the CRD name to the K8s resource itself
- Return type:
dict[str, CommonK8sResource]
- property deployments: dict[str, Deployment]#
A dict of Kubernetes Deployments in the Resources object.
- Returns:
A dict that maps the Deployment name to the K8s resource itself
- Return type:
dict[str, Deployment]
- property ingresses: dict[str, Ingress]#
A dict of Kubernetes Ingresses in the Resources object.
- Returns:
A dict that maps the Ingress name to the K8s resource itself
- Return type:
dict[str, Ingress]
- property namedvolumes: dict[str, Volume]#
A dict of Kubernetes NamedVolumes in the Resources object.
- Returns:
A dict that maps the NamedVolume name to the K8s resource itself
- Return type:
dict[str, Volume]
- property pods: dict[str, Pod]#
A dict of Kubernetes Pods in the Resources object.
- Returns:
A dict that maps the Pod name to the K8s resource itself
- Return type:
dict[str, Pod]
- property rolebindings: dict[str, RoleBinding]#
A dict of Kubernetes RoleBindings in the Resources object.
- Returns:
A dict that maps the RoleBinding name to the K8s resource itself
- Return type:
dict[str, RoleBinding]
- property roles: dict[str, Role]#
A dict of Kubernetes Roles in the Resources object.
- Returns:
A dict that maps the Role name to the K8s resource itself
- Return type:
dict[str, Role]
- property secrets: dict[str, Secret]#
A dict of Kubernetes Secrets in the Resources object.
- Returns:
A dict that maps the Secret name to the K8s resource itself
- Return type:
dict[str, Secret]
- property serviceaccounts: dict[str, ServiceAccount]#
A dict of Kubernetes ServiceAccounts in the Resources object.
- Returns:
A dict that maps the ServiceAccount name to the K8s resource itself
- Return type:
dict[str, ServiceAccount]
- property services: dict[str, Service]#
A dict of Kubernetes Services in the Resources object.
- Returns:
A dict that maps the Service name to the K8s resource itself
- Return type:
dict[str, Service]
- property statefulsets: dict[str, StatefulSet]#
A dict of Kubernetes StatefulSets in the Resources object.
- Returns:
A dict that maps the StatefulSet name to the K8s resource itself
- Return type:
dict[str, StatefulSet]
- class workload.Workload[source]#
Abstract type which represents a workload: a Kubernetes Pod, Deployment, or StatefulSet.
- add_dependency(dependency: workload.Workload, condition: str = '')[source]#
Makes this workload depend on another workload.
- Parameters:
dependency (workload.Workload) – The workload this workload depends on.
condition (str, optional) – The condition to wait on. Defaults to 'service_started'. Possible values: service_started, service_healthy, service_completed_successfully.
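Example
A sketch, assuming workloads named "app" and "db" loaded from a placeholder manifest path:
resources = k8s_manifests("./k8s")
app = resources.deployments["app"]
db = resources.statefulsets["db"]
# Wait for the database to become healthy before starting the app
app.add_dependency(db, condition="service_healthy")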
- add_env_vars(env: dict[str, str], container: str = '')[source]#
Adds environment variables to a workload, overriding existing variables with the same name.
- Parameters:
env (dict[str, str]) – The env vars to add to the workload
container (str, optional) – The container to which the env vars will be added. Defaults to “”, which means the container is the workload’s default container.
- add_raftt_cli(container: str = '')[source]#
Mounts the raftt CLI executable to the given workload. This allows running different Raftt commands in the workload.
- Parameters:
container (str, optional) – specify a container to mount the executable to. Defaults to “”, which means the container is the workload’s default container.
- dev_with(resources: Iterable[workload.Resources | workload.Pod | workload.Deployment | workload.StatefulSet | k8s.Ingress | k8s.Service | k8s.Secret | k8s.ServiceAccount | k8s.Role | k8s.RoleBinding | k8s.ConfigMap | common.CommonK8sResource])[source]#
Specify additional resources to be devified when this workload is devified.
- Parameters:
resources (Iterable[Resources | Pod | Deployment | StatefulSet | Ingress | Service | Secret | ServiceAccount | Role | RoleBinding | ConfigMap | CommonK8sResource]) – The additional resources to devify.
- get_container(name: str = '')[source]#
Get a specific container from the workload by name
- Parameters:
name (str, optional) – name of the container to get. Defaults to “”, which means the container is the workload’s default container.
- Return type:
Container
- map_port(local: int, remote: int)[source]#
Maps a port on the local machine to forward traffic to the remote workload at the specified remote port.
- Parameters:
local (int) – port on the local machine
remote (int) – port on the remote workload
- mount(mounted_object: volumes.Volume, dst: str, read_only: bool = False, init_on_rebuild: bool = False, no_copy: bool = False, container: str = '')[source]#
Mounts a volume to a workload.
- Parameters:
mounted_object (Volume) – The volume to mount
dst (str) – The path in the workload to which the volume will be mounted
read_only (bool, optional) – Flag to set the volume as read-only. Defaults to False.
init_on_rebuild (bool, optional) – Whether the volume is to be re-initialized with the content from the original image when the workload is being rebuilt. Defaults to False.
no_copy (bool, optional) – Whether to disable copying of data from a container when a volume is created. Defaults to False.
container (str, optional) – The container in the workload to which the volume is mounted. Defaults to “”, which means the container is the workload’s default container.
- property name: str#
Return the name of the workload.Workload object.
- replace_volume(name: str, source: volumes.Volume)[source]#
Use this function if your workload already has volumes defined and you’d like to replace them so their data can be managed by Raftt. See here for more information.
- Parameters:
name (str) – Name of the volume to replace
source (Volume) – new named volume instance
- restart_on_termination(no_restart: bool = False, container: str = '')[source]#
Indicate whether the process should be restarted after exiting.
- Parameters:
no_restart (bool, optional) – If True disable restarting the process after exiting. Defaults to False.
container (str, optional) – Name of the container whose process restart behavior is controlled. Defaults to the main container.
- set_default_container(container_name: str)[source]#
Sets the default container for this workload. This container will be used as the default when using CLI commands and .raftt functions without specifying a container. If no container is marked using this function or by <ENTER_DETAILS>, the first container of the workload.Workload object is selected as the default.
container_name (str) – The name of the container to mark as the default container.
- set_env_vars(env: dict[str, str], container: str = '')[source]#
Overwrites the environment variables of the workload.
- Parameters:
env (dict[str, str]) – The new env vars dict
container (str, optional) – The container in which the env vars will be overridden. Defaults to “”, which means the container is the workload’s default container.
- sync(src: repoworkingtree.RepoWorkingTree | str, dst: str, ignore: Iterable[str] | None = None, container: str = '')[source]#
Sync a path to a container
- Parameters:
src (str | RepoWorkingTree) – Source to sync; if a string, assumed relative to the main repository.
dst (str) – Destination on the container; if not absolute, assumed relative to the workdir.
ignore (Iterable[str], optional) – Glob patterns to ignore when syncing. Defaults to [].
container (str, optional) – Name of the container to sync to, if empty the default container is selected. Defaults to “”.
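Example
A sketch, assuming a workload named "app" loaded from a placeholder manifest path and a local ./src directory:
resources = k8s_manifests("./k8s")
app = resources.deployments["app"]
# Sync local sources into the container, skipping Python bytecode caches
app.sync("./src", "/app/src", ignore=["**/__pycache__"])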
- class volumes.PodVolume[source]#
Type which represents a volume that can be mounted to a single workload.Workload object using Workload.mount().
- class volumes.RepoVolume[source]#
A handle to the volume that contains a working tree of the repo. Can be mounted using Workload.mount().
- class volumes.SecretVolume[source]#
A volume containing a Raftt-managed secret that can be mounted using Workload.mount().
- class volumes.Volume[source]#
Type which represents a volume that can be mounted into one or more workload.Workload objects using Workload.mount().
- property initializer: VolumeInitializer | None#
The volume initializer for this volume
- Getter:
Returns the initializer
- Setter:
Sets the initializer
- property name: str#
Return the name of the volumes.Volume object.
- class volumes.VolumeInitializer[source]#
Abstract type which represents a policy for initializing a volume.
The actions module
- class actions.CMD[source]#
Represents an actions.Action type that receives a command to execute, as a string or an iterable of strings, and a workload to execute the command on.
- __init__(workload: workload.Workload, cmd: str | collections.abc.Iterable[str], container: str = '')[source]#
Create a new CMD object.
- Parameters:
workload (workload.Workload) – Workload instance to run the action on.
cmd (str | Iterable[str]) – String or iterable of strings to be executed as a command on the specified workload.
container (str, optional) – name of the container to run the action on. Defaults to “”, which means the container is the workload’s default container.
- class events.OnContainerStart[source]#
Represents an Event which is triggered when a container starts.
- __init__(workload: workload.Workload, container: str = '', block_main_process: bool = True, start_if_failed: bool = False)[source]#
Create a new OnContainerStart event. This event is used to trigger actions using Raftt’s hooks mechanism.
- Parameters:
workload (workload.Workload) – Workload instance for which the hook is triggered.
container (str) – Name of the container for which the hook is triggered. Defaults to "", which means the container is the workload's default container.
block_main_process (bool) – Whether or not to wait for the hook to finish before starting the main process. Defaults to True.
start_if_failed (bool) – Whether or not to start the main process if the hook actions fail. Only relevant if block_main_process is True. Defaults to False.
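Example
A sketch that runs a command when the container starts, without blocking the main process; the manifest path, workload name, and command are placeholders:
resources = k8s_manifests("./k8s")
app = resources.deployments["app"]
register_hook(
    on=events.OnContainerStart(app, block_main_process=False),
    do=actions.CMD(app, "touch /tmp/container-started"),
)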
- class events.OnFileChanged[source]#
Represents an events.Event which triggers on a file being changed on a workload.
- __init__(workload: workload.Workload, patterns: str | collections.abc.Iterable[str], container: str = '')[source]#
Create a new events.OnFileChanged event. This event is used to trigger actions using Raftt's file watching mechanism. See here for more information.
- Parameters:
workload (workload.Workload) – Workload instance to watch for file changes on.
patterns (str | Iterable[str]) – String or iterable of strings of glob patterns describing the files being watched.
container (str) – Name of the container for which the hook is triggered. Defaults to “”, which means the container is the workload’s default container.
fs provides logic for working with the filesystem of the dev env.
Import as:
load('filesystem.star', 'fs')
- class fs.File[source]#
File represents a file in the repo (main or secondary). It can be used to read files and iterate over directories.
- __init__(path: str)[source]#
Create a new File object at path.
- Parameters:
path (str) – Path to set the file object to.
- property exists: bool#
Returns whether a File path exists on the dev env filesystem.
- Returns:
True if the file exists, False otherwise.
- Return type:
bool
- property is_dir: bool#
Returns whether a File is a directory.
- Returns:
True if a directory, False otherwise.
- Return type:
bool
- list_dir()[source]#
Returns a list of File objects under the current directory; raises an error if not a directory.
- Returns:
A list of File objects representing the files under the provided directory.
- Return type:
list[File]
Examples
load('filesystem.star', 'fs')

def iter_dir(path):
    for f in fs.File(path).list_dir():
        print(f)

def files_in_dir(path):
    return [f for f in fs.File(path).list_dir() if not f.is_dir]

iter_dir("./node_modules")

# Accessing a dir in another repository
iter_dir(clone_repo_branch("git@rafttio@github.com/another", "trunk") + "./node_modules")
- property name: str#
Return the filename - the last element of the file path.
- Returns:
The filename - the last element of the file path.
- Return type:
str
- read_text()[source]#
Reads the contents of the File as a UTF-8 encoded string.
- Returns:
The contents of the File as a UTF-8 encoded string.
- Return type:
str
Examples
load('filesystem.star', 'fs')

# Read file as text
print(fs.File("requirements.txt").read_text())
# OUT:
# > fire==0.4.0
#   Mako==1.2.3
#   boto3==1.24.96
#   botocore==1.27.96
#   semver==2.13.0
#   ...
String encoding#
json provides functions for working with JSON data.
Import as:
load("encoding/json.star", "json")
- json.decode(src: str)[source]#
Return the Object representation of a string instance containing a JSON document. Decoding fails if src is not a valid JSON string.
- Parameters:
src (str) – source string, must be valid JSON string
- Returns:
representation of the data structure
- Return type:
object
Examples
decode a JSON string into a Starlark structure
load("encoding/json.star", "json") x = json.decode('{"foo": ["bar", "baz"]}')
- json.encode(obj: Any)[source]#
Return a JSON string representation of a data structure
- Parameters:
obj (Any) – Any valid data structure
- Returns:
JSON representation of the data structure
- Return type:
str
Examples
encode a simple object as a JSON string
load("encoding/json.star", "json") x = json.encode({"foo": ["bar", "baz"]}) print(x) # Output: {"foo":["bar","baz"]}
- json.indent(src: str, prefix: str = '', indent='\\t')[source]#
The indent function pretty-prints a valid JSON encoding, and returns a string containing the indented form.
- Parameters:
src (str) – source JSON string to encode
prefix (str, optional) – String prefix that will be prepended to each line. Defaults to "".
indent (str, optional) – String that will be used to represent indentations. Defaults to "\t".
- Returns:
a string containing the indented form
- Return type:
str
Examples
“pretty print” a valid JSON encoding
load("encoding/json.star", "json") x = json.indent('{"foo": ["bar", "baz"]}') # print(x) # { # "foo": [ # "bar", # "baz" # ] # }
“pretty print” a valid JSON encoding, including optional prefix and indent parameters
load("encoding/json.star", "json") x = json.indent('{"foo": ["bar", "baz"]}', prefix='....', indent="____") # print(x) # { # ....____"foo": [ # ....________"bar", # ....________"baz" # ....____] # ....}
yaml provides functions for working with YAML data
Import as:
load("encoding/yaml.star", "yaml")
- yaml.dumps(obj: Any)[source]#
Serialize obj to a YAML string.
- Parameters:
obj (Any) – input object
- Returns:
Representation of the object
- Return type:
str
Examples
encode to yaml
load("encoding/yaml.star", "yaml") data = {"foo": "bar", "baz": True} res = yaml.dumps(data)
- yaml.loads(src: str)[source]#
Return the Object representation of a string instance containing a YAML document. Decoding fails if src is not a valid YAML string.
- Parameters:
src (str) – Source string, must be valid YAML string
- Returns:
Representation of the object
- Return type:
object
Examples
load a YAML string
load("encoding/yaml.star", "yaml") data = '''foo: bar baz: true ''' d = yaml.loads(data) print(d) # Output: {"foo": "bar", "baz": True}