The infrastructure is built on top of Kubernetes® using Tekton® TaskRuns, ConfigMaps, Secrets, and Persistent Volumes. See the Task architecture for more information.
Tekton TaskRuns are a Custom Resource Definition (CRD) that wraps Kubernetes Pods and allows us to define Task-specific metadata.
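For context, the following is a minimal TaskRun sketch showing how a Task definition wraps a Pod. The resource name, labels, and image are placeholders for illustration, not the exact spec that Flow generates.

```yaml
# Minimal Tekton TaskRun sketch (placeholder names and image; not the exact spec Flow generates).
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: example-flow-taskrun
spec:
  taskSpec:
    steps:
      - name: execute
        image: docker.io/boomerangio/example-worker:latest   # placeholder image
        script: |
          #!/bin/sh
          echo "running task"
```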
A certain amount of disk, memory, and CPU is required to process TaskRuns. We recommend running these on dedicated nodes and setting them to automatically delete on completion. This ensures you have enough resources to continually execute new Tasks.
The Workflow Tasks run as jobs on any node, unless dedicated nodes are implemented using the following taint and label (a sketch of how these apply to nodes and worker pods follows below):
dedicated=bmrg-worker:NoSchedule
node-role.kubernetes.io/bmrg-worker=true
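For illustration, the taint and label above are applied to the dedicated nodes, and the worker pods carry a matching toleration and node selector. The sketch below uses a placeholder node name and is not the exact manifest used by Flow.

```yaml
# Dedicated worker node carrying the taint and label (sketch; placeholder node name).
apiVersion: v1
kind: Node
metadata:
  name: worker-node-1
  labels:
    node-role.kubernetes.io/bmrg-worker: "true"
spec:
  taints:
    - key: dedicated
      value: bmrg-worker
      effect: NoSchedule
---
# Matching fragment of a worker pod spec so it can schedule onto those nodes.
spec:
  tolerations:
    - key: dedicated
      operator: Equal
      value: bmrg-worker
      effect: NoSchedule
  nodeSelector:
    node-role.kubernetes.io/bmrg-worker: "true"
```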
As with all containers, ephemeral storage is used, and we have limited this to 8GB by default. This limit affects the number of Tasks that can run in parallel, based on the amount of primary disk used, so it is important to plan node disk capacity accordingly.
Flow Tasks have a setting to delete on completion. If this is not enabled, completed workers remain and consume the available ephemeral storage.
See Kubernetes ephemeral storage reference information.
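The 8GB cap described above corresponds to a container-level ephemeral-storage limit. The fragment below is a sketch of how such a limit is expressed on a container; the request value shown is an assumption for illustration, not necessarily what Flow applies.

```yaml
# Container resources fragment limiting ephemeral (node-local) storage (sketch).
resources:
  requests:
    ephemeral-storage: "1Gi"   # assumed request value, for illustration only
  limits:
    ephemeral-storage: "8Gi"   # the 8GB default cap described above
```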
There are different types of persistent volumes used by the task orchestration system, and they are enabled per Workflow in the Workflow Editor > Configuration.
You can configure the storage size, storage class, and access modes for the following types in Settings under Administer; if these are left unset, the defaults apply.
We recommend using Rancher's Local Path Provisioner on the nodes executing Tasks, as this allows for dynamic provisioning of local disk which, if backed by SSD, provides low-latency, high-speed writes.
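As an example of this storage configuration, a PersistentVolumeClaim using the Local Path Provisioner's local-path storage class might look like the following sketch. The claim name and size are placeholders, not Flow's defaults.

```yaml
# PVC sketch using Rancher's Local Path Provisioner (placeholder name and size).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bmrg-flow-workspace-example
spec:
  storageClassName: local-path   # storage class created by the Local Path Provisioner
  accessModes:
    - ReadWriteOnce              # local volumes support single-node access
  resources:
    requests:
      storage: 5Gi
```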
Workspaces represent the storage used by Boomerang Flow (and Tekton) Workflows. There are currently two workspaces that can be enabled in a Workflow. See the Workspaces section of the Workflow Editor How to Guide for more information.
All Tasks run with a data drive (/data) specific to that Task, based on a Kubernetes emptyDir volume. Use this for inner Task workings.
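To illustrate, the per-Task data drive is an emptyDir volume mounted into the Task's container at /data. The container name and image below are placeholders.

```yaml
# Pod spec fragment mounting a per-Task emptyDir volume at /data (sketch).
spec:
  volumes:
    - name: data
      emptyDir: {}
  containers:
    - name: task
      image: docker.io/boomerangio/example-worker:latest   # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
```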
If dedicated nodes are enabled, soft pod anti-affinity is also enabled so that the scheduler attempts to balance workers across nodes as evenly as possible.
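For illustration, soft (preferred) pod anti-affinity of this kind is typically expressed as in the fragment below; the worker label selector shown is an assumption, not the exact one Flow uses.

```yaml
# Soft pod anti-affinity fragment spreading worker pods across nodes (sketch).
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname   # spread across nodes
          labelSelector:
            matchLabels:
              app: bmrg-flow                    # assumed worker label
```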
If you are implementing a Kubernetes cluster that uses ClusterImagePolicy or ImagePolicy, you may need to add docker.io/boomerangio/*:* to your policies to be able to retrieve the images.
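The sketch below assumes the IBM Container Image Security Enforcement ClusterImagePolicy CRD; the apiVersion, policy fields, and resource name depend on the admission controller installed in your cluster and should be adjusted accordingly.

```yaml
# ClusterImagePolicy sketch allowing the Boomerang images to be pulled.
# Assumes the IBM Container Image Security Enforcement CRD; adjust apiVersion
# and policy settings to match your cluster's admission controller.
apiVersion: securityenforcement.admission.cloud.ibm.com/v1beta1
kind: ClusterImagePolicy
metadata:
  name: boomerangio-images
spec:
  repositories:
    - name: "docker.io/boomerangio/*:*"
      policy:
        trust:
          enabled: false
        va:
          enabled: false
```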
Supported for use with IPv4 networks only.