Enhancing a Promise
Pre-requisites
You need an installation of Kratix for this section; see the installation instructions if you haven't set one up yet.
The simplest way to do so is by running the quick-start script from within the Kratix directory. The script will create two KinD clusters, then install and configure Kratix.
./scripts/quick-start.sh --recreate
You can run Kratix either with a multi-cluster or a single-cluster setup. The commands in the remainder of this document assume that two environment variables are set:
- PLATFORM, representing the platform cluster Kubernetes context
- WORKER, representing the worker cluster Kubernetes context
If you ran the quick-start script above, do:
export PLATFORM="kind-platform"
export WORKER="kind-worker"
For single cluster setups, the two variables should be set to the same value. You can find your cluster context by running:
kubectl config get-contexts
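For single-cluster work you simply reuse one context for both variables. A minimal sketch, assuming your only context is the kind-platform one created by the quick-start script:
export PLATFORM="kind-platform"
export WORKER="kind-platform"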
Refer back to Installing Kratix for more details.
In this tutorial, you will
- experience the power of leveraging customised Kratix Promises
- gain confidence with the components of a Promise
- enhance a sample Postgres Promise
Using Kratix to support your organisation
As you've seen, Kratix can support Promises for services like Jenkins, Nginx, and Postgres.
When you think about providing services for things like automation, deployment or data, how often are you able to choose a service (like Postgres) and offer it to your users straight off the shelf?
Probably not very often.
Application teams need to be able to easily run their services in different environments. They'll want specific sizes, particular backup strategies, defined versions, and more. Key stakeholders in other parts of the business also need to easily understand the state of service usage as it applies to them (hello audit, billing, and security!).
Your team works with all of these users to understand the if, when, and how of each of these requests and creates a platform from a prioritised backlog of platform features.
This platform needs to be extensible and flexible—your users will have new and changing needs, and you'll want to quickly respond to valuable feature requests.
Kratix and Promises make it easier to create a platform paved with golden paths that deliver value easily and quickly.
Now you will create and enhance a Promise as a response to user and business needs.
From off-the-shelf to ready for the paved path
The scenario
In this exercise, you and the platform team are starting development of the next platform feature.
You discussed needs with application teams and you've decided to offer a new service. You'll be adding Postgres to your platform.
The billing team is a key stakeholder for the platform, and they will need a cost centre for each new Postgres Resource to charge back to the right team.
For the purposes of this exercise, you know that all of the underlying functionality to get the billing team what it needs is already in place.
In this guide, you only need to create a new Postgres Promise that creates Postgres Resources with a costCentre label.
The steps:
- Get a base Promise
- Change the Promise so that the user who wants a Resource knows they need to include their costCentre name when they make their request to the platform
- Change the Promise so that the operator Dependency that creates the Resource knows to apply your new costCentre label
- Change the Promise so that the Workflow knows how to add the user's costCentre to the request for a Resource
- Install the modified Promise on your platform
- Check it works: make a request to your platform for a Postgres Resource
Step one: Get a base Promise
There's a PostgreSQL Promise available on the Marketplace. You'll use that as your base. Start by cloning the repository:
git clone https://github.com/syntasso/promise-postgresql.git
Take a look
cd promise-postgresql
ls
You should see a structure similar to the one below:
. 📂 promise-postgresql
├── promise.yaml
├── resource-request.yaml
├── ...
└── 📂 internal
├── 📂 configure-pipeline
│ ├── 📂 resources
│ │ └── minimal-postgres-manifest.yaml
│ ├── Dockerfile
│ ├── execute-pipeline.sh
├── 📂 dependencies
│ ├── operator.yaml
│ └── ...
├── 📂 scripts
│ ├── test
│ └── ...
└── README.md
You should see the promise.yaml file. This is the Promise definition that you'll modify and install on your platform. Ignore everything else in the folder for now.
Step two: api
Change the Promise so that the user who wants a Postgres knows they need to include their costCentre name when they make their request to the platform
About api
api is the API exposed to the users of the Promise. To see api in the Promise definition file, open promise.yaml and look under the spec section.
api is the contract with the user who wants a Resource. It's where you get to define the required and optional configuration options exposed to your users.
You can already see a number of properties in this section of the promise.yaml file. These properties are defined within a versioned schema and can have different types and validations.
Update the api
To add the required cost centre configuration, add the following to the promise.yaml:
costCentre:
pattern: "^[a-zA-Z0-9_.-]*$"
type: string
From the top of the file, navigate to spec > api > spec > versions[0] > schema > openAPIV3Schema > properties > spec > properties.
Here, add your costCentre YAML from above as a sibling to the existing dbName property.
👀 Click here to view a final version of the extended api, which should be indented so as to nest under the spec header
api:
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: postgresqls.marketplace.kratix.io
  spec:
    group: marketplace.kratix.io
    names:
      kind: postgresql
      plural: postgresqls
      singular: postgresql
    scope: Namespaced
    versions:
      - name: v1alpha1
        schema:
          openAPIV3Schema:
            properties:
              spec:
                properties:
                  costCentre:
                    pattern: "^[a-zA-Z0-9_.-]*$"
                    type: string
                  dbName:
                    default: postgres
                    description: |
                      Database name. A database will be created with this name. The owner of the database will be the teamId.
                    type: string
                  env:
                    default: dev
                    description: |
                      Configures and deploys this PostgreSQL with environment specific configuration. Prod PostgreSQL are configured with backups and more resources.
                    pattern: ^(dev|prod)$
                    type: string
                  teamId:
                    default: acid
                    description: |
                      Team ID. A superuser role will be created with this name.
                    type: string
                type: object
            type: object
        served: true
        storage: true
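If you want to double-check your edit before moving on, you can list the property names under the schema with yq. A minimal sketch, assuming yq v4 is on your path:
# list the user-facing properties defined in the Promise API
yq eval '.spec.api.spec.versions[0].schema.openAPIV3Schema.properties.spec.properties | keys' promise.yaml
The output should include costCentre alongside dbName, env and teamId.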
Step three: dependencies
Change the Promise so that the worker Destination that can host the Postgres knows to apply your new costCentre label
About dependencies
dependencies is the description of all of the Kubernetes resources required to create a promised Resource, such as CRDs, Operators and Deployments.
In the Promise definition, you divide resources based on the idea of prerequisite and per-Resource items. Prerequisite resources are resources that we create before any application team requests a Resource. This can be helpful in two scenarios:
- Any CRDs or Dependencies are ready when a Resource is requested, which speeds up response time for application teams.
- Resources that can be shared across Resources are only deployed once. This can reduce load on the cluster, and it can also allow delivering a Resource as a portion of an existing Resource (e.g. you could provide a whole database instance on each request, or you could provide a database within an existing instance on each request).
The dependencies section of the Kratix Promise defines the prerequisite capabilities.
These capabilities are:
- created once per Destination.
- complete Kubernetes YAML documents stored in the dependencies section of the Promise.
For the Postgres Promise you're defining, the only Dependency workloads you need are conveniently packaged in a Kubernetes Operator that is maintained by Zalando. The Operator turns the complexities of configuring Postgres into a manageable configuration format.
Update dependencies
To make sure all Postgres Resources include costCentre, you need to make the Operator aware of the label.
To ensure Zalando's Postgres Operator is aware of the label, you need to add configuration when installing the Operator. The configuration the Operator needs will be under a new key: inherited_labels.
inherited_labels is unique to how Zalando's Postgres Operator works. If you were using a different Operator (or writing your own!), a different change may be required (or no change at all).
Following the Zalando docs, you need to add inherited_labels in a particular spot.
From the top of the file, navigate to spec > dependencies[7] > configuration > kubernetes.
To verify you're in the right place, the object should be kind: OperatorConfiguration with name: postgres-operator.
Under the kubernetes key, add inherited_labels: [costCentre].
👀 Click here to see the complete OperatorConfiguration resource after this change
# Note, the property was added to the top of configuration.kubernetes
- apiVersion: acid.zalan.do/v1
  configuration:
    aws_or_gcp:
      aws_region: eu-central-1
      enable_ebs_gp3_migration: false
    connection_pooler:
      connection_pooler_default_cpu_limit: "1"
      connection_pooler_default_cpu_request: 500m
      connection_pooler_default_memory_limit: 100Mi
      connection_pooler_default_memory_request: 100Mi
      connection_pooler_image: registry.opensource.zalan.do/acid/pgbouncer:master-22
      connection_pooler_max_db_connections: 60
      connection_pooler_mode: transaction
      connection_pooler_number_of_instances: 2
      connection_pooler_schema: pooler
      connection_pooler_user: pooler
    crd_categories:
      - all
    debug:
      debug_logging: true
      enable_database_access: true
    docker_image: registry.opensource.zalan.do/acid/spilo-14:2.1-p6
    enable_crd_registration: true
    enable_lazy_spilo_upgrade: false
    enable_pgversion_env_var: true
    enable_shm_volume: true
    enable_spilo_wal_path_compat: false
    etcd_host: ""
    kubernetes:
      inherited_labels: [costCentre]
      cluster_domain: cluster.local
      cluster_labels:
        application: spilo
      cluster_name_label: cluster-name
      enable_cross_namespace_secret: false
      enable_init_containers: true
      enable_pod_antiaffinity: false
      enable_pod_disruption_budget: true
      enable_sidecars: true
      oauth_token_secret_name: postgres-operator
      pdb_name_format: postgres-{cluster}-pdb
      pod_antiaffinity_topology_key: kubernetes.io/hostname
      pod_management_policy: ordered_ready
      pod_role_label: spilo-role
      pod_service_account_name: postgres-pod
      pod_terminate_grace_period: 5m
      secret_name_template: "{username}.{cluster}.credentials.{tprkind}.{tprgroup}"
      spilo_allow_privilege_escalation: true
      spilo_privileged: false
      storage_resize_mode: pvc
      watched_namespace: "*"
    load_balancer:
      db_hosted_zone: db.example.com
      enable_master_load_balancer: false
      enable_master_pooler_load_balancer: false
      enable_replica_load_balancer: false
      enable_replica_pooler_load_balancer: false
      external_traffic_policy: Cluster
      master_dns_name_format: "{cluster}.{team}.{hostedzone}"
      replica_dns_name_format: "{cluster}-repl.{team}.{hostedzone}"
    logging_rest_api:
      api_port: 8080
      cluster_history_entries: 1000
      ring_log_lines: 100
    logical_backup:
      logical_backup_docker_image: registry.opensource.zalan.do/acid/logical-backup:v1.8.0
      logical_backup_job_prefix: logical-backup-
      logical_backup_provider: s3
      logical_backup_s3_access_key_id: ""
      logical_backup_s3_bucket: my-bucket-url
      logical_backup_s3_endpoint: ""
      logical_backup_s3_region: ""
      logical_backup_s3_retention_time: ""
      logical_backup_s3_secret_access_key: ""
      logical_backup_s3_sse: AES256
      logical_backup_schedule: 30 00 * * *
    major_version_upgrade:
      major_version_upgrade_mode: "off"
      minimal_major_version: "9.6"
      target_major_version: "14"
    max_instances: -1
    min_instances: -1
    postgres_pod_resources:
      default_cpu_limit: "1"
      default_cpu_request: 100m
      default_memory_limit: 500Mi
      default_memory_request: 100Mi
      min_cpu_limit: 250m
      min_memory_limit: 250Mi
    repair_period: 5m
    resync_period: 30m
    teams_api:
      enable_admin_role_for_users: true
      enable_postgres_team_crd: false
      enable_postgres_team_crd_superusers: false
      enable_team_member_deprecation: false
      enable_team_superuser: false
      enable_teams_api: false
      pam_role_name: zalandos
      postgres_superuser_teams:
        - postgres_superusers
      protected_role_names:
        - admin
        - cron_admin
      role_deletion_suffix: _deleted
      team_admin_role: admin
      team_api_role_configuration:
        log_statement: all
    timeouts:
      patroni_api_check_interval: 1s
      patroni_api_check_timeout: 5s
      pod_deletion_wait_timeout: 10m
      pod_label_wait_timeout: 10m
      ready_wait_interval: 3s
      ready_wait_timeout: 30s
      resource_check_interval: 3s
      resource_check_timeout: 10m
    users:
      enable_password_rotation: false
      password_rotation_interval: 90
      password_rotation_user_retention: 180
      replication_username: standby
      super_username: postgres
    workers: 8
  kind: OperatorConfiguration
  metadata:
    labels:
      app.kubernetes.io/instance: postgres-operator
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: postgres-operator
      helm.sh/chart: postgres-operator-1.8.2
    name: postgres-operator
    namespace: default
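To confirm the label landed in the right object, you can query the Promise with yq. A minimal sketch, assuming the OperatorConfiguration is still at index 7 of the dependencies list:
# print the labels the Operator will propagate to Postgres pods
yq eval '.spec.dependencies[7].configuration.kubernetes.inherited_labels' promise.yaml
This should print a list containing only costCentre.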
Step four: workflows
Change the Promise resource.configure Workflow so that the image knows how to add the user's costCentre to the request for the Resource.
About workflows
workflows.resource.configure contains a Kratix Pipeline that will take your user's request, apply rules from your organisation (including adding the costCentre name), and output valid Kubernetes documents for the Operator to run on a Destination cluster.
Conceptually, a configure Pipeline is a sequential set of steps that transforms an input value to generate an output value. There are three parts to the PostgreSQL Pipeline.
- resources/minimal-postgres-manifest.yaml
- execute-pipeline.sh
- Dockerfile
You can see these files in the internal/configure-pipeline directory. To connect the new user input label, we will need to make sure the image both reads it in from the API input and applies it to the right place in the customised resource outputs.
This requires you to change two of the files:
- Template: this template needs to hold a reference to the costCentre label
- Pipeline script: this injects the user's actual costCentre value into the resource template to generate the output
Update the minimal-postgres-manifest.yaml to add in the property
The minimal-postgres-manifest.yaml is the basic template for a Postgres. This is a valid Kubernetes document that the Postgres Operator can understand.
You know every Postgres Resource needs the costCentre. Change the metadata in minimal-postgres-manifest.yaml to include the costCentre label. This sets up a holding spot for the costCentre value the user sends in the request.
labels:
  costCentre: TBD
👀 Click here for the complete metadata section
metadata:
  name: acid-minimal-cluster
  labels:
    costCentre: TBD
Update the execute-pipeline.sh to add in the user's value
The execute-pipeline.sh (in promise-postgresql/internal/configure-pipeline) runs when Kubernetes schedules the Pipeline. This script is where the transformation logic lives.
You can see that the script is already parsing the request to identify key user variables (name, namespace, teamId, etc). The script then uses yq to add those user-provided values to the output document. You can do the same to process the user's costCentre.
In the execute-pipeline.sh:
- Export another environment variable to store the value:
  export COST_CENTRE=$(yq eval '.spec.costCentre' /kratix/input/object.yaml)
- Add a new line for yq to process the label replacement:
  .metadata.labels.costCentre = env(COST_CENTRE) |
👀 Click here to view an example of the final script
#!/usr/bin/env sh
set -x
base_instance="/tmp/transfer/minimal-postgres-manifest.yaml"
# Read current values from the provided request
name="$(yq eval '.metadata.name' /kratix/input/object.yaml)"
env_type="$(yq eval '.spec.env // "dev"' /kratix/input/object.yaml)"
team="$(yq eval '.spec.teamId // "acid"' /kratix/input/object.yaml)"
dbname="$(yq eval '.spec.dbName // "postgres"' /kratix/input/object.yaml)"
instance_name="${team}-${name}-postgresql"
backup="false"
size="1Gi"
instances="1"
if [ "$env_type" = "prod" ]; then
  backup="true"
  size="10Gi"
  instances="3"
fi
export COST_CENTRE=$(yq eval '.spec.costCentre' /kratix/input/object.yaml)
# Replace defaults with user provided values
cat ${base_instance} |
  yq eval "
    .metadata.labels.costCentre = env(COST_CENTRE) |
    .metadata.namespace = \"default\" |
    .metadata.name = \"${instance_name}\" |
    .spec.enableLogicalBackup = ${backup} |
    .spec.teamId = \"${team}\" |
    .spec.volume.size = \"${size}\" |
    .spec.numberOfInstances = ${instances} |
    .spec.users = {\"${team}\": [\"superuser\", \"createdb\"]} |
    .spec.databases = {\"$dbname\": \"$team\"} |
    del(.spec.preparedDatabases)
  " - > /kratix/output/postgres-instance.yaml
Test the pipeline locally
You can easily validate your pipeline locally by building and running the Docker image with the correct volume mounts.
Check that you are in the promise-postgresql directory, and run the block below to:
- create two directories inside internal/configure-pipeline: input and output
- create the expected input file (i.e., the request from your user)
cd internal/configure-pipeline
mkdir -p {input,output}
cat > input/object.yaml <<EOF
---
apiVersion: marketplace.kratix.io/v1alpha1
kind: postgresql
metadata:
  name: example
  namespace: default
spec:
  costCentre: "rnd-10002"
  env: dev
  teamId: acid
  dbName: bestdb
EOF
Now test the pipeline by doing a Docker build and run. Check that, per the step above, you are still in the internal/configure-pipeline directory.
docker build . --tag kratix-workshop/postgres-configure-pipeline:dev
docker run -v ${PWD}/input:/kratix/input -v ${PWD}/output:/kratix/output kratix-workshop/postgres-configure-pipeline:dev
Now you can validate the output/postgres-instance.yaml file.
It should be the base manifest with all the custom values inserted, and look like the example below. If your output is different, go back and check the steps above and the files in the directory. Repeat this process until your output matches the output below.
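A quick spot-check before comparing the whole file is to confirm the label made it into the metadata. A minimal sketch, again assuming yq v4:
# the label should carry the value from your test input
yq eval '.metadata.labels.costCentre' output/postgres-instance.yaml
This should print rnd-10002.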
👀 Click here to view an example of expected output YAML
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
name: acid-example-postgresql
labels:
costCentre: "rnd-10002"
spec:
teamId: "acid"
volume:
size: 1Gi
numberOfInstances: 1
users:
acid:
- superuser
- createdb
databases:
bestdb: acid
postgresql:
version: "15"
enableLogicalBackup: false
Give the platform access to your image
Once you have made and validated all the image changes, you will need to make the newly created kratix-workshop/postgres-configure-pipeline:dev image accessible.
If you created your clusters with KinD, you can load the image into the local cache by running the command below. This avoids any remote DockerHub calls.
kind load docker-image kratix-workshop/postgres-configure-pipeline:dev --name platform
Click here if your clusters were not created with KinD
- Push the image to an image repository (like Dockerhub), or
- Use the appropriate command to load the image (for example, minikube cache add if you are using minikube)
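If you used the KinD command above, you can verify the image reached the node's local cache by listing images on the control-plane node. A minimal sketch, assuming the default node name platform-control-plane:
# crictl ships inside KinD node images
docker exec platform-control-plane crictl images | grep postgres-configure-pipeline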
Update the Promise's workflows value
The new image is built and available on your platform cluster. Update your Promise to use the new image.
Open the Promise definition file (promise-postgresql/promise.yaml). From the top of the file, navigate to spec > workflows > resource > configure[0] > spec > containers[0] > image and replace the current image value with the newly created kratix-workshop/postgres-configure-pipeline:dev image.
👀 Click here to see the resulting Workflows section, which should be indented under spec in the Promise yaml
apiVersion: platform.kratix.io/v1alpha1
kind: Promise
metadata:
  name: postgresql
spec:
  api:
    # ...
  workflows:
    resource:
      configure:
        - apiVersion: platform.kratix.io/v1alpha1
          kind: Pipeline
          metadata:
            name: configure-instance
          spec:
            containers:
              - name: pipeline-stage-0
                image: kratix-workshop/postgres-configure-pipeline:dev
  dependencies:
    # ...
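Before installing, it's worth confirming the image field now points at your local build. A minimal sketch with yq v4:
yq eval '.spec.workflows.resource.configure[0].spec.containers[0].image' promise.yaml
This should print kratix-workshop/postgres-configure-pipeline:dev.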
Step five: Install
Install the modified Promise on your platform
You can now install your enhanced Postgres Promise on your platform. Make sure you're in the promise-postgresql/ directory.
kubectl --context $PLATFORM apply --filename promise.yaml
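You can first check that the Promise object itself was accepted by the platform. A minimal sketch, using the Promise API that Kratix installs:
kubectl --context $PLATFORM get promises
You should see a postgresql entry in the list.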
Check that your Promise's resource is available.
kubectl --context $PLATFORM get crds
You should see something similar to
NAME CREATED AT
clusters.platform.kratix.io 2022-08-09T14:35:54Z
postgresqls.marketplace.kratix.io 2022-08-09T14:54:26Z
promises.platform.kratix.io 2022-08-09T14:35:54Z
workplacements.platform.kratix.io 2022-08-09T14:35:54Z
works.platform.kratix.io 2022-08-09T14:35:55Z
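Because the new API is a regular CRD, your users can discover its schema directly with kubectl explain. A minimal sketch, assuming a reasonably recent kubectl that can read CRD schemas:
kubectl --context $PLATFORM explain postgresql.spec
The output should describe costCentre alongside the other properties.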
Check that the Dependencies have been installed on the worker.
(This may take a few minutes, so use --watch to keep watching. Press Ctrl+C to stop watching.)
For Postgres, you can see in the Promise file that there are a number of RBAC resources, as well as a deployment that installs the Postgres Operator in the worker cluster. That means that when the Promise is successfully applied you will see the postgres-operator deployment in the worker cluster. That's also an indication that the Operator is ready to provision a new Postgres.
kubectl --context $WORKER --namespace default get pods --watch
You should see something similar to
NAME READY STATUS RESTARTS AGE
postgres-operator-6c6dbd4459-hcsg2 1/1 Running 0 1m
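If you'd rather block until the Operator is ready than watch pods by hand, kubectl wait performs the equivalent check. A minimal sketch:
kubectl --context $WORKER wait deployment/postgres-operator --for=condition=Available --timeout=300s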
You have successfully released a new platform capability! Your users can request a Postgres Resource, and that Postgres will include their costCentre.
Step six: Verify
Check it works: make a request to your platform for a Postgres Resource
Verifying your Kratix Promise can be fulfilled
Switching hats to test your release, now act as one of your users to make sure the Promise creates a working Resource.
You need to create a request for a Resource, which is a valid Kubernetes resource. Like all Kubernetes resources, this must include all required fields:
- apiVersion, where the resource can be found. This is marketplace.kratix.io/v1alpha1 in your Postgres Promise (from spec.api.spec.group in promise.yaml).
- kind. This is postgresql in your Postgres Promise (from spec.api.spec.names.kind in promise.yaml).
- Values for required fields. Fields are teamId, env, dbName and costCentre in your Postgres Promise (from spec > api > spec > versions[0] > schema > openAPIV3Schema > properties > spec > properties in promise.yaml).
- A unique name and namespace combination.
In the sample request (promise-postgresql/resource-request.yaml), add the additional costCentre field as a sibling to the other fields under spec with any valid input. For example, costCentre: "rnd-10002".
👀 Click here for the full Postgres Resource definition
apiVersion: marketplace.kratix.io/v1alpha1
kind: postgresql
metadata:
  name: example
  namespace: default
spec:
  costCentre: "rnd-10002"
  env: dev
  teamId: acid
  dbName: bestdb
Then apply the request file to the platform cluster:
kubectl --context $PLATFORM apply --filename resource-request.yaml
We will validate the outcomes of this command in the next section.
Validating the created Postgres
Back as a platform engineer, you want to ensure that the platform and Promise behaved as they should when creating the Resources, and that the Resources meet the requirements for the feature.
After you applied the request in the step above, you should eventually see a new pod executing the execute-pipeline.sh script you created.
Check by listing the pods on the platform:
(This may take a few minutes, so use --watch to keep watching. Press Ctrl+C to stop watching.)
kubectl --context $PLATFORM get pods --watch
You should see something similar to
NAME READY STATUS RESTARTS AGE
configure-pipeline-postgresql-default-SHA 0/1 Completed 0 1h
Then view the Pipeline logs by running (replace SHA with the value from the output of get pods above):
kubectl --context $PLATFORM logs --container xaas-configure-pipeline-stage-0 pods/configure-pipeline-postgresql-default-SHA
On the worker cluster, you will eventually see the promised Postgres cluster in the Running state, with the name defined by the Resource request (resource-request.yaml).
(This may take a few minutes, so use --watch to keep watching. Press Ctrl+C to stop watching.)
kubectl --context $WORKER get pods --watch
You should see something similar to
NAME READY STATUS RESTARTS AGE
acid-example-postgresql-0 1/1 Running 0 1h
...
For the finance team, the pods now provide cost tracking through your new costCentre label. This can be confirmed by selecting only the pods that carry the provided cost centre value:
kubectl --context $WORKER get pods --selector costCentre=rnd-10002
You should see something similar to
NAME READY STATUS RESTARTS AGE
acid-example-postgresql-0 1/1 Running 0 1h
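You can also surface the label as its own column with kubectl's --label-columns flag. A minimal sketch:
kubectl --context $WORKER get pods --label-columns costCentre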
Summary
Your platform has a new Promise. Your users have access to a new service from the Promise. Your finance team has the ability to track service usage. Well done!
To recap the steps we took:
- ✅ Acquired a base Promise
- ✅ Changed the Promise so that the user who wants a Postgres knows they need to include their costCentre name when they make their request to the platform
- ✅ Changed the Promise so that the operator Dependency that creates the Resource knows to apply the new costCentre label
- ✅ Changed the Promise so that the Workflow knows how to add the user's costCentre to the request for the Postgres
- ✅ Installed the modified Promise on your platform
- ✅ Checked it works: made a request to your platform for a Postgres Resource
Clean up environment
To clean up your environment, first delete your request for the Postgres Resource:
kubectl --context $PLATFORM delete --filename resource-request.yaml
Verify the workloads belonging to the request have been deleted in the worker:
kubectl --context $WORKER get pods
Now that the Resource has been deleted, you can delete the Promise:
kubectl --context $PLATFORM delete --filename promise.yaml
Verify the Dependencies are deleted from the worker:
kubectl --context $WORKER get pods
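To confirm the Promise's API was removed along with it, check that the CRD is gone. A minimal sketch (grep exits non-zero when nothing matches):
kubectl --context $PLATFORM get crds | grep postgresqls.marketplace.kratix.io || echo "CRD removed"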
🎉 Congratulations!
✅ You have enhanced a Kratix Promise to suit your organisation's needs.
👉🏾 Let's add a new Worker.