Enhancing a Promise
This is Part 5, the final hands-on part, of a series illustrating how Kratix works.
👈🏾  Previous: Writing and installing a Kratix Promise
👉🏾  Next: Final Thoughts
Pre-requisites
If you completed the environment cleanup steps at the end of the previous workshop chapter, you're good to go! If you did not clean up, or ran into issues, you can run the following from inside the Kratix repo to get a fresh environment:
./scripts/quick-start.sh --recreate
In this tutorial, you will
- experience the power of leveraging customised Kratix Promises
- gain confidence with the components of a Promise
- enhance a sample Postgres Promise
Using Kratix to support your organisation
As you've seen, Kratix can support Promises for services like Jenkins, Knative, and Postgres.
When you think about providing services for things like automation, deployment or data, how often are you able to choose a service (like Postgres) and offer it to your users straight off the shelf?
Probably not very often.
Application teams need to be able to easily run their services in different environments. They'll want specific sizes, particular backup strategies, defined versions, and more. Key stakeholders in other parts of the business also need to easily understand the state of service usage as it applies to them (hello audit, billing, and security!).
Your team works with all of these users to understand the if, when, and how of each of these requests and creates a platform from a prioritised backlog of platform features.
This platform needs to be extensible and flexibleβyour users will have new and changing needs, and you'll want to quickly respond to valuable feature requests.
Kratix and Promises make it easier to create a platform paved with golden paths that deliver value easily and quickly.
Now you will create and enhance a Promise as a response to user and business needs.
From off-the-shelf to ready for the paved path
The scenario
In this exercise, you and the platform team are starting development of the next platform feature.
You discussed needs with application teams and you've decided to offer a new service. You'll be adding Postgres to your platform.
The billing team is a key stakeholder for the platform, and they will need a cost centre for each new instance of your Postgres service to charge back to the right team.
For the purposes of this exercise, you know that all of the underlying functionality to get the billing team what it needs is already in place.
In this guide, you only need to create a new Postgres Promise that creates Postgres instances with a costCentre label.
The steps:
- Get a base Promise
- Change the Promise so that the user who wants an instance knows they need to include their costCentre name when they make their request to the platform
- Change the Promise so that the Worker Cluster Operator that creates the instance knows to apply your new costCentre label
- Change the Promise so that the pipeline knows how to add the user's costCentre to the request for the instance
- Install the modified Promise on your platform
- Check it works: make a request to your platform for a Postgres instance
Step one: Get a base Promise
There's a PostgreSQL promise available on the Marketplace. You'll use that as your base. Start by cloning the repository:
git clone https://github.com/syntasso/promise-postgresql.git
Take a look:
cd promise-postgresql
ls
You should see a structure similar to the one below:
📂 promise-postgresql
├── promise.yaml
├── resource-request.yaml
├── ...
├── 📂 internal
│   ├── 📂 request-pipeline
│   │   ├── 📂 resources
│   │   │   └── minimal-postgres-manifest.yaml
│   │   ├── Dockerfile
│   │   └── execute-pipeline.sh
│   ├── 📂 resources
│   │   ├── operator.yaml
│   │   └── ...
│   └── 📂 scripts
│       ├── test
│       └── ...
└── README.md
You should see the promise.yaml file. This is the Promise definition that you'll modify and install on your platform. Ignore everything else in the folder for now.
Step two: xaasCrd
Change the Promise so that the user who wants an instance knows they need to include their costCentre name when they make their request to the platform
About xaasCrd
xaasCrd is the CRD exposed to the users of the Promise.
To see xaasCrd in the Promise definition file, open promise.yaml and look under the spec section.
xaasCrd is the contract with the user who wants an instance. It's where you get to define the required and optional configuration options exposed to your users.
You can already see a number of properties in this section of the promise.yaml file. These properties are defined within a versioned schema and can have different types and validations.
Update xaasCrd
To add the required cost centre configuration, add the following to promise.yaml:
costCentre:
  pattern: "^[a-zA-Z0-9_.-]*$"
  type: string
From the top of the file, navigate to spec > xaasCrd > spec > versions[0] > schema > openAPIV3Schema > properties > spec > properties.
Here, add your costCentre YAML from above as a sibling to the existing dbName property.
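If you have yq installed locally, an optional sanity check (assuming you are still in the promise-postgresql directory) can confirm the new property landed as a sibling of dbName:
yq eval '.spec.xaasCrd.spec.versions[0].schema.openAPIV3Schema.properties.spec.properties | keys' promise.yaml
You should see costCentre listed alongside dbName, env and teamId.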
📝  Click here to view a final version of the extended xaasCrd, which should be indented to nest under the spec header
xaasCrd:
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: postgresqls.marketplace.kratix.io
  spec:
    group: marketplace.kratix.io
    names:
      kind: postgresql
      plural: postgresqls
      singular: postgresql
    scope: Namespaced
    versions:
      - name: v1alpha1
        schema:
          openAPIV3Schema:
            properties:
              spec:
                properties:
                  costCentre:
                    pattern: "^[a-zA-Z0-9_.-]*$"
                    type: string
                  dbName:
                    default: postgres
                    description: |
                      Database name. A database will be created with this name. The owner of the database will be the teamId.
                    type: string
                  env:
                    default: dev
                    description: |
                      Configures and deploys this PostgreSQL with environment specific configuration. Prod PostgreSQL are configured with backups and more resources.
                    pattern: ^(dev|prod)$
                    type: string
                  teamId:
                    default: acid
                    description: |
                      Team ID. A superuser role will be created with this name.
                    type: string
                type: object
            type: object
        served: true
        storage: true
Step three: workerClusterResources
Change the Promise so that the Worker Cluster Operator that creates the instance knows to apply your new costCentre label
About workerClusterResources
workerClusterResources is the description of all of the Kubernetes resources required to create an instance of the Promise, such as CRDs, Operators and Deployments.
In the Promise definition, you divide resources based on the idea of prerequisite resources and per-instance resources. Prerequisite resources are resources that we create before any application team requests an instance. This can be helpful for two scenarios:
- Any CRDs or dependency resources are ready when an instance is requested, which speeds up response time to application teams.
- Resources that can be shared across instances are only deployed once. This can reduce load on the cluster, and it can also allow defining a Kratix Resource Request as a portion of an existing resource (e.g. you could provide a whole database instance on each Resource Request, or a database within an existing instance on each Resource Request).
The workerClusterResources section of the Kratix Promise defines the prerequisite capabilities.
These capabilities are:
- created once per cluster.
- complete Kubernetes YAML documents stored in the workerClusterResources section of the Promise.
For the Postgres Promise you're defining, the only cluster resources (prerequisite capabilities) you need are conveniently packaged in a Kubernetes Operator that is maintained by Zalando. The Operator turns the complexities of configuring Postgres into a manageable configuration format.
Update workerClusterResources
To make sure each Postgres instance includes costCentre, you need to make the Operator aware of the label.
To ensure Zalando's Postgres Operator is aware of the label, you need to add configuration when installing the Operator. The configuration the Operator needs will be under a new key: inherited_labels.
inherited_labels is unique to how Zalando's Postgres Operator works. If you were using a different Operator (or writing your own!), a different change may be required (or no change at all).
Following the Zalando docs, you need to add inherited_labels in a particular spot.
From the top of the file, navigate to spec > workerClusterResources[7] > configuration > kubernetes.
To verify you're in the right place, the object should be kind: OperatorConfiguration with name: postgres-operator.
Under the kubernetes key, add inherited_labels: [costCentre].
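As before, an optional yq check can confirm the new key landed in the right object; this assumes the OperatorConfiguration is still the eighth entry (index 7) of workerClusterResources:
yq eval '.spec.workerClusterResources[7].configuration.kubernetes.inherited_labels' promise.yaml
This should print a one-element list containing costCentre.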
📝  Click here to see the complete OperatorConfiguration resource after this change
# Note: the property was added at the top of configuration.kubernetes
- apiVersion: acid.zalan.do/v1
  configuration:
    aws_or_gcp:
      aws_region: eu-central-1
      enable_ebs_gp3_migration: false
    connection_pooler:
      connection_pooler_default_cpu_limit: "1"
      connection_pooler_default_cpu_request: 500m
      connection_pooler_default_memory_limit: 100Mi
      connection_pooler_default_memory_request: 100Mi
      connection_pooler_image: registry.opensource.zalan.do/acid/pgbouncer:master-22
      connection_pooler_max_db_connections: 60
      connection_pooler_mode: transaction
      connection_pooler_number_of_instances: 2
      connection_pooler_schema: pooler
      connection_pooler_user: pooler
    crd_categories:
      - all
    debug:
      debug_logging: true
      enable_database_access: true
    docker_image: registry.opensource.zalan.do/acid/spilo-14:2.1-p6
    enable_crd_registration: true
    enable_lazy_spilo_upgrade: false
    enable_pgversion_env_var: true
    enable_shm_volume: true
    enable_spilo_wal_path_compat: false
    etcd_host: ""
    kubernetes:
      inherited_labels: [costCentre]
      cluster_domain: cluster.local
      cluster_labels:
        application: spilo
      cluster_name_label: cluster-name
      enable_cross_namespace_secret: false
      enable_init_containers: true
      enable_pod_antiaffinity: false
      enable_pod_disruption_budget: true
      enable_sidecars: true
      oauth_token_secret_name: postgres-operator
      pdb_name_format: postgres-{cluster}-pdb
      pod_antiaffinity_topology_key: kubernetes.io/hostname
      pod_management_policy: ordered_ready
      pod_role_label: spilo-role
      pod_service_account_name: postgres-pod
      pod_terminate_grace_period: 5m
      secret_name_template: '{username}.{cluster}.credentials.{tprkind}.{tprgroup}'
      spilo_allow_privilege_escalation: true
      spilo_privileged: false
      storage_resize_mode: pvc
      watched_namespace: '*'
    load_balancer:
      db_hosted_zone: db.example.com
      enable_master_load_balancer: false
      enable_master_pooler_load_balancer: false
      enable_replica_load_balancer: false
      enable_replica_pooler_load_balancer: false
      external_traffic_policy: Cluster
      master_dns_name_format: '{cluster}.{team}.{hostedzone}'
      replica_dns_name_format: '{cluster}-repl.{team}.{hostedzone}'
    logging_rest_api:
      api_port: 8080
      cluster_history_entries: 1000
      ring_log_lines: 100
    logical_backup:
      logical_backup_docker_image: registry.opensource.zalan.do/acid/logical-backup:v1.8.0
      logical_backup_job_prefix: logical-backup-
      logical_backup_provider: s3
      logical_backup_s3_access_key_id: ""
      logical_backup_s3_bucket: my-bucket-url
      logical_backup_s3_endpoint: ""
      logical_backup_s3_region: ""
      logical_backup_s3_retention_time: ""
      logical_backup_s3_secret_access_key: ""
      logical_backup_s3_sse: AES256
      logical_backup_schedule: 30 00 * * *
    major_version_upgrade:
      major_version_upgrade_mode: "off"
      minimal_major_version: "9.6"
      target_major_version: "14"
    max_instances: -1
    min_instances: -1
    postgres_pod_resources:
      default_cpu_limit: "1"
      default_cpu_request: 100m
      default_memory_limit: 500Mi
      default_memory_request: 100Mi
      min_cpu_limit: 250m
      min_memory_limit: 250Mi
    repair_period: 5m
    resync_period: 30m
    teams_api:
      enable_admin_role_for_users: true
      enable_postgres_team_crd: false
      enable_postgres_team_crd_superusers: false
      enable_team_member_deprecation: false
      enable_team_superuser: false
      enable_teams_api: false
      pam_role_name: zalandos
      postgres_superuser_teams:
        - postgres_superusers
      protected_role_names:
        - admin
        - cron_admin
      role_deletion_suffix: _deleted
      team_admin_role: admin
      team_api_role_configuration:
        log_statement: all
    timeouts:
      patroni_api_check_interval: 1s
      patroni_api_check_timeout: 5s
      pod_deletion_wait_timeout: 10m
      pod_label_wait_timeout: 10m
      ready_wait_interval: 3s
      ready_wait_timeout: 30s
      resource_check_interval: 3s
      resource_check_timeout: 10m
    users:
      enable_password_rotation: false
      password_rotation_interval: 90
      password_rotation_user_retention: 180
      replication_username: standby
      super_username: postgres
    workers: 8
  kind: OperatorConfiguration
  metadata:
    labels:
      app.kubernetes.io/instance: postgres-operator
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: postgres-operator
      helm.sh/chart: postgres-operator-1.8.2
    name: postgres-operator
    namespace: default
Step four: xaasRequestPipeline
Change the Promise so that the pipeline knows how to add the user's costCentre to the request for the instance
About xaasRequestPipeline
xaasRequestPipeline is the pipeline that will take your user's request, apply rules from your organisation (including adding the costCentre name), and output valid Kubernetes documents for the Operator to run on a Worker Cluster.
Conceptually, a pipeline is the manipulation of an input value to generate an output value. There are three parts to the PostgreSQL request pipeline.
- resources/minimal-postgres-manifest.yaml
- execute-pipeline.sh
- Dockerfile
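Before editing these files, it helps to keep the Kratix pipeline contract in mind: the user's Resource Request is mounted at /input/object.yaml, and any valid Kubernetes YAML written to /output is scheduled to a Worker Cluster. A minimal sketch of that shape (illustrative only, not part of this Promise; the /tmp/transfer path is where this Promise's Dockerfile places the template, as you'll see in execute-pipeline.sh):
#!/usr/bin/env sh
# Read a value from the user's Resource Request...
name="$(yq eval '.metadata.name' /input/object.yaml)"
# ...then write the documents you want created on the Worker Cluster.
cp /tmp/transfer/minimal-postgres-manifest.yaml /output/postgres-instance.yaml
echo "generated instance for request: ${name}"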
You can see these files in the internal/request-pipeline directory. To connect the new user input label, we will need to make sure the pipeline both reads it in and applies it to the right place in the customised resource outputs. This requires you to change two of the files:
- Resource template: this resource needs to hold a reference to the costCentre label
- Pipeline script: inject the user's actual costCentre value into the resource template to generate the output
Update the minimal-postgres-manifest.yaml to add in the property
The minimal-postgres-manifest.yaml is the pipeline's basic template for the Postgres instance. This is a valid Kubernetes document that the Postgres Operator can understand.
You know every Postgres instance needs the costCentre. Change the metadata in minimal-postgres-manifest.yaml to include the costCentre label. This sets up a holding spot for the costCentre value the user sends in the request.
labels:
  costCentre: TBD
📝  Click here for the complete metadata section
metadata:
  name: acid-minimal-cluster
  labels:
    costCentre: TBD
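To double-check the template, an optional yq check from the repository root should now return the placeholder label:
yq eval '.metadata.labels' internal/request-pipeline/resources/minimal-postgres-manifest.yaml
Expect costCentre: TBD in the output.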
Update the execute-pipeline.sh to add in the user's value
The execute-pipeline.sh (in promise-postgresql/internal/request-pipeline) is the script that runs inside the pipeline container; the Dockerfile copies it into the image when Docker builds it. This script is where the pipeline logic lives.
You can see that the script is already parsing the Kratix Resource Request to identify key user variables (name, namespace, teamId, etc). The script then uses yq to add those user-provided values to the output document. You can do the same to process the user's costCentre.
In the execute-pipeline.sh:
- Export another environment variable to store the value:
export COST_CENTRE=$(yq eval '.spec.costCentre' /input/object.yaml)
- Add a new line so that yq processes the replacement as part of the pipeline:
.metadata.labels.costCentre = env(COST_CENTRE) |
Note that the variable must be exported (rather than simply assigned) so that yq's env() function can read it from the process environment.
📝  Click here to view an example of the final script
#!/usr/bin/env sh

set -x

base_instance="/tmp/transfer/minimal-postgres-manifest.yaml"

# Read current values from the provided resource request
name="$(yq eval '.metadata.name' /input/object.yaml)"
env_type="$(yq eval '.spec.env // "dev"' /input/object.yaml)"
team="$(yq eval '.spec.teamId // "acid"' /input/object.yaml)"
dbname="$(yq eval '.spec.dbName // "postgres"' /input/object.yaml)"

instance_name="${team}-${name}-postgresql"

backup="false"
size="1Gi"
instances="1"
if [ "$env_type" = "prod" ]; then
  backup="true"
  size="10Gi"
  instances="3"
fi

export COST_CENTRE="$(yq eval '.spec.costCentre' /input/object.yaml)"

# Replace defaults with user provided values
cat ${base_instance} |
  yq eval "
    .metadata.labels.costCentre = env(COST_CENTRE) |
    .metadata.namespace = \"default\" |
    .metadata.name = \"${instance_name}\" |
    .spec.enableLogicalBackup = ${backup} |
    .spec.teamId = \"${team}\" |
    .spec.volume.size = \"${size}\" |
    .spec.numberOfInstances = ${instances} |
    .spec.users = {\"${team}\": [\"superuser\", \"createdb\"]} |
    .spec.databases = {\"$dbname\": \"$team\"} |
    del(.spec.preparedDatabases)
  " - > /output/postgres-instance.yaml
Test the pipeline locally
You can easily validate your pipeline locally by building and running the Docker image with the correct volume mounts.
Check that you are in the promise-postgresql directory, and run the block below to:
- create two directories inside internal/request-pipeline: input and output
- create the expected input file (i.e., the request from your user)
cd internal/request-pipeline
mkdir -p {input,output}
cat > input/object.yaml <<EOF
---
apiVersion: marketplace.kratix.io/v1alpha1
kind: postgresql
metadata:
  name: example
  namespace: default
spec:
  costCentre: "rnd-10002"
  env: dev
  teamId: acid
  dbName: bestdb
EOF
Now test the pipeline by doing a Docker build and run. Check that, per the step above, you are still in the internal/request-pipeline directory.
docker build . --tag kratix-workshop/postgres-request-pipeline:dev
docker run -v ${PWD}/input:/input -v ${PWD}/output:/output kratix-workshop/postgres-request-pipeline:dev
Now you can validate the output/postgres-instance.yaml file.
It should be the base manifest with all the custom values inserted, and look like the example below. If your output is different, go back and check the steps above and the files in the directory. Repeat this process until your output matches the example.
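Before comparing the whole file, a quick targeted check (still from internal/request-pipeline) confirms the label was injected:
yq eval '.metadata.labels.costCentre' output/postgres-instance.yaml
This should print rnd-10002, the value from input/object.yaml. If it prints TBD or null, revisit your changes to execute-pipeline.sh and the template.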
📝  Click here to view an example of expected output YAML
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
name: acid-example-postgresql
labels:
costCentre: TBD
spec:
teamId: "acid"
volume:
size: 1Gi
numberOfInstances: 1
users:
acid:
- superuser
- createdb
databases:
bestdb: acid
postgresql:
version: "15"
enableLogicalBackup: false
Give the platform access to your pipeline image
Once you have made and validated all the pipeline image changes, you will need to make the newly created kratix-workshop/postgres-request-pipeline:dev image accessible.
If you created your clusters with KinD, you can load the image into the local cache by running the command below. This avoids any remote DockerHub calls.
kind load docker-image kratix-workshop/postgres-request-pipeline:dev --name platform
Click here if your clusters were not created with KinD
- Push the image to an image repository (like Docker Hub), or
- Use the appropriate command to load the image (for example, minikube cache add if you are using minikube)
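For example, a sketch of the registry route (REGISTRY here is a placeholder for a registry namespace you can push to; you would then reference the pushed tag in the Promise instead of the local :dev tag):
REGISTRY=your-registry-namespace   # hypothetical; e.g. your Docker Hub username
docker tag kratix-workshop/postgres-request-pipeline:dev ${REGISTRY}/postgres-request-pipeline:dev
docker push ${REGISTRY}/postgres-request-pipeline:dev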
Update the Promise's xaasRequestPipeline value
The new image is built and available on your Platform Cluster. Update your Promise to use the new image.
Open the Promise definition file (promise-postgresql/promise.yaml). From the top of the file, navigate to spec > xaasRequestPipeline and replace the current image value with the newly created kratix-workshop/postgres-request-pipeline:dev image.
📝  Click here to see the resulting xaasRequestPipeline section, which should be indented under spec in the Promise YAML
apiVersion: platform.kratix.io/v1alpha1
kind: Promise
metadata:
  name: postgresql
spec:
  xaasCrd:
    # ...
  xaasRequestPipeline:
    - kratix-workshop/postgres-request-pipeline:dev
  workerClusterResources:
    # ...
Step five: Install
Install the modified Promise on your platform
You can now install your enhanced Postgres Promise on your platform. Make sure you're in the promise-postgresql/ directory.
kubectl --context $PLATFORM apply --filename promise.yaml
Check that your Promise's resource is available.
kubectl --context $PLATFORM get crds
You should see something similar to
NAME CREATED AT
clusters.platform.kratix.io 2022-08-09T14:35:54Z
postgresqls.marketplace.kratix.io 2022-08-09T14:54:26Z
promises.platform.kratix.io 2022-08-09T14:35:54Z
workplacements.platform.kratix.io 2022-08-09T14:35:54Z
works.platform.kratix.io 2022-08-09T14:35:55Z
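You can also confirm your schema change made it into the installed CRD; this optional check assumes yq is available to filter kubectl's YAML output:
kubectl --context $PLATFORM get crd postgresqls.marketplace.kratix.io --output yaml | yq eval '.spec.versions[0].schema.openAPIV3Schema.properties.spec.properties.costCentre' -
This should print the pattern and type you added in step two.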
Check that the workerClusterResources have been installed.
For Postgres, you can see in the Promise file that there are a number of RBAC resources, as well as a Deployment that installs the Postgres Operator in the Worker Cluster. That means that when the Promise is successfully applied you will see the postgres-operator Deployment in the Worker Cluster. That's also an indication that the Operator is ready to provision a new instance.
kubectl --context $WORKER --namespace default get pods
You should see something similar to
NAME READY STATUS RESTARTS AGE
postgres-operator-6c6dbd4459-hcsg2 1/1 Running 0 1m
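For further confirmation, the Zalando Operator registers its own CRDs (in the acid.zalan.do group) on the Worker Cluster:
kubectl --context $WORKER get crds | grep acid.zalan.do
You should see entries such as postgresqls.acid.zalan.do.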
You have successfully released a new platform capability! Your users can request a Postgres instance, and that instance will include their costCentre.
Step six: Verify
Check it works: make a request to your platform for a Postgres instance
Verifying your Kratix Promise can be fulfilled
To test your release, switch hats: act as one of your users to make sure the Promise creates working instances.
You need to create a Kratix Resource Request, which is a valid Kubernetes resource. Like all Kubernetes resources, this must include all required fields:
- apiVersion where the resource can be found. This is marketplace.kratix.io/v1alpha1 in your Postgres Promise (from spec.xaasCrd.spec.group in promise.yaml).
- kind. This is postgresql in your Postgres Promise (from spec.xaasCrd.spec.names.kind in promise.yaml).
- Values for required fields. Fields are teamId, env, dbName and costCentre in your Postgres Promise (from spec > xaasCrd > spec > versions[0] > schema > openAPIV3Schema > properties > spec > properties in promise.yaml).
- A unique name and namespace combination.
In the sample Resource Request (promise-postgresql/resource-request.yaml), add the additional costCentre field as a sibling to the other fields under spec, with any valid input. For example, costCentre: "rnd-10002".
📝  Click here for the full Postgres Resource Request
apiVersion: marketplace.kratix.io/v1alpha1
kind: postgresql
metadata:
  name: example
  namespace: default
spec:
  costCentre: "rnd-10002"
  env: dev
  teamId: acid
  dbName: bestdb
Then apply the request file to the Platform Cluster:
kubectl --context $PLATFORM apply --filename resource-request.yaml
We will validate the outcomes of this command in the next section.
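As an aside, the pattern you defined in xaasCrd is enforced by the Kubernetes API server itself, so a malformed request never reaches the pipeline. A hedged example of this (the name bad-example is hypothetical; the space in the value violates the ^[a-zA-Z0-9_.-]*$ pattern):
kubectl --context $PLATFORM apply --filename - <<EOF
apiVersion: marketplace.kratix.io/v1alpha1
kind: postgresql
metadata:
  name: bad-example
  namespace: default
spec:
  costCentre: "not valid"
EOF
You should see a validation error rather than a created resource.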
Validating the created Postgres
Back as a platform engineer, you want to ensure that the platform and Promise behaved as they should when creating the instance, and that the instance meets the requirements for the feature.
After you applied the Kratix Resource Request in the step above, you should eventually see a new pod executing the execute-pipeline.sh script you created.
Check by listing the pods on the platform.
(This may take a few minutes, so use --watch to watch for changes. Press Ctrl+C to stop watching.)
kubectl --context $PLATFORM get pods --watch
You should see something similar to
NAME READY STATUS RESTARTS AGE
request-pipeline-postgresql-default-SHA 0/1 Completed 0 1h
Then view the pipeline logs by running the following (replace SHA with the value from the output of get pods above):
kubectl --context $PLATFORM logs --container xaas-request-pipeline-stage-1 pods/request-pipeline-postgresql-default-SHA
On the Worker Cluster, you will eventually see a Postgres service as a two-pod cluster in the Running state, with the name defined in the request (resource-request.yaml).
(This may take a few minutes, so use --watch to watch for changes. Press Ctrl+C to stop watching.)
kubectl --context $WORKER get pods --watch
You should see something similar to
NAME READY STATUS RESTARTS AGE
acid-example-postgresql-0 1/1 Running 0 1h
...
For the finance team, the pods now provide cost tracking through your new costCentre label. You can confirm this by selecting only the pods that carry the provided cost centre value:
kubectl --context $WORKER get pods --selector costCentre=rnd-10002
You should see something similar to
NAME READY STATUS RESTARTS AGE
acid-example-postgresql-0 1/1 Running 0 1h
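You can also inspect the labels directly on a pod (using the pod name from the output above):
kubectl --context $WORKER get pods acid-example-postgresql-0 --show-labels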
Summary
Your platform has a new Promise. Your users have access to a new service from the Promise. Your finance team has the ability to track service usage. Well done!
To recap the steps we took:
- ✅  Acquired a base Promise
- ✅  Changed the Promise so that the user who wants an instance knows they need to include their costCentre name when they make their request to the platform
- ✅  Changed the Promise so that the Worker Cluster Operator that creates the instance knows to apply the new costCentre label
- ✅  Changed the Promise so that the pipeline knows how to add the user's costCentre to the request for the instance
- ✅  Installed the modified Promise on your platform
- ✅  Checked it works: made a request to your platform for a Postgres instance
Cleanup environment
To clean up your environment, first delete the Resource Request for the Postgres instance:
kubectl --context $PLATFORM delete --filename resource-request.yaml
Verify the resources belonging to the Resource Request have been deleted from the Worker Cluster:
kubectl --context $WORKER get pods
Now the Resource Request has been deleted, you can delete the Promise:
kubectl --context $PLATFORM delete --filename promise.yaml
Verify the Worker Cluster Resources are deleted from the Worker Cluster:
kubectl --context $WORKER get pods
🎉 Congratulations!
✅  You have enhanced a Kratix Promise to suit your organisation's needs. This concludes our introduction to Kratix.
👉🏾  Let's see where to go from here.