Scale resources in an AI-powered control plane
In this tutorial, you deploy an AI controller that manages an AWS RDS database.
A CronOperation runs every minute. It reads live CloudWatch metrics from the
database object, calls Claude, and decides whether to scale. If it scales, it
writes its reasoning back to the object as an annotation.
By the end of this tutorial, you can:
- See live CloudWatch metrics surfaced directly on a Crossplane SQLInstance object
- Deploy an AI scaling controller with a single kubectl apply
- Read the model's reasoning from the Kubernetes object it acted on
- Trigger a load test and watch the AI decide to scale up in real time
Prerequisites
Install the following tools before starting:
- kubectl
- AWS CLI, configured with credentials that can create VPCs and RDS instances
- kind
- An Anthropic API key with access to Claude
- up CLI v0.44.3 or later
The load test later uses mysqlslap, which ships with the MySQL client tools.
On macOS:
```shell
brew install mysql-client
export PATH="$(brew --prefix mysql-client)/bin:$PATH"
```
On Linux (Debian/Ubuntu):
```shell
apt-get install -y mysql-client
```
Clone the project
```shell
git clone https://github.com/upbound/configuration-aws-database-ai demo
cd demo
```
All commands from this point run from inside the demo directory.
Configure credentials
Create a file named aws-credentials.txt in the project directory with your
AWS credentials in INI format:
```ini
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
```
This tutorial uses static AWS credentials for convenience. Don't use static credentials in production. Use IAM roles, IRSA, or another short-lived credential mechanism instead. See AWS authentication for secure alternatives.
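As a rough illustration of the secure alternative, a ProviderConfig can source short-lived credentials (for example, IRSA) instead of a static secret. The field names below follow Upbound's provider-aws ProviderConfig schema, but verify them against the provider version this project installs; treat this as a sketch, not a drop-in manifest:

```yaml
# Sketch only: assumes upbound provider-aws's ProviderConfig schema.
apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    # Short-lived credentials via an IAM role bound to the
    # provider's Kubernetes service account (IRSA on EKS).
    source: IRSA
```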
Export your Anthropic API key. The setup steps below use it to create a Kubernetes secret:
```shell
export ANTHROPIC_API_KEY=<your-anthropic-api-key>
```
Start the project
Open a dedicated terminal and run from inside the demo directory:
```shell
up project run --local --ingress
```
This command:
- Creates a kind cluster
- Installs UXP
- Builds and deploys the composition functions (function-rds-metrics and function-claude)
- Installs the AWS providers declared in upbound.yaml
- Applies the XRDs from apis/
- Installs an ingress controller for the UXP console
Startup takes several minutes. The command exits when the cluster is ready.
`up project run --local` may print `traces export: context deadline exceeded`.
This message reports a telemetry timeout and doesn't affect cluster setup.
Verify the connection:
```shell
kubectl get nodes
```
Enable the alpha operations feature on the Crossplane deployment so that
CronOperation and Operation resources reconcile:
```shell
kubectl patch deploy crossplane -n crossplane-system --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--enable-operations"}]'
kubectl rollout status deploy/crossplane -n crossplane-system
```
Without this flag, CronOperation resources stay unreconciled (no status,
no schedule fires).
Create the namespace and load AWS credentials and the Anthropic API key into the cluster:
- Create the database-team namespace:

  ```shell
  kubectl apply -f examples/ns-database-team.yaml
  ```

- Create the AWS credentials secret in both namespaces. The ProviderConfig and the function-rds-metrics function both read from this secret:

  ```shell
  kubectl create secret generic aws-creds \
    --namespace database-team \
    --from-file=credentials=./aws-credentials.txt \
    --dry-run=client -o yaml | kubectl apply -f -

  kubectl create secret generic aws-creds \
    --namespace crossplane-system \
    --from-file=credentials=./aws-credentials.txt \
    --dry-run=client -o yaml | kubectl apply -f -
  ```

- Create the Anthropic API key secret used by function-claude:

  ```shell
  kubectl create secret generic claude \
    --namespace crossplane-system \
    --from-literal=ANTHROPIC_API_KEY="${ANTHROPIC_API_KEY}" \
    --dry-run=client -o yaml | kubectl apply -f -
  ```
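If you prefer declarative manifests over `kubectl create`, the first secret above is equivalent to applying a manifest like this sketch (the `credentials` key name must match what the ProviderConfig and function expect):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-creds
  namespace: database-team
type: Opaque
stringData:
  # Equivalent of --from-file=credentials=./aws-credentials.txt
  credentials: |
    [default]
    aws_access_key_id = <your-access-key-id>
    aws_secret_access_key = <your-secret-access-key>
```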
Wait for both AWS providers and both functions to become healthy:
```shell
kubectl get providers
kubectl get functions
```
All four should show HEALTHY: True before continuing.
If kubectl get providers or kubectl get functions returns No resources found,
up project run --local didn't complete. Delete the cluster and restart from
Start the project.
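Healthy provider output looks roughly like the following. This is illustrative, not verbatim: the provider names and package columns depend on what upbound.yaml declares.

```text
NAME               INSTALLED   HEALTHY   PACKAGE   AGE
provider-aws-...   True        True      ...       5m
provider-aws-...   True        True      ...       5m
```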
Apply the ProviderConfig, then the network, then the database:
- Apply the ProviderConfig:

  ```shell
  kubectl apply -f examples/providerconfig-aws-static.yaml
  ```

- Provision the network:

  ```shell
  kubectl apply -f examples/network-rds-metrics.yaml
  ```

  Wait for the network composite resource to become ready (~5 minutes):

  ```shell
  kubectl get network rds-metrics-database-ai-scale -n database-team -w
  ```

  Press Ctrl+C once it shows READY: True.

- Provision the database:

  ```shell
  kubectl apply -f examples/mariadb-xr-rds-metrics.yaml
  ```

  RDS provisioning takes 10 to 15 minutes. Watch the status:

  ```shell
  kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team -w
  ```

  Press Ctrl+C once it shows READY: True before continuing.
While you wait, the function-rds-metrics composition step is already
collecting CloudWatch data and writing it onto the object. By the time the
database is ready, status.performanceMetrics contains live data.
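The exact keys in status.performanceMetrics are defined by function-rds-metrics, but the block is shaped like a small map of CloudWatch readings. An illustrative (not verbatim) example of what it might contain:

```json
{
  "cpuUtilization": "4.2",
  "databaseConnections": "1",
  "freeStorageSpaceBytes": "19327352832"
}
```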
Open the UXP console for a visual view of the resources:
```shell
up uxp web-ui open
```
Review the database
An RDS MariaDB instance is running on AWS, managed by Crossplane. Before wiring the AI into the loop, explore what the system already knows.
- List the database object:

  ```shell
  kubectl get sqlinstance -n database-team
  ```

  You should see rds-metrics-database-ai-mysql with READY: True. That's a real AWS RDS instance, managed as a Kubernetes object.

  In the UXP console, click View all Composite Resources. The rds-metrics-database-ai-mysql entry appears in the list. Click Relationship View to see the resources Crossplane provisioned.

- Verify the AWS resource. In the AWS Console, open RDS in us-east-1 and find rds-metrics-database-ai-mysql.

- Find the performance metrics:

  ```shell
  kubectl describe sqlinstance rds-metrics-database-ai-mysql -n database-team
  ```

  Find the status.performanceMetrics block. This block contains live CloudWatch data such as CPU utilization, active connections, and free storage. function-rds-metrics collects this data and writes it into the object. The AI reads only this block and never queries CloudWatch directly.

  Or fetch just the metrics:

  ```shell
  kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
    -o jsonpath='{.status.performanceMetrics}' | jq .
  ```
Deploy the AI controller
- Open operations/rds-intelligent-scaling-cron/operation.yaml in your editor. That file is the entire scaling controller. The systemPrompt defines the scaling logic, including thresholds, instance class progression, and cooldown.

- Apply the controller:

  ```shell
  kubectl apply -f operations/rds-intelligent-scaling-cron/operation.yaml
  ```

- Watch the first decision:

  ```shell
  kubectl get cronoperation
  ```

  The CronOperation takes 30 to 45 seconds to start. Once it's running, watch for the first operation:

  ```shell
  kubectl get operations -w
  ```

  Wait until an operation shows SUCCEEDED: True, then press Ctrl+C and describe it:

  ```shell
  kubectl describe operation <name>
  ```

  The Events section shows the AI's reasoning and decision.

- Check the annotation written back to the database object:

  ```shell
  kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
    -o jsonpath='{.metadata.annotations}' | jq .
  ```

  In the UXP console, navigate to rds-metrics-database-ai-mysql and open the YAML tab. The intelligent-scaling/last-scaled-decision annotation contains the model's last decision.
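For orientation, a CronOperation in Crossplane's alpha Operations API has roughly the shape below. The step name, function input wiring, and prompt text are illustrative stand-ins; the real logic lives in operation.yaml:

```yaml
# Sketch of the CronOperation shape; not the project's actual manifest.
apiVersion: ops.crossplane.io/v1alpha1
kind: CronOperation
metadata:
  name: rds-intelligent-scaling-cron
spec:
  schedule: "* * * * *"          # fire every minute
  operationTemplate:
    spec:
      pipeline:
        - step: decide-and-scale  # illustrative step name
          functionRef:
            name: function-claude
          input:                  # illustrative input wiring
            systemPrompt: |
              You are a database scaling controller. Scale up when CPU
              exceeds the threshold; respect the cooldown window...
```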
Watch the controller idle
The CronOperation runs every minute. CPU is low, so watch what the AI decides
when there's nothing to do.
- Watch operations run:

  ```shell
  kubectl get operations -w
  ```

  A new operation appears every minute. Press Ctrl+C after several have run. In the UXP console, select Operations in the left navigation to see the same list visually.

- Read one of the decisions:

  ```shell
  kubectl describe operation <name>
  ```

  Look at the Events section. At low CPU, the AI decides to hold. The cooldown logic is also in the prompt, so it doesn't flip the instance class every minute even if usage crosses the thresholds.

- Look at the current metrics:

  ```shell
  kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
    -o jsonpath='{.status.performanceMetrics}' | jq .
  ```

  The AI reads this same data before making a decision.

- Confirm the current instance class:

  ```shell
  kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
    -o jsonpath='{.spec.parameters.instanceClass}'
  ```

  It's db.t3.micro. You can also confirm the current instance type in the AWS Console, RDS in us-east-1.
Trigger a scale
Run a load test that drives CPU above the scaling threshold so the AI decides to act.
- Confirm the starting instance class:

  ```shell
  kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
    -o jsonpath='{.spec.parameters.instanceClass}'
  ```

  It should be db.t3.micro.

- In a second terminal, run the load test from inside the demo directory:

  ```shell
  bash perf-scale-demo.sh
  ```

  The script sends CPU-intensive queries to the database for 5 to 10 minutes. If it finishes without triggering a scale, run it again.

- Watch the controller act:

  ```shell
  kubectl get operations -w
  ```

  When CPU crosses the threshold (~60%), the next CronOperation decides to scale up. Press Ctrl+C once you see a new operation start.

- Check the new instance class:

  ```shell
  kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
    -o jsonpath='{.spec.parameters.instanceClass}'
  ```

  It should now be db.t3.small.

- Check the reasoning:

  ```shell
  kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
    -o jsonpath='{.metadata.annotations.intelligent-scaling/last-scaled-decision}'
  ```

  In the AWS Console, RDS in us-east-1, refresh the database list. The instance class change is in progress, and RDS is modifying the live database.
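The annotation value is free-form text produced by the model, so its exact shape varies run to run. A scale-up decision might read something like this illustrative example (field names are hypothetical, not the controller's actual output):

```json
{
  "decision": "scale_up",
  "from": "db.t3.micro",
  "to": "db.t3.small",
  "reason": "CPU utilization sustained above the 60% threshold"
}
```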
Clean up
Delete the composite resources. Crossplane deletes all composed AWS resources (VPC, subnets, RDS instance) before removing the composite resources.
```shell
kubectl delete sqlinstance rds-metrics-database-ai-mysql -n database-team
kubectl delete network rds-metrics-database-ai-scale -n database-team
```
RDS deletion takes 5 to 10 minutes. Wait until the sqlinstance is fully removed:
```shell
kubectl get sqlinstance -n database-team -w
```
Once it's gone, delete the CronOperation and its history:
```shell
kubectl delete cronoperation rds-intelligent-scaling-cron
kubectl delete operations --all
```
Delete the cluster:
```shell
CLUSTER_NAME=$(kind get clusters | grep "^up-" | head -1)
kind delete cluster --name "${CLUSTER_NAME}"
```
Next steps
In this tutorial, you:
- Provisioned a real AWS RDS instance managed as a Crossplane SQLInstance
- Observed live CloudWatch metrics surfaced directly on the Kubernetes object
- Deployed an AI scaling controller with a single kubectl apply
- Read the model's reasoning from the annotation it wrote back to the object
- Ran a load test and watched the AI scale the database automatically
Continue with:
- CronOperations reference: schedules, history limits, concurrency
- WatchOperations reference: event-driven operations
- Composition functions: build custom logic for any resource
- Provider authentication: connect providers to your own cloud account
- Upbound Marketplace: providers and functions for AWS, Azure, GCP, and more