PLEASE NOTE: This document applies to the v0.1 release, not to the latest release (v0.2).


Deploying a WordPress Workload on AWS

This guide will walk you through how to use Crossplane to deploy a stateful workload in a portable way to AWS. The following components will be dynamically provisioned and configured during this guide:

  • An EKS Kubernetes cluster
  • An RDS MySQL database
  • A WordPress application deployed on the cluster
Before starting this guide, you should have already configured your AWS account for usage by Crossplane.

You should have a ~/.aws/credentials file on your local filesystem.

Administrator Tasks

This section covers the tasks performed by the cluster or cloud administrator, which include:

  • Creating cloud provider credentials
  • Configuring the EKS cluster pre-requisites
  • Deploying all workload resources

Note: all artifacts created by the administrator are stored/hosted in the crossplane-system namespace, which has restricted access; Application Owners should not have access to them.

For the next steps, make sure your kubectl context points to the cluster where Crossplane was deployed.

Create credentials

  1. Get base64 encoded credentials with cat ~/.aws/credentials | base64 | tr -d '\n'
  2. Replace BASE64ENCODED_AWS_PROVIDER_CREDS in cluster/examples/workloads/wordpress-aws/provider.yaml with value from previous step.
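The two steps above can be combined into a short shell snippet; a sketch, assuming GNU sed (on macOS, use sed -i '' instead of sed -i) and the file paths used in this guide:

```shell
# Base64-encode the AWS credentials file as a single line.
BASE64ENCODED_AWS_PROVIDER_CREDS=$(base64 < ~/.aws/credentials | tr -d '\n')

# Substitute the placeholder in the provider manifest in place.
sed -i "s|BASE64ENCODED_AWS_PROVIDER_CREDS|${BASE64ENCODED_AWS_PROVIDER_CREDS}|g" \
  cluster/examples/workloads/wordpress-aws/provider.yaml
```

Using | as the sed delimiter avoids clashing with the / characters that can appear in base64 output.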

Configure EKS Cluster Pre-requisites

EKS cluster deployment is somewhat of an arduous process right now. A number of artifacts and configurations need to be set up within the AWS console before proceeding with the provisioning of an EKS cluster using Crossplane. We anticipate that AWS will improve this user experience in the near future.

Create a named keypair

Create your Amazon EKS Service Role

Original Source Guide

  1. Open the IAM console.
  2. Choose Roles, then Create role.
  3. Choose EKS from the list of services, then Allows Amazon EKS to manage your clusters on your behalf for your use case, then Next: Permissions.
  4. Choose Next: Review.
  5. For Role name, enter a unique name for your role, such as eksServiceRole, then choose Create role.
  6. Replace EKS_ROLE_ARN in cluster/examples/workloads/wordpress-aws/provider.yaml with role arn from previous step.
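Steps 5–6 can also be done from the AWS CLI; a sketch, assuming the example role name eksServiceRole from step 5 and GNU sed:

```shell
# Look up the ARN of the role created above (eksServiceRole is the example name).
EKS_ROLE_ARN=$(aws iam get-role \
  --role-name eksServiceRole \
  --query 'Role.Arn' --output text)

# Substitute the placeholder in the provider manifest.
sed -i "s|EKS_ROLE_ARN|${EKS_ROLE_ARN}|g" \
  cluster/examples/workloads/wordpress-aws/provider.yaml
```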

Create your Amazon EKS Cluster VPC

Original Source Guide

  1. Open the AWS CloudFormation console.
  2. From the navigation bar, select a Region that supports Amazon EKS.
     Note: Amazon EKS is available in the following Regions at this time:
    • US West (Oregon) (us-west-2)
    • US East (N. Virginia) (us-east-1)
    • EU (Ireland) (eu-west-1)
  3. Choose Create stack.
  4. For Choose a template, select Specify an Amazon S3 template URL.
  5. Paste the following URL into the text area and choose Next:
  6. On the Specify Details page, fill out the parameters accordingly, and then choose Next.
    • Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can call it eks-vpc.
    • VpcBlock: Choose a CIDR range for your VPC. You may leave the default value.
    • Subnet01Block: Choose a CIDR range for subnet 1. You may leave the default value.
    • Subnet02Block: Choose a CIDR range for subnet 2. You may leave the default value.
    • Subnet03Block: Choose a CIDR range for subnet 3. You may leave the default value.
  7. (Optional) On the Options page, tag your stack resources. Choose Next.
  8. On the Review page, choose Create.
  9. When your stack is created, select it in the console and choose Outputs.
  10. Replace EKS_VPC, EKS_SUBNETS, and EKS_SECURITY_GROUP in cluster/examples/workloads/wordpress-aws/provider.yaml with the corresponding values from the previous step (vpcId, subnetIds, securityGroupIds). Note: EKS_SECURITY_GROUP needs to be replaced twice in the file.
  11. Replace REGION in cluster/examples/workloads/wordpress-aws/provider.yaml with the region you selected in VPC creation.
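The substitutions in steps 10–11 can be scripted once you have the stack outputs; the IDs below are hypothetical placeholders you must replace with your own values, and GNU sed is assumed. Because sed's g flag replaces every occurrence, EKS_SECURITY_GROUP is replaced in both places it appears:

```shell
# Optionally, read the stack outputs from the CLI instead of the console
# (eks-vpc is the example stack name from step 6).
aws cloudformation describe-stacks \
  --stack-name eks-vpc --query 'Stacks[0].Outputs' --output table

# Fill in the values from the Outputs tab; these IDs are placeholders.
EKS_VPC=vpc-xxxxxxxx
EKS_SUBNETS=subnet-aaaa,subnet-bbbb,subnet-cccc
EKS_SECURITY_GROUP=sg-xxxxxxxx
REGION=us-west-2

sed -i \
  -e "s|EKS_VPC|${EKS_VPC}|g" \
  -e "s|EKS_SUBNETS|${EKS_SUBNETS}|g" \
  -e "s|EKS_SECURITY_GROUP|${EKS_SECURITY_GROUP}|g" \
  -e "s|REGION|${REGION}|g" \
  cluster/examples/workloads/wordpress-aws/provider.yaml
```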

Create an RDS subnet group

  1. Navigate to the AWS console in the same region as the EKS cluster
  2. Navigate to RDS service
  3. Navigate to Subnet groups in left hand pane
  4. Click Create DB Subnet Group
  5. Name your subnet group, e.g. eks-db-subnets
  6. Select the VPC created in the EKS VPC step
  7. Click Add all subnets related to this VPC
  8. Click Create
  9. Replace RDS_SUBNET_GROUP in cluster/examples/workloads/wordpress-aws/provider.yaml with the DB Subnet Group name you just created.
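The console steps above have a CLI equivalent; a sketch, where the subnet IDs are hypothetical placeholders for the subnets created by the eks-vpc stack, and GNU sed is assumed:

```shell
# Create the DB subnet group from the VPC subnets (subnet IDs are placeholders).
aws rds create-db-subnet-group \
  --db-subnet-group-name eks-db-subnets \
  --db-subnet-group-description "Subnets for the example WordPress RDS instance" \
  --subnet-ids subnet-aaaa subnet-bbbb subnet-cccc

# Substitute the placeholder in the provider manifest.
sed -i "s|RDS_SUBNET_GROUP|eks-db-subnets|g" \
  cluster/examples/workloads/wordpress-aws/provider.yaml
```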

Create an RDS Security Group (example only)

Note: This will make your RDS instance visible from anywhere on the internet. This is for EXAMPLE PURPOSES ONLY and is NOT RECOMMENDED for production systems.

  1. Navigate to ec2 in the region of the EKS cluster
  2. Navigate to security groups
  3. Select the same VPC from the EKS cluster.
  4. On the Inbound Rules tab, choose Edit.
    • For Type, choose MYSQL/Aurora
    • For Port Range, type 3306
    • For Source, choose Anywhere from the drop-down, or type: 0.0.0.0/0
  5. Choose Add another rule if you need to add more IP addresses or different port ranges.
  6. Replace RDS_SECURITY_GROUP in cluster/examples/workloads/wordpress-aws/provider.yaml with the security group you just created.
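The same example-only security group can be created from the CLI; the VPC ID is a hypothetical placeholder, and, as noted above, opening 3306 to 0.0.0.0/0 is for example purposes only:

```shell
# Create the security group in the EKS VPC (VPC ID is a placeholder).
RDS_SECURITY_GROUP=$(aws ec2 create-security-group \
  --group-name eks-rds-example \
  --description "EXAMPLE ONLY: MySQL open to the internet" \
  --vpc-id vpc-xxxxxxxx \
  --query 'GroupId' --output text)

# EXAMPLE ONLY: allow MySQL (port 3306) from anywhere.
aws ec2 authorize-security-group-ingress \
  --group-id "${RDS_SECURITY_GROUP}" \
  --protocol tcp --port 3306 --cidr 0.0.0.0/0

# Substitute the placeholder in the provider manifest (GNU sed).
sed -i "s|RDS_SECURITY_GROUP|${RDS_SECURITY_GROUP}|g" \
  cluster/examples/workloads/wordpress-aws/provider.yaml
```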

Deploy all Workload Resources

Now deploy all the workload resources, including the RDS database and EKS cluster, with the following commands:

Create provider:

kubectl create -f cluster/examples/workloads/wordpress-aws/provider.yaml

Create cluster:

kubectl create -f cluster/examples/workloads/wordpress-aws/cluster.yaml

It will take a while (~15 minutes) for the EKS cluster to be deployed and become ready. You can keep an eye on its status with the following command:

kubectl -n crossplane-system get ekscluster -o custom-columns=NAME:.metadata.name,STATE:.status.state,CLUSTERNAME:.status.clusterName,ENDPOINT:.status.endpoint,LOCATION:.spec.location,CLUSTERCLASS:.spec.classRef.name,RECLAIMPOLICY:.spec.reclaimPolicy

Once the cluster is done provisioning, you should see output similar to the following (note the STATE field is ACTIVE and the ENDPOINT field has a value):

NAME                                       STATE    CLUSTERNAME   ENDPOINT   LOCATION   CLUSTERCLASS       RECLAIMPOLICY
eks-8f1f32c7-f6b4-11e8-844c-025000000001   ACTIVE   <none>        <none>     <none>     standard-cluster   Delete

Application Developer Tasks

This section covers the tasks performed by the application developer, which includes:

Now that the EKS cluster is ready, let’s begin deploying the workload as the application developer:

kubectl -n demo create -f cluster/examples/workloads/wordpress-aws/workload.yaml

This will also take a while to complete, since the MySQL database needs to be deployed before the WordPress pod can consume it. You can follow along with the MySQL database deployment with the following:

kubectl -n crossplane-system get rdsinstance -o custom-columns=NAME:.metadata.name,STATUS:.status.state,CLASS:.spec.classRef.name,VERSION:.spec.version

Once the STATUS column shows available, as below, the WordPress pod should be able to connect to it:

NAME                                         STATUS      CLASS            VERSION
mysql-2a0be04f-f748-11e8-844c-025000000001   available   standard-mysql   <none>

Now we can watch the WordPress pod come online and a public IP address will get assigned to it:

kubectl -n demo get workload -o custom-columns=NAME:.metadata.name,CLUSTER:.spec.targetCluster.name,NAMESPACE:.spec.targetNamespace,DEPLOYMENT:.spec.targetDeployment.metadata.name,SERVICE-EXTERNAL-IP:.status.service.loadBalancer.ingress[0].ip

When a public IP address has been assigned, you'll see output similar to the following, with the address in the SERVICE-EXTERNAL-IP column:

NAME   CLUSTER        NAMESPACE   DEPLOYMENT   SERVICE-EXTERNAL-IP
demo   demo-cluster   demo        wordpress    <your-assigned-ip>

Once WordPress is running and has a public IP address through its service, we can get the URL with the following command:

echo "http://$(kubectl -n demo get workload test-workload -o jsonpath='{.status.service.loadBalancer.ingress[0].ip}')"

Paste that URL into your browser and you should see WordPress running and ready for you to walk through the setup experience. You may need to wait a few minutes for this to become active in the AWS load balancer.

Connect to your EKSCluster (optional)

To connect to the EKS cluster, first follow the install instructions in the Amazon EKS documentation: Install and Configure kubectl for Amazon EKS

When the EKSCluster is up and running, you can update your kubeconfig with:

aws eks update-kubeconfig --name <replace-me-eks-cluster-name>

The node pool is created after the master is up, so expect to wait a few more minutes; eventually you can see that the nodes have joined with:

kubectl config use-context <context-from-last-command>
kubectl get nodes


Clean-up

First delete the workload, which will delete WordPress and the MySQL database:

kubectl -n demo delete -f cluster/examples/workloads/wordpress-aws/workload.yaml

Then delete the EKS cluster:

kubectl delete -f cluster/examples/workloads/wordpress-aws/cluster.yaml

Finally, delete the provider credentials:

kubectl delete -f cluster/examples/workloads/wordpress-aws/provider.yaml

Note: There may still be an ELB that was not properly cleaned up; you will need to go to EC2 > Load Balancers in the AWS console and delete it manually.
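If you prefer to hunt down the leftover ELB from the CLI instead of the console, something like the following should work; the load balancer name comes from your account, so this is a sketch rather than an exact recipe:

```shell
# List classic load balancer names in the region of the EKS cluster.
aws elb describe-load-balancers \
  --query 'LoadBalancerDescriptions[].LoadBalancerName' --output text

# Delete the leftover one by name.
aws elb delete-load-balancer --load-balancer-name <name-from-previous-step>
```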