PLEASE NOTE: This document applies to v0.2 version and not to the latest release v0.5

Documentation for other releases can be found by using the version selector in the top right of any doc page.

Deploying a WordPress Workload on AWS

This guide walks you through how to use Crossplane to deploy a stateful workload in a portable way on AWS. The following components are dynamically provisioned and configured during this guide:

  • An EKS Kubernetes cluster
  • An RDS MySQL database
  • A sample WordPress application
Before starting this guide, you should have already configured your AWS account for use with Crossplane.

You should also have an AWS credentials file at ~/.aws/credentials already on your local filesystem.

Administrator Tasks

This section covers tasks performed by the cluster or cloud administrator. These include:

  • Configuring the EKS cluster pre-requisites in the AWS console
  • Creating the AWS provider credentials
  • Deploying the workload resources (the RDS database and EKS cluster)

Note: All artifacts created by the administrator are stored/hosted in the crossplane-system namespace, which has restricted access, i.e. Application Owner(s) should not have access to them.

To successfully follow this guide, make sure your kubectl context points to the cluster where Crossplane was deployed.

Configuring EKS Cluster Pre-requisites

EKS cluster deployment is somewhat of an arduous process right now. A number of artifacts and configurations need to be set up within the AWS console prior to provisioning an EKS cluster using Crossplane. We anticipate that AWS will make improvements on this user experience in the near future.

Create a named keypair

  1. Use an existing EC2 key pair or create a new key pair by following these steps
  2. Export the key pair name as the EKS_WORKER_KEY_NAME environment variable
     export EKS_WORKER_KEY_NAME=replace-with-key-name
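If you prefer the command line, a key pair can also be created with the AWS CLI. This is a sketch only; the key name crossplane-example is an arbitrary example value, and it assumes the AWS CLI is installed and configured:

```shell
# Create a new EC2 key pair and save the private key locally.
# "crossplane-example" is an example name; substitute your own.
aws ec2 create-key-pair \
  --key-name crossplane-example \
  --query 'KeyMaterial' --output text > crossplane-example.pem
chmod 400 crossplane-example.pem

export EKS_WORKER_KEY_NAME=crossplane-example
```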

Create your Amazon EKS Service Role

Original Source Guide

  1. Open the IAM console.
  2. Choose Roles, then Create role.
  3. Choose EKS from the list of services, then “Allows EKS to manage clusters on your behalf”, then Next: Permissions.
  4. Choose Next: Tags.
  5. Choose Next: Review.
  6. For the Role name, enter a unique name for your role, such as eksServiceRole, then choose Create role.
  7. Set the EKS_ROLE_ARN environment variable to your role's full ARN
     export EKS_ROLE_ARN=replace-with-full-role-arn
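The same role can be created from the AWS CLI instead of the console. This is a sketch assuming a configured AWS CLI; the role name eksServiceRole matches the console example above:

```shell
# Create an IAM role that EKS can assume, then attach the two
# AWS-managed policies EKS requires.
aws iam create-role \
  --role-name eksServiceRole \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"eks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name eksServiceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name eksServiceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy

# Capture the role ARN for later steps.
export EKS_ROLE_ARN=$(aws iam get-role --role-name eksServiceRole \
  --query 'Role.Arn' --output text)
```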

Create your Amazon EKS Cluster VPC

Original Source Guide

  1. Open the AWS CloudFormation console.
  2. From the navigation bar, select a Region that supports Amazon EKS.

    Note: Amazon EKS is available in the following Regions at this time:

    • US West (Oregon) (us-west-2)
    • US East (N. Virginia) (us-east-1)
    • EU (Ireland) (eu-west-1)
  3. Set the REGION environment variable to your region
     export REGION=replace-with-region
  4. Choose Create stack.
  5. For Choose a template, select Specify an Amazon S3 template URL.
  6. Paste the following URL into the text area and choose Next.
  7. On the Specify Details page, fill out the parameters accordingly, and choose Next.
    • Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can call it eks-vpc.
    • VpcBlock: Choose a CIDR range for your VPC. You may leave the default value.
    • Subnet01Block: Choose a CIDR range for subnet 1. You may leave the default value.
    • Subnet02Block: Choose a CIDR range for subnet 2. You may leave the default value.
    • Subnet03Block: Choose a CIDR range for subnet 3. You may leave the default value.
  8. (Optional) On the Options page, tag your stack resources and choose Next.
  9. On the Review page, choose Create.
  10. When your stack is created, select it in the console and choose Outputs.
  11. Using values from outputs, export the following environment variables.
     export EKS_VPC=replace-with-eks-vpcId
     export EKS_SUBNETS=replace-with-eks-subnetIds01,replace-with-eks-subnetIds02,replace-with-eks-subnetIds03
     export EKS_SECURITY_GROUP=replace-with-eks-securityGroupId
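If you have the AWS CLI configured, the stack outputs can also be read programmatically rather than copied from the console. A sketch, assuming the stack name eks-vpc from the example above:

```shell
# Print the outputs (VpcId, SubnetIds, SecurityGroups) of the eks-vpc
# CloudFormation stack; use these values for the exports above.
aws cloudformation describe-stacks \
  --stack-name eks-vpc \
  --region "$REGION" \
  --query 'Stacks[0].Outputs' \
  --output table
```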

Create an RDS subnet group

  1. Navigate to the AWS console in the same region as the EKS cluster
  2. Navigate to RDS service
  3. Navigate to Subnet groups in left hand pane
  4. Click Create DB Subnet Group
  5. Name your subnet group, e.g. eks-db-subnets
  6. Select the VPC created in the EKS VPC step
  7. Click Add all subnets related to this VPC
  8. Click Create
  9. Export the db subnet group name
     export RDS_SUBNET_GROUP_NAME=replace-with-DBSubnetgroup-name
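The equivalent CLI steps are sketched below, reusing the EKS_SUBNETS variable exported earlier; the group name eks-db-subnets and description are example values:

```shell
# Create the DB subnet group from the CLI. The subnet IDs are taken
# from the comma-separated EKS_SUBNETS variable exported earlier.
aws rds create-db-subnet-group \
  --region "$REGION" \
  --db-subnet-group-name eks-db-subnets \
  --db-subnet-group-description "Subnets for the example RDS instance" \
  --subnet-ids $(echo "$EKS_SUBNETS" | tr ',' ' ')

export RDS_SUBNET_GROUP_NAME=eks-db-subnets
```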

Create an RDS Security Group (example only)

Note: This will make your RDS instance visible from anywhere on the internet. This is for EXAMPLE PURPOSES ONLY and is NOT RECOMMENDED for production systems.

  1. Navigate to EC2 in the same region as the EKS cluster
  2. Click Security Groups
  3. Click Create Security Group
  4. Name it, e.g. demo-rds-public-visibility
  5. Give it a description
  6. Select the same VPC as the EKS cluster.
  7. On the Inbound Rules tab, choose Edit.
    • For Type, choose MYSQL/Aurora
    • For Port Range, type 3306
    • For Source, choose Anywhere from the drop down or type: 0.0.0.0/0
  8. Choose Add another rule if you need to add more IP addresses or different port ranges.
  9. Click Create
  10. Export the security group id
     export RDS_SECURITY_GROUP=replace-with-security-group-id
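The same example-only security group can be created from the CLI. A sketch assuming the EKS_VPC and REGION variables exported earlier; the group name demo-rds-public-visibility is an example value, and the 0.0.0.0/0 rule remains NOT recommended for production:

```shell
# EXAMPLE ONLY: create a security group that opens MySQL (3306)
# to the entire internet, matching the console steps above.
RDS_SECURITY_GROUP=$(aws ec2 create-security-group \
  --region "$REGION" \
  --group-name demo-rds-public-visibility \
  --description "EXAMPLE ONLY: public MySQL access" \
  --vpc-id "$EKS_VPC" \
  --query 'GroupId' --output text)

aws ec2 authorize-security-group-ingress \
  --region "$REGION" \
  --group-id "$RDS_SECURITY_GROUP" \
  --protocol tcp --port 3306 --cidr 0.0.0.0/0

export RDS_SECURITY_GROUP
```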

Deploy all Workload Resources

Now deploy all the workload resources, including the RDS database and EKS cluster with the following commands:

Create provider:

kubectl create -f cluster/examples/workloads/wordpress-aws/provider.yaml
Create cluster:

kubectl create -f cluster/examples/workloads/wordpress-aws/cluster.yaml

It will take a while (~15 minutes) for the EKS cluster to be deployed and become available. You can keep an eye on its status with the following command:

kubectl -n crossplane-system get ekscluster -o custom-columns=NAME:.metadata.name,STATE:.status.state,CLUSTERNAME:.status.clusterName,ENDPOINT:.status.endpoint,LOCATION:.spec.location,CLUSTERCLASS:.spec.classRef.name,RECLAIMPOLICY:.spec.reclaimPolicy

Once the cluster is done provisioning, you should see output similar to the following (note that the STATE field is ACTIVE and the ENDPOINT field has a value):

NAME                                       STATE    CLUSTERNAME   ENDPOINT      LOCATION   CLUSTERCLASS       RECLAIMPOLICY
eks-8f1f32c7-f6b4-11e8-844c-025000000001   ACTIVE   <none>        https://...   <none>     standard-cluster   Delete

Application Developer Tasks

This section covers tasks performed by an application developer. These include:

  • Deploying the workload (WordPress and its MySQL database)
  • Waiting for the database and the WordPress pod to become available
  • Retrieving the public URL of the WordPress service

Now that the EKS cluster is ready, let’s begin deploying the workload as the application developer:

kubectl create -f cluster/examples/workloads/wordpress-aws/workload.yaml

This will also take a while to complete, since the MySQL database must be deployed before the WordPress pod can consume it. You can follow along with the MySQL database deployment with the following:

kubectl -n crossplane-system get rdsinstance -o custom-columns=NAME:.metadata.name,STATUS:.status.state,CLASS:.spec.classRef.name,VERSION:.spec.version

Once the STATUS column shows available, as seen below, the WordPress pod should be able to connect to it:

NAME                                         STATUS      CLASS            VERSION
mysql-2a0be04f-f748-11e8-844c-025000000001   available   standard-mysql   <none>

Now we can watch the WordPress pod come online and a public IP address will get assigned to it:

kubectl -n demo get workload -o custom-columns=NAME:.metadata.name,CLUSTER:.spec.targetCluster.name,NAMESPACE:.spec.targetNamespace,DEPLOYMENT:.spec.targetDeployment.metadata.name,SERVICE-EXTERNAL-IP:.status.service.loadBalancer.ingress[0].ip

When a public IP address has been assigned, you’ll see output similar to the following:

NAME   CLUSTER        NAMESPACE   DEPLOYMENT   SERVICE-EXTERNAL-IP
demo   demo-cluster   demo        wordpress    ...

Once WordPress is running and has a public IP address through its service, we can get the URL with the following command:

echo "http://$(kubectl get workload test-workload -o jsonpath='{.status.service.loadBalancer.ingress[0].ip}')"

Paste that URL into your browser and you should see WordPress running and ready for you to walk through its setup experience. You may need to wait a few minutes for this to become accessible via the AWS load balancer.
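Since the load balancer can take a few minutes to start answering, a small polling loop can save you manual refreshing. This is an example helper only, not part of the guide's required steps:

```shell
# Poll the WordPress URL until the AWS load balancer starts responding.
URL="http://$(kubectl get workload test-workload -o jsonpath='{.status.service.loadBalancer.ingress[0].ip}')"
until curl -sSf -o /dev/null "$URL"; do
  echo "waiting for $URL ..."
  sleep 10
done
echo "WordPress is up at $URL"
```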

Connecting to your EKSCluster (optional)


Please see the install instructions in the Install and Configure kubectl for Amazon EKS section.

When the EKSCluster is up and running, you can update your kubeconfig with:

aws eks update-kubeconfig --name <replace-me-eks-cluster-name>

The node pool is created after the control plane is up, so expect to wait a few more minutes; eventually you can see that the nodes have joined with:

kubectl config use-context <context-from-last-command>
kubectl get nodes


Clean-up

First delete the workload, which will delete WordPress and the MySQL database:

kubectl delete -f cluster/examples/workloads/wordpress-aws/workload.yaml

Then delete the EKS cluster:

kubectl delete -f cluster/examples/workloads/wordpress-aws/cluster.yaml

Finally, delete the provider credentials:

kubectl delete -f cluster/examples/workloads/wordpress-aws/provider.yaml

Note: There may still be an ELB that was not properly cleaned up, and you will need to go to EC2 > ELBs and delete it manually.