...
- Create a `terraform.tfvars` file in the `backend` directory and ensure the following variables are included:

```hcl
app_name = "<your-app-name>"
```
- Run `terraform init`
- Run `terraform plan` to observe the resources that will be deployed (optional)
- Once verified, run `terraform apply -auto-approve`
- Once the backend has been deployed, go to the `infrastructure` directory and open `main.tf`
- Find the block:
backend "s3" {
bucket = "xxx"
key = "terraform.infrastructure.tfstate"
region = "xxx"
dynamodb_table = "xxx"
}
- Fill in the values marked with `xxx` with the ones that you created in steps 1-5
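
For illustration, a filled-in block might look like this; the bucket, region, and table names below are hypothetical, so substitute the ones created in steps 1-5:

```hcl
backend "s3" {
  bucket         = "my-app-terraform-state"   # hypothetical bucket name from the backend step
  key            = "terraform.infrastructure.tfstate"
  region         = "us-east-1"                # hypothetical region
  dynamodb_table = "my-app-terraform-locks"   # hypothetical DynamoDB lock table
}
```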
...
- Create a `terraform.tfvars` file and ensure the following variables are included:

```hcl
app_name     = "<your-app-name>"
cluster_name = "<your-eks-cluster-name>"
rds_username = "<your-rds-username>"
rds_password = "<your-rds-password>"
```
- Run `terraform init`
- Run `terraform plan` to observe the resources that will be deployed (optional)
- Once verified, run `terraform apply -auto-approve`
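
Before the verification steps below, `kubectl` needs to be pointed at the new cluster. A minimal sketch using the AWS CLI, assuming it is already configured with credentials for the account; the region and cluster name are the values from your `terraform.tfvars`:

```bash
# Add/update the kubeconfig entry for the newly created cluster
aws eks update-kubeconfig --region <your-region> --name <your-eks-cluster-name>

# Sanity check: worker nodes should be listed once the cluster is reachable
kubectl get nodes
```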
...
- Run `kubectl get ingress -A`. You should see the DNS name under the Address column, like so:

```
k8s-namespace-RANDOM-STRING.REGION.elb.amazonaws.com
```
- Use the Address and go to `/jw`. It will redirect you to the database setup.
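
To confirm the redirect without a browser, a quick check with `curl` works too; the hostname below is the placeholder pattern from above, so substitute your actual Address value:

```bash
# -I sends a HEAD request, -L follows the redirect to the database setup page
curl -IL http://k8s-namespace-RANDOM-STRING.REGION.elb.amazonaws.com/jw
```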
...
- This happens when you access the cluster with different credentials (different users or roles) than the ones used to create it. If you are the cluster creator, you should be able to access the cluster.
- Solution:
  - In the Terraform IaC, go to `infrastructure/compute/eks/eks.tf`
  - Under the module `"eks"`, add the following:
  - If you are using user credentials:

```hcl
aws_auth_users = [
  {
    userarn  = "arn:aws:iam::<account-id>:user/<username>"
    username = "<username>"
    groups   = ["system:masters"]
  }
]
```
  - If you are using roles, you may append an entry to the `aws_auth_roles` block like so:

```hcl
aws_auth_roles = [
  {
    rolearn  = "arn:aws:iam::<account-id>:role/<role-name>"
    username = "<role-name>"
    groups   = ["system:masters"]
  }
]
```
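
Note: with the community terraform-aws-modules/eks module (v19.x), these entries are only applied when the module is told to manage the `aws-auth` ConfigMap. A minimal sketch under that assumption; if you use a different module or version, check its documentation instead:

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... existing cluster configuration ...

  # Without this flag (module v19.x), the aws_auth_users / aws_auth_roles
  # entries above are ignored.
  manage_aws_auth_configmap = true
}
```

After adding the entries, re-run `terraform apply` and verify access with `kubectl get nodes`.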