b7 Infrastructure Features

The b7 configuration automates the foundations of dynamically scaled, static content web hosting infrastructure, including:

  • classic load balancing and load balancer verification
  • auto scaling groups
  • instance launch configuration
  • integration with any web server and OS type of your choice
  • fundamental security controls

b7 Infrastructure Configuration Parameters

region
AWS Region code, such as us-east-1, representing one of twenty regions. The comments in vars.tf list all possible values that may be used
access_key
IAM Access key ID
secret_key
IAM Secret access key
[instance_type]
EC2 instance type of the web server instance(s) that the load balancer will distribute traffic to. This defaults to t2.micro, which is Free Tier eligible
[server_prt]
The TCP port number the load balancer will use to communicate with EC2 instances – 8080 by default. Your AMI’s web server should be configured with the same value. If you don’t have an AMI with your preferred web server configured and don’t set the ami parameter, b7 infrastructure will select the latest official Amazon Linux 2 AMI for the region specified by region and provide a very rudimentary HTTP server on each instance at boot, for setup testing purposes. It responds to HTTP GET requests with a text string containing the instance’s private IP address, so accessing the public domain name of the load balancer displays the private IP address of the instance that the load balancer forwarded the GET request to and that actually served the response. With 3 instances in the Auto Scaling group, up to 3 different IP addresses will be shown in a browser on successive reloads of the load balancer’s URL
[min_size]
Auto Scaling group minimum number of instances – 2 by default
[desired_capacity]
Auto Scaling group desired and initial number of instances – 3 by default
[max_size]
Auto Scaling group maximum number of instances – 4 by default

Set the preceding three parameters to 1 until you have gained confidence in the automation by inspecting the results in the AWS console

[public_key]
The SSH public key to insert into all instances in the Auto Scaling group, allowing SSH access to anyone holding the corresponding private key. The instances will not be assigned public IP addresses unless [assign_public_ip] is set to true; in that default case, SSH access can still be achieved through a temporary proxy instance

[ami]
Region-specific AMI ID of the image with your configured web server, which should listen on the port number defined by server_prt for connection requests and traffic
[assign_public_ip]
The default value of false means the instances in the Auto Scaling group will not be assigned public IP addresses; set this to true if you do want them to have public IP addresses. The load balancer will have a public IP address regardless of this value

[] indicates an optional parameter; the sketch below shows how such parameters might look inside vars.tf
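As a rough illustration only (the real vars.tf in b7 declares these with its own layout and comments, and may differ), the optional parameters and their documented defaults could be expressed like this:

variable "instance_type" {
  description = "EC2 instance type for the web server instances"
  default     = "t2.micro"   # Free Tier eligible
}

variable "server_prt" {
  description = "TCP port the load balancer uses to reach the instances"
  default     = 8080
}

variable "min_size" {
  default = 2                # Auto Scaling group minimum
}

variable "desired_capacity" {
  default = 3                # Auto Scaling group desired/initial size
}

variable "max_size" {
  default = 4                # Auto Scaling group maximum
}

variable "assign_public_ip" {
  default = false            # no public IPs on the instances by default
}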

b7 Infrastructure Example Runs

As user devops, from within the b7 subdirectory (if in the a1 subdirectory, first change directory using a command such as cd ../b7):

Initialize the Terraform AWS provider with the terraform init command:


devops@stretch:/root/terraformation/a1$ cd ../b7
devops@stretch:/root/terraformation/b7$ terraform init

Initializing provider plugins...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
devops@stretch:/root/terraformation/b7$

Decide on your parameter values and whether to specify them on the command line with subsequent Terraform subcommands, in the vars.tf file, or a mixture of both. Use the -var 'parameter=value' syntax appended to Terraform subcommands as needed, or edit vars.tf accordingly with the vi (vim), nano, pico or mg editors. Note that storing IAM credentials in such a plain text file should not be a production practice. Consider using both mechanisms to gain convenience and a measure of security.
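For example, the region could be pinned once in vars.tf (a sketch only; the actual variable block in vars.tf may be laid out differently) while the IAM keys are supplied with -var at run time, as in the commands further below:

variable "region" {
  # See the comments in vars.tf for the full list of region codes
  default = "eu-central-1"
}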

Remember to use the terraform destroy command to tear down infrastructure that is not needed. Only the region, access_key and secret_key parameters need to be passed, but using it with all the same parameters as the terraform apply command also works.

To make Terraform’s apply and destroy subcommands non-interactive, use them with the -auto-approve option.
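Combining the two points above, a minimal non-interactive teardown could look like this (placeholder values as in the other examples):

terraform destroy -var 'region=...' -var 'access_key=...' -var 'secret_key=...' -auto-approve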

Terraform is all about workflow. The terraform plan subcommand, which performs a dry run, is worth finding out about; you run it like the apply subcommand with the same options apart from -auto-approve. There’s obviously vastly more to Terraform workflow than that, and it requires experience and trickery, but that is all secondary. The most important thing is to get hold of actual working Infrastructure as Code (IaC) from somewhere, so you can just run it and get the benefits.
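For example, a dry run of the deployment shown in the next section takes the same -var options as apply and reports what would be created without creating it:

terraform plan -var 'region=eu-central-1' -var 'access_key=...' -var 'secret_key=...' -var 'ami=ami-...'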

For b7 you need the AmazonEC2FullAccess policy for your IAM user. If you’re sure you’ve specified your IAM keys correctly and you see the following error when doing a terraform plan or terraform apply:

AuthFailure: AWS was not able to validate the provided access credentials

This is due to clock skew between the VM and AWS. Resetting the time within the VM to match your host system (assuming that’s roughly right for your time zone) will fix the issue. You can do this with the date +%T -s "HH:MM:SS" form of the date command, which would look like this if resetting the VM time to 4:35pm, for example:


devops@stretch:/root/terraformation/b7$ date +%T -s "16:35:00"
16:35:00 
devops@stretch:/root/terraformation/b7$

 


Using defaults to launch a triple instance load balanced web host

This is easy, as 3 is the default value of desired_capacity. Initially, don’t bother setting an SSH public key in vars.tf and let b7 insert the dummy example placeholder key. All you have to do is set the region-specific AMI ID of your image choice. It could be an AWS Community AMI, an AWS Marketplace AMI, your own private AMI, a Bitnami image etc., as long as it has an HTTP server listening for connection requests on server_prt, which defaults to 8080, NOT 80.

Say you’ve not set anything in vars.tf apart from your region; just use:

terraform apply -var 'access_key=...' -var 'secret_key=...' -var 'ami=ami-...'

If your region is not already set in vars.tf, then add that as well, such as:

terraform apply -var 'region=eu-central-1' -var 'access_key=...' -var 'secret_key=...' -var 'ami=ami-...'

At the end of a b7 run, which generates a lot of output, you get a report of the public DNS name of the load balancer, which you can use in your browser:


.
.
.
aws_autoscaling_group.rn: Creation complete after 1m0s (ID: tf-asg-...…………………...)

Apply complete! Resources: 9 added, 0 changed, 0 destroyed.

Outputs:

elb_dns_name = b7-elb-...……....eu-central-1.elb.amazonaws.com
devops@stretch:/root/terraformation/b7$ 
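If you prefer the command line to a browser, curl against the reported name works just as well (substitute the full elb_dns_name value from your own output for the placeholder):

curl http://<elb_dns_name>/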

Remember that if you’re running this on the Free Tier, your instance hours will burn 3 times as fast, so tear it down quickly when done, with:

terraform destroy -var 'region=eu-central-1' -var 'access_key=...' -var 'secret_key=...' -var 'ami=ami-...' -auto-approve

In fact, to play it safe, nail the instance count to 1 to begin with like this:

terraform apply -var 'region=eu-central-1' -var 'access_key=...' -var 'secret_key=...' -var 'ami=ami-...' -var 'min_size=1' -var 'desired_capacity=1' -var 'max_size=1'

and remember to pull it down when done.

 

Launching a triple instance load balancer test

This is even easier: just follow the instructions above, but don’t set the ami parameter. Assuming region is already set in vars.tf, this will suffice:

terraform apply -var 'access_key=...' -var 'secret_key=...'

b7 will automatically select the latest Amazon Linux 2 AMI for your specified region and create a rudimentary Python-based web server on each instance, returning that instance’s private IP address. Keep reloading the load balancer’s domain name in a browser and you’ll notice the displayed private IP address change, assuming you’ve got more than 1 instance behind the load balancer.
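For a sense of what such a boot-time test server involves, here is a minimal, hypothetical sketch of a launch configuration user_data that writes the private IP from instance metadata and serves it with Python’s built-in HTTP server; the resource and data source names are illustrative and b7’s actual implementation may well differ:

# Hypothetical sketch only; not b7's actual code
resource "aws_launch_configuration" "web" {
  image_id      = data.aws_ami.amazon_linux_2.id   # assumed lookup of the latest Amazon Linux 2 AMI
  instance_type = var.instance_type

  # Fetch the instance's private IP from instance metadata, then serve it
  # on var.server_prt (8080 by default) using Python's built-in HTTP server
  user_data = <<-EOF
              #!/bin/bash
              yum install -y python3
              mkdir -p /tmp/www
              curl -s http://169.254.169.254/latest/meta-data/local-ipv4 > /tmp/www/index.html
              cd /tmp/www && nohup python3 -m http.server ${var.server_prt} &
              EOF
}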