Working

Works at the Application layer (Layer 7).
It can look at an application request (for example, the URL path) and route traffic to the appropriate endpoints.
EX:
πŸ€ Two instances are running behind the ALB.
πŸ€ Instance 1: An application is running that take orders of anything related to Starbucks on the URL: orders.com/Starbucks
πŸ€ Instance 2: An application is running that take orders of anything related to only SMOOR on the URL: orders.com/smoor
πŸ€ Requests hit on ALB, This requests can be either of kind i.e. for ordering Starbucks coffee or ordering Belgium chocolate truffle with icecream at SMOOR, looks into it and process the requests to respective instances where desired application is running.

Gotchas

πŸ¦‹ Supports WebSockets and HTTP/2
πŸ¦‹ Allows content-based routing
πŸ¦‹ Path-based and host-based routing
πŸ¦‹ Supports microservices and container-based applications with ECS
πŸ¦‹ Has health checks and CloudWatch metrics
πŸ¦‹ Supports deletion protection via the API
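As an example of the last point, deletion protection is an ALB attribute you can toggle through the API. A minimal sketch with the AWS CLI; the load balancer ARN below is a placeholder:

```shell
# Enable deletion protection on an existing ALB (ARN is a placeholder).
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/50dc6c495c0c9188 \
  --attributes Key=deletion_protection.enabled,Value=true
```

With this attribute set, any attempt to delete the load balancer fails until it is turned back off.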

Architecture

Architecture

Explanation

Parameters in application load balancer:
πŸ€ Listener:
Where we set the port on which the listener accepts requests. It's just a configuration part of the ALB.

πŸ€ Rules:
Behind the listener there are rules. These rules decide where to send a request, based on what arrives at the listener's port.
There can be only one port per listener, but a listener can have many rules.
Ex: An HTTP request comes in at the listener's port 80. Rules are set for port 80 saying: if it is http://x/y, send it to Instance 1, and if it is http://x/z, send it to Instance 2. Simple as that. So there can be many rules behind one listener.
It's just a configuration part of the ALB.
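The listener-plus-rule configuration above can also be sketched with the AWS CLI; the listener and target group ARNs below are placeholders:

```shell
# Add a rule behind an existing port-80 listener:
# requests whose path matches /y* are forwarded to target group 1.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/50dc6c495c0c9188/f2f7dc8efc522ab2 \
  --priority 10 \
  --conditions Field=path-pattern,Values='/y*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/tg-1/6d0ecf831eec9f09
```

Each rule has a priority; the ALB evaluates rules in priority order and applies the first one that matches.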

πŸ€ Target Groups:
Let's say three instances are running to handle requests related to Starbucks only. These form a group. So if a request is for https://orders.com/starbucks, we tell the load balancer to target these requests at the group of three running instances. The request thus goes to the target group; within it, the ALB distributes requests among the three instances in a round-robin fashion.
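Creating such a group and registering instances into it looks roughly like this with the AWS CLI; the VPC ID, instance IDs, and ARN are placeholders:

```shell
# Create a target group for the Starbucks instances (IDs are placeholders).
aws elbv2 create-target-group \
  --name starbucks-tg \
  --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0

# Register the three instances into the group.
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/starbucks-tg/6d0ecf831eec9f09 \
  --targets Id=i-aaaa1111 Id=i-bbbb2222 Id=i-cccc3333
```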

πŸ€ Health Check:
Monitors the health of the targets behind the ALB, so traffic is only routed to healthy instances.
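You can check the health state the ALB currently sees for each target; a sketch (the target group ARN is a placeholder):

```shell
# Show per-instance health for a target group (ARN is a placeholder).
# Each target is reported as healthy, unhealthy, initial, draining, etc.
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/starbucks-tg/6d0ecf831eec9f09
```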

Demo

I recommend using the Console first; basically, you will explore a lot! Then start using configuration-management tools such as Terraform or Ansible to play around.
NOTE: Click on the highlighted text to see the screenshots. :)

  1. Launch two instances
    πŸ‘‰ Click on Launch instance
    πŸ‘‰ Select the 64-bit AMI: Amazon Linux AMI 2018.03.0 (HVM), SSD Volume Type
    πŸ‘‰ Select t2.micro -> Next ->
    πŸ‘‰ Under Number of instances: type 2
    πŸ‘‰ Under Network: select a VPC (default/your own)
    πŸ‘‰ Subnet: select a public subnet
    πŸ‘‰ Auto-assign public IP: select Enable
    πŸ‘‰ Under User data: type
   #!/bin/bash
   yum update -y
   yum install -y httpd24
   service httpd start
   chkconfig httpd on

πŸ‘‰ -> Next ->
πŸ‘‰ Add storage: leave it at the default ->
πŸ‘‰ Next -> Security group: create your own, with SSH:22:MyIP and HTTP:80:Anywhere
πŸ‘‰ Review and launch -> Launch -> provide your key pair!
We have launched two identical instances in the same availability zone.
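The same launch can be sketched with the AWS CLI; the AMI, key, security group, and subnet IDs below are placeholders for your own values:

```shell
# Launch two identical instances with the Apache user-data script
# (all IDs are placeholders; userdata.sh holds the script shown above).
aws ec2 run-instances \
  --image-id ami-0ff8a91507f77f867 \
  --count 2 \
  --instance-type t2.micro \
  --key-name your-key \
  --security-group-ids sg-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --associate-public-ip-address \
  --user-data file://userdata.sh
```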
Screenshots to refer to.

  2. Try hitting the public IP of each instance. It should display the default Apache page and look something like below:
    Instance 1
    Instance 2

  3. SSH into each instance using its public IP and do the following.
    For Instance 1

ssh -i "your-key" ec2-user@<your-ec2-public-ip>
sudo su
cd /var/www/html/
vi index.html
<html>                                        <!-- write this content -->
<head><title>Instance-1</title></head>
<body>
INSTANCE 1 <br><br>
This is the default page<br><br>
</body>
</html>

Refresh the page and check what it displays

Similarly, for Instance 2

ssh -i "your-key" ec2-user@<your-ec2-public-ip>

sudo su
cd /var/www/html/
vi index.html
<html>                                        <!-- write this content -->
<head><title>Instance-2</title></head>
<body>
INSTANCE 2 <br><br>
This is the default page<br><br>
</body>
</html>

Refresh the page and check what it displays

  4. Create the Application Load Balancer
    πŸ‘‰ Create two target groups, with the same config for both: name the target group and set the health-check parameters
    πŸ‘‰ Add the instances to the target groups created (one EC2 instance in each): 1, 2
    πŸ‘‰ Create the load balancer -> for an ALB you have to select a minimum of two AZs -> Next -> Security groups: select the same one created for the EC2 instances -> Select the TG -> Next -> Create.
    Copy the ALB's DNS name and try hitting it.
    πŸ‘‰ Check what happens.
    The default rule says to route the request to TG1, which in our case is Instance 1. So it should display the Instance 1 page.
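For reference, the same ALB-plus-listener setup can be sketched with the AWS CLI; subnet IDs, security group ID, and ARNs below are placeholders:

```shell
# Create the ALB across two subnets in different AZs (IDs are placeholders).
aws elbv2 create-load-balancer \
  --name orders-alb \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0123456789abcdef0

# Attach a port-80 listener whose default action forwards to TG1
# (replace the ARNs with the ones returned by the previous commands).
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<tg1-arn>
```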

  5. SSH into each instance and do the following

Instance 1

ssh -i "your-key" ec2-user@<your-ec2-public-ip>
sudo su
cd /var/www/html
mkdir starbucks
cd starbucks
vi index.html 
## add the following content to the file
<html>
<head><title>Instance-1</title></head>
<body>
INSTANCE 1 <br><br>
This is the Starbucks path in TG-1 <br><br>
</body>
</html>

Instance 2

ssh -i "your-key" ec2-user@<your-ec2-public-ip>
sudo su
cd /var/www/html
mkdir smoor
cd smoor
vi index.html 
<html>
<head><title>Instance-2</title></head>
<body>
INSTANCE 2<br><br>
This is the SMOOR path in TG-2<br><br>
</body>
</html>

  6. So far we haven't edited any rules; we only gave the listener port 80.
    Click on edit rule and add the rules for path-based routing, i.e.
    • if /starbucks*, it should go to TG 1 (displays Instance 1's Starbucks page)
    • if /smoor*, it should go to TG 2 (displays Instance 2's SMOOR page)
    • by default, i.e. when hitting the ALB's DNS directly, it should go to TG 1 (displays Instance 1's default page)
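These two rules can also be added from the CLI instead of the console; a sketch, where the listener and target group ARNs are placeholders for your own:

```shell
# Path-based rules on the port-80 listener (ARNs are placeholders).
# /starbucks* goes to TG 1; /smoor* goes to TG 2.
aws elbv2 create-rule --listener-arn <listener-arn> --priority 10 \
  --conditions Field=path-pattern,Values='/starbucks*' \
  --actions Type=forward,TargetGroupArn=<tg1-arn>

aws elbv2 create-rule --listener-arn <listener-arn> --priority 20 \
  --conditions Field=path-pattern,Values='/smoor*' \
  --actions Type=forward,TargetGroupArn=<tg2-arn>
```

Anything matching neither pattern falls through to the listener's default action, which forwards to TG 1.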

This is how the flow usually looks:

flow
That's all, folks! Go explore and play around a little bit. :)