AWS Networking Deep Dive: Elastic Load Balancing (ELB)

AWS is one of the most popular public cloud providers. This course will teach you how to securely configure load balancing for any internet-facing or internal application, including configuring HTTPS, path-based routing, and idle timeouts.
Course info
Rating
(28)
Level
Intermediate
Updated
Jan 25, 2018
Duration
2h 30m
Description

Selecting and configuring the right load balancer type can be tough. In this course, AWS Networking Deep Dive: Elastic Load Balancing (ELB), you'll learn how to configure elastic load balancing for any application using the Application and Network Load Balancers. First, you'll discover how to securely load balance internet-facing and internal applications using the Application Load Balancer. Next, you'll explore how to load balance microservices using path-based routing. Finally, you'll delve into how and when to use the Network Load Balancer. When you're finished with this course, you'll have the necessary skills and knowledge to load balance any application.

About the author

Ben Piper is an IT consultant and the author of "Learn Cisco Network Administration in a Month of Lunches" from Manning Publications. He holds numerous certifications from Cisco, Citrix, and Microsoft.

More from the author
Architecting for Security on AWS
Intermediate
4h 8m
6 Sep 2018
AWS Networking Deep Dive: Route 53 DNS
Intermediate
4h 10m
18 May 2018
Section Introduction Transcripts

Course Overview
Hi everyone. My name is Ben Piper, and welcome to my course, AWS Networking Deep Dive: Elastic Load Balancing (ELB). I'm an AWS Certified Solutions Architect and author. AWS is the world's most popular public cloud provider. As more organizations move their applications to the cloud, security, scalability, and resiliency become increasingly important. That's where Elastic Load Balancing comes in. In this course, you'll learn how to configure Elastic Load Balancing for any application, whether you're dealing with a single, monolithic application or one that's been broken up into several microservices. You'll learn about the differences between the Classic, Application, and Network Load Balancers, and how to decide which one is right for you. Some of the major topics that we'll cover include load balancing internet-facing and internal applications, securing your applications using HTTPS, path-based routing for containers and microservices, sticky sessions and idle timeouts, and implementing load balancing with IPv6. By the end of this course, you'll know how to implement Elastic Load Balancing with any application. Before beginning the course, you should be familiar with creating VPCs and managing EC2 instances. I hope you'll join me on this journey to learn Elastic Load Balancing with the AWS Networking Deep Dive: Elastic Load Balancing course, only here at Pluralsight.

Load Balancing Internet-facing HTTP-based Web Applications
Welcome back. In this module, we're going to bring up the web tier instances for our application and place them behind an internet-facing Elastic Load Balancer. If you've watched my other courses, you know that I like to build things one piece at a time and verify that each piece works before moving on. So in this module, we're just going to get the web tier working, and later on in the course, we'll add the application and database tiers. Let's take a look at what we're going to end up with at the conclusion of this module. We're going to get the web tier instances up and running, and those are web1, web2, and web3. We'll create a target group with those instances as targets, and we'll create an Application Load Balancer listening on TCP port 80. In addition, the listener will be able to accept both IPv4 and IPv6 connections.
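The module itself uses the AWS console, but the same setup can be sketched with the AWS CLI. This is a rough sketch, not the course's exact steps; the VPC, subnet, instance IDs, and ARNs (`vpc-…`, `subnet-…`, `i-…`, `$…_ARN`) are placeholders you would substitute with your own.

```shell
# Sketch: web-tier target group and internet-facing ALB via the AWS CLI.
# All IDs and ARNs below are placeholders.

# Target group for web1, web2, web3 (HTTP on port 80)
aws elbv2 create-target-group \
  --name web-tg \
  --protocol HTTP --port 80 \
  --vpc-id "$VPC_ID"

# Register the three web-tier instances as targets
aws elbv2 register-targets \
  --target-group-arn "$WEB_TG_ARN" \
  --targets Id="$WEB1_ID" Id="$WEB2_ID" Id="$WEB3_ID"

# Internet-facing ALB; "dualstack" lets the listener accept IPv4 and IPv6
aws elbv2 create-load-balancer \
  --name web-lb \
  --scheme internet-facing \
  --ip-address-type dualstack \
  --subnets "$SUBNET_A" "$SUBNET_B"

# Listener on TCP port 80 forwarding to the web target group
aws elbv2 create-listener \
  --load-balancer-arn "$WEB_LB_ARN" \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn="$WEB_TG_ARN"
```

Note that an internet-facing dualstack ALB requires subnets in at least two Availability Zones and IPv6 CIDR blocks associated with the VPC and subnets.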

Load Balancing Internal Web Services
Welcome back. In this module, we're going to deploy the application tier of our web application and create an internal load balancer. The internal load balancer we're going to create is going to work a bit differently than the internet-facing load balancer we created earlier. We're going to start by creating a target group containing our three application tier instances, app1, app2, and app3. This target group is going to use the HTTP protocol and will point to TCP port 8080. Port 8080 is the port that the application tier components listen on for requests coming from the web tier. Next, we'll create an internal Application Load Balancer called app-lb, which will listen on TCP port 8080. After that, we'll reconfigure our web tier instances to point to the URL of the internal Application Load Balancer. To refresh your memory, the web tier instances will send requests to the application tier instructing those servers to read from or write to the database, which we're also going to set up. By the end of this module, all three tiers of our web application should be working.
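As a rough CLI equivalent of what this module builds in the console, the internal load balancer differs from the earlier one mainly in its `--scheme` and port. IDs and ARNs are placeholders.

```shell
# Sketch: application-tier target group and internal ALB (app-lb).
# All IDs and ARNs below are placeholders.

# Target group for app1, app2, app3 (HTTP on port 8080)
aws elbv2 create-target-group \
  --name app-tg \
  --protocol HTTP --port 8080 \
  --vpc-id "$VPC_ID"

aws elbv2 register-targets \
  --target-group-arn "$APP_TG_ARN" \
  --targets Id="$APP1_ID" Id="$APP2_ID" Id="$APP3_ID"

# "internal" scheme: the ALB gets private IPs only, reachable from the web tier
aws elbv2 create-load-balancer \
  --name app-lb \
  --scheme internal \
  --subnets "$SUBNET_A" "$SUBNET_B"

# Listener on TCP port 8080, the port the application tier listens on
aws elbv2 create-listener \
  --load-balancer-arn "$APP_LB_ARN" \
  --protocol HTTP --port 8080 \
  --default-actions Type=forward,TargetGroupArn="$APP_TG_ARN"
```

The web-tier instances would then be reconfigured to call the DNS name of app-lb rather than any application instance directly.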

Sticky Sessions and Idle Timeouts
Welcome back, my friends. In this module, we're going to learn about sticky sessions and idle timeouts. These are configuration items that let you control some aspects of the traffic between your clients and your load balancers. Let's start with a brief overview of each. Sticky sessions may sound like the name of a mobster or some other nefarious character, but it's actually a reference to the client sticking to the target that it originally got load balanced to. To put it a little differently, when a client gets load balanced to a particular target, every subsequent request from that client will also go to the same target. This binding between the client and the target is called a session, hence the term sticky session. Idle timeouts are completely different. When a client connects to an Application Load Balancer listener, they establish a TCP connection. HTTP or HTTPS traffic traverses this TCP connection. When there's no traffic going over this connection, the connection is idle, but it remains open. The idle timeout controls how long that TCP connection can remain idle before the load balancer closes it. In this module, we're going to talk about why you might want to use sticky sessions and idle timeouts, and of course, we're going to configure both. Let's get started.
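Both settings described above are attributes you can flip with the CLI as well as the console. A minimal sketch, with the ARNs and the 300/120-second values as illustrative placeholders:

```shell
# Sketch: enable duration-based sticky sessions on a target group.
# "lb_cookie" means the load balancer generates the session cookie itself;
# duration_seconds is how long the client stays bound to the same target.
aws elbv2 modify-target-group-attributes \
  --target-group-arn "$WEB_TG_ARN" \
  --attributes Key=stickiness.enabled,Value=true \
               Key=stickiness.type,Value=lb_cookie \
               Key=stickiness.lb_cookie.duration_seconds,Value=300

# Sketch: raise the idle timeout (default is 60 seconds) so idle TCP
# connections stay open longer before the load balancer closes them.
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn "$WEB_LB_ARN" \
  --attributes Key=idle_timeout.timeout_seconds,Value=120
```

Note that stickiness is configured per target group, while the idle timeout is configured on the load balancer itself.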

Securing Web Applications with HTTPS
Welcome back. In this module, we're going to secure our web application end to end using HTTPS. That means that the entire communication path all the way from the client to the web tier and from the web tier to the application tier will be encrypted. Here's what we're going to end up with in this module. First, we'll secure the web front end. We're going to create a secure listener on our internet-facing load balancer. This is going to require us to generate and install a TLS certificate on the load balancer. Sometimes people call this an SSL certificate, SSL being the old protocol that TLS replaced. We're going to create this certificate using AWS Certificate Manager, which is super easy. So if you've had bad experiences creating certificates in the past, you're going to be pleasantly surprised at how painless this is. Next, we're going to secure the back end. We'll create a new target group for the web tier, and this target group will use HTTPS and TCP port 443. Recall that earlier in the course, I demonstrated that each web server has its own certificate already installed. When the load balancer connects to any of them, that communication is going to be secure because both the target group and the individual web servers will be configured for HTTPS. Okay, that's going to secure the web tier. What about the application tier? Well, we're going to secure the application tier as well, only this time, we'll be using TCP port 8443 instead.
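A rough CLI sketch of the front-end pieces, request the certificate from ACM, then attach it to an HTTPS listener. The domain name, ARNs, and the security policy shown are placeholders/examples, not the course's exact values:

```shell
# Sketch: request a public TLS certificate from AWS Certificate Manager.
# DNS validation requires adding the CNAME record ACM gives you.
aws acm request-certificate \
  --domain-name www.example.com \
  --validation-method DNS

# Sketch: HTTPS listener on port 443 using the ACM certificate.
# --ssl-policy selects the set of TLS versions and ciphers the listener accepts.
aws elbv2 create-listener \
  --load-balancer-arn "$WEB_LB_ARN" \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn="$CERT_ARN" \
  --ssl-policy ELBSecurityPolicy-2016-08 \
  --default-actions Type=forward,TargetGroupArn="$WEB_HTTPS_TG_ARN"

# Backend encryption: a target group that speaks HTTPS to the web servers,
# which already have their own certificates installed.
aws elbv2 create-target-group \
  --name web-https-tg \
  --protocol HTTPS --port 443 \
  --vpc-id "$VPC_ID"
```

The application tier would follow the same pattern with an HTTPS target group and listener on port 8443 instead of 443.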