When you deploy a web service to multiple containers, you might want to load balance between the containers using a proxy or load balancer.
In this tutorial, you’ll use the dockercloud/hello-world image as a sample web service and dockercloud/haproxy to load balance traffic to the service. If you follow this tutorial exactly, your traffic will be distributed evenly between eight containers in a node cluster containing four nodes.
First, deploy a node cluster of 4 nodes.
If you have not linked to a host or cloud services provider, do that now.
You can find instructions on how to link to your own hosts, or to different providers here.
Click Node Clusters in the left-hand navigation menu.
Enter a name for the node cluster, then select the Provider, Region, and Type/Size.
Add a deployment tag of `web`. (Deployment tags are used to make sure the right services are deployed to the correct nodes.)
Drag or increment the Number of nodes slider to 4.
Click Launch node cluster.
This might take up to 10 minutes while the nodes are provisioned. This is a great time to grab a cup of coffee.
Once the node cluster is deployed and all four nodes are running, we’re ready to continue and launch our web service.
Click Services in the left-hand menu, and click Create.
Click the rocket icon and select the dockercloud/hello-world image.
On the Service configuration screen, configure the service using these values:

- Image tag: `latest`, so you get the most recent build of the image.
- Service name: `web`. This is what the service is called internally.
- Deployment strategy: High Availability, to deploy containers evenly across all nodes.
- Number of containers: 8, so the traffic can be distributed across eight containers.
- Deployment tags: `web`, to deploy only to nodes with this tag.

Note: For this tutorial, make sure you change the deployment strategy to High Availability, and add the tag `web` to ensure this service is deployed to the right nodes.
Last, scroll down to the Ports section and make sure the published box is checked next to port 80.
We’re going to access these containers from the public internet, and
publishing the port makes them available externally. Make sure you leave the
node port field unset so that it stays dynamic.
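If you prefer to script a deployment rather than use the web UI, the same web service can be described in a Docker Cloud stackfile. This is a sketch: the key names follow the stackfile format, and the tag, deployment strategy, and container count mirror the values used in this tutorial:

```yaml
web:
  image: 'dockercloud/hello-world:latest'
  deployment_strategy: high_availability
  target_num_containers: 8
  tags:
    - web
  ports:
    - '80'    # publish container port 80 on a dynamic node port
```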
Click Create and deploy.
Docker Cloud switches to the Service detail view after you create the service.
Scroll up to the Containers section to see the containers as they deploy.
The icons for each container change color to indicate what phase of deployment they’re in. Once all containers are green (successfully started), continue to the next step.
Scroll down to the Endpoints section.
Here you’ll see a list of all the endpoints available for this service on the public internet.
Click an endpoint URL (it should look something like
http://web-1.username.cont.dockerapp.io:49154) to open a new tab in your
browser and view the dockercloud/hello-world web page. Note the hostname for the page that loads.
Click other endpoints and check the hostnames. You’ll see different hostnames which match the container name (web-2, web-3, and so on).
We verified that the web service is working, so now we’ll set up the load balancer.
Click Services in the left navigation bar, and click Create again.
This time we’ll launch a load balancer that listens on port 80 and balances the traffic across the 8 containers that are running the web service.
Click the rocket icon if necessary and find the Proxies section.
Click the dockercloud/haproxy image.
On the next screen, set the service name to `lb`.
Leave the tag, deployment strategy, and number of containers at their default values.
Locate the API Roles field at the end of the General settings section.
Set the API Role to Full Access.
When you assign the service an API role, Docker Cloud passes a `DOCKERCLOUD_AUTH`
environment variable to the service’s containers, which allows them to query
Docker Cloud’s API on your behalf. You can read more about API Roles here.
The dockercloud/haproxy image uses the API to check how many containers
are in the `web` service we launched earlier. HAProxy then uses this
information to update its configuration dynamically as the web service scales.
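To make this concrete, the configuration that dockercloud/haproxy maintains looks roughly like a standard HAProxy backend with one server line per web container. This is an illustrative sketch, not the image’s exact output, and the container IP addresses are made up:

```
frontend default_frontend
    bind *:80
    default_backend default_service

backend default_service
    balance roundrobin
    server web-1 10.7.0.2:80 check
    server web-2 10.7.0.3:80 check
    # ...one server entry per container; entries are added or removed
    # automatically as the web service scales up or down
```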
Next, scroll down to the Ports section.
Click the Published checkbox next to the container port 80.
Click the word dynamic next to port 80, and enter 80 to set the published port to also use port 80.
Scroll down to the Links section.
Select `web` from the drop-down list, and click the blue plus sign to
add the link.
This links the load balancing service `lb` with the web service `web`. The
link appears in the table in the Links section.
You’ll also notice that a new set of `WEB` environment variables
appears in the service we’re about to launch. You can read more about
service link environment variables here.
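The load balancer you just configured through the UI can also be expressed as a stackfile entry. As before, this is a sketch; `roles: global` is the stackfile equivalent of granting the API role, and the link and port match this tutorial’s setup:

```yaml
lb:
  image: 'dockercloud/haproxy:latest'
  roles:
    - global
  links:
    - web
  ports:
    - '80:80'    # publish on node port 80, not a dynamic port
```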
Click Create and deploy and confirm that the service launches.
On the load balancer service detail page, scroll down to the endpoints section.
Unlike on the web service, you’ll see that this time the HTTP URL for the load balancer is mapped to port 80.
Click the endpoint URL to open it in a new tab.
You’ll see the same hello-world webpage you saw earlier. Make note of the hostname.
Refresh the web page.
With each refresh, the hostname changes as the requests are load-balanced to different containers.
Each container in the web service has a different hostname, which
appears in the webpage as
container_name-#. When you refresh the
page, the load balancer routes the request to a new host and the displayed hostname changes.
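The rotation you just observed can be modeled with a simple round-robin loop. This is a minimal Python sketch of the balancing behavior, not Docker Cloud’s actual implementation; the hostnames follow the web-1 … web-8 naming pattern seen on the hello-world pages:

```python
from itertools import cycle

# Hypothetical container hostnames, mirroring the eight web containers
# deployed earlier in this tutorial.
containers = [f"web-{i}" for i in range(1, 9)]

def round_robin(backends):
    """Yield backends one at a time, wrapping around forever."""
    return cycle(backends)

lb = round_robin(containers)

# Each "page refresh" is routed to the next container in turn.
first_eight = [next(lb) for _ in range(8)]
print(first_eight)   # every container is served exactly once
print(next(lb))      # the ninth request wraps back around to web-1
```

In practice HAProxy offers several balance algorithms (round-robin is the default), which is why each refresh lands on a different container.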
Tip: If you don’t see the hostname change, clear your browser’s cache or load the page from a different web browser.
Congratulations! You just deployed a load balanced web service using Docker Cloud!
What if you had so many
web containers that you needed more than one load balancer?
Docker Cloud automatically assigns a DNS endpoint to all services. This endpoint routes to all of the containers of that service. You can use the DNS endpoint to load balance your load balancer. To learn more, read up on service links.
You can try this by pointing your web browser to servicename.username.svc.dockerapp.io or using dig or nslookup to see how the service endpoint resolves.