• Servers and Tools

Introduction: Getting Started with NGINX

From the class:  Getting Started with NGINX

Now you've seen how to get an application server deployed to a machine that's either running in the cloud or someplace else, some remote machine. We deployed a Node.js application as a sample in the Get Deployed class, but it could have been a Rails application or Meteor or any of the other application frameworks that serve your application as an HTTP service.

What we're going to do in this class is layer on another role in our deployment environment. Sometimes that role is called a proxy server or a web server. And now you might be asking, why would we do that? Why would we layer on another web server when our app is already an HTTP server? Well, I'm going to walk you through some of the features that a web or proxy server provides to us. And you can get a hint of them down here in the lower right.

In this class, we're going to be using a software program called NGINX. It's a very popular proxy server used by tons of companies, including Evented Mind. So we're going to install NGINX and use it as a web server or as a proxy server in front of our application. Here's a simple diagram of where we've been so far. We haven't actually touched the database, but I put it in here just so that you would see.

We've got our application. That's where we've been spending our time. The application, whether it be Rails or Phoenix or Meteor or any of these application servers, is a stack of code that's running as a process on the machine. And that code is typically set up to respond to requests over some TCP socket. Last time we looked at one that was running on port 3000, and it's listening for HTTP requests. So we could actually open up a browser and navigate right to that application by going to the IP address for the virtual machine and to port 3000, and we would get our app.

Now what we're going to do is we're going to add another piece of software, that's NGINX, up front. So add NGINX here. And NGINX also runs as a process, and it also listens on a port for TCP connections. And it speaks HTTP as well. So there's a little bit of redundancy here. That might be a little confusing. And NGINX will typically listen on port 80, and also on port 443 for SSL. You'll see how SSL works soon as well.

So this is normally what is going to sit up in front of everything and is what browsers are going to connect to. And it will interact with our application over TCP, or you'll also see how it can interact with the application through a Unix domain socket or a file.
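To make that concrete, here's a minimal sketch of an NGINX server block that proxies requests back to an app. The domain name and the app's address (127.0.0.1:3000) are assumptions based on the example above:

```nginx
server {
    listen 80;                  # NGINX accepts browser connections here
    server_name example.com;    # assumed domain

    location / {
        # Proxy every request to the app over TCP.
        # To use a Unix domain socket instead, it would look like:
        #   proxy_pass http://unix:/tmp/app.sock;
        proxy_pass http://127.0.0.1:3000;

        # Pass the original host and client address along to the app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```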

So why, again, would we put a piece of software in front of the app? Well, for one thing, the application is normally quite expensive. So in other words, it's a little bit slower. Normally these are running on an interpreted language, and it takes a little bit of time to process requests. And also, you have to go through all the different software pieces that the app has, like the router, until ultimately you get to the specific function that needs to handle the request. So it's generally a little bit slower.

NGINX is going to be really fast. So when requests go to NGINX, it can handle a bunch of them at once. And it's also very good at queuing them up. So if the back end here starts to get bogged down, NGINX can queue up those requests and handle that bottleneck for you a little bit better than most application servers would be able to.

The other feature that NGINX is going to be able to provide is it can serve up files. So this is called caching. Or it just might be serving up public assets. If we have a file, let's say that you've got an image or a robots.txt file, it's really kind of useless to go all the way to the app and let the app serve up that file. We can serve it directly from disk through NGINX. And again, that's going to be really, really fast. And we don't have to hit the application server at all. So that's a pretty nice feature.
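As a sketch, a location block like the following would serve matching files straight from disk and send everything else to the app. The root path and the list of extensions are assumptions:

```nginx
server {
    listen 80;
    root /var/www/app/public;   # assumed path to the app's static assets

    # Serve common static files directly from disk, bypassing the app
    location ~* \.(png|jpg|gif|css|js|ico|txt)$ {
        expires 30d;            # let browsers cache these for 30 days
        try_files $uri =404;
    }

    location / {
        proxy_pass http://127.0.0.1:3000;   # everything else goes to the app
    }
}
```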

The other feature that NGINX will provide us is SSL termination. So SSL stands for Secure Sockets Layer. It's how we do secure communication over the web. Normally, app servers don't have SSL termination built into them. So we need something else sitting in front of them, some piece of software that's going to take an encrypted connection, decrypt it, and then pass the decrypted request on to our app server. And so this is going to work with SSL. So if we have an SSL certificate, we can upload it to NGINX and have it automatically work with secure connections to the browsers.
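Here's a minimal sketch of SSL termination in NGINX. The certificate and key paths are assumptions:

```nginx
server {
    listen 443 ssl;             # accept encrypted connections
    server_name example.com;

    # Certificate and key paths are assumptions
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # NGINX decrypts ("terminates") SSL here; the hop to the app
        # is plain HTTP on the same machine.
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```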

Next up, NGINX also provides the ability to load balance. Load balance. So let's say we have more than one app server. I'll draw another one down here. So this is app 2. And we added another app server because the first app server is using up all the CPU or all the memory of the machine. So we added machine 2 and we're running app 2 on it.

Now what we want to do is to send requests back and forth between the two app servers. Maybe we'll just use a strategy called round robin. So we'll start by sending requests here, and then the next one will go here, and the next one to here. So that's just called load balancing. So when we use NGINX as a reverse proxy, requests are going to come in here, and then we're going to proxy them back to either app 1 or to app 2, and it will just go back and forth. And you can imagine that we can have as many apps as we want back there. So we might have thousands of them. And NGINX can sit in front of them and decide where the requests get routed.
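Here's a sketch of what load balancing looks like in NGINX. Round robin is the default strategy for an upstream group; the server addresses are assumptions:

```nginx
# Round robin is NGINX's default strategy for an upstream group
upstream app_servers {
    server 10.0.0.1:3000;   # app 1 (addresses are assumptions)
    server 10.0.0.2:3000;   # app 2
}

server {
    listen 80;

    location / {
        # Requests alternate between the servers listed above
        proxy_pass http://app_servers;
    }
}
```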

Next up, NGINX can also do very fast redirecting. So let's say that we want to redirect all non-HTTPS traffic to HTTPS. So redirecting, let's say, HTTP to secure HTTPS. We can do that really quickly in NGINX without even touching the app server.
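That redirect is just a couple of lines of configuration. A sketch, with the domain name as an assumption:

```nginx
# Any request arriving on port 80 gets a permanent redirect to HTTPS
server {
    listen 80;
    server_name example.com;    # assumed domain
    return 301 https://example.com$request_uri;
}
```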

Or let's say you wanted to redirect all Evented Mind traffic to www-- so the bare domain gets sent out to the www subdomain. We can do that really quickly. And we can do other kinds of redirects in NGINX as well. If you wanted to redirect all URLs that used an old scheme to something new, we don't even need to touch our app in order to do that.
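Both of those look something like this. The domain and the /old-posts/ path are hypothetical, just to illustrate the pattern:

```nginx
# Send the bare domain to www (domain name is an assumption)
server {
    listen 80;
    server_name eventedmind.com;
    return 301 http://www.eventedmind.com$request_uri;
}

# Rewrite URLs that use an old scheme to a new one, without touching the app
# (the paths here are hypothetical)
location /old-posts/ {
    rewrite ^/old-posts/(.*)$ /posts/$1 permanent;
}
```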

Finally, one of the features that we use at Evented Mind is we can show a maintenance page really easily. So let's say that we want to take down the app servers for a little while so that we can do some work, and we need to show the user something in the meantime. Well, we can just put a static maintenance HTML file into the NGINX folder, and NGINX will serve that up. And we can even do some fancy logic inside of NGINX to say, when a certain file exists, go ahead and show the maintenance page. Otherwise don't.
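One common way to sketch that "when a certain file exists" logic is to check for a flag file and answer with a 503 whose body is the maintenance page. The root path and file name are assumptions:

```nginx
server {
    listen 80;
    root /var/www/app/public;   # assumed location of maintenance.html

    # If the flag file exists, answer every request with a 503...
    if (-f $document_root/maintenance.html) {
        return 503;
    }

    # ...and serve the maintenance page as the 503 response body
    error_page 503 @maintenance;
    location @maintenance {
        rewrite ^ /maintenance.html break;
    }

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```

To put the site into maintenance mode, you'd create the flag file; to bring it back, you'd delete it. No NGINX reload needed, since the file check happens per request.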

So there are probably lots of other features that NGINX provides, but these are kind of the big ones that come to mind. And in the rest of this class, I'm going to show you how to use some of these features in front of our app server.