5 Common Web Application Server Structures
A quick guide to 5 common ways to structure your web application
Blue Light Tech
Every environment is unique in its own way, which means there is no single correct configuration to suit all environments. You can opt for a combination of these setups; what you need to consider is reliability, ease of management, scalability, performance, availability and cost. Here are some common server setups:
- One Server
- Separated Database Server
- Load Balancer
- HTTP Accelerator
- Master-Slave Database Replication
One Server
This is commonly seen in a LAMP stack, which stands for Linux, Apache, MySQL and PHP, all running on one server. The entire environment, including the database server, web server and application server, is placed on a single machine. Though this option is limited in terms of component isolation and scalability, it is the simplest and the quickest to set up.
- It is not horizontally scalable.
- Performance can suffer and be difficult to diagnose, since the database and the application share the same server resources.
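As an illustration, a single-server environment can be sketched with a Docker Compose file that runs the web/application tier and the database side by side on one host. The image tags, paths and credentials below are placeholders, not part of any particular setup:

```yaml
# Everything on one host: Apache + PHP and MySQL compete for the same
# CPU, memory and I/O. All values here are illustrative.
services:
  web:
    image: php:8.2-apache        # Apache with PHP on the same machine
    ports:
      - "80:80"
    volumes:
      - ./src:/var/www/html      # application code (placeholder path)
  db:
    image: mysql:8.0             # MySQL sharing the same host's resources
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
```

Because both services run on one machine, a heavy query in `db` directly eats into the resources available to `web`, which is the contention problem described above.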
Separated Database Server
This enhances security by removing the database from the public internet or DMZ. It also does away with resource contention between the database and the application by separating the database management system (DBMS) from the rest of the environment.
- It can improve on security depending on your setup.
- No contention between the database and the application for the same resources, such as I/O, CPU and memory.
- You can add resources to whichever server needs more capacity, vertically scaling each tier separately.
- If there is a high-latency connection between the two servers, network performance issues may arise.
- Setup is slightly more complex than the single-server option.
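In practice, separating the database is often little more than a connection-setting change on the application side: the database host becomes the database server's private address instead of localhost. A minimal sketch, with purely illustrative addresses and names:

```ini
; Application database settings (all values are placeholders)
; Before: database on the same machine
; db_host = 127.0.0.1
; After: dedicated database server reachable only on the private network
db_host = 10.0.0.12   ; private IP, not exposed to the public internet
db_port = 3306
db_name = app
db_user = app_user
```

Keeping the database on a private address is what provides the security benefit mentioned above: the DBMS never accepts connections from the public internet.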
Load Balancer (Reverse Proxy)
A reverse proxy can serve multiple applications from the same port and domain; examples of such software are Varnish, HAProxy and Nginx. Load balancers distribute the workload across multiple servers, which improves reliability and performance: if one server fails, the others handle all incoming traffic until the issue is rectified.
- It can limit connections, which guards against DDoS attacks.
- Horizontal scaling possibility.
- Additional complexities may require consideration, such as where to perform SSL termination and how to handle applications that require sticky sessions.
- Poor configuration or insufficient resources can make the load balancer itself a performance bottleneck.
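A minimal Nginx load-balancing sketch may make the idea concrete. The domain name and backend addresses are assumptions for the example:

```nginx
# Distribute incoming requests across two application servers.
upstream app_backend {
    server 10.0.0.11;   # app server 1 (private network, placeholder address)
    server 10.0.0.12;   # app server 2; taken out of rotation temporarily
                        # if requests to it fail
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_backend;          # forward to the pool above
        proxy_set_header Host $host;            # preserve the original host header
        proxy_set_header X-Real-IP $remote_addr;  # pass the client's IP upstream
    }
}
```

By default Nginx round-robins across the servers in the `upstream` block, which is the workload distribution described above.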
HTTP Accelerator (Caching Reverse Proxy)
This is meant to reduce the time taken to serve content to the user, mainly by caching responses from a web or application server in memory. Software capable of this includes Nginx, Varnish and Squid.
- Protection against DDoS attacks is possible with some caching software.
- Caching and compression increase user capacity and reduce CPU load on the server.
- Reduced performance if cache-hit rate is low.
- A fair amount of tuning is needed for best performance.
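Nginx, for example, can act as a caching reverse proxy with a few directives. The cache path, sizes, durations and backend address below are illustrative, not recommended values:

```nginx
# Cache upstream responses so repeated requests are served from the cache
# instead of hitting the application server each time.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;

    location / {
        proxy_cache app_cache;        # use the cache zone defined above
        proxy_cache_valid 200 10m;    # keep successful responses for 10 minutes
        add_header X-Cache-Status $upstream_cache_status;  # HIT/MISS, for debugging
        proxy_pass http://127.0.0.1:8080;  # backend app server (assumed address)
    }
}
```

The `X-Cache-Status` header is a common way to observe the cache-hit rate, which, as noted above, determines whether the accelerator actually helps.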
Master-Slave Database Replication
This setup requires a master and one or more slave nodes; the master receives all updates, while reads are distributed across all nodes.
- If the master is used for updates only, write performance is improved.
- Reads are spread across the slaves, which improves database read performance.
- No updates can take place if the master fails.
- There is no built-in failover in case the master node fails.
- Content on the slaves can be outdated, since updates are replicated asynchronously.
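With MySQL, the topology is largely a matter of server configuration. A hedged sketch of the two `my.cnf` files is below; the server IDs and file names are illustrative, and a real slave must additionally be pointed at the master (e.g. with MySQL's `CHANGE REPLICATION SOURCE TO` statement):

```ini
# --- master my.cnf: receives all writes and logs them for replication ---
[mysqld]
server-id = 1
log_bin   = mysql-bin    # binary log that the slaves replay

# --- slave my.cnf (a separate file, on a separate server) ---
# [mysqld]
# server-id = 2            # must differ from the master's
# relay_log = relay-bin    # where the master's events are staged locally
# read_only = ON           # serve reads only; all writes go to the master
```

Setting `read_only` on the slaves enforces the read/write split described above, while the asynchronous replay of the binary log is the source of the replication lag noted in the last point.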