Usman Nasir, September 21, 2020

OpenLiteSpeed vs NGINX


Updated: 22 September 2020.

OpenLiteSpeed is the open-source edition of the LiteSpeed Enterprise Web Server. It shares the same code base, and therefore delivers the same enterprise-grade performance. Today we will compare the performance of OpenLiteSpeed vs NGINX in several scenarios:

  1. Static file performance of OpenLiteSpeed vs Nginx.
  2. Simple PHP file performance.
  3. WordPress site performance with and without LSCache and FastCGI Cache for NGINX.

We will run our tests on a DigitalOcean $5 Droplet with the following specs:

  1. 1GB RAM.
  2. 25GB SSD Disk Space.

For the OpenLiteSpeed environment, we will install CyberPanel, and for the NGINX environment, we will use a clean VestaCP installation. We will use h2load for benchmarking, running on a DigitalOcean $10 plan. (All of these virtual machines reside in the Frankfurt, Germany region.)


Install h2load (nghttp2)

As mentioned above, we are going to use h2load to perform the benchmarks. On our CentOS 7.6 DigitalOcean server ($10 plan), we ran the following commands to install h2load:

yum install epel-release -y

yum install nghttp2

Then make sure it is installed:

[root@h2load nghttp2]# h2load --version
h2load nghttp2/1.31.1

This server is dedicated solely to running the benchmarks.


Make Sure to Enable HTTP/2 on the NGINX Server

By default on VestaCP, you get HTTP/1.1 with NGINX. You can open the vhost configuration file to enable HTTP/2.

nano /home/admin/conf/web/yourdomain.com.nginx.ssl.conf

Replace yourdomain.com with the domain you have on VestaCP. Once in the file, change

server {
    listen 192.168.100.1:443;

Into

server {
    listen 192.168.100.1:443 ssl http2;

Save the file and restart NGINX using systemctl restart nginx. On CyberPanel, HTTP/2 is enabled by default.
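If you prefer to script the change, the same substitution can be done with sed (a sketch using the example IP from above; back up the config file before editing it in place with sed -i):

```shell
# The original listen directive from the example vhost
conf='listen 192.168.100.1:443;'
# Append "ssl http2" before the semicolon, as done by hand above
echo "$conf" | sed 's/:443;/:443 ssl http2;/'
# → listen 192.168.100.1:443 ssl http2;
```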


Let’s test a small static file of 725 bytes

In this test, we will be using the following command

h2load -t1 -H 'Accept-Encoding: gzip' -c1000 -n100000
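The flags can be read as follows; the target URL goes at the end (shown here as a hypothetical shell variable):

```shell
URL='https://example.com/small.html'  # hypothetical target site
# -t1       : 1 worker thread
# -c1000    : 1000 concurrent clients
# -n100000  : 100000 total requests
# -H '...'  : send an Accept-Encoding: gzip request header
CMD="h2load -t1 -H 'Accept-Encoding: gzip' -c1000 -n100000 $URL"
echo "$CMD"
```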

OpenLiteSpeed completed the requests in almost half the time.

Result for OpenLiteSpeed

[root@h2load nghttp2]# h2load -t1 -H 'Accept-Encoding: gzip' -c1000 -n100000 $domainols
starting benchmark…
spawning thread #0: 1000 total client(s). 100000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES128-GCM-SHA256
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 7.77s, 12864.34 req/s, 4.71MB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 36.64MB (38416114) total, 1.21MB (1267114) headers (space savings 94.34%), 32.81MB (34400000) data
min max mean sd +/- sd
time for request: 4.47ms 468.95ms 66.97ms 16.56ms 94.64%
time for connect: 186.83ms 1.97s 864.64ms 371.78ms 88.00%
time to 1st byte: 615.39ms 2.03s 970.81ms 343.46ms 90.80%
req/s : 12.90 13.47 13.23 0.14 70.60%

Result for NGINX

[root@h2load nghttp2]# h2load -t1 -H 'Accept-Encoding: gzip' -c1000 -n100000 $domainnginx
starting benchmark…
spawning thread #0: 1000 total client(s). 100000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES256-GCM-SHA384
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 17.68s, 5657.34 req/s, 2.57MB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 45.35MB (47549000) total, 10.78MB (11300000) headers (space savings 35.80%), 32.81MB (34400000) data
min max mean sd +/- sd
time for request: 69.67ms 1.46s 104.37ms 74.98ms 96.91%
time for connect: 6.19s 7.76s 7.13s 521.05ms 61.80%
time to 1st byte: 7.66s 7.95s 7.75s 71.72ms 62.60%
req/s : 5.66 5.71 5.69 0.01 66.90%
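When comparing runs, it helps to pull the headline throughput out of saved logs. A small awk sketch (the "finished" lines are inlined here; in practice you would grep '^finished' from a saved h2load log):

```shell
# Sample "finished" lines from the two runs above
ols='finished in 7.77s, 12864.34 req/s, 4.71MB/s'
ngx='finished in 17.68s, 5657.34 req/s, 2.57MB/s'
# Field 4 of the "finished" line is the requests-per-second figure
echo "$ols" | awk '{print $4}'   # 12864.34
echo "$ngx" | awk '{print $4}'   # 5657.34
```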

Make sure that when you run the test against NGINX, the application protocol is h2.


Static file of size 2MB

In this test, we will be using the following command

h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n1000

OpenLiteSpeed completed the requests in 8.4 seconds, while for the same number of requests NGINX took 74.81 seconds.

Result for OpenLiteSpeed

[root@h2load nghttp2]# h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n1000 $domainols
starting benchmark…
spawning thread #0: 100 total client(s). 1000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES128-GCM-SHA256
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 8.40s, 119.05 req/s, 231.84MB/s
requests: 1000 total, 1000 started, 1000 done, 1000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 1000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.90GB (2041926867) total, 37.08KB (37967) headers (space savings 83.56%), 1.90GB (2036628000) data
min max mean sd +/- sd
time for request: 7.53ms 1.94s 791.62ms 185.17ms 75.20%
time for connect: 101.46ms 112.75ms 107.14ms 2.21ms 71.00%
time to 1st byte: 115.26ms 136.43ms 125.44ms 5.40ms 61.00%
req/s : 1.19 1.40 1.25 0.04 68.00%

Result for NGINX

[root@h2load nghttp2]# h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n1000 $domainnginx
starting benchmark…
spawning thread #0: 100 total client(s). 1000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES256-GCM-SHA384
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 74.81s, 13.37 req/s, 25.99MB/s
requests: 1000 total, 1000 started, 1000 done, 1000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 1000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.90GB (2039006900) total, 112.30KB (115000) headers (space savings 35.75%), 1.90GB (2036628000) data
min max mean sd +/- sd
time for request: 66.81ms 44.02s 7.04s 1.82s 92.30%
time for connect: 545.07ms 920.01ms 646.84ms 92.66ms 86.00%
time to 1st byte: 635.69ms 8.21s 4.34s 2.17s 59.00%
req/s : 0.13 0.15 0.14 0.00 61.00%

For both large and small files, OpenLiteSpeed is clearly the winner.


Testing a simple PHP Hello World Application

We will now create a simple PHP file with the following content:

<?php

echo "hello world";

?>

Additional Configuration for OpenLiteSpeed

PHP_LSAPI_CHILDREN=10

LSAPI_AVOID_FORK=1

Additional Configuration for NGINX

pm.start_servers = 10
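For context, pm.start_servers belongs in the PHP-FPM pool configuration (the path varies by distro; /etc/php-fpm.d/www.conf is typical on CentOS). A minimal sketch of the relevant settings:

```ini
; /etc/php-fpm.d/www.conf (path is distro-dependent)
pm = dynamic
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
```

Note that pm.start_servers must fall between pm.min_spare_servers and pm.max_spare_servers when pm = dynamic.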

Command Used

h2load -t1 -H 'Accept-Encoding: gzip' -c1000 -n100000

OpenLiteSpeed completed the requests in 23.76 seconds, while for the same number of requests NGINX took 115.02 seconds. OpenLiteSpeed is a clear winner with PHP applications due to its own implementation of PHP processes, LSPHP (PHP + LSAPI). It performs much better than PHP-FPM, which is used with NGINX.


Is LiteSpeed server good?

LiteSpeed server is generally well-regarded for its speed and efficiency. It is known for significantly improving website performance by implementing advanced caching mechanisms and supporting features like HTTP/3. LiteSpeed is a popular choice for web hosting environments where speed and resource optimization are crucial factors.

LiteSpeed Cache vs FastCGI Caching with NGINX

We will now discuss caching in OpenLiteSpeed and NGINX.

With OpenLiteSpeed web server you get a built-in cache module and with NGINX you get a FastCGI Caching Module.

Why is the OpenLiteSpeed Cache Module Better?

  1. Tag-based caching: pages can be cached indefinitely until the cached copy is invalidated.
  2. Built right into the web server.
  3. Cache plugins are available for popular CMSs.
  4. Uses the disk to store cached copies.

What is Wrong with NGINX-Based FastCGI Caching?

  1. Not tag-based caching; it is time-based caching.
  2. This type of caching is not intelligent and does not know when to invalidate a cached copy.
  3. You can use MicroCaching, but it is generally not recommended.
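For illustration, a minimal FastCGI cache setup looks like this (the zone name and cache path are hypothetical). Note that fastcgi_cache_valid expires entries purely by elapsed time, with no knowledge of whether the underlying content has changed:

```nginx
# In the http block: where cached copies live, plus a 100MB key zone
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WPCACHE:100m inactive=60m;

# In the server/location block that handles PHP
fastcgi_cache WPCACHE;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_valid 200 301 60m;   # cached copies expire after 60 minutes, stale or not
```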

Why use OpenLiteSpeed?

OpenLiteSpeed is a free and open-source web server software based on the LiteSpeed server. It is designed to be lightweight, fast, and easy to use. There are several reasons to use OpenLiteSpeed:

1. Performance

OpenLiteSpeed inherits the performance benefits of LiteSpeed, making it a fast and efficient web server.

2. Ease of Use

It comes with a user-friendly web-based interface for easy server management and configuration.

3. Compatibility

OpenLiteSpeed is compatible with Apache configurations, making it a seamless transition for users familiar with Apache.

4. Resource Efficiency

It is known for using fewer server resources while delivering high performance.

5. Open Source

Being open-source, it is freely available, allowing users to customize and modify the software according to their needs.

Benchmarking OpenLiteSpeed vs NGINX for WordPress

We will now benchmark OpenLiteSpeed vs NGINX for WordPress by installing WordPress on both setups.

  1. OpenLiteSpeed will use the official LiteSpeed caching plugin for WordPress.
  2. On the NGINX setup, we will use the Cache Enabler caching plugin.

Command used

h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n10000

The first question after seeing the results will be why OpenLiteSpeed took only 1.4 seconds while NGINX (even using a cache plugin) took 91.6 seconds to complete the same number of requests. Recall how each server handles a cache hit.

In the case of OpenLiteSpeed, when there is a cache hit, the request never reaches the PHP engine, which is a very costly operation and the source of most bottlenecks. The OpenLiteSpeed cache module sits inside the web server and all cache logic is handled there, so there is no need to invoke the PHP engine at all.

However, this is not the case with NGINX: the Cache Enabler plugin resides on the PHP side. So even on a cache hit, PHP must be forked and executed, which causes the bottlenecks. Let's see the detailed results now.

OpenLiteSpeed

[root@h2load nghttp2]# h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n10000 $domainols
starting benchmark…
spawning thread #0: 100 total client(s). 10000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES128-GCM-SHA256
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 1.44s, 6925.11 req/s, 25.10MB/s
requests: 10000 total, 10000 started, 10000 done, 10000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 10000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 36.25MB (38006300) total, 118.55KB (121400) headers (space savings 95.55%), 35.87MB (37610000) data
min max mean sd +/- sd
time for request: 9.31ms 20.81ms 13.39ms 1.13ms 89.23%
time for connect: 89.91ms 100.89ms 95.89ms 2.78ms 64.00%
time to 1st byte: 101.79ms 113.77ms 107.89ms 3.45ms 61.00%
req/s : 69.35 70.00 69.66 0.19 62.00%

NGINX

[root@h2load nghttp2]# h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n10000 $domainnginx
starting benchmark…
spawning thread #0: 100 total client(s). 10000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES256-GCM-SHA384
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 91.69s, 109.06 req/s, 417.50KB/s
requests: 10000 total, 10000 started, 10000 done, 10000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 10000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 37.38MB (39198270) total, 1.44MB (1513370) headers (space savings 27.93%), 35.76MB (37500000) data
min max mean sd +/- sd
time for request: 355.76ms 1.23s 907.05ms 78.63ms 76.91%
time for connect: 357.00ms 678.18ms 506.17ms 153.42ms 54.00%
time to 1st byte: 712.81ms 1.60s 1.15s 264.29ms 57.00%
req/s : 1.09 1.10 1.10 0.00 57.00%


OpenLiteSpeed and .htaccess

OpenLiteSpeed also supports .htaccess files (a very popular feature of the Apache Web Server). Some people associate .htaccess with slow performance, and in Apache's case performance does suffer when .htaccess files are enabled, because Apache checks for them on every request. OpenLiteSpeed, however, only looks for the .htaccess file in a directory the first time it is accessed, which means you get the benefit of .htaccess along with high performance.
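For example, the standard WordPress rewrite block works unchanged in an OpenLiteSpeed .htaccess file:

```apacheconf
# BEGIN WordPress
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
# END WordPress
```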


Is LiteSpeed better than Apache?

LiteSpeed is often considered better than Apache for certain use cases. LiteSpeed is known for its superior performance and efficiency, especially in handling a large number of concurrent connections. It typically requires fewer server resources, providing faster and more scalable performance compared to Apache.

FAQs

Which server is faster, OpenLiteSpeed or NGINX?

Both OpenLiteSpeed and NGINX are known for their speed and efficiency. The performance comparison may vary based on specific use cases and configurations. OpenLiteSpeed is often praised for its simplicity and ease of use, while NGINX is celebrated for its flexibility and extensive feature set.

Can I use both OpenLiteSpeed and NGINX for the same website?

While it’s technically possible to use both OpenLiteSpeed and NGINX together, it’s not a common configuration. Typically, users choose one web server based on their preferences and requirements. Mixing them may add complexity and could require advanced configurations.

Which server is better for handling concurrent connections?

Both OpenLiteSpeed and NGINX are designed to handle a large number of concurrent connections efficiently. The choice between them may depend on other factors such as ease of use, specific feature requirements, and the complexity of your web hosting environment.

Is OpenLiteSpeed suitable for large-scale websites?

OpenLiteSpeed can be suitable for large-scale websites, especially when simplicity and ease of use are important considerations. For very high-traffic and complex setups, some users may opt for the commercial LiteSpeed server, but OpenLiteSpeed can still handle significant workloads effectively.

Is NGINX more widely used than OpenLiteSpeed?

NGINX has been widely adopted and is often used in diverse environments, including large-scale websites and high-traffic servers. OpenLiteSpeed, while gaining popularity, may be chosen for its simplicity in specific use cases, particularly where ease of use is prioritized.

Conclusion

We ran multiple types of tests:

  1. Small static file.
  2. Large static file.
  3. Simple Hello World PHP application.
  4. WordPress site.

In all cases, OpenLiteSpeed was the clear winner against NGINX.
