By Lori MacVittie
April 17, 2009 08:15 PM EDT
Open Source SSL Accelerator solution not as cost effective or well-performing as you think
o3 Magazine has a write up on building an SSL accelerator out of Open Source components. It's a compelling piece, to be sure, that was picked up by Slashdot and discussed extensively.
If o3 had stuck to its original goal - building an SSL accelerator on the cheap - it might have had better luck making its arguments. But it wanted to compare an Open Source solution to a commercial solution. That makes sense: the author was trying to show the value of Open Source and that you don't need to shell out big bucks to achieve similar functionality. The problem is that there are very few - if any - commercial SSL accelerators on the market today. SSL acceleration has long been subsumed by load balancers/application delivery controllers, and therefore a direct comparison between o3's Open Source solution and any commercially available solution would have been irrelevant; comparing apples to chicken is a pretty useless thing to do.
To the author's credit, he recognized this and therefore offered a complete Open Source solution that could be compared more fairly to existing commercial load balancers/application delivery controllers; specifically, he chose the BIG-IP 6900. The hardware platform was chosen, I assume, based on the SSL TPS rates to ensure a more fair comparison. Here's the author's description of the "full" Open Source solution:
The Open Source SSL Accelerator requires a dedicated server running Linux. Which Linux distribution does not matter, Ubuntu Server works just as well as CentOS or Fedora Core. A multi-core or multi-processor system is highly recommended, with an emphasis on processing power and to a lesser degree RAM. This would be a good opportunity to leverage new hardware options such as Solid State Drives for added performance. The only software requirement is Nginx (Engine-X) which is an Open Source web server project. Nginx is designed to handle a large number of transactions per second, and has very well designed I/O subsystem code, which is what gives it a serious advantage over other options such as Lighttpd and Apache. The solution can be extended by combining a balancer such as HAproxy and a cache solution such as Varnish. These could be placed on the Accelerator in the path between the Nginx and the back-end web servers.
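For readers unfamiliar with the setup, the SSL-offload piece of that solution would look roughly like the following Nginx configuration fragment. This is an illustrative sketch only; the certificate paths and back-end addresses are placeholders, not taken from the article.

```nginx
# Illustrative only: terminate SSL in Nginx and proxy plain HTTP to
# back-end web servers. Paths and addresses are hypothetical.
http {
    upstream backend_pool {
        server 10.0.0.11:80;
        server 10.0.0.12:80;
    }

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/ssl/example.crt;
        ssl_certificate_key /etc/nginx/ssl/example.key;

        location / {
            proxy_pass http://backend_pool;
            # let the back end know the original request arrived over HTTPS
            proxy_set_header X-Forwarded-Proto https;
        }
    }
}
```

Extending the solution as the author suggests means inserting HAproxy and Varnish between the `proxy_pass` target and the back-end servers, which is exactly the proxy chaining discussed below.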
o3 specs out this solution as running around $5000, which is less than 10% of the listed cost of a BIG-IP 6900. On the surface, this seems to be quite the deal. Why would you ever purchase a BIG-IP, or any other commercial load balancer/application delivery controller based on the features/price comparison offered?
Turns out there are quite a few reasons; reasons that were completely ignored by the author.
CHAINING PROXIES vs INTEGRATED SOLUTIONS
While the moving parts cited by the author (Nginx, Apache, HAproxy, Varnish) are individually fine solutions, he suggests combining them to assemble a more complete application delivery solution that provides caching, Layer 7 inspection and transformation, and other advanced functionality. Indeed, combining these solutions does provide a deployment that is closer to the features offered by a commercial application delivery controller such as BIG-IP.
Unfortunately, none of these Open Source components are integrated. This necessitates an architecture based on chaining proxies, whether they are deployed on the same hardware (as suggested by the author) or on separate devices; in path, of course, but physically separated.
Chaining proxies incurs latency at every point in the process, in the form of:
- TCP connection setup and teardown processing
- Inspection of application data (layer 7 inspection is rarely computationally inexpensive)
- Execution of functionality (caching, security, acceleration, etc...)
- Transfer of data between proxies (when deployed on the same device this is minimized)
- Multiple log files
This network sprawl degrades response time by adding latency at every hop and actually defeats the purposes for which the proxies were deployed. The gains in performance achieved by offloading SSL to Nginx are almost immediately lost when multiple proxies are chained in order to provide the functionality required to match a commercial application delivery controller.
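To make the additive nature of chained-proxy latency concrete, here is a back-of-envelope sketch. The per-hop numbers are purely hypothetical; actual costs depend entirely on hardware, configuration, and workload.

```python
# Hypothetical per-hop processing costs in milliseconds; illustrative only.
hops = {
    "nginx (SSL termination)": 2.0,
    "varnish (cache lookup)": 1.0,
    "haproxy (L7 inspection + balancing)": 1.5,
}
tcp_overhead_ms = 0.5  # connection setup/teardown between chained proxies

# Every hop adds its own processing cost plus inter-proxy TCP overhead;
# the costs are additive, which is the core of the argument above.
added_latency = sum(hops.values()) + tcp_overhead_ms * len(hops)
print(f"latency added per request: {added_latency:.1f} ms")
```

Even with optimistic per-hop figures, the chain pays the overhead at every link; an integrated device performs the same functions in a single pass behind one TCP termination.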
A chained proxy solution adds complexity, obscures visibility (impairing the ability to troubleshoot), and makes audit paths more difficult to follow. Aggregated logging is never mentioned, but it is a serious consideration, especially where regulatory compliance enters the picture. The issue of multiple log files is one that has long plagued IT departments everywhere, as they often require manual aggregation and correlation - which costs time and money. A third-party solution is often required to support troubleshooting and transactional monitoring, which incurs additional costs in the form of acquisition, maintenance, and management - none of which are considered by the author.
Soft costs, too, are ignored by the author. Configuring the multiple Open Source intermediaries required to match a commercial solution often means manually editing configuration files, and each component must be configured individually. Commercial solutions - and specifically BIG-IP - reduce the time and effort required to configure such solutions by offering myriad options for management - standards-based API, scripting, command line, GUI, application templates and wizards, central management system, and integration as part of other standard data center management systems.
COMPRESSION SHOULD NEVER BE A BINARY CONFIGURATION
The author correctly identifies that offloading compression duties from back-end servers to an intermediary can result in improved application performance and greater server efficiency. Nginx supports industry-standard gzip compression.
The problem with this - and there is a problem - is that it is not always beneficial to apply compression. Years of extensive experience and testing prove that the use of compression can actually degrade performance. Factors such as size of application payload, type of content, and the speed of the network on which the application data will be transferred should all be considered when making the decision to compress or not compress.
This intelligence, this context-awareness, is not offered by this Open Source solution. o3's compression is on or off, with nothing in between. In situations where images are being delivered over a LAN, for example, compression will not provide any significant performance benefit and in fact will likely degrade performance. Certainly Nginx could be configured to ignore images, but this does not solve the problem of the inherent uselessness of trying to compress content traversing a LAN and/or under a certain length.
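For reference, the selective compression that is available comes from directives in Nginx's standard gzip module; a sketch (the values are illustrative, not recommendations):

```nginx
# Compress only content types likely to benefit, and skip small payloads.
gzip            on;
gzip_min_length 1000;   # responses under ~1 KB rarely benefit
gzip_types      text/plain text/css application/x-javascript application/xml;
# Images (JPEG/PNG/GIF) are already compressed and are excluded simply by
# not listing their types. Note that nothing here can adapt to whether the
# client is on a LAN or a slow WAN link - the context the article says is
# missing.
```

These directives filter by size and content type, but the decision remains static; there is no awareness of network conditions or payload characteristics beyond length and MIME type.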
Another overlooked item is security. Not just application security, but full TCP/IP stack security. The Open Source solution could easily add mod_security to the list to achieve parity with the application security features available in commercial solutions. That does not address the underlying stack security. The author suggests running on any standard Linux platform. To be sure, anyone building such a solution for deployment in a production environment will harden the base OS; potentially using SELinux to further lock down the system. No need to argue about this; it's assumed good administrators will harden such solutions.
But what will not be done - and cannot be done - is securing the system against network and application attacks: simple DoS, ARP poisoning, SYN floods, cookie tampering. The list of potential attacks against a system designed to sit in front of web and application servers is far longer than this, but even these commonly seen attacks will not be addressed by o3's Open Source solution. By comparison, protection against these types of attacks is part and parcel of BIG-IP; no additional modules or functionality necessary.
Furthermore, the performance numbers provided by o3 for their solution seem to indicate that testing was done using 512-bit key certificates. A single Opteron core can only process around 1,500 1024-bit RSA operations per second, which means an 8-core system could perform only approximately 12,000 1024-bit RSA ops per second - assuming that's all it was doing. 512-bit keys run around five times faster than 1024-bit keys. The author states: "The system had no problems handling over 26,590 TPS" - a rate well beyond what those cores can sustain with 1024-bit RSA, which suggests the test was not using the industry-standard 1024-bit keys. In fact, 512-bit certificates are no longer supported by most CAs due to their weak key strength.
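The arithmetic behind that inference is simple enough to sanity-check, using the approximate per-core RSA figure cited above:

```python
# Back-of-envelope check of the reported 26,590 SSL TPS.
ops_per_core_1024 = 1500   # approx. 1024-bit RSA ops/sec per Opteron core
cores = 8

max_tps_1024 = ops_per_core_1024 * cores   # ceiling with 1024-bit keys
max_tps_512 = max_tps_1024 * 5             # 512-bit keys run ~5x faster

reported_tps = 26590
# The reported rate exceeds the 1024-bit ceiling but fits comfortably
# under the 512-bit one, consistent with 512-bit test certificates.
assert reported_tps > max_tps_1024
assert reported_tps < max_tps_512
print(max_tps_1024, max_tps_512)  # 12000 60000
```

These are rough capacity ceilings, not benchmarks, but they show why the reported number is implausible for 1024-bit keys on that hardware.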
Needless to say, if the testing used to determine the SSL TPS for BIG-IP were to use 512-bit keys, you'd see a marked increase in the number of SSL TPS in the data sheet.
YOU GET WHAT YOU PAY FOR
Look, o3 has put together a fairly cool and cheap solution that accomplishes many of the same tasks as a commercial application delivery controller. That's not the point. The point is that trying to compare a robust, integrated application delivery solution with a cobbled-together set of components designed to mimic similar functionality is silly.
Not only that, the logic that claims it is more cost efficient is flawed.
Is the o3 solution cheaper? Sure - as long as we look only at acquisition. If we look at the cost to application performance, the cost to maintain the solution, to troubleshoot it, and to manage it, then no, no it isn't. You're trading immediate CAPEX cost savings for long-term OPEX cost outlays.
And as is always the case, in every market, you get what you pay for. A $5,000 car isn't going to last as long or perform as well as a $50,000 car, and it isn't going to come with warranties and support, either. It will do what you want, at least for a while, but you're on your own when you take the cheap route.
That said, you are welcome to do so. It is your data center, after all. Just be aware of what you're sacrificing and the potential issues with choosing the road less expensive.
Related blogs and articles:
- Application Acceleration: To compress or not compress
- Open Source SSL Accelerator
- IT @ AnandTech: Intel Woodcrest, AMD's Opteron and Sun's UltraSparc T1: Server CPU Shoot-out