If everyone is thinking the same, someone isn't thinking

Lori MacVittie





The Power of the Proxy: Request Routing Memcached By @LMacVittie | @DevOpsSummit #DevOps

There are three things an application needs to survive in today's demanding world: scale, security, and performance.

It is for both scale and performance that memcached has become such a popular solution in modern application architectures. It aids scalability by offloading requests from the database, which frees up the database's capacity for the queries memcached can't answer. And it improves performance by returning very fast responses to cached queries, which in turn reach the user with greater alacrity.

From memcached’s site: “memcached is a free & open source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.”

It’s in-memory, which makes it fast. Disk I/O is one of the most latency-incurring actions on any given system, so eliminating the need to go to disk to seek out data – pretty much a requirement in a database system – is critical to improving performance. And it’s based on key-value pairs, the model behind the NoSQL databases that arose in response to the need for faster access than traditional relational databases (like MySQL, Microsoft SQL Server, and Oracle), which are far more complex under the covers, can provide.
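To make the pattern concrete, here’s a minimal cache-aside sketch in Python. A plain dict stands in for the memcached client (a real client, e.g. pymemcache, exposes the same get/set shape), and fetch_from_db is a hypothetical placeholder for the slow, disk-bound database query – both names are illustrative assumptions, not anyone’s actual API.

```python
cache = {}  # stand-in for a memcached client


def fetch_from_db(key):
    # Hypothetical placeholder for a relational query that hits disk.
    return f"row-for-{key}"


def get(key):
    value = cache.get(key)          # fast, in-memory lookup first
    if value is None:               # cache miss: fall back to the database...
        value = fetch_from_db(key)
        cache[key] = value          # ...and populate the cache for next time
    return value
```

The first lookup for a key pays the database cost; every subsequent lookup is served from memory, which is exactly the offloading effect described above.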

Basically, it’s an excellent addition to app architectures seeking a performance (and capacity) boost. Given that 60% of developers in an informal survey at AWS re:Invent in 2014 said performance was their biggest database challenge, it’s easy to understand why systems like redis (another NoSQL option, which 27% of developers in the same survey said they chose for “speed” or “performance”) and memcached are popular today.

That said, memcached servers suffer from a few shortcomings. A standalone server is a single point of failure, it doesn’t scale all that well, and it is prone to network interface saturation. These are problems because when the cache fails, requests fall back to the database. And if the database scaled and performed well enough to satisfy consumers (and developers), memcached wouldn’t have been deployed in the first place, would it? Failure is likely to cause outages, both the real kind (the database crashes) and the perceived kind that happens thanks to timeouts from overwhelmed servers. Similarly, network interface saturation causes all the performance issues that arise from any other kind of congestion, namely timeouts and increased latency, which once they set in continue to compound until the app is pretty much unusable.

In other words, the availability and performance of memcached are as critical as the availability and performance of the app it was put in place to assist.

Which is where we (the corporate F5 “we”) come in.

BIG-IP can, of course, load balance the heck out of web traffic in general, but did you know it can distribute load across an array of memcached servers?

Yup, it sure can. It can also provide the redundancy (failover) necessary to avoid the single point of failure problem, and has greater network interface capacity (and can aggregate multiple interfaces) meaning it can address the problem of interface saturation.

But back to the scaling. See, BIG-IP has the visibility (because it’s a full proxy) necessary to extract memcached keys from the binary protocol and then consistently (persistently) distribute requests to the appropriate memcached server. This is basically a very simple sharding pattern, implemented in the network: CARP (Cache Array Routing Protocol) hashes the memcached key and selects the best pool member for delivery. Once a memcached server has been selected for a particular key, the consistency inherent in CARP ensures that subsequent requests for that key-value pair are directed to the same server.
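The routing logic described above can be sketched in a few lines of Python. This is not BIG-IP’s implementation, just an illustration of CARP-style selection: hash the key together with each pool member’s name and pick the highest score, so a given key always lands on the same server as long as the pool is unchanged. The server names are hypothetical.

```python
import hashlib


def carp_pick(key, servers):
    """CARP-style selection: combine the key with each server name,
    hash, and choose the server with the highest score. The function
    is deterministic, so the same key always maps to the same pool
    member until the pool itself changes."""
    def score(server):
        return int(hashlib.md5((key + server).encode()).hexdigest(), 16)
    return max(servers, key=score)


pool = ["mc1:11211", "mc2:11211", "mc3:11211"]
# Subsequent requests for the same key hit the same memcached server:
assert carp_pick("user:42", pool) == carp_pick("user:42", pool)
```

A nice property of this scheme (versus naive modulo hashing) is that adding or removing a pool member only remaps the keys whose winning server changed, rather than reshuffling every key.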

For those of you who want to give it a try, check out this iApp template for deploying memcached request routing on BIG-IP.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.