Redis is more than just a key-value cache: find out how it sped up our home feed


The home page is where users discover new, interesting content. I’m going to describe how I optimised the backend to cope with the increased engagement and growth we faced recently. But first, an example. Here’s my home page:

A typical example of Clowdy’s home feed.

I am following Stuart, Pulkit, Dylan and The Polyveldts.

There I can listen to In My Mind, which Stuart commented on. I can watch Room 324 because it was reclowded by Pulkit, or simply play the list recently updated by Dylan.

Simply put, the feed is an aggregation of unique entities that my friends have performed actions on. The entities can be clowds, lists of clowds, or users. The actions can be reclowd, comment or upload for clowds; create or update for lists; and follow for users. The feed is ordered with the most recent actions at the top. Makes sense – so what had to change?

Backend Issues

Initially, the feed was based on MySQL queries. To fetch a page of the feed, we joined all the action tables (follow, reclowd, comment), sorted by timestamp with the most recent actions at the top, and limited the result to 40 entities per page. The issue was that, as our user base and follow counts grew, these queries became slower and slower. We needed to address this before it became a real problem.

Potential solutions

Why not scale the database?

One solution was to scale the MySQL database by adding read replicas. This would address the scale problem – the increased number of users – but not the time it took to run all of the complex, multi-join queries behind the feed. And as follow counts grew, that problem would only worsen.

What about caching it?

Another solution was to use Memcached or Redis to simply cache the feed. Whenever the feed needed updating, we would invalidate the cache and regenerate it from the database. This sounds good in theory, but when I looked into cache misses, I thought differently.

Actions (reclowding, commenting, following) happen very frequently on Clowdy. So caching a feed that is highly dynamic, to an extent, defeats the purpose of caching.

Then I rediscovered Redis – Redis is not only a key-value cache!

Redis can do anything Memcached can, but it also provides features that Memcached doesn’t have, such as disk persistence, rich data types and Lua scripting. I’ll cover these in more detail elsewhere, perhaps in another blog post – for now, the official Redis documentation has more information on data types, persistence and Lua scripting.

For our purposes, we’re looking at two data types: sets and sorted sets.

What are sets?

A set is a collection of unique values stored under a single key on the server. For example:

> sadd rooms 1 2 3

This will create a set called rooms that will contain the values 1, 2, 3.

You can retrieve the members of the set with:

> smembers rooms

and get: 1, 2, 3 back.

OK, then what are sorted sets?

A sorted set is like a set, but for each value it stores a floating point number called a score. Internally, the structure is kept sorted by score: any new value is inserted at the right position, so the ordering is always maintained. The time complexity of additions is O(log(N)).

For example, this is how one would create a sorted set of hackers ordered by the year of their last attempt to hack NASA.

> zadd hackers 2007 "Flavius"
(integer) 1
> zadd hackers 2006 "Stuart Logan"
(integer) 1
> zadd hackers 2013 "Pulkit"
(integer) 1
> zadd hackers 2014 "Sean"
(integer) 1

And to retrieve the list in the right order:

> zrange hackers 0 -1
1) "Stuart Logan"
2) "Flavius"
3) "Pulkit"
4) "Sean"

Retrieving a range of M members in the correct order is only O(log(N)+M), and insertion, as mentioned, is O(log(N)).

Check the Redis documentation on sorted sets for more details.

How did this help with the home feed?

Do you remember when I said that our home feed is an aggregation of unique entities?

That means:

A clowd will never appear in the feed twice: the last action that happened to it (a comment or a reclowd) will always bring it to the top.

Because of this, the home feed can be represented in Redis using a set. And because the feed needs to be sorted by time, we can use a sorted set with the action’s timestamp as the score. That’s it!
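As a rough sketch of what this looks like (the key and member names here are made up for illustration, not our real schema), pushing an action onto one user’s feed and reading back the first page of 40 entries is just:

> zadd feed:42 1405417200 clowd:324
(integer) 1
> zadd feed:42 1405420800 clowd:324
(integer) 0
> zrevrange feed:42 0 39
1) "clowd:324"

The second zadd returns 0 because the clowd is already in the set; Redis simply updates its score, which is exactly the “bring it to the top” behaviour we want. zrevrange then returns the page with the most recent entities first.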

So – job done?

After implementing this and benchmarking it, I was struck by how well the new design performed. The time taken to fetch a full page of the feed from Redis was never more than three milliseconds. That was a remarkable 3,000 times better than the old system.

But there was a problem. When an action was performed (for example, someone reclowding a song), pushing that action to all of their followers’ home feeds took quite a long time: one Redis command was needed for each push, so the more followers involved, the longer it took. The problem was amplified by the fact that we use a centralised Redis node that is not on the same server as our website, so we wasted a lot of time in network round trips.

Lua scripts to the rescue

Redis, though, allows us to push a script to the server, which executes it and returns the result. That means only one round trip is needed to run what is effectively a single atomic command, which sounded perfect: we could build a script on the fly to update all of the followers’ feeds in one go.
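To give a rough idea of the shape of such a script (not our production code; the key names and arguments below are hypothetical), a small Lua snippet passed to EVAL can loop over the follower feed keys and perform all of the ZADDs server-side:

> eval "for i, key in ipairs(KEYS) do redis.call('ZADD', key, ARGV[1], ARGV[2]) end" 3 feed:7 feed:8 feed:9 1405417200 clowd:324
(nil)

Because Redis executes the script atomically on the server, every follower’s feed is updated in a single round trip instead of one round trip per follower.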

After testing this, the results exceeded my expectations. Where the previous design took five minutes to push an event to 100,000 feeds, the new solution did the same in just under a second – about 300 times faster. Lua scripts are definitely a powerful feature of Redis.

Was everything fine?

No, it wasn’t! After the Lua optimisation, we had coaxed good performance out of all of our feed operations: insert, update and get. But one issue remained: the amount of memory used by Redis. Each home feed took about 0.5MB per user, assuming a history of 2,000 events.

This didn’t sound particularly memory-efficient. With this in mind, I began looking for a way to reduce our memory usage.

Learn from others

Luckily, I found a post on Instagram’s engineering blog describing a similar problem: they, too, wanted to reduce the memory Redis used on their system.

You can read about it in more detail over there, but I’ll explain briefly how it worked for us.

The solution was to split each home feed into chunks of 1,000 entries and set the server’s zset-max-ziplist-entries setting to 1,000. This guarantees that each sorted set, holding at most 1,000 members, is stored in a very memory-efficient encoding. The tradeoff is extra CPU used by the Redis server, so you have to choose the threshold carefully!
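In configuration terms that amounts to something like the following (the exact threshold is the part you have to tune for your own workload); the same value can also go in redis.conf so it survives restarts:

> config set zset-max-ziplist-entries 1000
OK

Sorted sets that stay at or below this size (and whose members are short enough, see zset-max-ziplist-value) are kept in the compact ziplist encoding rather than the regular skiplist one.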

With that implemented, the memory usage was reduced by a factor of ten!

After doing some maths, I realised that we could now store feeds for one million users with only 13 GB of memory! That is simply amazing!

It was enough for the solution to find its way quickly into production, albeit after a long period of planning on my part.

Conclusion

We now have home feeds that are very fast to fetch (O(log(N)+M) for a page of M entries), very fast to push to (O(log(N)) per feed) and very memory-efficient. In production, we have yet to see a feed fetch take more than three milliseconds.

We are very happy with the current backend of home feeds, although we’re waiting for the next big challenge to come along as user behaviour evolves and changes.

Hopefully, the tricks presented here can also help you get the most out of Redis.

Flavius Mester

Another graduate of the University of Manchester’s computer science course, Flavius is also from Romania (but he didn’t know Dan before he moved to England, stop asking him that, Romania is a big country). He’s been working at Clowdy since its inception and played a big role in shaping its initial design and code.
