Rahul Soni

Backend Engineer. Sharing my learnings.

How Many Users Can My Laptop Actually Handle? (Part 1)

We all know the answer: a laptop is not powerful enough to handle millions of users. But that answer was always vague to me, and I did not know the exact technical reason why. Where do things break? Where are the bottlenecks? Are they in the hardware, or somewhere else? Is this a framework-dependent question? What actually stops us from getting there....

May 6, 2026 · 7 min

How Many Users Can My Laptop Actually Handle? (Part 2)

I am testing how many users my laptop can handle. In Part 1 we discussed the limits, and all the possible pitfalls, on the way from 0 to 1,000 users. Here I will put that discussion to the test and move to the next layer: 1,000 to 10,000 users. I'm starting at 1,000 virtual users against the system. The setup has two .NET APIs; one is compute-only: it calculates age when you pass a birthdate....

May 5, 2026 · 8 min

Making Distributed Cache — 1

This article is about building a distributed cache like Redis and learning the concepts around it. Things we will cover:

- RESP protocol communication
- Client interface to interact with Redis
- Handling load
- Persistence in both modes, just like Redis: append-only file and RDB file
- Crash recovery
- Replication (in the next part)
- Master/replica architecture (in the next part)

Let's dive in. First and foremost, the most essential part is communication between the client and server....

March 1, 2026 · 5 min
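The RESP framing the teaser above mentions is simple enough to sketch. As a minimal illustration (not the article's own code), here is how a client encodes a command as a RESP array of bulk strings, per the Redis serialization protocol:

```python
def encode_command(*parts: str) -> bytes:
    """Encode a command as a RESP array of bulk strings.

    RESP frames an array as '*<count>\r\n' followed by each element
    as '$<byte-length>\r\n<data>\r\n'.
    """
    out = f"*{len(parts)}\r\n"
    for part in parts:
        data = part.encode()
        out += f"${len(data)}\r\n{part}\r\n"
    return out.encode()

# A 'SET key value' command on the wire:
# b'*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n'
wire = encode_command("SET", "key", "value")
```

A real server would read this framing back off the socket the same way, which is why the protocol is covered first in the article.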

Understanding Memcached at Facebook: A Simple Guide to High-Performance Caching

If you’ve ever wondered how Facebook serves millions of users fetching data at lightning speed, it’s possible because of systems like Memcache, a network cache that helps Facebook serve data quickly while keeping things reliable. The main purpose of Memcache is to serve frequently accessed data fast and save the time spent recomputing redundant information. Let’s look at how Facebook modified Memcache to operate at their massive scale and solve real-world challenges, based on their paper "Scaling Memcache at Facebook....

April 27, 2025 · 9 min

Reading the Hadoop (Map-Reduce) Paper

Google faced a challenge: they needed to query massive amounts of raw data, but processing it and generating output was time-consuming. They required a solution that could parallelize the entire computation, deliver results faster across multiple machines, and handle failures effectively. The goal was to build a system capable of parallel computation. While the tasks of deriving data were straightforward, the volume of raw data (crawled pages, documents, etc.) was huge....

March 15, 2025 · 9 min
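The programming model described in the teaser above can be illustrated with word count, the canonical example from the MapReduce paper. This single-process Python sketch (mine, not the article's) shows only the map and reduce phases, with none of the distribution or fault tolerance the real system provides:

```python
from collections import defaultdict


def map_fn(doc: str):
    """Map phase: emit an intermediate (word, 1) pair per occurrence."""
    for word in doc.split():
        yield word, 1


def reduce_fn(word: str, counts: list) -> int:
    """Reduce phase: sum all counts emitted for one word."""
    return sum(counts)


def mapreduce(docs: list) -> dict:
    """Run map over every document, group by key, then reduce each group.

    In the real system the grouping (shuffle) happens across machines;
    here it is just an in-memory dict.
    """
    intermediate = defaultdict(list)
    for doc in docs:
        for key, value in map_fn(doc):
            intermediate[key].append(value)
    return {key: reduce_fn(key, values) for key, values in intermediate.items()}
```

Running `mapreduce(["the quick fox", "the fox"])` groups the intermediate pairs by word and reduces them to per-word totals; the paper's contribution is making exactly this pattern run reliably across thousands of machines.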