Why Your Rails Application Is Getting Slower (And How to Find the Cause in One Day)

Your Rails app worked fine six months ago. Now certain pages drag. Sidekiq queues back up. Your hosting bill climbs, but traffic has barely changed. Nobody changed anything major, or so they say.
Rails application performance tends to degrade gradually, not all at once. A new feature quietly adds a few unindexed queries. A background job starts holding a database connection longer than it should. An association that was fine at 10,000 records becomes punishing at 500,000. By the time users are complaining, the cause is usually several layers deep.
The good news: the root cause is almost always findable in a day, if you follow a structured diagnosis. The mistake most teams make is jumping to fixes before they understand what is actually slow.
Stop Guessing: Measure First
The most common pattern in a slow Rails app is this: someone notices a slow page, adds an index or enables fragment caching, and performance improves slightly. The team declares it fixed. Three weeks later the complaints come back.
That cycle happens because the fix was applied to the symptom, not the cause.
Before you change a single line of code, you need to answer one question: which 3 to 5 endpoints account for the majority of your server time? Not the slowest endpoints in isolation, but the ones where (request count times average response time) is highest. A page that takes 3 seconds but gets 10 hits a day matters far less than a page that takes 300ms and gets hit 50,000 times a day.
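The arithmetic is worth making concrete. A small plain-Ruby sketch (the endpoint names and numbers are made up for illustration) that ranks endpoints by total server time rather than by raw latency:

```ruby
# Rank endpoints by total server time (hits * avg latency),
# not by average latency alone. Data is illustrative.
endpoints = [
  { action: "reports#show",   hits_per_day: 10,     avg_ms: 3000 },
  { action: "products#index", hits_per_day: 50_000, avg_ms: 300 },
  { action: "carts#update",   hits_per_day: 20_000, avg_ms: 150 },
]

ranked = endpoints
  .map { |e| e.merge(total_ms: e[:hits_per_day] * e[:avg_ms]) }
  .sort_by { |e| -e[:total_ms] }

ranked.each do |e|
  puts format("%-16s %12d ms/day", e[:action], e[:total_ms])
end
```

Here the 3-second report page accounts for 30 seconds of server time per day, while the 300ms index page accounts for over 4 hours. That is the page to fix first.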
If you have an APM tool like Skylight, Scout APM, or New Relic installed, pull up the "time spent" view, not the "slowest requests" view. That single change in perspective will tell you where to focus. If you have no APM, start by scanning your production logs for requests above 500ms and group them by controller action.
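A rough version of that log scan can be scripted. The sketch below assumes log lines that carry both the action and the duration on one line; real Rails logs split these across a "Processing by" line and a "Completed" line, so in practice you would correlate them by request id first:

```ruby
# Tally time spent per action for requests slower than 500 ms.
# Log lines here are simplified examples, not real Rails output.
SLOW_MS = 500
PATTERN = /(?<action>\w+Controller#\w+).* in (?<ms>\d+)ms/

lines = [
  "ProductsController#index Completed 200 OK in 734ms",
  "ProductsController#index Completed 200 OK in 612ms",
  "CartsController#update Completed 302 Found in 120ms",
]

totals = Hash.new(0)
lines.each do |line|
  m = PATTERN.match(line) or next
  ms = m[:ms].to_i
  totals[m[:action]] += ms if ms > SLOW_MS
end

totals.sort_by { |_, ms| -ms }.each { |action, ms| puts "#{action}: #{ms}ms" }
```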
This is the step most Rails performance guides skip. They hand you a list of techniques. You need a list of suspects first.
Morning: Use Bullet and rack-mini-profiler to Surface the Obvious Culprits
Once you know which endpoints to focus on, add two gems to your development environment and start clicking through those pages.
Bullet detects N+1 queries and unused eager loading. An N+1 query is what happens when Rails makes one query to fetch a list of records, then fires a separate database query for each record in that list. With 100 records and 3 nested associations, that can mean 300+ queries per page load. Fixing it is often a single .includes call, and removing N+1 queries alone can halve latency on the affected endpoints.
Consider a page listing 100 authors, each with 10 posts, each post with 5 tags. Without eager loading, Rails fires 1,101 separate queries: 1 for the authors, 100 for their posts, and 1,000 for the tags. Even a well-specced development machine takes over a second just to render that page, and in production every one of those queries also pays a network round trip to the database.
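The fix for that authors page is usually a single eager-loading call. A sketch using the model names from the example above:

```ruby
# N+1: 1 query for authors, then one per author for posts,
# then one per post for tags -- 1 + 100 + 1000 = 1101 queries.
@authors = Author.all

# Eager loading: Rails batches each association into a single
# query -- 3 queries total, regardless of how many authors exist.
@authors = Author.includes(posts: :tags)
```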
rack-mini-profiler overlays a timing badge on every page in development. Click through your problem endpoints and you will immediately see the breakdown: SQL time, view rendering time, and total request time. Add the flamegraph gem alongside it and you can trace exactly which method calls are consuming the most clock time.
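A typical development-only setup for these gems looks something like the following; Bullet also needs to be switched on in the development environment config:

```ruby
# Gemfile -- profiling gems, development only
group :development do
  gem "bullet"              # flags N+1 queries and unused eager loading
  gem "rack-mini-profiler"  # timing badge on every page
  gem "stackprof"           # sampling profiler that powers flamegraphs
  gem "flamegraph"
end

# config/environments/development.rb
config.after_initialize do
  Bullet.enable = true
  Bullet.rails_logger = true   # log offenders
  Bullet.add_footer = true     # show warnings in the browser
end
```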
Run through your five problem endpoints. You will likely find that the same 2 or 3 patterns keep appearing.
Afternoon: Audit the Database Layer Directly
Once Bullet and rack-mini-profiler have given you a list of slow queries, go one level deeper into the database.
Open a Rails console on production (or a recent replica) and run EXPLAIN ANALYZE against your slowest queries. Look for sequential scans on large tables, as that is almost always a missing index. Adding the right index can improve query performance by 10x or more on a table with hundreds of thousands of rows.
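You can run this without leaving the Rails console. The model and column below are illustrative, not from any particular app:

```ruby
# Prints the database's query plan for the relation.
puts Order.where(user_id: 42).explain

# On Rails 7.1+ you can ask the database to actually execute
# the query and report real timings:
puts Order.where(user_id: 42).explain(:analyze)

# Watch for "Seq Scan on orders" in the output -- on a large
# table that usually means a missing index on user_id.
```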
A few things to check specifically:
Foreign key columns without indexes. Every belongs_to association that gets queried in a WHERE clause or a JOIN should have an index on the foreign key column. Rails does not add these automatically. They get missed constantly.
Polymorphic association indexes. If you use polymorphic associations (commentable_type and commentable_id), you need a composite index on both columns. An index on commentable_id alone is far less selective, because every lookup on a polymorphic association also filters on commentable_type.
Tables growing past a few hundred thousand rows. Run SELECT COUNT(*) FROM your_table on any table you are frequently querying. If a table has grown significantly since the app was first built, any query that was fast at launch may now be doing a full sequential scan.
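The first two checks on that list translate into one-line migrations. A sketch, with illustrative table and column names:

```ruby
# Illustrative migration -- adapt table and column names to your schema.
class AddMissingIndexes < ActiveRecord::Migration[7.0]
  # Needed so PostgreSQL can build indexes without locking writes.
  disable_ddl_transaction!

  def change
    # Foreign key column used in WHERE clauses and JOINs
    add_index :orders, :user_id, algorithm: :concurrently

    # Polymorphic association: composite index on type + id
    add_index :comments, [:commentable_type, :commentable_id],
              algorithm: :concurrently
  end
end
```

The `algorithm: :concurrently` option is PostgreSQL-specific; omit it on other databases.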
If you are on PostgreSQL (and most Rails apps are), the pg_stat_statements extension gives you query-level statistics directly from the database. This is especially useful for catching slow queries fired from background jobs, which would not show up in your web request profiling.
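Once the extension is enabled (CREATE EXTENSION pg_stat_statements, plus shared_preload_libraries in postgresql.conf), you can query it straight from a console. A sketch:

```ruby
# Top 10 queries by total execution time across all callers,
# including background jobs. On PostgreSQL 12 and earlier the
# column is total_time rather than total_exec_time.
sql = <<~SQL
  SELECT calls, round(total_exec_time::numeric, 1) AS total_ms, query
  FROM pg_stat_statements
  ORDER BY total_exec_time DESC
  LIMIT 10
SQL

ActiveRecord::Base.connection.select_all(sql).each do |row|
  puts row.values_at("total_ms", "calls", "query").join("  ")
end
```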
For teams working on long-running Rails platforms, often through Ruby on Rails development partnerships, this kind of systematic layer-by-layer audit tends to pay off far more than ad-hoc performance sprints.
Watch Your Background Jobs Too
Sidekiq is not part of your web request cycle, but it shares your database. If your job queues are backing up or your database CPU is elevated even during low web traffic, background jobs are often the culprit.
Common issues in Sidekiq that cause database strain:
- Jobs that load entire ActiveRecord objects when they only need an ID
- Jobs that run database queries inside loops without batching
- Jobs scheduled too frequently for the work they do
- Jobs that create long-running transactions, blocking other queries
The quickest way to find these is to check your database's pg_stat_activity view during a period of high CPU. If you see long-running queries coming from background worker processes, you have found your problem. Use find_each instead of .all when processing large record sets in jobs, as it batches in groups of 1,000 by default and avoids loading everything into memory at once.
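The batching patterns look like this in practice (the mailer and job names are hypothetical):

```ruby
# Loads every record into memory at once -- risky on large tables:
User.all.each { |user| NewsletterMailer.weekly(user).deliver_later }

# Batches of 1,000 by default, constant memory:
User.find_each { |user| NewsletterMailer.weekly(user).deliver_later }

# Better still for jobs: enqueue only ids and load the record
# inside the job, keeping each unit of work small.
User.where(subscribed: true).pluck(:id).each do |id|
  NewsletterJob.perform_later(id)
end
```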
One pattern we saw repeatedly in the HelpfulCrowd engagement: Sidekiq was acting as a full bottleneck, PostgreSQL was hitting 100% CPU, and the root cause turned out to be a combination of unbatched job queries and missing indexes on the tables those jobs touched. A focused audit and query rewrite brought 95th-percentile server response times down to 500ms and cut infrastructure costs by 50%.
The Caching Question: When to Use It and When It Hides the Problem
Caching is often the first thing developers reach for when a page is slow, and sometimes it is the right tool. But applied to a badly written query, caching just delays the next explosion.
The rule is simple: fix the query first, cache second.
Once your queries are clean and indexed, fragment caching and low-level caching with Redis make a lot of sense for data that does not change per-request. Rails' cache helper in views is particularly effective for complex rendered partials that hit the same data repeatedly across users.
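For data that is expensive to compute but slow to change, low-level caching is a few lines. A sketch (the key, model, and expiry are illustrative):

```ruby
# Cache an expensive aggregate for 10 minutes. The block only
# runs on a cache miss; subsequent calls hit Redis (or whatever
# cache store is configured) instead of the database.
def top_products
  Rails.cache.fetch("dashboard/top_products", expires_in: 10.minutes) do
    Product.order(sales_count: :desc).limit(10).to_a
  end
end
```

The view-side equivalent is the `cache` helper wrapped around a partial, which keys the fragment on the record so it invalidates automatically when the record is touched.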
What caching does not fix:
- N+1 queries (the query count stays high; you just cache the result set)
- Missing indexes on frequently-updated data (cache invalidation becomes painful)
- Memory leaks from poorly written background jobs
If your application is serving a data-heavy operations platform (think scheduling, reporting dashboards, or multi-tenant SaaS), you will also want to look at Rails 7's load_async for parallelising independent queries on heavy pages. It is not appropriate for everything, but for a dashboard that fires five unrelated queries per load, running them concurrently rather than sequentially can cut render time significantly.
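A sketch of that dashboard pattern (models are illustrative; load_async also requires config.active_record.async_query_executor to be configured):

```ruby
# Each load_async starts the query on a background thread
# immediately; the result is awaited only when the relation
# is first used in the view. Three independent queries run
# concurrently instead of back to back.
def show
  @recent_orders = Order.order(created_at: :desc).limit(20).load_async
  @low_stock     = Product.where("stock < ?", 10).load_async
  @open_tickets  = Ticket.where(status: "open").load_async
end
```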
Teams building operations backbone platforms typically deal with this class of problem: multiple data sources, complex joins, and background processes all competing for the same database resources.
End of Day: Build a Short List, Not a Long One
After a day of structured diagnosis, you should have a clear short list, not a 30-item backlog of every possible optimization.
A well-prioritised list looks something like this:
- Two or three N+1 fixes on high-traffic endpoints (single-day fixes)
- Three to five missing indexes (under an hour each)
- One or two Sidekiq jobs to refactor with batching
- One endpoint that needs a Redis caching layer after the query is fixed
That is typically enough to cut response times by 40 to 60 percent for a mature Rails app that has not had a performance audit before. The remaining long tail of optimisations (query rewrites, caching strategy, infrastructure tuning) can be prioritised by impact over the following weeks.
The key is that you now have evidence, not guesses. Every item on the list has a measured impact behind it.
Explore more real-world examples in our case studies to see how these patterns play out across different product types and team structures.
FAQ
How do I know if my Rails app has N+1 queries without running Bullet?
Check your Rails logs during a page load. If you see the same SQL query repeated dozens of times with slightly different id values in the WHERE clause, that is an N+1. It will look like SELECT * FROM users WHERE users.id = 42, then SELECT * FROM users WHERE users.id = 43, and so on. In production, you can also enable your database's slow query log (log_min_duration_statement on PostgreSQL) and look for patterns in which queries appear most frequently.
What is the fastest single fix for a slow Rails application?
Adding missing indexes on foreign key columns. It takes minutes to deploy, carries very low risk, and can make queries 10x faster on large tables. Run EXPLAIN ANALYZE on your slowest queries and look for Seq Scan on a large table, which is almost always a missing index. Use a tool like lol_dba or check your schema manually against your most frequent queries.
Should I upgrade Rails before doing a performance audit?
Not necessarily, and doing both at once is risky. A performance audit and a major version upgrade are two different kinds of work. Audit and fix performance on your current version first. That way you have a stable baseline and you know what is genuinely slow versus what the upgrade might affect. Then upgrade once performance is understood.
How long does a proper Rails performance audit take?
For a focused audit on 5 to 10 key endpoints, one to two days is enough to identify the main causes. Full remediation (actually fixing and deploying the changes, writing tests, monitoring in production) typically takes one to two weeks depending on the severity of the issues found and the test coverage in the codebase.
Conclusion
A slow Rails application is almost never a mystery once you stop guessing and start measuring. The most effective thing you can do today is identify your highest-impact endpoints by server time, not just by response time in isolation, and run Bullet and rack-mini-profiler against them in a production-like environment. Most of the time, the audit itself takes a day. The top fixes take another week. The performance gain lasts years.
If your team is dealing with a Rails app that has accumulated significant technical debt and you want a second set of eyes on it, take a look at our Ruby on Rails development services or get in touch with the NUS Technology team, as we have been doing exactly this kind of work with international clients since 2013.


