Every millisecond counts when building a website. The speed at which a website responds has a direct impact on user experience and search engine optimization (SEO). As an agency specializing in fast and efficient websites, we are always on the lookout for opportunities to increase the performance of our projects. A decisive factor for the speed of a website is the server response time - the time it takes a server to start sending data back to the browser. In this post, I would like to show you how we drastically improved the server response time, and with it the TTFB value, by optimizing route caching on our website TutKit.com.
Here's a comparison of the values using Audisto's tech SEO tool before and after implementing route caching - an 88 percent drop in response time:
Now, before you say to yourself: wow, they're celebrating speed improvements here simply because they switched on caching - no, this is not about the classic database cache! A quick distinction: the route cache discussed here reduces the processing time for the routing logic, allowing the server to respond to requests more quickly. The database cache, on the other hand, stores the results of database queries, which reduces the number of direct database accesses and optimizes loading times. We already had the database cache running before, of course.
Server first response time vs. TTFB (Time to First Byte)
First of all, an explanation of how server response time and TTFB are related, as both terms come up repeatedly below. The server first response time is part of TTFB, but only covers the time the server needs to process a request and send the first response. TTFB is a more holistic metric that also takes the network delays (DNS lookup, connection establishment, data transfer) into account before the first byte of the response arrives at the browser.
Server first response time (also known as server response time) measures the time it takes the server to receive a request, process it and send the first response. This time includes:
- The time it takes the server to receive the request.
- The processing time on the server (e.g. loading data, rendering templates, executing logic).
- The sending of the first response (often the HTTP header or the first part of the HTML page).
The server first response time is therefore an important factor for the performance of a website, as it determines how quickly the server starts to send the response.
TTFB is the broader metric: it measures the time until the first byte of the server's response arrives at the client (browser). The clock starts as soon as the browser sends the request and stops when the first byte of the response arrives at the client. TTFB is made up of several components:
- DNS lookup: The time it takes the browser to find the IP address of the server.
- Connection establishment: The time it takes to establish a TCP connection and a secure connection (e.g. via TLS).
- Server first response time: The time it takes the server to process the request and send the first part of the response.
- Data transmission: The time it takes to transfer the first byte of the response from the server to the client.
If your server is very fast but the network connection is slow or the DNS lookup takes a long time, your server first response time may be good, but TTFB will still show poor values. If the DNS lookup and connection establishment are fast, but the server responds slowly to requests, both the server first response time and TTFB will be poor.
TTFB is therefore a more comprehensive metric than server first response time as it includes not only the processing on the server, but also the network transport of the response to the client.
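If you want to see these components for your own site, you can break a single request down with curl's timing variables. This is a minimal sketch using our own domain as a placeholder; swap in any URL you like.

```bash
# Break a single request into its timing components (values in seconds).
# time_starttransfer corresponds to TTFB; the gap between time_pretransfer
# and time_starttransfer roughly reflects the server's share of it.
curl -o /dev/null -s -w '
DNS lookup:     %{time_namelookup}s
TCP connect:    %{time_connect}s
TLS handshake:  %{time_appconnect}s
Request sent:   %{time_pretransfer}s
TTFB:           %{time_starttransfer}s
Total:          %{time_total}s
' https://www.tutkit.com/
```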
Initial situation: Potential for improvement in server first response time
Our team analyzed a large number of pages on our multilingual portal TutKit.com using tools such as Google PageSpeed Insights. One point that stood out in the results was the server first response time in particular and the TTFB value in general. Before our optimization, the server response time fluctuated between 280 and 700 milliseconds, depending on the page type, which led to a noticeable delay in many cases. These values are suboptimal for modern websites that require fast user interaction. We spent a lot of time on CSS & JavaScript refactoring to improve PageSpeed scores and also implemented other optimization suggestions from PageSpeed Insights, such as using the modern AVIF image format. Nevertheless, the TTFB value remained poor. What's more, with every additional language we activated in our portal, the server first response times and the TTFB value deteriorated further.
We started rolling out English and Russian in February 2024 and our previously good TTFB scores plummeted.
In March, we started our big JavaScript refactoring sprint, which reached its finale in October. This JS sprint included several milestones, so we were able to ship improvements every few months, which also had a very positive impact on website speed. There will be a whole series of articles about this JS refactoring sprint, because we logged our refactoring work and the document for the article series is already over 100 pages long. It is probably the most detailed documentation of a JS refactoring sprint available on the internet.
Here is a screenshot from our Miro board visualizing the individual steps of the JS refactoring sprint, which ran from March to October and was realized by my Head of Development.
In April, a database sprint to reduce database queries ran in parallel; in some cases we were able to cut the number of queries by up to 98 percent, which in turn made the pages load faster. The TTFB value continued to improve in April.
In May, we activated further languages, so that we were online with 16 language variants on around 35,000 pages at that point. The chart clearly shows how the improved April values plummeted again.
In the following months, we continued to refactor the JavaScript files and optimized the DOM sizes, which benefited PageSpeed. We also took care of some CSS improvements, especially in font loading and the way icons are integrated (SVG sprites). At the same time, we activated further languages, so that we now have 26 languages online on over 98,000 subpages. While all other Core Web Vitals and PageSpeed-relevant metrics were good, TTFB remained a problem. It was clear that the cause had to lie in the DNS lookup, the server or, above all, the caching.
The following screenshot is from RUMvision's Core Web Vitals checker and shows the values before route caching was activated. Everything looks great for TutKit.com except for the TTFB value.
After a few checks and debugging sessions, we realized that one of the causes of these delays was that route caching was not enabled. In Laravel, route caching is disabled by default; it needs to be enabled manually to improve the performance of the application, especially for larger projects with many routes. When we were a small project, this was not a problem: TTFB was good and the site was lightning fast. The normal database caching did its job well. It only became a problem for us once we scaled the number of pages and languages.
The key to optimization: route caching
Route caching is a feature that allows a server to store all defined routes of a website in a cache instead of recalculating them for each request. This significantly reduces the number of operations the server has to perform per request - especially for larger websites with many pages and dynamic routes. In our case, the routing table contains 500 to 800 URLs per language variant.
Laravel, as a PHP framework, offers several caching options. Caching takes place via the database (the standard that many website operators know as server-side database caching) and via compiled PHP files. The latter is where route caching comes into play; it is not activated by default in PHP frameworks because most websites do not need it. It is something of a specialized caching option in Laravel.
In Laravel, route caching is not automatically created, even when routes are visited. Instead, you must create the route cache manually by running the php artisan route:cache command. This command compiles all your routes into a single cache file to improve performance, especially in production. Visiting routes during normal operation will not trigger route caching, as Laravel relies on this manual command to cache routes. Without running this command, Laravel will continue to load routes directly from the route files on every request. We run this command after every live deployment to ensure optimal performance.
Laravel offers many caching options that you need to enable and configure to take advantage of. For the best results in a production environment, it is recommended to enable configuration, route and view caching, along with an appropriate database caching backend, depending on the needs of your application.
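In practice, this boils down to a handful of artisan commands in the deployment pipeline. The following is a minimal sketch of the commands meant here; the exact order and any additional steps depend on your own deployment setup.

```bash
# Rebuild the caches after every production deployment
php artisan config:cache   # compiles all configuration files into one cached file
php artisan route:cache    # compiles all routes into a single cache file
php artisan view:cache     # precompiles all Blade templates

# During development (or before redeploying) the caches can be cleared again
php artisan route:clear
php artisan config:clear
php artisan view:clear
```

Keep in mind that Laravel's route cache requires all routes to point to controller classes; routes defined with closures cannot be serialized into the cache file.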
So after enabling route caching, we quickly realized that it wasn't working as expected. Through debugging, we found that changes were needed in 12 different places in our code. In particular, the MapApiRoutes function was being loaded twice, which significantly reduced the efficiency of the route cache.
Debugging and Fix: After analyzing these double loads and other route management issues, we implemented the necessary fixes and successfully enabled the route cache. This directly led to a noticeable improvement in the way our website responds to requests.
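To illustrate the kind of duplicate loading we mean, here is a simplified, hypothetical sketch of a Laravel RouteServiceProvider - not our actual code - in which the API route map is registered twice:

```php
<?php

namespace App\Providers;

use Illuminate\Foundation\Support\Providers\RouteServiceProvider as ServiceProvider;
use Illuminate\Support\Facades\Route;

class RouteServiceProvider extends ServiceProvider
{
    public function map(): void
    {
        $this->mapWebRoutes();
        $this->mapApiRoutes();

        // Bug pattern: a leftover second call loads routes/api.php again,
        // doubling every API route in the routing table and bloating the
        // route cache. Removing exactly this kind of duplicate registration
        // was one of the fixes needed before route:cache paid off.
        // $this->mapApiRoutes();
    }

    protected function mapWebRoutes(): void
    {
        Route::middleware('web')
            ->group(base_path('routes/web.php'));
    }

    protected function mapApiRoutes(): void
    {
        Route::prefix('api')
            ->middleware('api')
            ->group(base_path('routes/api.php'));
    }
}
```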
Reduced server first response time: After enabling route caching and making the necessary fixes, we ran tests again with Google PageSpeed Insights to measure the impact. The results were impressive: before, the server first response time was between 280 and 700 ms, depending on the page type. After the optimization, it dropped to 30 to 70 ms. We were therefore able to reduce the server response time by around 88 to 90 percent.
This means that the website now responds much faster to requests, which is reflected not only in a better user experience but also in improved SEO performance. This reduction in server response time also improves TTFB scores and is a massive gain in terms of speed and performance.
A test with Pingdom shows you the live TTFB (PageSpeed Insights only shows average values for the last 28 days): the "Wait" value is 29.4 ms for us.
DNS optimization as an additional factor
Another aspect that contributes to overall performance is the DNS query time - the time it takes to resolve a website's domain to its IP address. An average DNS lookup typically takes between 20 and 120 milliseconds.
We have also analyzed the DNS times of our website and found that our DNS query time is 6 to 11 milliseconds, well below the average of 20 to 120 ms. This shows that our DNS setup was already optimal and therefore had no negative impact on the overall speed.
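You can check this for your own domain with a simple dig query; the "Query time" line in the output is the pure DNS resolution time. A minimal example (our domain is just a placeholder):

```bash
# The "Query time" line shows the DNS resolution time in milliseconds
dig www.tutkit.com +stats | grep "Query time"
```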
Special characteristics of large, multilingual projects: Challenges for route caching
An important challenge that arose during the optimization of our project is the multilingual structure of our website. Our portal is currently online in 26 languages. With each new language, additional routes (URLs) were created, which multiplied the number of routes the server had to manage.
Growing number of routes and their impact on TTFB
In a multilingual project, each new language means that an additional URL is created in the routing table for each individual page. For example:
A page like /contact becomes /en/contact, /fr/contact, /es/contacto and so on.
With 26 languages, the number of routes multiplies accordingly, which dramatically increases the amount of data that the server has to process.
As a result, without efficient caching, each additional language increases the server first response time (and thus TTFB), as the server has to search through and resolve more and more routes for each request. In our case, we noticed a gradual deterioration in the TTFB values with each new language added.
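In Laravel, such language variants are typically generated by wrapping the route definitions in locale-prefixed groups. The following simplified sketch shows the principle; the locale list, controller and route names are only placeholders, and our real setup registers several hundred routes per locale:

```php
<?php

use App\Http\Controllers\ContactController;
use Illuminate\Support\Facades\Route;

// Every supported locale gets its own copy of every route,
// so 26 locales multiply the size of the routing table accordingly.
$locales = ['de', 'en', 'fr', 'es']; // ... up to 26 language variants

foreach ($locales as $locale) {
    Route::prefix($locale)
        ->name($locale . '.')
        ->group(function () {
            Route::get('/contact', [ContactController::class, 'show'])->name('contact');
            // ... several hundred further routes per language variant
        });
}
```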
Why large, multilingual projects are particularly affected:
- Multiplied route growth: Multilingual websites not only have simple static routes, but also dynamic routes that depend on user interactions or API calls. Multiply this by 26 languages and the routing table grows many times over, putting more load on the server with each request.
- Increased complexity: Multilingual projects often have more complex requirements, especially when it comes to correct URL structure and localization. The server must not only find the correct route, but also ensure that the content is delivered in the correct language. Without caching, this complexity slows down every request.
- Increased database queries: In many cases, multilingual projects require additional database queries to load localized content or products, for example. Route caching helps here by ensuring that these queries do not have to be performed again for each request.
If you look at large, multilingual content platforms, you will find problems with PageSpeed in general and the TTFB value in particular on many of them. Here is an overview of websites that are similar to TutKit.com in terms of content (only much larger in terms of the number of pages), created via RUMvision.
They are all well-known, very successful services. But they all have their problems with TTFB and other Core Web Vitals.
Route caching as a solution for multilingual projects
In our case, the activation of route caching was particularly crucial, as it greatly reduced the load on the server. By caching the routes, the server was able to load all routes - regardless of the number of languages - from a fast cache instead of recalculating them each time. This led to a massive improvement in server first response times, from the previous 280-700 ms to just 30-70 ms.
For multilingual projects, it is therefore essential that route caching is not only activated, but also well optimized. Route caching is a particularly well-known and well-implemented function in Laravel, one of the most popular PHP frameworks. Laravel offers developers like us the ability to store all of an application's routes in a single file, which can then be loaded faster, significantly improving performance, especially for large projects with many routes.
But route caching is not just limited to Laravel. There are similar concepts or implementations in other CMS and PHP frameworks to optimize the efficiency of route management.
- Efficient cache management: The more languages and routes your project has, the more important an efficient cache mechanism becomes. Regularly check the cache integrity to ensure that outdated or orphaned routes do not bloat the cache.
- Cache dynamic routes correctly: Ensure that dynamic routes, such as those created by user actions or API calls, are handled correctly in the cache. In some cases, dynamic routes may require specific rules or invalidations.
- Test cache effectiveness: Use tools such as Google PageSpeed Insights or WebPageTest to check how effectively route caching works. Regular tests are particularly important for large, multilingual projects to ensure that the cache is working as intended; a quick programmatic check is sketched below.
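A quick way to verify that the route cache is actually active after a deployment - for example in a health-check script or a php artisan tinker session - is a minimal check like this (a sketch, not production code):

```php
// Inside a bootstrapped Laravel application (e.g. php artisan tinker or a health check)
if (app()->routesAreCached()) {
    echo 'Route cache is active.' . PHP_EOL;
} else {
    echo "Routes are NOT cached - run 'php artisan route:cache'." . PHP_EOL;
}
```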
Conclusion: Caching as a critical factor for multilingual websites
For large, multilingual websites, as in our case with 26 languages, the correct implementation of route caching is crucial to improve TTFB and overall performance. Each new language adds more complexity and potential load on the server. Without caching, the first response time deteriorates noticeably with each new language. By enabling and optimizing the route cache, we have taken a big step towards a better server first response time. The reduction in server first response time from 280-700 ms to around 30-70 ms shows how effective this measure is. Added to this is the already optimized DNS query time, which, together with the caching, has significantly increased the speed of our website.
The optimization of our caching system was the key to ensuring a fast and responsive website despite the many routes and languages. Multilingual projects therefore benefit enormously from a well thought-out and maintained caching strategy - not only for the user experience, but also for SEO performance.
Update 17.11.2024: It is interesting to note that the improvement in server response times has also increased crawl requests, which means that Google crawls our pages more quickly and includes them in the index.
If you are looking for performance improvements for your large-scale, multilingual website, you should not underestimate route caching. The implementation may require some debugging and optimization, but the results speak for themselves: faster load times, better user experience and improved SEO. If you need help, please write to us! As a tech agency for SEO & PageSpeed optimization, we are happy to help.