Web Performance Optimization in Practice: How I Shaved Seconds Off My Web App Loading

Alright, folks, let's talk about something near and dear to every web developer's heart (and every user's patience): web performance. Frankly, a slow web app is a dead web app. Users expect near-instantaneous loading times, and if you can't deliver, they'll bounce faster than a rubber ball. In this post, I'm going to dive deep into how I recently optimized the loading speed of one of my web apps, sharing the specific strategies I employed, the tools I used, and the results I achieved. This isn't just theoretical fluff; it's a real-world account of what worked (and what didn't).

TL;DR: I went from a mediocre Lighthouse score to a blazing-fast one by focusing on image optimization, code splitting, lazy loading, and efficient data fetching. The result? Happier users and a better-performing app.

The Problem: A Lighthouse Score That Made Me Cringe

Let's be clear: I thought my app was okay. It wasn't screamingly slow, but it wasn't winning any speed awards either. I had built it using Next.js, which should have given me a head start with its built-in optimization features. But when I ran a Lighthouse audit, the results were… underwhelming.

Specifically, my First Contentful Paint (FCP) and Largest Contentful Paint (LCP) were lagging, and my Time to Interactive (TTI) was borderline unacceptable. The opportunities section was filled with the usual suspects: "Properly size images," "Eliminate render-blocking resources," and "Reduce unused JavaScript." Sound familiar?

For years, TTI mystified me. I knew what it measured, but I had no feel for the user experience it reflected, and no idea how to meaningfully improve it. It turns out to be all about delivering the minimum viable code required to make your app responsive to user input as quickly as possible. Everything else can wait.

My First (Failed) Attempt: Throwing Hardware at the Problem

My initial instinct (like many developers, I suspect) was to simply throw more hardware at the problem. I upgraded my Vercel plan, hoping that more powerful servers would magically make everything faster.

While it did result in a slight improvement, it was a band-aid solution at best. The underlying performance issues were still there, just masked by faster processing power. And, frankly, spending more money on servers without addressing the root cause felt like a waste. This is a cautionary tale about premature optimization – specifically, optimizing the wrong things.

The Solution: Standing on the Shoulders of Giants (and Some Clever Code)

Alright, time to get serious. I knew I needed to tackle the performance bottlenecks head-on. Here's what I did, step by step:

1. Image Optimization: A Picture is Worth a Thousand Words (and a Thousand Milliseconds)

This was the low-hanging fruit. My app had a bunch of high-resolution images that were being served without any optimization. I was frankly embarrassed.

  • Resized Images: I went through all my images and resized them to the actual dimensions they were being displayed at. No more serving 2000px images in a 500px container.

  • Compressed Images: I used a tool called ImageOptim (a free Mac app) to compress the images without significant loss of quality. There are plenty of alternatives, like TinyPNG or Squoosh.

  • Modern Image Formats: I switched to using WebP images where possible. WebP offers superior compression and quality compared to JPEG and PNG. Next.js makes this easy with its <Image> component, which can automatically serve WebP images to browsers that support them. Living dangerously, I even tried AVIF¹ on a few images, and the compression was impressive, but browser support is still not universal.

  • Lazy Loading: I implemented lazy loading for images below the fold using the <Image> component's loading="lazy" prop. This prevents images from being loaded until they're actually needed, improving initial page load time.
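Putting those pieces together, here is a sketch of what a below-the-fold image looks like with the Next.js <Image> component (the file path, alt text, and dimensions are placeholders, not my actual markup):

```jsx
import Image from 'next/image';

// Hypothetical below-the-fold thumbnail, sized to its actual display
// dimensions. Next.js serves WebP (or AVIF, if configured) to browsers
// that support those formats.
export default function Thumbnail() {
  return (
    <Image
      src="/images/thumbnail.jpg"
      alt="Product thumbnail"
      width={500}
      height={300}
      loading="lazy" // deferred until the image approaches the viewport
    />
  );
}
```

Worth noting: recent versions of next/image lazy-load by default, so the explicit loading="lazy" mostly documents intent.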

2. Code Splitting: Divide and Conquer

My app's JavaScript bundle was huge. It contained code that wasn't even being used on the initial page load. Time for some code splitting.

  • Dynamic Imports: I used dynamic imports (import()) to load components and modules only when they're needed. This is crucial for large components or modules that are not immediately visible.
  • React.lazy and Suspense: For React components, I used React.lazy and Suspense to lazy-load components. This allows you to defer loading components until they're actually rendered.
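The two bullets above combine naturally. A minimal sketch, assuming a hypothetical Chart component heavy enough to deserve its own chunk:

```jsx
import React, { Suspense, lazy } from 'react';

// The import() call here doubles as a code-splitting hint to the bundler:
// Chart ships in a separate chunk that is only fetched when it first renders.
const Chart = lazy(() => import('./Chart'));

export default function Dashboard() {
  return (
    <Suspense fallback={<p>Loading chart…</p>}>
      <Chart />
    </Suspense>
  );
}
```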

3. Efficient Data Fetching: Stop Over-Fetching

My app was fetching too much data on initial load. I needed to be more selective about what data I was fetching and when.

  • GraphQL (with Apollo Client): I migrated to GraphQL with Apollo Client for my data fetching. GraphQL allows you to request only the data you need, avoiding over-fetching. This drastically reduced the amount of data being transferred over the network.
  • Caching: I implemented caching on both the client and server-side. Apollo Client handles client-side caching automatically. For server-side caching, I used Redis to cache frequently accessed data.
  • Prefetching: Where appropriate, I prefetched data for pages that the user was likely to visit next. Next.js provides a <Link> component with a prefetch prop that makes this easy.
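Apollo Client and Redis do the heavy lifting in my setup, but the caching idea itself is simple enough to sketch in plain JavaScript: a tiny TTL cache that wraps any async loader so repeated requests for the same key skip the backend. The function names here are mine for illustration, not Apollo's or Redis's API:

```javascript
// Minimal TTL cache: calls for the same key within `ttlMs` reuse the
// cached result instead of hitting the backend again.
function createTtlCache(loader, ttlMs) {
  const entries = new Map(); // key -> { value, expiresAt }
  return async function get(key) {
    const hit = entries.get(key);
    if (hit && hit.expiresAt > Date.now()) {
      return hit.value;
    }
    const value = await loader(key);
    entries.set(key, { value, expiresAt: Date.now() + ttlMs });
    return value;
  };
}

// Usage: pretend loadUser hits a database.
let dbCalls = 0;
const loadUser = async (id) => {
  dbCalls += 1;
  return { id, name: 'Ada' };
};
const getUser = createTtlCache(loadUser, 60_000);

(async () => {
  await getUser(42);
  await getUser(42); // second call is served from the cache
  console.log(dbCalls); // 1
})();
```

A real implementation would also need eviction and cache invalidation on writes, which is exactly the hard part Apollo and Redis take off your plate.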

4. Eliminating Render-Blocking Resources: Get Out of the Way

This one was a bit trickier. I had some CSS and JavaScript files that were blocking the rendering of the page.

  • Defer JavaScript: I used the defer attribute on script tags to prevent JavaScript files from blocking rendering. This tells the browser to download the script in the background and execute it after the HTML has been parsed.
  • Inline Critical CSS: I identified the critical CSS (the CSS needed to render the above-the-fold content) and inlined it into the HTML. This eliminates the need for a separate CSS file to be downloaded, improving initial rendering speed. I used a tool called critical to automate this process.
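In plain HTML terms (Next.js manages its own script tags, so this is the general pattern rather than my exact markup, and the selectors and paths are invented):

```html
<head>
  <!-- Critical above-the-fold CSS inlined: no extra request before first paint -->
  <style>
    .hero { font-size: 2rem; margin: 0 auto; }
  </style>

  <!-- defer: downloaded in parallel, executed only after HTML parsing finishes -->
  <script src="/js/app.js" defer></script>
</head>
```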

5. Fonts: Beware of FOIT and FOUT

Custom fonts are beautiful, but they can also be a performance killer if not handled correctly. Two problems to watch out for:

  • FOIT (Flash of Invisible Text): The browser waits for the font to download before rendering the text, resulting in a blank screen.
  • FOUT (Flash of Unstyled Text): The browser renders the text with a fallback font and then switches to the custom font when it's downloaded, causing a jarring visual shift.

I used the font-display: swap; CSS property to mitigate these issues. This tells the browser to render the text with a fallback font immediately and then swap to the custom font when it's downloaded, minimizing the visual impact.
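In CSS, that's a one-line addition to the @font-face rule (the font name and path here are placeholders):

```css
@font-face {
  font-family: 'MyCustomFont';
  src: url('/fonts/my-custom-font.woff2') format('woff2');
  /* Render fallback text immediately, swap in the web font once it loads */
  font-display: swap;
}
```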

The Results: A Transformation

After implementing these optimizations, I ran another Lighthouse audit. The results were dramatically better.

My FCP, LCP, and TTI all improved significantly. The app felt much faster and more responsive. And, perhaps most importantly, my users were happier.

On the backend, the new GraphQL endpoint has a P95 latency of 50ms, according to my Datadog metrics.

Lessons Learned: It's a Marathon, Not a Sprint

Web performance optimization is an ongoing process. It's not a one-time fix. As your app evolves, you'll need to continuously monitor its performance and identify new bottlenecks.

Here are some key takeaways from my experience:

  • Measure, Measure, Measure: Use tools like Lighthouse, WebPageTest, and Google PageSpeed Insights to measure your app's performance and identify areas for improvement.
  • Prioritize: Focus on the areas that will have the biggest impact on user experience. Image optimization and code splitting are often good places to start.
  • Automate: Automate as much of the optimization process as possible. Use tools like ImageOptim and critical to automate image compression and CSS inlining.
  • Monitor: Continuously monitor your app's performance and identify new bottlenecks as they arise.

Remember, every millisecond counts!

Footnotes

  1. AVIF is a next-generation image format with potentially even better compression than WebP.