SSR vs CSR: The ever-ongoing debate

Photo by Thomas Tastet on Unsplash

Frankly, a decade or more ago, this would never even have been a debate. Server-side resources were always limited, and client-side devices were always low on computing power. So everyone settled on one thing and one thing alone:

Let the server-side application handle rendering.

That’s how most of our tools and crawlers evolved, always expecting to get the final content. It’s also what let us design our own crawlers and content scrapers for websites.

With the advent of CSR (Client-Side Rendering), made possible by frameworks like Backbone.js in the early days and React more recently, the structure and design of webpages have changed immensely. The frontend developer now has the power and ability to work with backend data just like any backend developer would. This is also where REST APIs started showing their shortcomings, which necessitated the creation of GraphQL.

Coming back to the question of which methodology to choose, let’s look at how and why each of the two approaches does better in certain aspects.

Client-Side Rendering:

CSR works by serving the visitor a small shim index.html that references the CSS and JS bundles to load. Once those files are loaded, the JS code in them renders content in the visitor’s browser itself through DOM manipulation. The data can come from a variety of sources, including API calls or even XHR calls that fetch pre-rendered HTML fragments.
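
To make the flow concrete, here’s a minimal sketch of what a CSR bundle does once the browser has loaded it. The /api/posts endpoint and the Post shape are assumptions made purely for illustration:

```typescript
// Minimal CSR sketch: the shim index.html ships little more than an empty
// <div id="root">; this bundle fetches data and builds the DOM in the browser.
// The /api/posts endpoint and the Post shape are illustrative assumptions.
interface Post {
  id: number;
  title: string;
}

async function renderApp(): Promise<void> {
  const root = document.getElementById("root");
  if (!root) return;

  // Nothing meaningful is painted until this bundle has loaded, executed,
  // and received its data, which is why CSR's first paint is slower.
  const response = await fetch("/api/posts");
  const posts: Post[] = await response.json();

  const list = document.createElement("ul");
  for (const post of posts) {
    const item = document.createElement("li");
    item.textContent = post.title;
    list.appendChild(item);
  }
  root.appendChild(list);
}

renderApp();
```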

Server-Side Rendering:

SSR is a relatively new concept on the frontend, formulated specifically to overcome certain scenarios that CSR struggles with. In an SSR-based application or website, whenever a given URL path is requested, the server generates the HTML structure, along with a few lightweight JS bundles and the styling, and sends it to the visitor. For the initial render, the client-side device has to do nothing but display the HTML as it normally would.
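
As a rough sketch of that flow, assuming an Express server and React’s renderToString (the App component, catch-all route, and port are made up for illustration):

```typescript
// Minimal SSR sketch with Express and React's renderToString.
// The App component, catch-all route, and port are illustrative assumptions.
import express from "express";
import { createElement } from "react";
import { renderToString } from "react-dom/server";

// A hypothetical page component, rendered entirely on the server.
function App(props: { path: string }) {
  return createElement("h1", null, `Rendered on the server for ${props.path}`);
}

const app = express();

app.get("*", (req, res) => {
  // The markup is produced server-side, so the browser can paint it
  // immediately, before any client-side bundle has loaded.
  const markup = renderToString(createElement(App, { path: req.path }));
  res.send(`<!DOCTYPE html>
<html>
  <head><title>SSR sketch</title></head>
  <body>
    <div id="root">${markup}</div>
    <!-- a lightweight client bundle could hydrate this markup later -->
  </body>
</html>`);
});

app.listen(3000);
```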

What does SEO think?

For a CSR site, Googlebot has to render the page before it can index the content. Since that’s such an expensive process (it has to run a Google Chrome instance, albeit headless, to load and parse your website), Google does this CSR indexing on the sidelines, with a delay that can stretch to a week before your content is indexed and utilized.

Enter SSR…

Solving this specific crawling issue is our SSR hero, the one in the cape, which takes the rendering burden away from Googlebot. This way, your application or website can remain a first-class citizen in SEO land and preserve or even improve its rankings.

But what if I can only do CSR?

Then your choices are limited. Either you let Google handle it, or you spend a few resources and render the pages yourself for Googlebot to eat up. This can be done in a variety of ways, for example with Puppeteer or Rendertron. These tools can be configured either to pre-render, i.e. crawl your own website and save an HTML copy of it, or to render dynamically whenever a bot makes a request, in essence spinning up a headless Chrome instance and serving the rendered HTML to the bot.

The idea here is that whenever you encounter Googlebot, Bingbot, or any other crawler, you serve it a pre-rendered HTML page for indexing. The resource consumption for this can vary widely, but it depends mainly on the number of pages you have, how frequently the bots crawl, and how many parallel requests you want to serve.
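
Here’s a hedged sketch of the dynamic-rendering variant: detect bot user agents and serve them a copy rendered by headless Chrome via Puppeteer, while regular visitors get the normal CSR app. The bot list, the in-memory cache, and the upstream URL are assumptions for illustration:

```typescript
// Dynamic rendering sketch: bots get HTML rendered by headless Chrome,
// humans get the regular CSR app. The bot list, cache, and the upstream
// localhost:8080 origin are illustrative assumptions.
import express from "express";
import puppeteer from "puppeteer";

const BOT_AGENTS = ["googlebot", "bingbot", "yandex", "duckduckbot"];
const cache = new Map<string, string>(); // naive in-memory cache of rendered HTML

async function renderWithHeadlessChrome(url: string): Promise<string> {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    // Wait until the network is quiet so client-side rendering has finished.
    await page.goto(url, { waitUntil: "networkidle0" });
    return await page.content();
  } finally {
    await browser.close();
  }
}

const app = express();

app.use(async (req, res, next) => {
  const agent = (req.headers["user-agent"] || "").toLowerCase();
  const isBot = BOT_AGENTS.some((bot) => agent.includes(bot));
  if (!isBot) return next(); // regular visitors fall through to the CSR app

  const url = `http://localhost:8080${req.originalUrl}`; // assumed CSR origin
  if (!cache.has(url)) {
    cache.set(url, await renderWithHeadlessChrome(url));
  }
  res.send(cache.get(url));
});

app.listen(3000);
```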

In short, let’s summarize the pros and cons of both methods:

CSR

Pros:

  • Avoids constant full page reloads compared to conventional websites
  • Subsequent navigation and re-renders are faster after the slow initial load
  • Optimal for web applications

Cons:

  • Initial load time is slow due to the JS bundles required for rendering
  • FCP (First Contentful Paint) is usually higher, since the JS and CSS need to load first and are render-blocking
  • Negative impact on SEO if API responses are delayed or rendering doesn’t happen correctly
  • Usually higher memory consumption on client-side devices
  • The SEO issue can be solved with pre-rendering, but that brings its own routing complexity and a need for a way to store the pre-rendered results

SSR

Pros:

  • Initial page load can be faster compared to an SPA or CSR app
  • SEO doesn’t take a hit and keeps working as it always has
  • FCP is usually lower, since rendering doesn’t depend on JS
  • Site interactions can be made smoother by using libraries like React or Angular to handle route changes and render subsequent pages client-side (see the hydration sketch after this list)
  • Removes the extra requests to the server that a CSR app needs on initial page load before it can render
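
To illustrate the route-change point above: a common pattern is to hydrate the server-rendered markup on the client, after which navigation is handled client-side. Below is a minimal sketch with React 18’s hydrateRoot, where the App component is a hypothetical stand-in that must match the server’s output:

```typescript
// Client-side hydration sketch (React 18): reuse the server-rendered markup
// and attach event handlers to it, so later route changes render client-side.
// The App component is a hypothetical stand-in matching the server's output.
import { createElement } from "react";
import { hydrateRoot } from "react-dom/client";

function App(props: { path: string }) {
  return createElement("h1", null, `Rendered on the server for ${props.path}`);
}

const container = document.getElementById("root");
if (container) {
  // Hydrate instead of render: React keeps the existing server HTML
  // rather than rebuilding the DOM from scratch.
  hydrateRoot(container, createElement(App, { path: window.location.pathname }));
}
```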

Cons:

  • Higher resource consumption, as every page load is rendered on the server. This can be mitigated with caching (see the sketch after this list)
  • Site interactions can feel like those of a traditional website if the app isn’t designed with client-side capabilities in mind
  • Writing universal apps (aka SSR apps) is usually hard, as you have to decide and juggle between routes that need to be server-side rendered and routes that can be rendered client-side
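
For the caching point in the list above, here’s a minimal sketch: the rendered HTML is cached per URL with a short TTL, and a Cache-Control header lets a CDN or reverse proxy absorb repeat requests. The renderPage function is a placeholder for whatever actually produces your server-rendered markup:

```typescript
// Sketch of mitigating SSR render cost with caching. renderPage is a
// placeholder for the real server-side render; the TTL and headers are
// illustrative assumptions.
import express from "express";

// Placeholder for the expensive server-side render.
async function renderPage(path: string): Promise<string> {
  return `<!DOCTYPE html><html><body><h1>Rendered ${path} on the server</h1></body></html>`;
}

const TTL_MS = 60_000;
const cache = new Map<string, { html: string; expires: number }>();

const app = express();

app.get("*", async (req, res) => {
  const hit = cache.get(req.path);
  if (!hit || hit.expires <= Date.now()) {
    cache.set(req.path, { html: await renderPage(req.path), expires: Date.now() + TTL_MS });
  }
  // Allow a CDN or reverse proxy to cache the response as well.
  res.set("Cache-Control", "public, max-age=60");
  res.send(cache.get(req.path)!.html);
});

app.listen(3000);
```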

Final Words

Or you could always follow what a company like Walmart has done for its website and adopt a similar approach if your use case is comparable.