Traditional Front End Development: A Love Story

Jeremy Green

We don’t use React. We don’t write CSS as JSON. We still use Gulp. This is a story about how using modern front end development techniques significantly improved performance during our latest redesign of CityLab.com.

HTML

We use the compatibility layer between Jinja and Nunjucks (see https://whatisjasongoldstein.com/writing/universal-jinja/). Now, I know the initial reaction to that statement will be to reach for the pitchfork. We are not monsters. Hear me out…

For some time, we’ve been talking about how we can render a site separate from the authoring experience. Having a distributed front end makes a lot of sense for the way our team is structured. Before beginning the redesign, we discussed how we could move closer to our goals. The answer for us was serialized data, Jinja2, and Nunjucks. The reality of it is that Jinja2 is what renders our templates. However, using the compatibility layer between the two allowed us to create a living styleguide.

As a part of our build process, we use KSS to parse our CSS and templates and render a styleguide for us. I hacked together a Nunjucks builder that allows us to render the Nunjucks templates as HTML and compile them. Only two Jinja-specific tags needed to be ported: one for ads, which just returns an empty string, and one for including components.
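
A stripped-down sketch of the ad tag port (this isn’t our actual build code, and the names are illustrative):

```javascript
// The {% ad %} tag simply renders to nothing in the styleguide build.
var nunjucks = require('nunjucks');

function AdExtension() {
  this.tags = ['ad'];

  this.parse = function (parser, nodes) {
    // Consume the {% ad ... %} tag and hand any arguments to run().
    var tok = parser.nextToken();
    var args = parser.parseSignature(null, true);
    parser.advanceAfterBlockEnd(tok.value);
    return new nodes.CallExtension(this, 'run', args);
  };

  this.run = function () {
    // Ad slots become empty strings in the styleguide.
    return new nunjucks.runtime.SafeString('');
  };
}

var env = new nunjucks.Environment(new nunjucks.FileSystemLoader('templates'));
env.addExtension('AdExtension', new AdExtension());
```

The component include tag follows the same pattern, except its run() renders the referenced template instead of an empty string.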

When I talk about the compatibility layer between the two, I’m really talking about the control statements the two languages share. We strove to only use loops and conditionals. If we came across a situation that required more complex logic, or needed a filter, we discussed how we could modify the data to handle that instead of the template. While this might not work for most teams, it worked great for us.

Funnily enough, we used a lot more of the compatibility layer than originally intended, but everything just worked. We should do more of that, and we will.

Using serialized data also allowed the front end team to visualize the data model of each component we built. KSS has the ability to pull in a JSON file that is passed to the template when rendering the styleguide, so we can render a component using real data.
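
Conceptually, that data-driven render boils down to something like this (the file names are made up for illustration):

```javascript
// The JSON that documents a component's data model also feeds the render.
var fs = require('fs');
var nunjucks = require('nunjucks');

nunjucks.configure('templates');

var data = JSON.parse(fs.readFileSync('styleguide/data/article-card.json', 'utf8'));
var html = nunjucks.render('components/article-card.html', data);

fs.writeFileSync('styleguide/article-card.html', html);
```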

JavaScript

When we looked at the current JavaScript ecosystem, there were so many different players that offered so many different solutions. It was important that we outlined what our JavaScript requirements for the project really were:

  • use modern JS to bundle all of our core requirements together, the way Webpack or Browserify would
  • any random module we write should be able to be written in ES2015+ and transpiled so it is requireable
  • be able to load a random module from our site using any kind of require or import statement
  • a random module should be able to use any previously defined module in our bundle without having to re-download anything
  • a random module should be able to use any other random module and download it as needed
  • we should be able to import or require any random script/file/css from the web as a dependency without having to bundle it

After looking at RequireJS, Webpack, Browserify, and SystemJS, we decided to go with SystemJS. It met all of our requirements. It would allow us to write modern JavaScript, ship our bundles, import our modules when needed, and import third party libraries on demand.
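
The setup boils down to a config along these lines (the paths, bundle contents, and CDN mapping are illustrative, not our actual config):

```javascript
// Roughly what the SystemJS setup looks like.
System.config({
  transpiler: 'babel',       // ES2015+ modules stay requireable everywhere
  baseURL: '/static/js',
  map: {
    jquery: 'lib/jquery.js',
    // Third party dependencies can map straight to a URL on the web,
    // so they never have to be baked into our bundles.
    'some-vendor-lib': 'https://cdn.example.com/some-vendor-lib.js'
  },
  bundles: {
    // If anything imports a module that lives in core.js, SystemJS
    // fetches the bundle once instead of the individual files.
    'bundles/core.js': ['core/main', 'utils/globals', 'modules/nav']
  }
});
```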

That allowed us to come up with a loading strategy. The first, most important bundle that we ship is our core experience. Really what that entailed was libraries, utilities, and globals. Once that loads, SystemJS then loads our next tier of page level dependencies: ads, our core analytics package, and the JavaScript for any page specific interactive components like newsletter signup or share buttons. Finally, once those dependencies are resolved, we load all the other third party libraries. In Django, we manage these page level dependencies in the view. This allows us to control what assets are loaded on the homepage vs the article page vs the masthead.
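
In code, those tiers look roughly like this (the module names are illustrative; the real list of page level modules comes from the Django view):

```javascript
// Tier 1: the core experience bundle (libraries, utilities, globals).
System.import('core/main')
  .then(function () {
    // Tier 2: page level dependencies. The Django view decides what goes here
    // and exposes it to the page; a global array is one way to do that.
    var pageModules = window.PAGE_MODULES || ['modules/ads', 'modules/analytics'];
    return Promise.all(pageModules.map(function (name) {
      return System.import(name);
    }));
  })
  .then(function () {
    // Tier 3: everything else third party trickles in last.
    return System.import('modules/third-party');
  })
  .catch(function (err) {
    console.error('Module loading failed:', err);
  });
```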

jQuery. Yep, jQuery. We still use jQuery. There was discussion about breaking away from it for newer, lightweight libraries. But, in the end, we still ship jQuery. However, in order to reduce the number of times we imported or selected the window element, I created a global module. That module exported the most used selectors, such as $(window). This allowed us to minimize the impact of extraneous selectors. How many times have you looked through a jQuery file and found multiple instances of $(window).width()? Billions. Exposing those selectors, as well as parsed query parameters, helped set a best practice for using jQuery.
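
The global module itself is tiny; a simplified version looks roughly like this:

```javascript
// Cache the selectors everyone reaches for so they are only built once,
// and parse the query string a single time for the whole page.
import $ from 'jquery';

export const $window = $(window);
export const $document = $(document);
export const $body = $('body');

export const queryParams = window.location.search
  .replace(/^\?/, '')
  .split('&')
  .filter(Boolean)
  .reduce((params, pair) => {
    const [key, value = ''] = pair.split('=');
    params[decodeURIComponent(key)] = decodeURIComponent(value);
    return params;
  }, {});
```

Any module that needs the viewport width can then import $window from that module and call $window.width() against the shared instance.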

CSS

We used a typical Sass setup to compile our CSS. We tried implementing Critical CSS a couple of times, but it was always a maintainability problem. So, we decided to ship with a single link tag. BEM was the naming convention we used. We loosely followed the component namespacing methodology as well as ITCSS for our project organization.

Not much else to see here. Moving on…

Performance

We didn’t really do much here either. Keep reading…

Psych!

Performance was the most important factor going into the redesign. At each phase of the build process, we measured and analyzed the performance impact of every change. We care about all metrics for page performance. Generally, there are four that we pay special attention to:

  • Time to First Byte (TTFB)
  • SpeedIndex
  • Start Render
  • domInteractive

These metrics aren’t perfect. This is an ever-evolving field, and new ways to measure performance are coming into play. However, these measurements come out of the box with WebPageTest, so we went with that.

Before the project began, we set up a few ways to monitor and profile CityLab.com. The first was a nightly cronjob using WebPageTest that sent the data to a Google Spreadsheet. The repository is here. It was important to gather as much data as possible before the launch so that we could see where our performance trended and where it landed post launch.
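
The nightly job itself is straightforward; here’s a simplified sketch using the webpagetest npm module (the options and the logging are stand-ins, not the actual repository code):

```javascript
// Simplified sketch of a nightly WebPageTest run. The location, connectivity,
// and logging are stand-ins; the real job pushes results to a spreadsheet.
var WebPageTest = require('webpagetest');

var wpt = new WebPageTest('www.webpagetest.org', process.env.WPT_API_KEY);

wpt.runTest('https://www.citylab.com/', {
  location: 'Dulles:Chrome',   // assumed test location/browser
  connectivity: '3G',          // assumed throttling profile
  firstViewOnly: true,
  pollResults: 30              // poll every 30 seconds until the run finishes
}, function (err, result) {
  if (err) throw err;

  var firstView = result.data.median.firstView;
  console.log({
    ttfb: firstView.TTFB,
    speedIndex: firstView.SpeedIndex,
    startRender: firstView.render,
    domInteractive: firstView.domInteractive
  });
});
```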

The second way that CityLab was monitored was another cronjob, but this time using sitespeed.io. With sitespeed.io, we can monitor all aspects of our performance. It’s amazing. You should use it if you can’t afford services like SpeedCurve.

Every time a new part of the site was added, whether it was a new component or a JavaScript module, we ran the site through some performance evaluation. Most of the time, that meant firing up ngrok and testing against WebPageTest. This allowed us to instantly know if we had introduced a significant regression or if what we added had the expected impact. Adding new elements to a redesign will always incur a performance penalty. To what degree that change affects your site’s performance can be measured.

Fonts. Oh, webfonts. How I love you. How I hate you. Webfonts can be great. There are lots of great articles out there about webfonts and how to use them. I wouldn’t say that fonts are bad for performance. They present different problems which require different solutions. With the deadline that we had for the redesign, we skipped testing and implementing the Font Loading API or using the FontFaceObserver library. The two main performance enhancements we used on CityLab were preload and font subsetting. Using preload allowed us to start downloading our fonts sooner on browsers that support it. Subsetting the fonts meant smaller font payloads. The smaller fonts saved us 43kb, which meant we had more room in our budget.
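
In practice that’s a handful of link rel=preload hints in the head; adding one from JavaScript, with an explicit support check, looks roughly like this (the font path is made up):

```javascript
// Equivalent to a <link rel="preload" as="font"> hint in the head, with a
// feature check so unsupporting browsers are left alone.
function supportsPreload() {
  var link = document.createElement('link');
  return !!(link.relList && link.relList.supports && link.relList.supports('preload'));
}

if (supportsPreload()) {
  var hint = document.createElement('link');
  hint.rel = 'preload';
  hint.as = 'font';
  hint.type = 'font/woff2';
  hint.crossOrigin = 'anonymous';   // fonts must be requested with CORS to be reused
  hint.href = '/static/fonts/headline-subset.woff2';
  document.head.appendChild(hint);
}
```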

Custom Ads

Custom ads are great! They can be beautiful and super fast, and they’re built by us. You should buy one. See https://medium.com/building-the-atlantic/the-case-for-bespoke-advertising-a3bdac7e4d16. Because CityLab uses a strictly custom ad format, we were able to reduce the performance impact of advertising and make for a generally better reading experience.

Things We Missed

One of the goals for the project was to launch with a ServiceWorker in place. Throughout the QA process, we ran into issues with it. In order to have a smooth launch, we decided to pull the ServiceWorker temporarily.

I mentioned earlier that we didn’t implement the Font Loading API. Getting content to our users as fast as possible is a top priority. We can do better there.

Looking forward

We still have a few more tricks up our sleeves. We’ve played with completely inlining our CSS since it’s so small: 17kb. With the 43kb we saved from using font subsets, we have the room. Initial performance profiling using WPT suggests we would see an improvement in our Start Render and SpeedIndex. We also need to pay the ServiceWorker another visit and iron out some kinks there.

As the dust settles, it will be important to continue monitoring our site and to make sure we don’t go over our budget. WebPageTest and sitespeed.io will be critical in that process.

The End

That’s it. I’m done telling my story. I’m sure there’s a lot that I missed, but, you know, that happens. I think there’s this perception that sites developed in a traditional way can’t be fast. They can. We made optimizations in the places we could. We made compromises in the places we had to. When we started the redesign, CityLab ranked 19th on the Article Performance Leaderboard. A few times it’s been ranked number 1. Not bad.