If I had asked my customers what they wanted, they would have said ‘faster horses’.
There is no actual evidence that Ford ever said this - regardless, it has become a favorite adage for people talking about creativity and innovation. It’s often accompanied by a smirk and an air of superiority. The essential message being that users don’t actually know what they want or what true innovation looks like until a visionary comes along who presents it to them.
That mindset of “innovation against popular demand” seems quite pervasive in the IT industry these days, now that every tech company on the planet has decided that AI is the way to go.
Even though there’s an overwhelming chorus of consumer voices saying “we don’t actually want this”, most tech giants are so convinced that AI is the future that they’re still pushing it on every service and product under the sun.
Here’s the thing: that approach to innovation is a huge gamble. It really only works if the new, innovative thing is so undeniably better that once they see it, people will want to use it straight away.
The success of the iPhone, for example, was evident from the moment Steve Jobs first showed a breathless audience how to pinch-and-zoom a photo. Nobody had to force people to use it.
In contrast, most major tech companies have now started to opt-in users to their AI features against their will, sometimes making it almost impossible to disable them again.
Feature adoption doesn’t work if it’s forced; it has to come from a genuine user belief that the new feature can help them achieve their goals. And it certainly doesn’t work if the feature actually creates a worse user experience and degrades the quality of the product.
Google’s rollout of AI search results has led to countless examples of misinformation, factual errors and hallucinations. Google was already excellent at ranking information, guessing the intent behind a search phrase and modifying its results accordingly. They have now augmented that with a solution that either gives false (even dangerous) information or simply dreams up answers on the spot.
The people might have asked for faster horses, but instead they got donkeys on LSD.
I get that it’s not as fun to build “a faster horse”. To just make the thing you already have better, more reliable, more helpful. It doesn’t get your shareholders excited, and it doesn’t make you look like a visionary genius.
But in my opinion, the tech industry desperately needs less disruptive new shit for the sake of innovation and more listening to the actual problems users are facing out there.
To close, here’s a quote by Henry Ford that he did in fact say:
If there is any one secret of success, it lies in the ability to get the other person’s point of view and see things from that person’s angle as well as from your own.
This year’s annual review post is a bit different.
In previous years, I reflected on the work that I did, the web projects I built, the posts that I wrote and so on. There was lots of that in 2024 too of course (well, maybe except the blogging part, that seems to be becoming a pattern).
But the truth is, most of my energy this year went towards building a life for our new family.
My son was born in November, and he’s happily sleeping on my chest as I am typing this. I’ve never felt more grateful or proud about anything in my life, and I still can’t believe he’s with us now.
The months leading up to his arrival were quite stressful at times, supporting my wife’s pregnancy and preparing everything as best I could for the steps ahead. We’re planning a move next year, and there’s still lots of work to do before we can settle into our new home.
But all of it is very rewarding, and I can’t wait to see where 2025 takes us. Despite everything that’s going wrong in the world right now, I feel hopeful for the future.
It’s just a simple change, so you log on via FTP, edit your style.css file, hit save - and reload the page to see your changes live.
Did that story resonate with you? Well then congrats A) you’re a nerd and B) you’re old enough to remember a time before bundlers, pipelines and build processes.
Now listen, I really don’t want to go back to doing live updates in production. That can get painful real fast. But I think it’s amazing when the files you see in your code editor are exactly the same files that are delivered to the browser. No compilation, no node process, no build step. Just edit, save, boom.
There’s something really satisfying about a buildless workflow. Brad Frost recently wrote about it in “raw-dogging websites”, while developing the (very groovy) site for Frostapalooza.
So, how far away are we from actually working without builds in HTML, CSS and Javascript? The idea of “buildless” development isn’t new - but there have been some recent improvements that might get us closer. Let’s jump in.
The obvious tradeoff for a buildless workflow is performance. We use bundlers mostly to concatenate files for fewer network requests, and to avoid long dependency chains that cause "loading waterfalls". I think it's still worth considering, but take everything here with a grain of performance salt.
The main reason for a build process in HTML is composition. We don’t want to repeat the markup for things like headers, footers, etc for every single page - so we need to keep these in separate files and stitch them together later.
Oddly enough, HTML is the one where native imports are still an unsolved problem. If you want to include a chunk of HTML in another template, your options are limited:
There is no real standardized way to do this in just HTML, but Scott Jehl came up with this idea of using iframes and the onload event to essentially achieve html imports:
<iframe
  src="/includes/something.html"
  onload="this.before((this.contentDocument.body||this.contentDocument).children[0]);this.remove()"
></iframe>
Andy Bell then repackaged that technique as a neat web component. Finally Justin Fagnani took it even further with html-include-element, a web component that uses native fetch and can also render content into the shadow DOM.
For my own buildless experiment, I built a simplified version that replaces itself with the fetched content. It can be used like this:
<html-include src="./my-local-file.html"></html-include>
That comes pretty close to actual native HTML imports, even though it now has a Javascript dependency 😢.
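For reference, a stripped-down version of such a component could look roughly like this (a minimal sketch of the idea, not the exact code from my demo):

// html-include.js - a minimal sketch: fetch the src and replace the element with it
class HTMLInclude extends HTMLElement {
  async connectedCallback() {
    const src = this.getAttribute('src')
    if (!src) return
    try {
      const response = await fetch(src)
      const content = await response.text()
      // swap the element itself for the fetched markup
      this.outerHTML = content
    } catch (err) {
      console.error('could not fetch html include', err)
    }
  }
}
customElements.define('html-include', HTMLInclude)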
Right, so using web components works, but if you want to nest elements (fetch a piece of content that itself contains an html-include), you can run into waterfall situations again, and you might see things like layout shifts when it loads. Maybe progressive enhancement can help?
I’m hosting my experiment on Cloudflare Pages, and they offer the ability to write a “worker” script (very similar to a service worker) to interact with the platform.
It’s possible to use a HTML Rewriter in such a worker to intercept requests to the CDN and rewrite the response. So I can check if the request is for a piece of HTML and if so, look for the html-include element in there:
// worker.js
export default {
  async fetch(request, env) {
    const response = await env.ASSETS.fetch(request)
    const contentType = response.headers.get('Content-Type')
    if (!contentType || !contentType.startsWith('text/html')) {
      return response
    }
    const origin = new URL(request.url).origin
    const rewriter = new HTMLRewriter().on(
      'html-include',
      new IncludeElementHandler(origin)
    )
    return rewriter.transform(response)
  }
}
You can then define a custom handler for each html-include element it encounters. I made one that pretty much does the same thing as the web component, but server-side: it fetches the content defined in the src attribute and replaces the element with it.
// worker.js
class IncludeElementHandler {
  constructor(origin) {
    this.origin = origin
  }

  async element(element) {
    const src = element.getAttribute('src')
    if (src) {
      try {
        const content = await this.fetchContents(src)
        if (content) {
          element.before(content, { html: true })
          element.remove()
        }
      } catch (err) {
        console.error('could not replace element', err)
      }
    }
  }

  async fetchContents(src) {
    const url = new URL(src, this.origin).toString()
    const response = await fetch(url, {
      method: 'GET',
      headers: {
        'user-agent': 'cloudflare'
      }
    })
    const content = await response.text()
    return content
  }
}
This is a common concept known as Edge Side Includes (ESI), used to inject pieces of dynamic content into an otherwise static or cached response. By using it here, I can get the best of both worlds: a buildless setup in development with no layout shift in production.
Cloudflare Workers run at the edge, not the client - but if your site isn’t hosted there, it should also be possible to use this approach in a regular service worker. Once installed, the service worker could rewrite responses to stitch HTML imports into the content.
Maybe you could even cache pieces of HTML locally once they've been fetched? I don't know enough about service worker architecture to do this, but maybe someone else wants to give it a shot?
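For what it’s worth, here’s a rough sketch of how that could look. I haven’t battle-tested this: it uses plain string replacement (DOM APIs aren’t available in workers), it only handles a single level of includes, and the cache name is made up:

// sw.js - a speculative sketch, not production code
const CACHE = 'html-includes-v1'

self.addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith(renderWithIncludes(event.request))
  }
})

async function renderWithIncludes(request) {
  const response = await fetch(request)
  const contentType = response.headers.get('Content-Type') || ''
  if (!contentType.includes('text/html')) return response

  let html = await response.text()
  // find all flat <html-include> elements in the page
  const pattern = /<html-include\s+src="([^"]+)"\s*><\/html-include>/g

  for (const [tag, src] of [...html.matchAll(pattern)]) {
    const url = new URL(src, request.url).toString()
    // try the local cache first, fall back to the network
    const cache = await caches.open(CACHE)
    let fragment = await cache.match(url)
    if (!fragment) {
      fragment = await fetch(url)
      await cache.put(url, fragment.clone())
    }
    html = html.replace(tag, await fragment.text())
  }

  return new Response(html, { headers: { 'Content-Type': 'text/html' } })
}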
Historically, we’ve used CSS preprocessors or build pipelines to do a few things the language couldn’t do on its own:
- variables
- nesting
- vendor prefixing
- bundling many small files into one
Well good news: we now have native support for variables and nesting, and prefixing is not really necessary anymore in evergreen browsers (except for a few properties). That leaves us with bundling again.
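Before we get to bundling, here’s a quick refresher on what native variables and nesting look like in plain CSS - a tiny sketch, no preprocessor involved:

/* plain CSS: custom properties and native nesting */
.card {
  --card-padding: 1rem;
  padding: var(--card-padding);

  & .title {
    font-size: 1.25rem;
  }

  &:hover {
    --card-padding: 1.5rem;
  }
}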
CSS has had @import support for a long time - it’s trivial to include stylesheets in other stylesheets. It’s just … really frowned upon. 😅
Why? Damn performance waterfalls again. Nested levels of @import statements in a render-blocking stylesheet give web developers the creeps, and for good reason.
But what if we had a flat structure? If you had just one level of imports, wouldn’t HTTP/2 multiplexing take care of that, loading all these files in parallel?
Chris Ferdinandi ran some benchmark tests on precisely that and the numbers don’t look so bad.
So maybe we could link up a main stylesheet that contains the top-level imports of smaller files, split by concern? We could even use that approach to automatically assign cascade layers to them, like so:
/* main.css */
@layer default, layout, components, utils, theme;
@import 'reset.css' layer(default);
@import 'base.css' layer(default);
@import 'layout.css' layer(layout);
@import 'components.css' layer(components);
@import 'utils.css' layer(utils);
@import 'theme.css' layer(theme);
Love your atomic styles? Instead of Tailwind, you can use something like Open Props to include a set of ready-made design tokens without a build step. They’ll be available in all other files as CSS variables.
You can pick-and-choose what you need (just get color tokens or easing curves) or use all of them at once. Open Props is available on a CDN, so you can just do this in your main stylesheet:
/* main.css */
@import 'https://unpkg.com/open-props';
Javascript is the one where a build step usually does the most work. Stuff like:
A buildless workflow can never replace all of that. But it may not have to! Transpiling for example is not necessary anymore in modern browsers. As for bundling: ES Modules come with a built-in composition system, so any browser that understands module syntax…
<script src="/assets/js/main.js" type="module"></script>
…allows you to import other modules, and even lazy-load them dynamically:
// main.js
import './some/module.js'

if (document.querySelector('#app')) {
  import('./app.js')
}
The newest addition to the module system are Import Maps, which essentially allow you to define a JSON object that maps dependency names to a source location. That location can be an internal path or an external CDN like unpkg.
<head>
  <script type="importmap">
    {
      "imports": {
        "preact": "https://unpkg.com/htm/preact/standalone.module.js"
      }
    }
  </script>
</head>
Any Javascript on that page can then access these dependencies as if they were bundled with it, using the standard syntax: import { render } from 'preact'.
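For example, a module on that page could then use the mapped name directly - a small sketch (the standalone htm/preact build referenced above exports html and render):

// app.js - assumes the import map above is present in the page <head>
import { html, render } from 'preact'

// "preact" resolves to the CDN URL defined in the import map
render(html`<h1>Hello, buildless world!</h1>`, document.body)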
So, can we all ditch our build tools soon?
Probably not. I’d say for production-grade development, we’re not quite there yet. Performance tradeoffs are a big part of it, but there are lots of other small problems that you’d likely run into pretty soon once you hit a certain level of complexity.
For smaller sites or side projects though, I can imagine going the buildless route - just to see how far I can take it.
Funnily enough, many build tools advertise their superior “Developer Experience” (DX). For my money, there’s no better DX than shipping code straight to the browser and not having to worry about some cryptic node_modules error in between.
I’d love to see a future where we get that simplicity back.
When someone makes changes to the content via the CMS, they usually don’t get it done in one go and hit publish - it’s an iterative process, going back and forth between CMS and website. Editors might need to check whether a piece of text fits the layout, or they may have to tweak an image so the crop looks good on all devices. To do this, they’ll typically need some sort of visual preview that shows the new content in the actual context of the website.
For static websites, that’s easier said than done.
Content changes on static websites require a rebuild, and that process can take a while. When you’re editing content in a headless CMS like Sanity, you don’t have access to a local dev server - you need to preview changes on the web somehow. Even for small sites and even with blazingly fast SSGs like Eleventy, building and deploying a new version can take a minute.
That doesn’t sound like much, but when you’re in the middle of writing, having to wait that long for every tiny change to become visible can feel excruciatingly slow. We need a way to render updates on demand, without actually rebuilding the entire site.
Here’s what we’re trying to achieve:
This is quite a common problem, so there are existing solutions. They revolve around making some parts of your Eleventy site available for on-demand rendering by using serverless functions.
Eleventy has the ability to run inside a serverless function as well, and it provides the Serverless Bundler Plugin to do that. Basically, the plugin bundles your entire site’s source code (plus some metadata) into a serverless function that you can call to trigger a new partial build.
FYI: The upcoming v3 release of Eleventy (currently in beta) will not include the Serverless Plugin as part of the core package anymore, precisely because the current implementation is quite heavily geared towards Netlify and their specific serverless architecture. To keep the project as vendor-agnostic as possible, the functionality will probably be handled by external third-party plugins in the future.
The most common scenario here is to have such a function run on the same infrastructure that hosts the regular static site. Providers like Netlify, Vercel, AWS or Cloudflare all have slightly different expectations when it comes to serverless functions, so the exact implementation varies. All dependencies of your build process need to be packaged and bundled along with the function, and some platforms (in our case Cloudflare) don’t run them in a node environment at all, which is its own set of trouble.
One of the coolest things about Eleventy is its independence from frameworks and vendors. You can host a static Eleventy site anywhere from a simple shared webserver to a full-on bells-and-whistles cloud provider, and switching between them is remarkably easy (in essence, you can drag and drop your output folder anywhere and be done with it).
For the Sanity × Eleventy setup we’re building at Codista, we really wanted to avoid getting locked-in to a specific provider and their serverless architecture. We also wanted to have more control over the infrastructure and the associated costs.
So we did what every engineer in that position would do: We rolled our own solution. 😅
The basic idea for our preview service was to have our own small server somewhere. Every time someone deploys a new version of our 11ty project, we would automatically push the latest source code to that preview server too and run a build, to pre-generate all the static assets like CSS and Javascript early on.
A node script running on there will then accept GET requests to re-build parts of our site when the underlying Sanity content changes and spit out the updated HTML. We could then show that updated HTML right in the CMS as a preview.
To get this off the ground, we essentially need three things:
- a way to run Eleventy builds programmatically, on demand, for a single URL
- a way to fetch the latest draft content from Sanity instead of just the published version
- a way to display the rendered preview to editors right inside the CMS
Let’s jump in!
The first piece of the puzzle is a way to trigger a new build when the request comes in. Usually, builds would be triggered from the command line or from a CI server, using the predefined npx eleventy command or similar. But it’s also possible to run Eleventy through its programmatic API instead. You’ll need to supply an input (a file or a directory of files to parse), an output (somewhere for Eleventy to write the finished files) and a configuration object.
Here’s an example of such a function:
// preview/server.js
import Eleventy from '@11ty/eleventy'

async function buildPreview(request) {
  // get some data from the incoming GET request
  const { path: url, query } = request
  let preview = null

  // look up the url from the request (i.e. "/about")
  // and try to match it to an input template src (i.e. "aboutPage.njk")
  // using the JSON file we saved earlier
  const inputPath = mapURLtoInputPath(url)

  // Run Eleventy programmatically
  const eleventy = new Eleventy(inputPath, null, {
    singleTemplateScope: true,
    inputDir: INPUT_DIR,
    config: function (eleventyConfig) {
      // make the request data available in Eleventy
      eleventyConfig.addGlobalData('preview', { url, query })
    }
  })

  // write output directly to memory as JSON instead of the file system
  const outputJSON = await eleventy.toJSON()

  // output will be a list of rendered pages,
  // depending on the configuration of our input source
  if (Array.isArray(outputJSON)) {
    preview = outputJSON.find((page) => page.url === url)
  }

  return preview
}
Let’s say we want to call GET preview.codista.com/myproject/about from within the CMS to get a preview of the “about us” page. First, we will need a way to translate the permalink part of that request (/about) to an input file in the source code like src/pages/about.njk that Eleventy can render.
Luckily, Eleventy already does this in reverse when it builds the site - so we can hook into its contentMap event to get a neat map of all the URLs in our site to their respective input paths. Writing this map to a JSON file will make it available later on at runtime, when our preview function is called.
// eleventy.config.js
eleventyConfig.on('eleventy.contentMap', (map) => {
  const fileName = path.join(options.outputDir, '/preview/urls.json')
  fs.writeFileSync(fileName, JSON.stringify(map.urlToInputPath, null, 2))
})
The generated output then looks something like this:
{
  "/sitemap.xml": {
    "inputPath": "./src/site/sitemap.xml.njk",
    "groupNumber": 0
  },
  "/": {
    "inputPath": "./src/site/cms/homePage.njk",
    "groupNumber": 0
  },
  "/about/": {
    "inputPath": "./src/site/cms/aboutPage.njk",
    "groupNumber": 0
  },
  ...
}
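The mapURLtoInputPath helper used above can then be a simple lookup into that JSON file. Something along these lines should do - the dist path is an assumption here, adjust it to your output directory:

// preview/server.js - a possible version of the lookup helper
import fs from 'node:fs'

// the URL map we wrote during the regular build
const urlMap = JSON.parse(fs.readFileSync('./dist/preview/urls.json', 'utf-8'))

function mapURLtoInputPath(url) {
  // Eleventy URLs end with a trailing slash, so normalize "/about" to "/about/"
  const normalized = url.endsWith('/') ? url : `${url}/`
  return urlMap[normalized] ? urlMap[normalized].inputPath : null
}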
We use a small express server to have our script listen for preview requests. Here’s a (simplified) version of how that looks:
// preview/server.js
import express from 'express'

const app = express()

app.get('*', async (req, res, next) => {
  const { path: url } = req

  // check early if the requested URL matches any input sources.
  // if not, bail
  if (!mapURLtoInputPath(url)) {
    return res.status(404).send(`can't resolve URL to input file: ${url}`)
  }

  try {
    // call our preview function
    const output = await buildPreview(req)
    // check if we have HTML to output
    if (output) {
      res.send(output.content)
    } else {
      throw new Error(`can't build preview for URL: ${url}`)
    }
  } catch (err) {
    // pass any build errors to the express default error handler
    return next(err)
  }
})
The production version would also check for a security token to authenticate requests, as well as a revision id used to cache previews, so we don't run multiple builds when nothing has changed.
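That part isn’t shown in the snippet above, but as a rough idea, a small middleware registered before the catch-all route could handle both checks. This is just a sketch - the environment variable name and the in-memory cache are made up for illustration:

// preview/server.js - a rough sketch of the auth + caching checks
const previewCache = new Map()

app.use((req, res, next) => {
  const { token, rev } = req.query

  // only accept requests that carry the shared preview token
  if (token !== process.env.PREVIEW_TOKEN) {
    return res.status(401).send('not authorized')
  }

  // if this revision has already been built, serve the cached HTML again
  const cached = previewCache.get(`${req.path}:${rev}`)
  if (cached) {
    return res.send(cached)
  }

  next()
})

The catch-all route would then add the freshly built HTML to that cache after a successful build.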
Putting all that together, we end up with a script that we can run on our preview server. You can find the final version here. We’ll give it a special environment flag so we can fine-tune the build logic for this scenario later.
$ NODE_ENV=preview node preview/server.js
Right, that’s the on-demand-building taken care of. Let’s move to the next step!
In our regular build setup, we want to fetch CMS data from the Sanity API whenever a new build runs. Sanity provides a helpful client package that takes care of the internal heavy lifting. It’s a good idea to build a little utility function to configure that client first:
// utils/sanity.js
import { createClient } from '@sanity/client'

export const getClient = function () {
  // basic client config
  let config = {
    // your project id in sanity
    projectId: process.env.SANITY_STUDIO_PROJECT_ID,
    // datasets are basically databases. default is "production"
    dataset: process.env.SANITY_STUDIO_DATASET,
    // api version takes any date and figures out the correct version from there
    apiVersion: '2024-08-01',
    // perspectives define what kind of data you want, more on that in a second
    perspective: 'published',
    // use sanity's CDN for content at the edge
    useCdn: true
  }
  return createClient(config)
}
Through the Eleventy data cascade, we can make a new global data file for each content type, for example data/cms/aboutPage.js. Exporting a function from that file will then cause Eleventy to fetch the data for us and expose it through a cms.aboutPage variable later. We just need to pass it a query (Sanity uses GROQ as its query language) to describe which content we want to have returned.
// src/data/cms/aboutPage.js
import { getClient } from '../utils/sanity.js'

const query = `*[_type == "aboutPage"]{...}`

export default async function getAboutPage() {
  const client = getClient()
  return await client.fetch(query)
}
When an editor makes changes to the content, these changes are not published straight away but rather saved as a “draft” state in the document. Querying the Sanity API with the regular settings will not return these changes, as the default is to return only “published” data.
If we want to access draft data, we need to pass an adjusted configuration object to the Sanity client that asks for a different “perspective” (Sanity lingo for different views into your data) of previewDrafts. Since that data is private, we’ll also need to provide a secret auth token that can be obtained through the Sanity admin. Finally, we can’t use the built-in CDN for draft data, so we’ll set useCdn: false.
// utils/sanity.js
import { createClient } from '@sanity/client'

export const getClient = function () {
  // basic client config
  let config = {
    projectId: process.env.SANITY_STUDIO_PROJECT_ID,
    dataset: process.env.SANITY_STUDIO_DATASET,
    apiVersion: '2024-08-01',
    perspective: 'published',
    useCdn: true
  }

  // adjust the settings when we're running in preview mode
  if (process.env.NODE_ENV === 'preview') {
    config = Object.assign(config, {
      // tell sanity to return unpublished drafts as well
      // note that we need an auth token to access that data
      token: process.env.SANITY_AUTH_TOKEN,
      perspective: 'previewDrafts',
      // we can't use the CDN when fetching unpublished data
      useCdn: false
    })
  }

  return createClient(config)
}
By making these changes directly in the API client, we don’t need to change anything about our data fetching logic. All builds running in the preview node environment will automatically have access to the latest draft changes.
We’re almost there! We already have a way to request preview HTML for a specific URL and render it with the most up-to-date CMS data. All we’re missing now is a way to display the preview, enabling the editors to see their content changes from right within the CMS.
In Sanity, we can achieve that using the Iframe Pane plugin. It’s a straightforward way to render any external URL as a view inside Sanity’s “Studio”, the CMS Interface. Check the plugin docs on how to implement it.
The plugin will pass the currently viewed document to a function, and we need to return the URL for the iFrame from that. In our case, that involves looking up the document slug property in a little utility method and combining that relative path with our preview server’s domain:
// studio/desk/defaultDocumentNode.js
import { Iframe } from 'sanity-plugin-iframe-pane'
import { schemaTypes } from '../schema'
import { getDocumentPermalink } from '../utils/sanity'

// this function will receive the Sanity "document" (read: page)
// the editor is currently working on. We need to generate
// a preview URL from that to display in the iframe pane.
function getPreviewUrl(doc) {
  // our custom little preview server
  const previewHost = 'https://preview.codista.dev'
  // a custom helper to resolve a sanity document object into its relative URL like "/about"
  const documentURL = getDocumentPermalink(doc)
  // build a full URL
  const url = new URL(documentURL, previewHost)

  // append some query args to the URL
  // rev: the revision ID, a unique string generated for each change by Sanity
  // token: a custom token we use to authenticate the request on our preview server
  let params = new URLSearchParams(url.search)
  params.append('rev', doc._rev)
  params.append('token', process.env.SANITY_STUDIO_PREVIEW_TOKEN)
  url.search = params.toString()

  return url.toString()
}

// this part is the configuration for the Sanity Document Admin View.
// we enable the iFrame plugin here for certain document types
export const defaultDocumentNode = (S, { schemaType }) => {
  // only documents with the custom "enablePreviewPane" flag get the preview iframe.
  // we define this in our sanity content schema
  const schemaTypesWithPreview = schemaTypes
    .filter((schema) => schema.enablePreviewPane)
    .map((schema) => schema.name)

  if (schemaTypesWithPreview.includes(schemaType)) {
    return S.document().views([
      S.view.form(),
      S.view
        // enable the iFrame plugin and pass it our function
        // to display a preview URL for the viewed document
        .component(Iframe)
        .options({
          url: (doc) => getPreviewUrl(doc),
          reload: { button: true }
        })
        .title('Preview')
    ])
  }

  return S.document().views([S.view.form()])
}
Aaaand that’s it!
Near-instant live previews from right within Sanity studio.
This was quite an interesting challenge, since there are so many moving parts involved. The end result turned out great though, and it was nice to see it could be accomplished without relying on third-party serverless functions.
Please note that this may not be the route to take for your specific project though, as always: your experience may vary! 😉
This iteration of mxb.dev is already 7 years old, so some of its internal dependencies had become quite dusty. Thankfully with static sites that didn’t matter as much, since the output was still good. Still, it was time for some spring cleaning.
A big change in v3 is that Eleventy is now using ECMAScript Module Syntax (ESM). That brings it in line with modern standards for JS packages.
In “Lessons learned moving Eleventy from CommonJS to ESM”, Zach explains the motivation for the switch.
I’ve already been using ESM for my runtime Javascript for quite some time, and I was very much looking forward to get rid of the CommonJS in my build code. Here’s how to switch:
The first step is to declare your project as an environment that supports ES modules. You do that by setting the type property in your package.json to “module”:
//package.json
{
  "name": "mxb.dev",
  "version": "4.2.0",
  "type": "module",
  ...
}
Doing that will instruct node to interpret any JS file within your project as using ES module syntax, something that can import code from elsewhere and export code to others.
Since all your JS files are now modules, that might cause errors if they still contain CommonJS syntax like module.exports = thing or require('thing'). So you’ll have to change that syntax to ESM.
You don’t need to worry about which type of package you are importing when using ESM. Recent node versions support importing CommonJS modules using an import statement.
Starting with node v22, you can probably even skip this step entirely, since node will then support require() syntax to import ES modules as well.
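In practice, interop in the other direction already just works - you can keep importing CommonJS dependencies with the usual syntax (the package name here is made up):

// works in ESM even if the package itself is still CommonJS -
// node exposes its module.exports as the default export
import pkg from 'some-cjs-package'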
In an Eleventy v2 project, you’ll typically have your eleventy.config.js, files for filters/shortcodes and global data files that may look something like this:
const plugin = require('plugin-package')
// ...
module.exports = {
  something: plugin({ do: 'something' })
}
Using ESM syntax, rewrite these files to look like this:
import plugin from 'plugin-package'
// ...
export default {
  something: plugin({ do: 'something' })
}
There are ways to do this using an automated script, however in my case I found it easier to go through each file and convert it manually, so I could check if everything looked correct. It only took a couple of minutes for my site.
It’s also helpful to try running npx eleventy --serve a bunch of times in the process, it will error and tell you which files may still need work. You’ll see an error similar to this:
Original error stack trace: ReferenceError: module is not defined in ES module scope
[11ty] This file is being treated as an ES module because it has a '.js'
file extension and 'package.json' contains "type": "module".
To treat it as a CommonJS script, rename it to use the '.cjs' file extension.
[11ty] at file://mxb/src/data/build.js?_cache_bust=1717248868058:12:1
If you absolutely have to use CommonJS in some files, renaming them to yourfile.cjs does the trick.
Some minor issues you may encounter:
- Run npm install with the --legacy-peer-deps flag if some of your deps have trouble with the alpha release.
- If you use __dirname in your CJS files, you might have to replace that with import.meta.url
- If you import something like a json file, you might need to specify the type: import obj from "./data.json" with { "type": "json" }

Eleventy v3 also comes with a very useful new way to do image optimization. Using the eleventy-img plugin, you now don’t need a shortcode anymore to generate an optimized output. This is optional of course, but I was very eager to try it.
Previously, using something like an async image shortcode, it was not possible to include code like that in a Nunjucks macro (since these don’t support asynchronous function calls).
In v3, you can now configure Eleventy to apply image optimization as a transform, so after the templates are built into HTML files.
Basically, you set up a default configuration for how you want to transform any <img> element found in your output. Here’s my config:
// eleventy.config.js
eleventyConfig.addPlugin(eleventyImageTransformPlugin, {
  extensions: 'html', // transform only <img> in html files
  formats: ['avif', 'auto'], // include avif version and original file type
  outputDir: './dist/assets/img/processed/', // where to write the image files
  urlPath: '/assets/img/processed/', // path prefix for the img src attribute
  widths: ['auto'], // which rendition sizes to generate, auto = original dimensions
  defaultAttributes: {
    // default attributes on the final img element
    loading: 'lazy',
    decoding: 'async'
  }
})
Now that will really try to transform all images, so it might be a good idea to look over your site and check if there are images that either don’t need optimization or are already optimized through some other method. You can exclude these images from the process by adding a custom <img eleventy:ignore> attribute to them.
All other images are transformed using the default config.
For example, if your generated HTML output contains an image like this:
<img
  src="bookcover.jpg"
  width="500"
  alt="Web Accessibility Cookbook by Manuel Matuzovic"
/>
The plugin will parse that and transform it into a picture element with the configured specs. In my case, the final HTML will look like this:
<picture>
  <source
    srcset="/assets/img/processed/Ryq16AjV3O-500.avif 500w"
    type="image/avif"
  />
  <img
    src="/assets/img/processed/Ryq16AjV3O-500.jpg"
    width="500"
    alt="Web Accessibility Cookbook by Manuel Matuzovic"
    decoding="async"
    loading="lazy"
  />
</picture>
Any attributes you set on a specific image will overwrite the default config. That brings a lot of flexibility, since you may have cases where you need special optimizations only for some images.
For example, you can use this to generate multiple widths or resolutions for a responsive image:
<img
  src="doggo.jpg"
  width="800"
  alt="a cool dog"
  sizes="(min-width: 940px) 50vw, 100vw"
  eleventy:widths="800,1200"
/>
Here, the custom eleventy:widths attribute will tell the plugin to build an 800px and a 1200px version of this particular image, and insert the correct srcset attributes for it. This is in addition to the avif transform that I opted to do by default. So the final output will look like this:
<picture>
  <source
    sizes="(min-width: 940px) 50vw, 100vw"
    srcset="
      /assets/img/processed/iAm2JcwEED-800.avif 800w,
      /assets/img/processed/iAm2JcwEED-1200.avif 1200w
    "
    type="image/avif"
  />
  <img
    src="/assets/img/processed/iAm2JcwEED-800.jpeg"
    width="800"
    alt="a cool dog"
    sizes="(min-width: 940px) 50vw, 100vw"
    srcset="
      /assets/img/processed/iAm2JcwEED-800.jpg 800w,
      /assets/img/processed/iAm2JcwEED-1200.jpg 1200w
    "
    decoding="async"
    loading="lazy"
  />
</picture>
I ran a quick lighthouse test after I was done and using the image transform knocked my total page weight down even further! Good stuff.
I refactored some other aspects of the site as well - most importantly I switched to Vite for CSS and JS bundling. If you’re interested, you can find everything I did in this pull request.
Right now, we’re in the middle of a real renai-css-ance (the C is silent). It’s a great time to write CSS, but it can also feel overwhelming to keep up with all the new developments.
Prominent voices at conferences and on social media have been talking about the new stuff for quite some time, but real-world usage seems to lag behind a bit.
Quick question: how many of these have you actively used in production?
👉 (Disclaimer: I’ve used one.)
All of these are very useful, and support for most is pretty good across the board - yet adoption seems to be quite slow.
Granted some things are relatively new, and others might be sort of niche-y. But take container queries, for example. They were the number one feature requested by front-end devs for a looong time. So why don’t we use them more, now that they’re finally here?
From my own experience, I think there are different factors at play:
I can’t use [feature X], I need to support [old browser].
That old chestnut.
Browser support is an easy argument against most new things, and sometimes a convenient excuse not to bother learning a feature.
The answer there is usually progressive enhancement - except that’s easier to do for actual “enhancements”, if they are optional features that don’t impact the usability of a site that much.
For some of the new features, there’s no good path to do this.
CSS Layers or native nesting for example are not something you can optionally use, they’re all-or-nothing. You’d need a separate stylesheet to support everyone.
And while support for Container Queries is green in all modern browsers, people still seem reluctant to go all-in, fearing they could break something as fundamental as site layout in older browsers.
Why use [feature X] if the usual way works fine?
Some of you might be old enough to remember the time when CSS3 features first hit the scene.
Things like border radius or shadows were ridiculously hard to do back in the day. Most of it was background images and hacks, and it required a substantial amount of work to change them.
Suddenly, these designs could be achieved by a single line of CSS.
Writing border-radius: 8px instead of firing up Photoshop to make a fucking 9-slice image was such a no-brainer that adoption happened very quickly. As soon as browser support was there, nobody bothered with the old way anymore.
A big chunk of the new features today are “invisible” though - they focus more on code composition and architecture.
Layers, Container Queries, etc are not something you can actually see in the browser, and the problems they solve may not be such an obvious pain in the ass at first glance. Of course they offer tremendous advantages, but you can still get by without using any of them. That might slow down adoption, since there is no urgency for developers to switch.
I don’t know where I would even use [feature X] in my project.
The initial use-case for container queries I always heard was “styling an element that could be in the main column or the sidebar”. I think that came from a very common WordPress blog design at the time where you had “widgets” that could be placed freely in different-width sections of the site.
Nowadays, the widget sidebar isn’t as common anymore; Design trends have moved on. Of course there are plenty of other use-cases for CQs, but the canonical example in demos is usually still a card component, and people seemed to struggle for a while to find other applications.
The bigger issue (most recently with masonry grids) is that sometimes the need for a CSS feature is born out of a specific design trend. Standards move a lot slower than trends though, so by the time a new feature actually ships, the need might not be that strong anymore.
Spec authors do a very good job of evaluating the long-term benefits for the platform, but they also can’t predict the future. Personally, I don’t think the new features are tied to any specific design - but I think it’s important to show concrete, real-world usecases to get the developer community excited about them.
If you want to learn more about how container queries can help you and which specific UI problems they solve, check out "An Interactive Guide to CSS Container Queries" by Ahmad Shadeed. A fantastic resource that provides a lot of in-depth knowledge and visual examples.
Whatever the technical reasons may be, I guess the biggest factor in all of this are our own habits.
Our monkey brains still depend on patterns for problem solving - if we find a way of doing things that works, our minds will quickly reach for that pattern the next time we encounter that problem.
While learning the syntax for any given CSS feature is usually not that hard, re-wiring our brains to think in new ways is significantly harder. We’ll not only have to learn the new way, we’ll also have to unlearn the old way, even though it has become muscle memory at this point.
So how can we overcome this? How can we train ourselves to change the mental model we have for CSS, or at least nudge it in the new direction?
If we want to adopt some of the broader new architectural features, we need to find ways to think about them in terms of reusable patterns.
One of the reasons BEM is still holding strong (I still use it myself) is because it provides a universal pattern of approaching CSS. In a common Sass setup, any given component might look like this:
// _component.scss
.component {
  // block styles
  position: relative;

  // element styles
  &__child {
    font-size: 1.5rem;
  }

  // modifier styles
  &--primary {
    color: hotpink;
  }

  // media queries
  @include mq(large) {
    width: 50%;
  }
}
The BEM methodology was born in an effort to side-step the cascade. While we now have better scoping and style encapsulation methods, the basic idea is still quite useful - if only as a way to structure CSS in our minds.
I think learning new architectural approaches is easier if we take existing patterns and evolve them, rather than start from scratch. We don’t have to re-invent the wheel, just put on some new tyres.
Here’s an example that feels similar to BEM, but sprinkles in some of the new goodness:
/* component.css */

/* Layer Scope */
@layer components.ui {
  /* Base Class */
  .component {
    /* Internal Properties */
    --component-min-height: 100lvh;
    --component-bg-color: #fff;

    /* Block Styles */
    display: grid;
    padding-block: 1rem;
    min-block-size: var(--component-min-height);
    background-color: var(--component-bg-color);

    /* Child Elements, Native CSS Nesting */
    & :is(h2, h3, h4) {
      margin-block-end: 1em;
    }

    /* States */
    &:focus-visible {
      scroll-snap-align: start;
    }
    &:has(figure) {
      gap: 1rem;
    }

    /* Style Queries as Modifiers */
    @container style(--type: primary) {
      font-size: 1.5rem;
    }

    /* Container Queries for component layout */
    @container (min-inline-size: 1000px) {
      --component-min-height: 50vh;
      grid-template-columns: 1fr 1fr;
    }

    /* Media Queries for user preferences */
    @media (prefers-color-scheme: dark) {
      --component-bg-color: var(--color-darkblue);
    }
    @media (prefers-reduced-motion: no-preference) {
      ::view-transition-old(component) {
        animation: fade-out 0.25s linear;
      }
      ::view-transition-new(component) {
        animation: fade-in 0.25s linear;
      }
    }
  }
}
My preferred way of learning new techniques like that is by tinkering with stuff in the safe playground of a side project or a personal site. After some trial and error, a pattern might emerge there that sort of feels right. And if enough people agree on a pattern, it could even become a more common convention.
When learning new things, it’s important not to get overwhelmed. Pick an achievable goal and don’t try to refactor an entire codebase all at once.
Some new features are good candidates to test the water without breaking your conventions:
You can try to build a subtle view transition as a progressive enhancement to your site, or you could build a small component that uses container queries to adjust its internal spacing.
In other cases, browser support also does not have to be 100% there yet. You can start using logical properties in your project today and use something like postcss-logical to transform them to physical ones in your output CSS.
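As a starting point, a self-contained component that adjusts its internal spacing could look something like this (a small sketch, class names made up):

/* the component measures against its own wrapper, not the viewport */
.card-wrapper {
  container-type: inline-size;
}

.card {
  padding: 1rem;
}

/* give the content more breathing room when the component has space */
@container (min-inline-size: 30rem) {
  .card {
    padding: 2rem;
  }
}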
Whatever you choose, be sure to give yourself enough space to experiment with the new stuff. The devil is in the details, and copy-pasting some new CSS into your code usually doesn’t give you the best insight - kick the tyres a bit!
One thing I’d really love to see more of are “best practice” examples of complete sites, using all the new goodness. They help me see the bigger picture of how all these new techniques can come together in an actual real-life project. For me, the ideal resource there are (again) the personal sites of talented people.
Answering these questions helps me to slowly nudge my brain into new ways of thinking about all this.
Having said all that: you absolutely don’t have to use all the latest and greatest CSS features, and nobody should feel guilty about using established things that work fine. But I think it helps to know which specific problems these new techniques can solve, so you can decide whether they’re a good fit for your project.
And maybe we can still learn some new CSS tricks after all.
We built a lot of interesting projects in 2023 with Codista, and we’ve had a very good year working with established clients and partners. Some of it has been quite challenging, but we managed to pull off a stellar track record of successful projects, and I’m really proud of our small company!
We also hired our first front-end developer (other than myself). I’ve posted the job listing on our own website and on Mastodon, and with the help from the web dev community, we found somebody who fits our team really well and I’m very happy with them. A big thanks to everyone who boosted the post or mentioned it at meetups!
I’ve struggled a bit with personal health issues this year. I have a moderately severe form of atopic dermatitis, an auto-immune disease of the skin. I’ve had it all my life. Some days it feels OK, while on others it feels like my skin is on fire and it’s hard to concentrate on anything else besides the impulse to scratch. If you’ve ever seen me give a talk and my face was bright red, it’s not (only) because I’ve been drinking the night before.
When I was younger I tried different forms of therapy, but none of them really worked - so I’ve become pretty much used to living with it. But then it got a lot worse in 2023. I couldn’t sleep properly anymore and it started to affect other areas of my life, so I decided to take another shot at fighting it.
I got a new doctor and started treatment with a new drug that recently got EMA approval. After half a year, I switched to a different drug, which I’m currently still evaluating. First results have been promising though, so fingers crossed that this is the one. Would certainly make my 2024 more pleasant.
I was fortunate enough to see some beautiful places this year. Even though my pre-pandemic days of traveling the more remote parts of the world are likely over, it’s still nice to get out and explore again.
I went to Zurich for a client workshop in the spring, and immediately followed that up with a trip to Amsterdam for CSS Day 2023. That conference was one of the best I’ve ever been to. Even though I caught a pretty bad cold and had to skip some of the fun - the talks, people and just overall atmosphere of that event were amazing.
I’m grateful I got to see so many familiar faces and talk to some of my web friends in person. I also left the conference feeling very inspired and (as always) a little guilty that I can’t find enough time for community work and writing. Hopefully next year my schedule will allow for a bit more of that.
In the summer, we went for a vacation in the south of Croatia and spent two weeks on the Dalmatian coast.
We also did some more work on our small garden house in Lower Austria and had a few lovely days in autumn hiking the surrounding hills and vineyards.
I enjoyed lots of stuff this year, so I’ll just give you the top 3 of each:
I think I’ve grown a bit weary of the tech industry this year. I still love the web and I love what I do for work, but I lost some of my interest in new developments, new frameworks, the hot tech of the day. I just don’t care as much anymore.
It may also be the rising topic of AI that is popping up everywhere you look, or the crypto-esque attitude that seems to go along with it. Instead of being excited about new possibilities, I feel a bit disheartened by the trends I see. AI looks like a great tool, but everything I read about how it’s actually applied comes from the same hyper-capitalist mindset that brought us gems like bitcoin mining and NFTs. I don’t know, maybe I’m just tired.
There are other trends as well that I feel more optimistic about. Ones that call for a weirder web with more original, human content. It could just be wishful thinking, but I’ve caught glimpses of that version of the future for a while now. Here’s hoping.
Well, that’s all folks. See you in 2024!
Ok fine, I trash-talked Manuel’s website on Mastodon and he correctly pointed out that while I wrote an impressive two (2) blogposts last year, he wrote around 90 (while also doing talks, audits, raising an infant daughter and probably training for a marathon or some shit like that, I mean let’s face it the guy is annoyingly productive).
I know I was slacking off a bit and those numbers speak for themselves. While I generally want to write, ideas rarely make it all the way to a published post.
Like many others, “write more” is high up on my imaginary list of life improvements and although I don’t usually do new year’s resolutions, now feels like a good time to re-evaluate what’s stopping me there.
I came up with seven reasons that I use to justify why I’m not writing. In a confusing twist of perspective, I’m also going to try and talk myself out of them by explaining to you, dear Reader, why they are bullshit.
This is the big one, right? We all have other things to do, and writing takes time. In my case, I was really swamped with client projects and other work last year.
I think if you actually want to write though, it’s more a lack of routine than a lack of time itself. People who consistently produce content have learned to make a habit out of it. I read “Atomic Habits” by James Clear a couple of months ago and its message kinda stuck with me. It’s about conditioning yourself to do certain things more often by building a routine.
Take 15 minutes every day before you check your email and just write. Or do it on your commute to work if possible! The trick is to use amounts of time that are so small you can’t possibly not fit them in your schedule. It may not be enough for a fully-fledged article, but enough to build a habit.
It’s also worth noting that your writing doesn’t always have to be well-crafted longform blogposts. It can just be a few paragraphs about your thoughts, linking out to other stuff. Chris does a great job at this, and others have recently adopted even shorter formats, mimicking social media posts in length.
The classic impostor syndrome comes out here. I don’t know anything special, so why bother?
The truth is that everyone has something interesting to say because everyone faces different challenges. You don’t have to go viral and make buzzword-riddled thinkpieces about the current hot topic - there are enough sites that already do that, and AI will soon produce a shitton more of it.
A better plan is to write about what you know and experience in your day-to-day life instead. Authentic posts are always helpful, and you will solidify your own knowledge in the process too.
Here are a few common writing prompts and examples for blogposts I love to read:
This one is especially popular among developers. “How can I possibly write anything before the typography is perfect? How can I ever publish anything when comments are not implemented yet?” We love to tinker with our websites and that’s cool, but at some point it gets in the way of actually using your blog and creating content.
Despite what we tell ourselves, it really doesn’t matter too much how a blog looks or what features it has. People come for the content, and as long as they can read it, they’re happy. Throw in an RSS feed so everyone can use their own reader and you’re golden. It pains me to say it but Manuel is absolutely correct here.
And if he can be “redesigning in the open” for three years while churning out massive amounts of CSS knowledge, your site will be fine too. 😉
Sometimes I’ve got a great idea for a post, but an initial Google search reveals that someone else already beat me to it. The novelty has worn off and that other post is way better than what I could have come up with anyway, I tell myself.
That’s not a real reason of course, nobody has a monopoly on a subject. Others may have already covered the topic - but not in your voice, not from your perspective. You could write a post about the exact same thing and still provide valuable information the other author has missed. Or you could approach the subject from a different angle, for a different skill-level or for a different audience.
Another way is to read the material that is already available and take notes about all the questions you still have afterwards. Try to actually do the thing (write the code, use the app, whatever) and see what other information would have been helpful for you to have. Write that!
There are writing ideas that are inspired by some event or conversation. Maybe something big happened on the web or I’ve had a particularly interesting discussion on social media. So I sketch out a quick outline for a post and stick it in my drafts folder, thinking I’ll get back to it later.
Three weeks pass and that lonely draft sits around gathering dust, and by the time I remember it, the moment has passed. The conversation has moved on, and so the post is abandoned and eventually deleted.
The internet moves pretty fast and there’s always a “hot topic of the day”, but that doesn’t mean that nobody is interested in anything else. A beautiful thing about blogs is that they’re asynchronous. You can just write things and put them out there, and even if they don’t hit a nerve immediately, people can discover them in their own time.
Older posts can also get re-discovered years later and get a second wind, not to mention that people constantly search for specific things - and your post might be just what they’re looking for then! Some of my old posts about webrings and the IndieWeb have recently found readers again since Twitter has started going down the drain. You never know!
Most of the (tech) blogs I read are in English, even though their authors are from all over the world. For a non-native English speaker like myself, it can sometimes be daunting to write in a foreign language. This is a barrier when it comes to producing “polished” text - there are extra brain cycles involved in getting your ideas to “sound” right.
This is probably not a big deal though. People don’t expect to read world-class literature when they come to check out a blogpost about “Lobster Mode”. As long as you can get your point across, it’s fine if you don’t use fancy words. It can also be an advantage: for an international audience, simple English might even be easier to understand.
That being said, this is a usecase where AI might actually be helpful! While LLMs like GPT-3 and co are useless at creating actual content or original thoughts, they’re great at making sentences sound nice. Tools like Jasper can rewrite your copy and improve the tone without changing the contained information. Sort of like prettier but for English prose.
Let’s be honest: nobody likes to shout into the void. Everyone wants their content to be seen, and social validation is the sweet sweet dopamine reward we all crave.
There’s nothing wrong with sharing your work on social media or popular orange link aggregators either, but sometimes there just won’t be much of a reaction after you publish. That can feel frustrating - but ultimately I think obsessing over vanity metrics is not worth it anyway. Just because something doesn’t make the frontpage of Reddit does not mean it’s not valuable.
Don’t underestimate how many people actively read personal blogs though! The web dev community is especially fond of RSS, and with the Fediverse gaining more and more popularity, original content on your own domain has a much better reach now than before.
Who’s gonna read your personal blog because it has an RSS feed? I’m gonna read your personal blog because it has an RSS feed. pic.twitter.com/mtcyKhEVet
— Chris Coyier (@chriscoyier) January 7, 2020
Right, I realize it’s a bit weird to write a post about how I don’t write posts. But I hope to push back on this in 2023 and find more time for writing. I also suspect that other people have similar reasons and maybe talking about them helps a bit.
In any case, that’s one more post in the bank!
Since there’s a good chance that you - like me - are involved in web development and/or have a special interest in technology, I want you to play along and engage in a thought experiment for this post:
Imagine you’re a regular user.
Imagine you have never heard of git branches, postgres or a “webpack config” (lucky you). You really don’t care about all of that, but you do care about your friends and your connections online.
Ever since Elon took over (and actually even some time before that) Twitter has been feeling increasingly hostile. People start leaving, and you hear them talk about alternatives. You’re curious, so you type “mastodon” into Google and see what comes up.
You find the website and want to sign up. It tells you to choose a server:
Ok wait, you wanted to join mastodon, what’s all this now? Tildes? Furries? Some Belgian company? Why do you have to apply? Everyone else had that mastodon.social handle - Can’t you just use that? The real one? What the hell is a fediverse?
Confused, you close the site. This seems like it’s made for someone else. Maybe you’ll stick around on Twitter for a while longer, while it slowly burns down.
You can be a developer again now.
You and I know the reasons for that experience. We know that a decentralized system has to look like this, and that the choice of instance doesn’t even matter all that much. But I’ve heard this exact story a couple of times now, all from people outside my IT filter bubble.
Why was it so easy to drive these people away?
Being on the web has been heavily commoditized.
In the days of IRC and message boards, or later in the 2000s blogging era, federation was very much the norm. It was the default mode of the web: people grouping together in small communities around shared interests, but scattered on many different sites and services. It was normal to explore, find new places and discover new things by venturing out.
Through the rise of social media though, people have gotten used to being in one place all the time. Now we expect a system that’s easy to set up, handles millions of users at once and makes every interaction frictionless. We expect it to know what we want, and give it to us instantly. Anything too weird or tech-y and you start to lose people.
Mastodon is not supposed to be a second Twitter. Many of its features were designed specifically to avoid becoming another content silo and repeating the same mistakes, yet the assumption seems to be that everything should stay the same as before.
It’s like everyone has spent the last few years in a giant all-inclusive resort, screaming at each other for attention at the buffet. Now we’re moving into nice little bed-and-breakfast places, but we’re complaining because it takes slightly more effort to book a room, and the free WIFI isn’t as fast.
Maybe it’s time to rethink some of these expectations. Maybe we need some of that early internet vibe back and be ok with smaller, closer communities. Maybe we can even get some of the fun back and start exploring again, instead of expecting everything to be automatically delivered to us in real time.
We can remind ourselves of what social media used to be: a way to connect around shared interests, talk to friends, and discover new content. No grifts, no viral fame, no drama.
Adjusting expectations is one part - but at the same time, we as developers have to try and make these systems as approachable as possible without compromising on their independence. A lot of alternative content publication methods are still very much geared towards the IT bubble.
You could loosely map some of them by how easy it is to get started if you have no technical knowledge:
Generally speaking: The more independence a technology gives you, the higher its barrier for adoption.
I love the IndieWeb and its tools, but it has always bothered me that at some point they basically require you to have a web development background.
How many of your non-tech friends publish RSS feeds? Have you ever seen webmentions used by someone who isn’t a developer? Hell, even for professional devs it’s hard to wire all the different parts together if you want to build a working alternative to social media.
If you want the independence and control that comes with some of these IndieWeb things, you just have to get your hands dirty. You can’t do it without code, APIs, servers and rolling your own solutions. It’s just harder.
My point is this: it shouldn’t be.
Owning your content on the web should not require extensive technical knowledge or special skills. It should be just as easy as signing up for a cellphone plan.
I know it’s no small feat to lower that barrier. Making things feel easy and straightforward while handling the technical complexity behind them is quite a challenge. Not to mention the work and financial cost involved in running systems that don’t generate millions of ad revenue.
Mastodon, Ghost, Tumblr, micro.blog and others are working hard on that frontier; yet I feel they are still not widely used by the average person looking to share their thoughts.
I think we’re at a special moment right now. People have been fed up with social media and its various problems (surveillance capitalism, erosion of mental health, active destruction of democracy, bla bla bla) for quite a while now. But it needs a special bang to get a critical mass of users to actually pack up their stuff and move.
When that happens, we have the chance to build something better. We could enable people to connect and publish their content on the web independently – the technology for these services is already there. For that to succeed though, these services have to be usable by all people - not just those who understand the tech.
Just like with migration to another country, it takes two sides to make this work: Easing access at the border to let folks in, and the willingness to accept a shared culture - to make that new place a home.
]]>Growing up with the web as a teenager meant having access to an infinite treasure chest of content. A lot of that content was spread across blogs, forums and personal websites.
The overwhelming motivation behind it seemed to be “I made something, here it is”. Sharing things for the sake of showing them to the world. Somebody had created something, then put it online so you could see it. Visit their website (wait for the dial-up to finish), and it’s yours.
Follow any link on the web today and you’ll likely be met with a different scenario:
Notice how everything about that interaction is designed to extract value from your visit. The goal here is not for you to read an article; it’s to get your analytics data, your email, your phone and your money.
It’s the symptom of a culture that sees the web purely as a business platform. Where websites serve as elaborate flytraps and content as bait for unsuspecting users.
In this culture, the task of the self-appointed web hustler is to build something fast & cheap, then scale it as much as possible before eventually cashing out.
web3 and NFTs are the latest evolution of this culture. The latest attempt to impose even more artificial locks and transactions on users, to extract even more money.
This is the web as envisioned by late-stage capitalism: a giant freemium game where absolutely everyone and everything is a “digital asset” that can be packaged, bought and sold.
Sure, the web has changed since the 90s. It has “grown up”.
Of course there are lots of legitimate reasons to monetize, and creators deserve to be compensated. It’s not about people trying to make a buck. It’s about those treating the web simply as a market to run get-rich-quick schemes in, exploiting others out of pure greed.
We’ve gotten so used to it that some can’t even imagine the web working any other way - but it doesn’t have to be like this.
At its very core, the rules of the web are different than those of “real” markets. The idea that ownership fundamentally means that nobody else can have the same thing you have just doesn’t apply here. This is a world where anything can easily be copied a million times and distributed around the globe in a second. If that were possible in the real world, we’d call it Utopia.
It’s also a world that can be shaped by the consumer:
Large companies find HTML & CSS frustrating “at scale” because the web is a fundamentally anti-capitalist mashup art experiment, designed to give consumers all the power.
— Mia, with valuable secrets 🤫 (@TerribleMia) November 24, 2019
Sorry I didn’t quote tweet anything in order to say that.
This “mashup art experiment”, as Mia calls it, is what made the web great in the first place. It’s the reason it became a global phenomenon and much of it is centered around the idea that digital content is free and abundant.
Resource Scarcity doesn’t make sense on the web. Artificially creating it here serves no other purpose than to charge money for things that could easily have been free for all. Why anyone would consider that better is beyond me.
The online game Wordle recently took the world by storm. To the utter shock of many, it is just a free piece of content. A free and open web game millions can enjoy, no strings attached.
Its creator, Josh Wardle, originally built the game for his partner and put it online. “I made something, here it is”. Despite its success, he had no intention to monetize it through apps or subscriptions - and the world is richer for it. When questioned about it, he said this:
I think people kind of appreciate that there’s this thing online that’s just fun. It’s not trying to do anything shady with your data or your eyeballs. It’s just a game that’s fun.
Because the notion that monetization is the only worthwhile goal on the web is so widespread, this is somehow a very controversial take. You can actually stand out of the crowd by simply treating the web platform as what it is: a way to deliver content to people.
Despite what web3 claims, it’s possible to “own” your content without a proof of it on the blockchain (see: IndieWeb). It’s also possible to create things just for the sake of putting them out into the world.
The best growth hack is still to build something people enjoy, then attach no strings to it. You’d be surprised how far that can get you.
Make free stuff! The web is still for everyone.
👉 Update: On Feb 1, Wordle was eventually sold to the New York Times for upwards of a million dollars. Josh Wardle claims the game will still remain free to play for all.
]]>Ethan, who coined the term responsive web design over a decade ago, has recently said that media-query-less layouts are certainly within bounds:
Can we consider a flexible layout to be “responsive” if it doesn’t use any media queries, but only uses container queries? [...] I’d be inclined to answer: yes, absolutely.
Over at CSS-Tricks, Chris had similar thoughts. He issued a challenge to examine how and where media queries are used today, and if they will still be necessary going forward:
A common refrain, from me included, has been that if we had container queries we’d use them for the vast majority of what we use media queries for today. The challenge is: look through your CSS codebase now with fresh eyes knowing how the @container queries currently work. Does that refrain hold up?
Fair enough.
I took the bait and had a look at some of my projects - and yes, most of what I use @media for today can probably be accomplished by @container at some point. Nevertheless, I came up with a few scenarios where I think media queries will still be necessary.
While container queries can theoretically be used to control any element, they really shine when applied to reusable, independent components. The canonical example is a card component: a self-contained piece of UI you can put anywhere.
Page layouts, on the other hand, are better suited for media queries in my opinion. Page layouts are usually at the very top level of the DOM, not nested in another container. I’ve never encountered a case where the main page layout had to adapt to any other context than the viewport.
.layout {
display: grid;
}
@media (min-width: 60em) {
.layout {
grid-template-rows: 4rem 1fr auto;
grid-template-columns: 25% 1fr;
grid-template-areas:
"header header"
"sidebar main"
"footer footer";
}
}
Another good use case for media queries is to set global design tokens, like spacing or font-sizes. With CSS custom properties it’s now much easier to have fine-grained control over global styles for different devices.
For example, you might want to have bigger text and more whitespace on a large TV than you want for a mobile screen. A larger screen means the user’s head will be physically farther away.
It only makes sense to use a media query there - since the reason for the change is the size of the device itself, not the width of any specific element.
:root {
/* Font Sizes */
--font-size-headline-l: 1.875rem;
--font-size-headline-m: 1.75rem;
--font-size-headline-s: 1.5rem;
--font-size-copy-l: 1.125rem;
--font-size-copy-s: 0.875rem;
/* Global Spacing */
--spacing-x: 1rem;
--spacing-y: 1rem;
}
@media (min-width: 48em) {
:root {
--font-size-headline-l: 2.5rem;
--font-size-headline-m: 2rem;
--font-size-headline-s: 1.75rem;
--font-size-copy-l: 1.25rem;
--font-size-copy-s: 1rem;
--spacing-x: 2rem;
--spacing-y: 2rem;
}
}
Screen dimensions are not the only things we can detect with media queries. The Media Queries Level 4 Spec (with Level 5 currently a working draft) lists many different queries related to user preference, like:
- prefers-reduced-motion
- prefers-contrast
- prefers-reduced-transparency
- prefers-color-scheme
- inverted-colors
We can use these to better tailor an experience to the current user’s specific needs.
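For instance, here’s a minimal sketch of how two of these preference queries might be used - the class name and color values are placeholders, not taken from a real stylesheet:
/* turn off non-essential animation for users who prefer reduced motion */
@media (prefers-reduced-motion: reduce) {
  .fancy-banner {
    animation: none;
    transition: none;
  }
}
/* switch to a dark palette if the OS is set to a dark color scheme */
@media (prefers-color-scheme: dark) {
  :root {
    --color-bg: #16161a;
    --color-text: #fffffe;
  }
}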
Other media queries allow for micro-optimizations based on a device’s input method (i.e. touch or mouse):
/* fine pointers (mouse) can hit smaller checkboxes */
@media (pointer: fine) {
input[type="checkbox"] {
width: 1rem;
height: 1rem;
border-width: 1px;
border-color: blue;
}
}
/* coarse pointers (touch) need larger hit areas */
@media (pointer: coarse) {
input[type="checkbox"] {
width: 2rem;
height: 2rem;
border-width: 2px;
}
}
Finally, there are actual “media type” queries like @media print that won’t go anywhere. And there are experimental ideas being discussed for new media queries, like this one for “foldable” devices:
:root {
--sidebar-width: 5rem;
}
/* if there's a single, vertical fold in the device's screen,
expand the sidebar width to cover the entire left side. */
@media (spanning: single-fold-vertical) {
:root {
--sidebar-width: env(fold-left);
}
}
main {
display: grid;
grid-template-columns: var(--sidebar-width) 1fr;
}
BTW: You can read more about this in "The new responsive: Web design in a component-driven world" by Una Kravets.
Components that are taken out of the normal document flow don’t have to care about their containers. Some UI elements are fixed to the viewport itself, usually oriented along an edge of the screen.
Have a look at Twitter’s “Messages” tab at the bottom of the screen for example. Its relevant container is the window, so it makes sense to use a media query here and only apply position: fixed at some breakpoint.
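A rough sketch of that pattern - the class name and breakpoint are made up for illustration, this is not Twitter’s actual CSS:
/* hypothetical "messages" panel, sitting in the normal flow on small screens */
.messages-tab {
  width: 100%;
}
/* on wider viewports, pin it to the bottom right edge of the window */
@media (min-width: 62em) {
  .messages-tab {
    position: fixed;
    right: 1rem;
    bottom: 0;
    width: 24rem;
  }
}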
The current implementation of @container only allows querying the width of an element (its “inline” axis), not its height.
👉 Update: Miriam tells me that it is possible to query the height of containers, provided they are defined as size rather than inline-size. The exact value name of this is still in flux at the time of writing.
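If you want to experiment with that, a sketch might look something like the following - the selectors are made up, and since the spec was still moving at the time, treat the exact syntax as an assumption:
/* a container that can be queried on both axes
   (note: a "size" container needs a definite height of its own) */
.panel-wrapper {
  container-type: size;
  container-name: panel;
  height: 100vh;
}
/* tone the component down if its container gets really tall */
@container panel (min-height: 60em) {
  .panel {
    height: 75%;
  }
}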
Style adjustments in relation to width are probably the primary use case for most UI elements anyway, but there are still cases where screen height is an issue. Here’s an example from a “hero image” component:
.hero {
display:flex;
flex-direction: column;
height: 100vh;
}
/* if the screen is really tall, don't fill all of it */
@media (min-height: 60em) {
.hero {
height: 75vh;
}
}
While I think container queries will eventually replace most “low level” responsive logic, there are still a lot of good use cases for trusty media queries.
A combination of both techniques will probably be the best way forward. @media can handle the big picture stuff, user preferences and global styles; @container will take care of all the micro-adjustments in the components themselves.
A perfect team!
]]>I came up with this demo of a book store. Each of the books is draggable and can be moved to one of three sections, with varying available space. Depending on where it is placed, different styles will be applied to the book. The full source code is up on Codepen. Here’s how it looks:
This demo currently only works in Chrome Canary. Download the latest version, then enable Container Queries under chrome://flags to see them in action.
Each of these books is a custom element, or “web component”. They each contain a cover image, a title and an author. In markup they look like this:
<book-element color="#ba423d">
<img slot="cover" src="/books/1984.avif" alt="cover by shepard fairey" />
<span slot="title">1984</span>
<span slot="author">George Orwell</span>
</book-element>
This then gets applied to a template which defines the internal Shadow DOM of the component. The <slot> elements in there will get replaced by the actual content we’re passing in.
<template id="book-template">
<style></style>
<article class="book">
<div class="front">
<slot name="cover"></slot>
</div>
<div class="meta">
<h2 class="title"><slot name="title"></slot></h2>
<p class="author"><slot name="author"></slot></p>
</div>
</article>
</template>
Alright, nothing too fancy yet, just some basic structure.
The magic happens when we apply some internal styling to this. Everything inside that <style> tag will be scoped to the component - and since styles can not leak out of the shadow DOM and we can’t (easily) style its contents from the outside, we have real component encapsulation.
Container Queries are one of the last few missing puzzle pieces in component-driven design. They enable us to give components intrinsic styles, meaning they can adapt themselves to whatever surroundings they are placed in.
The new key property there is container-type - it lets us define an element as a container to compare container queries against. A value of inline-size indicates that this container will respond to “dimensional queries on the inline axis”, meaning we will apply different styles based on its width.
We can also give our container a name using the container-name property. It is optional in this example, but you can think of it in the same way that grid-area lets you define arbitrary names to use as references in your code later.
<template id="book-template">
<style>
/* Use Web Component Root as the Layout Container */
:host {
display: block;
container-type: inline-size;
container-name: book;
}
</style>
...
</template>
In the bookstore demo, I created three variants that depend on the width of the component’s :host (which translates to the <book-element> itself). I’ve omitted some of the styling for brevity here, but this part is where we define the multi-column or 3D book styles.
/* Small Variant: Simple Cover + Title */
@container (max-width: 199px) {
.book {
padding: 0;
}
}
/* Medium Variant: Multi-Column, with Author */
@container (min-width: 200px) and (max-width: 399px) {
.book {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 1rem;
}
}
/* Large Variant: 3D Perspective */
@container (min-width: 400px) {
.book {
position: relative;
transform-style: preserve-3d;
transform: rotateY(-25deg);
}
}
By adding Dragula.js to enable drag-and-drop functionality, we can then move the individual components around. As soon as a component is moved to a different section in the DOM, its internal styles are re-calculated to match the new environment, and the corresponding CSS block is applied. Magic!
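The drag-and-drop wiring itself is only a few lines - roughly like this, assuming Dragula is loaded via a script tag and using the demo’s layout sections:
// drag-and-drop setup (sketch)
// Dragula takes a list of container elements and lets you drag
// their children back and forth between them.
const sections = [
  document.querySelector('.stage'),
  document.querySelector('.itemlist'),
  document.querySelector('.cart')
]
dragula(sections)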
Now theoretically, we could have achieved a similar effect by using the cascade itself. We could for example apply the 3D styles to all .books inside .stage. But that would have some problems:
- if .stage ever gets too narrow, it would break
It’s generally a good idea in CSS to separate “layout” from “content” components and let each handle their own specific areas of responsibility. I like to think of Japanese bento boxes as a metaphor for this: a container divided into specific sections that can be filled with anything.
For example, the layout for our bookstore looks like this:
It’s a grid divided into three sections, the middle one containing a nested flexible grid itself.
<div class="layout">
<div class="stage"></div>
<main class="content">
<ul class="itemlist"></ul>
</main>
<div class="cart"></div>
</div>
/* main layout sections */
.stage,
.content,
.cart {
padding: var(--spacing);
}
/* nested sub-grid */
.itemlist {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
gap: var(--spacing);
}
/* desktop layout */
@media (min-width: 1024px) {
.layout {
display:grid;
height: 100vh;
grid-template-columns: 480px 1fr 130px;
}
}
The parts of the layout are only concerned with the alignment and dimensions of boxes. They have no effect whatsoever on their children other than giving them a certain amount of space to fill. Just like a bento box, it doesn’t care what we put into it, so we could easily re-use the layout for a completely different product. It is content-agnostic.
That’s why Container Queries pair so well with Web Components. They both offer ways to encapsulate logic to build smart, independent building blocks. Once they’re defined, they can be used anywhere.
Container Queries bring us one step closer to “Intrinsic Layouts” and a future of truly independent, component-driven design. Exciting stuff ahead!
While some static site generators have a standardized way of handling assets, Eleventy does not. That’s a good thing - it gives you the flexibility to handle this any way you want, rather than forcing an opinionated way of doing things on you that might not fit your specific needs.
That flexibility comes at a price though: you need to figure out your preferred setup first. I’ve tinkered with this a lot, so I wanted to share my learnings here. BTW: My personal setup “Eleventastic” is also available on Github.
The most common requirement for an asset pipeline is to build CSS and Javascript. That process can have different flavors - maybe you need to compile your stylesheets through Sass or PostCSS? Maybe you want Tailwind? Maybe your Javascript is “transpiled” in Babel (translated from modern ES6 to the more compatible ES5 syntax) or bundled by webpack? Or maybe you want to do something entirely different, like build SVG icon sprites or optimize images?
Whatever you want to achieve, it usually involves plugging some tool into your Eleventy workflow. I’ve looked at several ways to tackle that problem - here are a few possible approaches you can take.
One solution is to let Eleventy handle just the SSG part (producing HTML) and define other processes to take care of your assets outside of it. The most common way to do this is through npm scripts. If you’re not familiar with these, they are essentially shortcuts to run node commands, defined in your package.json file.
Some examples using this approach:
So if you wanted to, say, compile your CSS from a bunch of Sass files, you could set up your npm scripts like this:
// package.json
"scripts": {
"watch:sass": "sass --watch _src/sass:_site/assets/styles",
"build:sass": "sass _src/sass:_site/assets/styles",
"watch:eleventy": "eleventy --serve",
"build:eleventy": "eleventy",
"start": "npm-run-all build:sass --parallel watch:eleventy watch:sass",
"build": "npm-run-all build:eleventy build:sass"
}
The watch:sass and build:sass scripts here both run the Sass compilation command, just with a different configuration depending on context.
With utilities like npm-run-all, you can even run multiple scripts at once. So one “main command” like npm start will run Eleventy and simultaneously start watching your Sass files for changes, and recompile them when they do.
This solution is extremely flexible. There are node tools for everything, and there’s no limit to what you can do. However, depending on how complex your build is, the setup can get a bit unwieldy. If you want to manage multiple asset pipelines that have to run in a specific order with a specific configuration, it’s not that easy to keep track of things.
And since each of these scripts is a separate process that runs outside of Eleventy, it has no knowledge about any of them. You can tell Eleventy to watch for changes that these external builds cause, but things can get complex if tasks depend on each other. You can also run into situations where multiple passes are required to achieve the desired output, and since Eleventy can’t optimize for processes outside of itself, large pages can take longer to build.
Another popular solution is to use Gulp to manage assets. Although it is not the hottest new tech on the block anymore (pssst: it’s OK to use tools that are older than a week), it’s still a perfect tool for the job: Gulp takes in a bunch of source files, runs them through any transformations you want and spits out static files at the end. Sounds exactly right!
Some examples using this approach:
Gulp is a node-based task runner that lets you define your asset pipelines as functions in a “Gulpfile” like this:
// Gulpfile.js
const gulp = require('gulp')
const sass = require('gulp-sass')
// define sass compilation task
gulp.task('styles', function() {
return gulp
.src('/main.scss')
.pipe(
sass({
precision: 10,
outputStyle: 'expanded'
}).on('error', sass.logError)
)
.pipe(gulp.dest('/assets/styles'))
})
// define script bundling task
gulp.task('scripts', ...)
// Run the different tasks in the asset pipeline
gulp.task('assets', gulp.parallel('styles', 'scripts', 'icons', 'whatever'))
Then you kick things off from a single npm script like this:
"scripts": {
"assets": "gulp:assets",
"build": "eleventy --serve",
"start": "npm-run-all --parallel build assets"
}
This is more readable and versatile than npm scripts, but really you’re doing the same thing under the hood. Gulp runs all the different processes behind the scenes and outputs the finished .css or .js files into our build folder.
The drawback here is that it locks you into the Gulp world of doing things. You often need gulp-wrapper packages for popular tools (e.g. gulp-sass instead of node-sass) to work with the “streaming” nature of it. Plus you’re still running external builds, so all of the pitfalls that come with npm scripts still apply.
The underlying issue with both these methods is the same: they need external build processes. That’s why some Eleventy setups are going a slightly different route: instead of running asset pipelines on the outside, they teach Eleventy itself to handle them. That way everything runs through a single, integrated process.
Some examples using this approach:
Think of your assets as just another static “page” here. Instead of markdown, a template takes Sass or ES6 as input, and instead of generating HTML, it runs it through a tool like node-sass or webpack and outputs CSS or JS.
By leveraging Javascript templates, you can configure Eleventy to process almost any file you want. To use them, first add the 11ty.js extension to the list of recognized input formats in your .eleventy.js config file:
// .eleventy.js
module.exports = function (eleventyConfig) {
// add "11ty.js" to your supported template formats
return {
templateFormats: ['njk', 'md', '11ty.js']
}
}
Now you can set up your asset pipeline by making a new template somewhere in your input folder. Let’s call it styles.11ty.js for example. It could look something like this:
// styles.11ty.js
const path = require('path')
const sass = require('node-sass')
module.exports = class {
// define meta data for this template,
// just like you would with front matter in markdown.
async data() {
return {
permalink: '/assets/styles/main.css',
eleventyExcludeFromCollections: true,
entryFile: path.join(process.cwd(), '/main.scss')
}
}
// custom method that runs Sass compilation
// and returns CSS as a string
async compileSass(options) {
return new Promise((resolve, reject) => {
const callback = (error, result) => {
if (error) reject(error)
else resolve(result.css.toString())
}
return sass.render(options, callback)
})
}
// this function is mandatory and determines the contents of the
// generated output file. it gets passed all our "front matter" data.
async render({ entryFile }) {
try {
return await this.compileSass({ file: entryFile })
} catch (error) {
throw error
}
}
}
The permalink property here lets you define which file the template generates and where to put it. You can use any type of data as input, then transform it somehow and return it in the render method. We’ve essentially done the same thing as defining a Sass task in Gulp, but this time it’s part of the Eleventy build itself!
This gives you even more control over the process. For example - if the compilation fails, you can use that information in the build. You can catch errors in the Sass code and display a message as an overlay in Eleventy’s dev server:
Check out the Eleventastic source to see how to achieve this. (HT to “Supermaya” by Mike Riethmuller for the idea)
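The general idea is a variation of the render method above: instead of throwing, the catch branch returns CSS that prints the error on screen. A simplified sketch (not the exact Eleventastic code):
// styles.11ty.js (sketch): render an error overlay instead of failing the build
async render({ entryFile }) {
  try {
    return await this.compileSass({ file: entryFile })
  } catch (error) {
    // output CSS that displays the Sass error message on the page
    const message = error.message.replace(/"/g, "'").replace(/\s+/g, ' ')
    return `body::before {
      content: "Sass build error: ${message}";
      display: block;
      position: fixed;
      top: 0;
      left: 0;
      right: 0;
      z-index: 9999;
      padding: 1em;
      background: #c00;
      color: #fff;
      font-family: monospace;
    }`
  }
}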
A single template can also build multiple files this way. Using Eleventy’s pagination feature, you can e.g. generate different Javascript bundles from different source files:
// scripts.11ty.js
const path = require('path')
const ENTRY_POINTS = {
app: 'app.js',
comments: 'comments/index.js',
search: 'search/index.js'
}
module.exports = class {
// again, the data() function does essentially the same
// as defining front matter in a markdown file.
async data() {
return {
// define a custom property "entryPoints" first
entryPoints: ENTRY_POINTS,
// then take each of the files in "entryPoints"
// and process them separately as "bundleName"
pagination: {
data: 'entryPoints',
alias: 'bundleName',
size: 1
},
// for each bundle, output a different Javascript file
permalink: ({ bundleName }) => `/assets/scripts/${bundleName}.js`,
// keep the scripts.11ty.js itself out of collections
eleventyExcludeFromCollections: true
}
}
// a custom helper function that will be called with
// each separate file the template processes.
async compileJS(bundleName) {
const entryPoint = path.join(process.cwd(), ENTRY_POINTS[bundleName])
// do compilation stuff in here, like
// run file through webpack, Babel, etc
// and return the result as a string
// --- omitted for brevity ---
return js
}
// output the compiled JS as file contents
async render({ bundleName }) {
try {
return await this.compileJS(bundleName)
} catch (err) {
console.log(err)
return null
}
}
}
I personally prefer the fully-integrated way of doing things, because it’s easier for my brain to think of assets this way. HTML, CSS, JS, SVG: it’s all handled the same. However, your personal preference might differ. That’s OK - there really is no “right way” of doing this.
The beauty of unopinionated tools like Eleventy is that you get to choose what fits you best. If it works, it works! 😉
]]>This feature was first made popular by Medium, where authors can pick a “top highlight” in a post and hovering it will show a tooltip with sharing options. I wanted something like this for independent blogging too, so I came up with a custom solution.
Here’s how that looks in action:
The base of this feature is a <mark> tag wrapped in a custom <share-highlight> element.
If the Web Share API is supported (currently in Safari, Edge and Android Chrome), clicking the element will bring up your share options and insert the quoted text and a link to the current page. You can share it on any platform that registers as a share target.
If the API is not supported, the component will fall back to sharing on Twitter via tweet intent URL. The tooltip will show the Twitter icon and clicking the highlight opens a new tab with a pre-filled tweet:
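Under the hood, the share logic boils down to something like this - a simplified sketch, not the exact component code:
// share logic (sketch)
function shareQuote(text) {
  const url = window.location.href
  if (navigator.share) {
    // Web Share API: opens the native share dialog
    navigator.share({ text: `"${text}"`, url })
  } else {
    // fallback: open a pre-filled tweet in a new tab
    const intent = `https://twitter.com/intent/tweet?text=${encodeURIComponent(`"${text}" ${url}`)}`
    window.open(intent, '_blank', 'noopener')
  }
}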
If you want to use this on your own site, follow these steps. (keep in mind that this is an early version though, and there are probably still some issues to sort out.)
Install the plugin: run npm i eleventy-plugin-share-highlight --save on the command line in your project’s root folder (where the package.json file is).
Next, register the plugin in your .eleventy.js configuration file:
// .eleventy.js
const pluginShareHighlight = require('eleventy-plugin-share-highlight');
module.exports = function (eleventyConfig) {
eleventyConfig.addPlugin(pluginShareHighlight, {
// optional: define the tooltip label.
// will be "Share this" if omitted.
label: "Teilen"
})
}
The plugin provides a highlight shortcode. You can use it in your templates or markdown files like this:
<!-- blogpost.md -->
{% highlight %}Here's some highlighted text you can share!{% endhighlight %}
This will highlight the containing text in a <mark> tag and wrap it in the custom element <share-highlight>. So the output HTML will look something like this:
<share-highlight label="Share this">
<mark>Here's some highlighted text you can share!</mark>
</share-highlight>
If Javascript or Custom Elements are not supported, or if your post is displayed e.g. in an RSS reader, the <mark> tag will still be valid and give the highlighted text the correct semantics.
Then include the web component script, either by importing it in your Javascript bundle:
import 'eleventy-plugin-share-highlight/share-highlight'
…or copy the file and add it directly to your HTML with something like:
<head>
<script src="/js/share-highlight.js" async defer></script>
</head>
Finally, you can style the component to your liking - it exposes a few custom properties for that:
/* general styles for text highlight */
mark {
background-color: yellow;
}
/* styling if webcomponent is supported */
share-highlight {
/* default state */
--share-highlight-text-color: inherit;
--share-highlight-bg-color: yellow;
/* hover/focus state */
--share-highlight-text-color-active: inherit;
--share-highlight-bg-color-active: orange;
/* tooltip */
--share-highlight-tooltip-text-color: white;
--share-highlight-tooltip-bg-color: black;
}
This is my first Eleventy plugin and also my first web component. I’m fairly confident that the Eleventy part is sound, but I don’t have much experience with web components, and I have some concerns.
My biggest issue with this is accessibility. I want the element to be keyboard-accessible, so I made it focusable and added keyboard listeners to trigger it with the Enter key. The tooltip label also doubles as the aria-label property for the component, but I don’t quite know how screenreaders handle custom elements with no inherent semantics.
I guess the cleanest option would be to use an actual <button> to trigger the share action, but I also need the element to be inline-level, so the highlighting doesn’t break text flow.
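For reference, the current approach boils down to roughly this inside the element’s connectedCallback - simplified, and the share() method name is just a placeholder for the actual share handler:
// inside the custom element (sketch)
connectedCallback() {
  // make the element focusable and give it a label for assistive tech
  this.setAttribute('tabindex', '0')
  this.setAttribute('aria-label', this.getAttribute('label') || 'Share this')
  // trigger the share action with Enter, like a native button would
  this.addEventListener('keydown', (event) => {
    if (event.key === 'Enter') {
      this.share() // placeholder for the actual share method
    }
  })
}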
If you know your way around webcomponents and have a suggestion on how to improve this thing, please feel free to submit a PR!
]]>It’s not often that a website stays up mostly unchanged for 25 years. So out of curiosity, I ran a quick check on both sites.
Unsurprisingly, the new site is a lot heavier than the original: at 4,673 KB vs. 120 KB, the new site is about 39 times the size of the old one. That’s because the new site has a trailer video, high-res images and a lot more Javascript:
This is in keeping with the general trend of websites growing heavier every year, with the average site weighing in at around 1,900 KB now.
But since our connection speeds and device capabilities are significantly better now - that’s fine. Everything is way faster now than it was back in the days of Michael Jordan’s first Looney Tunes adventure.
Is it though? Let’s find out.
1996 was a different time. The Spice Girls’ “Wannabe” was in anti-shock discmans everywhere, and the most common network connection was a 56k dial-up modem. So of course the original developers had a smaller performance budget to work with, and the site is much lighter. Fair enough - so how long did it take to load the original Space Jam site back then?
I ran a webpagetest with a simulated '96 connection: dial-up on an average desktop computer. Dial-up had a maximum speed of 56 kbit/s, but in reality it came in at something around 40-50 kbit/s.
Here’s how that looked (fire up the dial-up noise in another tab for the full experience):
Test Summary | Film Strip View
We can see the first content (the “press box shuttle” menu item) after 4 seconds. The other menu items - all separate GIF images - come in slowly after that. Since the HTML renders as soon as it is parsed, you could theoretically already click on the items before the rest of the page has finished loading though. The whole site is done after 28.1 seconds in this test.
Now let’s look at the current, futuristic state of the web. Luckily we don’t use dial-up anymore. The most common connection these days is a mobile 3G network, and the most common device is an Android phone (a Moto G4 in this test). A typical 3G connection comes in at around 1.5 Mbit/s, so it is roughly 30 times faster than dial-up. This shouldn’t take long:
Test Summary | Film Strip View
Funnily enough, the first meaningful paint also shows up after about 4 seconds. It’s not actual content though, it’s the loading screen, informing us that we’ve now loaded 0% of the site.
We reach 100% at 12 seconds, but the first real piece of content is not rendered until 21.5 seconds: it’s a youtube video in a modal window. The site is finally ready after 26.8 seconds, although actually playing the video would take some more loading time.
Right. So after 25 years of technological progress, after bringing 4.7 billion people in the world online, after we just landed a fifth robot on Mars, visiting the Space Jam website is now 1.3 seconds faster. That seems… underwhelming.
__
I know that this is just a movie promo site. And of course the requirements for a website are different now - people expect rich content. But I think this speaks to a larger point:
We just keep filling the available space, jamming up the pipes in the process so nothing actually gets faster. Well, at least the dial-up sound is gone now.
]]>However, it can get difficult to see what’s going on with them - especially if there’s a lot of “background noise”. Many sites just scrape content from well-known blogs and republish it for SEO juice. If that content includes a link to your site, it can lead to webmention spam.
Unlike on social media, you also don’t get notifications or reports about incoming webmentions. You’re just handed a bunch of raw data to use however you like. That’s part of the beauty of the Indieweb though: you can tailor it to whatever suits you best.
I recently started playing around with the data I get from webmention.io to see if it could be displayed in a more meaningful way. The result is a new side project I call:
✨✨✨ Webmention Analytics ✨✨✨
You can see it in action in this demo on my site.
I built this with Eleventy and Netlify, mainly because that’s my favorite tech stack to tinker with. But for analytics that don’t have to be real-time, static site generators are actually a really good fit.
Expensive computations like parsing and analyzing 8000+ data points like this can be run once a day through a periodic build hook. The reports it generates are then instantly available to the user, while still being up-to-date enough.
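A scheduled job somewhere (cron, GitHub Actions, a free scheduler service - whatever you prefer) can trigger such a build by POSTing to a Netlify build hook; the hook ID below is a placeholder:
# trigger a fresh build, e.g. once a day from a cron job
curl -X POST -d '{}' https://api.netlify.com/build_hooks/YOUR_HOOK_ID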
If you also use webmention.io to show webmentions on your site, you can fork the code on Github and make your own instance of webmention-analytics. Just follow the instructions in the README to get started.
Keep in mind that this is still a very early version of a weekend side project, so there's probably a few things to iron out. Cheers!
That’s usually what you want, to ensure everything is up-to-date. But there are special cases when keeping parts of the previous build around makes sense. For example, if you fetch lots of data from an external source during your build, it might make sense to cache that data and re-use it again in the future.
I recently found such a case when working on the Eleventy webmentions feature. For each build, a script queries the webmention.io API and fetches all the webmentions for the site. That can be a lot of data - and most of it stays the same, so fetching everything new again each time is sort of wasteful.
A better solution is to store the fetched webmentions locally in a separate _cache folder as JSON and give them a lastFetched timestamp. On the next go, we can load the old data straight from there and only query the API for webmentions newer than that timestamp.
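In code, that idea looks roughly like this - a simplified sketch where the file layout and the fetchNewWebmentions() helper are assumptions, not the actual implementation:
// webmentions data file (sketch)
const fs = require('fs')
const CACHE_FILE = '_cache/webmentions.json'
module.exports = async function () {
  // load the previously cached webmentions, if any
  const cached = fs.existsSync(CACHE_FILE)
    ? JSON.parse(fs.readFileSync(CACHE_FILE, 'utf-8'))
    : { lastFetched: null, children: [] }
  // only ask the API for anything newer than the last fetch
  // (fetchNewWebmentions() stands in for the actual webmention.io call)
  const fresh = await fetchNewWebmentions(cached.lastFetched)
  const webmentions = {
    lastFetched: new Date().toISOString(),
    children: [...fresh, ...cached.children]
  }
  // write the merged result back to the cache for the next build
  fs.mkdirSync('_cache', { recursive: true })
  fs.writeFileSync(CACHE_FILE, JSON.stringify(webmentions, null, 2))
  return webmentions
}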
My webmentions code does exactly that - but it had a big problem: that only worked locally. Since Netlify (where my site is hosted) throws everything out the window each time, I couldn’t use the cache there.
To edit anything related to the Netlify build process itself, you need a build plugin. There is a directory of plugins available for you to choose from, but you can also define your own plugins and deploy them alongside the rest of your code.
To define a custom plugin, make a new directory called plugins and within that, a new directory for your code:
_cache
_site
src
plugins
└── webmention-cache
├── index.js
└── manifest.yml
package.json
netlify.toml
Your plugin should contain at least two files: a manifest with some metadata, and the actual plugin code.
For the manifest file, let’s just set a name:
# manifest.yml
name: webmention-cache
The meat of the plugin is in the index.js file. There are lots of things you could do here - but for this use case, it’s enough to define an object with two functions. These are hooks that will be called at specific points of the build process that Netlify runs.
Both functions will be given some arguments, and among them is the utils object we can use to access the internal build cache:
// index.js
module.exports = {
// Before the build runs,
// restore a directory we cached in a previous build.
// Does not do anything if:
// - the directory already exists locally
// - the directory has never been cached
async onPreBuild({ utils }) {
await utils.cache.restore('./_cache')
},
// After the build is done,
// cache directory for future builds.
// Does not do anything if:
// - the directory does not exist
async onPostBuild({ utils }) {
await utils.cache.save('./_cache')
}
}
- The onPreBuild hook looks for a previously cached _cache folder and restores it within the build.
- The onPostBuild hook takes the final build output, looks for changes in the _cache folder and saves it for later.
Because these hooks only look at changes that happen between the start and end of your build, your code needs to create the cache directory itself and write files to it as it runs. You can do that by using node’s filesystem functions, similar to what I’ve done here.
It's important to note that this will not overwrite any existing files from your repository, so it only works when there is no _cache folder already committed to your site. It might make sense to add it to your .gitignore file.
The last thing to do is to let the Netlify build script know you intend to use your plugin. You can register it with a line in your netlify.toml configuration file:
# netlify.toml
[[plugins]]
package = "./plugins/webmention-cache"
Now, when you run a new build, you should see something like these lines in your deploy log:
2:02:08 PM: ❯ Loading plugins
2:02:08 PM: - ./plugins/webmention-cache from netlify.toml
2:02:08 PM:
2:02:08 PM: ────────────────────────────────────────────────────────────────
2:02:08 PM: 1. onPreBuild command from ./plugins/webmention-cache
2:02:08 PM: ────────────────────────────────────────────────────────────────
2:02:09 PM:
2:02:09 PM: (./plugins/webmention-cache onPreBuild completed in 302ms)
2:02:09 PM:
2:02:09 PM: ────────────────────────────────────────────────────────────────
2:02:09 PM: 2. build.command from netlify.toml
2:02:09 PM: ────────────────────────────────────────────────────────────────
2:02:09 PM:
2:02:09 PM: $ npm run build
...[lines omitted]...
2:02:12 PM: >>> 4240 webmentions loaded from cache
2:02:12 PM: >>> 6 new webmentions fetched from https://webmention.io/api
2:02:12 PM: >>> webmentions saved to _cache/webmentions.json
...[lines omitted]...
2:02:29 PM: ────────────────────────────────────────────────────────────────
2:02:29 PM: 3. onPostBuild command from ./plugins/webmention-cache
2:02:29 PM: ────────────────────────────────────────────────────────────────
2:02:29 PM:
2:02:29 PM: (./plugins/webmention-cache onPostBuild completed in 53ms)
And that’s about it!
]]>Still, I want to continue the tradition of “end-of-the-year” blogposts and since there’s already enough doom out there these days, I’m trying to focus on the good things that happened instead.
The web industry is among the fortunate ones that are very well suited for remote and distributed work, which is why I was able to keep working from home throughout most of the year.
We rented a great new office in the spring that I’ve hardly been to since, but our team at Codista is quite used to working remotely and we already had all the necessary infrastructure in place.
We had more than enough projects on our hands and we did some really interesting, challenging stuff that I can’t talk about (yet) 😉 - so all in all, work was good.
I wrote nine posts in 2020 - which is fewer than last year, but all things considered, that’s ok. The most popular ones were:
When the first lockdown hit, I kept occupied by building things - mostly in and around Eleventy, which helped me get ideas off the ground quickly. Here are some of these:
Eleventastic: my personal starter kit for Eleventy projects. I wanted to get rid of “external” build tools like Gulp and manage all pipelines inside Eleventy itself.
Eleventy Resumé: a simple microsite that functions as a CV/Resumé in web and print.
Whimsical Website Club: a collection of websites that spark joy by doing things a little bit less serious.
I had some talks planned for 2020 which of course didn’t happen. I did a few online talks though and I participated in Inclusive Design 24, a free 24-hour livestream event where I talked about another side project, the “Emergency Website Kit”:
The Webclerks team and I had the pleasure of hosting our own little virtual meetup event “Vienna Calling” on Twitch, and we had a phenomenal lineup. A big thank you again to all the speakers who joined us, as well as the rest of the team who made this happen behind the scenes.
BTW: You can find the full event as a playlist on Youtube:
In the summer, the situation improved enough for me and my girlfriend to take some much needed vacation time. With international travel still closed, we decided to go on a road trip through Austria instead and it was awesome. This country has some really beautiful places in store.
This year, more than ever, I realized the enormous impact the web has on all of us, and how important it is to keep it free and open. I know we’re all sick of doing things online all the time, but imagine for a moment what this year would have looked like had the web never been invented.
Millions of people would be completely isolated, even more would be out of their jobs. Schools could not operate. Civil rights movements would be almost impossible to organize. Global research projects like the development of a vaccine would take years longer. And you probably wouldn’t have seen the faces of your loved ones in months.
The web has become such an integral part of our lives that we sometimes take it for granted. It’s not. In fact this shitshow of a year should probably remind us that we need to take really good care of the things that are still connecting us.
I’m not going to compare my goals from last year with what I’ve accomplished in 2020. I don’t think it matters. Give yourself a break this year - it’s OK if things didn’t turn out the way you wanted.
I’ll see you all in 2021. And hopefully we’ll all have a vaccine in our system and a better year ahead of us.
Have you written one of these yourself? Let me know and get added here.
]]>It used to be cooler. It used to be weirder.
As Sarah Drasner puts it in “In Defense of a Fussy Website”:
While we’re all laser-focused on shipping the newest feature with the hottest software and the best Lighthouse scores, I’ve been missing a bit of the joy on the web. Apps are currently conveying little care for UX, guidance, richness, and — well, for humans trying to communicate through a computer, we’re certainly bending a lot to… the computer.
I really liked that post, so I made a small website meant to showcase what a more personal web could look like, and hopefully give someone else inspiration to make their own corner of the web a bit weirder.
Introducing: The Whimsical Web - a curated list of sites with an extra bit of fun.
I’ve collected a few of my favorites to start, but anyone can add a site to the list if it’s fun, quirky and personal.
Just open an issue on Github and let me know.
Let’s see some fussy websites!
]]>When I look at some of the trends on the web today, I wonder if we’re at that point yet. I wonder if we’re ready to revisit some of the ideas of the early web again.
Probably not in design - I’m afraid dancing-baby.gif is gone for good. But some of the broader ideas from back then are picking up a second wind lately, and I like it.
After spending the better part of the last decade shifting rendering logic to the client, it looks like the pendulum is about to swing into the other direction again.
With projects like Phoenix Liveview or hey.com’s recent “it’s-just-HTML” approach, it seems like server-side rendering (SSR) is stepping back into the spotlight.
If you think that sounds like the web of 25 years ago, you’re right! Except the HEY front-end stack progressively enhances the “classic web” to work like the “2020 web,” with all the fidelity you’d expect from a well-built SPA.
— Sam Stephenson (@sstephenson) June 15, 2020
It makes sense - servers are really good at this, and sending compressed HTML through the network can be lightning fast. The classic request-response cycle has evolved as well: HTTP/2 and smart techniques like Turbolinks or just-in-time preloading allow for a much better experience now than when you first tried to load that Michael Jordan photo on the Space Jam website over dial-up.
Taking the responsibility of rendering UI and all the Javascript that comes with it off users’ shoulders would be a great old new strategy for the next generation of web apps.
Frontpage and Dreamweaver were big in the 90s because of their “What You See Is What You Get” interface. People could set up a website without any coding skills, just by dragging boxes and typing text in them.
Of course they soon found that there was still source code underneath, you just didn’t see it. And most of the time, that source code was a big heap of auto-generated garbage - it ultimately failed to keep up with the requirements of the modern web.
Today our understanding of the web has improved, and so have our tools. Webflow is one of the contenders for the “no-code editor” trophy. The output it generates is far better.
These tools will probably not be a replacement for developers as a whole - complex projects still require tons of human brainpower to work. But for all the landing pages and marketing sites, this could be the holy grail of WYSIWYG we thought we had in the 90s.
It might just be my IndieWeb filter bubble talking, but I think there is a renewed interest in personal websites. A lot of big social media giants are falling out of favor, and it becomes cool again to own a space on the web rather than being one of a billion usernames.
Our digital identities are becoming increasingly more important, and people become aware that they’re not in control of their data. Personal sites were very popular in the era before Myspace and Facebook, and it’s now easier than ever to build one.
Services like Carrd offer a straightforward way to create simple one-pagers, and their numbers show a lot of interest:
Totals for 2019:
🙋 213k new users
🌐 381k new sites
💵 $308k https://t.co/k3mNeiIyzL
— aj ⚡️ 🍜 (@ajlkn) January 1, 2020
Blogging is gaining popularity again as well, used as a tool for self-marketing or simply to express opinions. There are lots of good options for people who want to pick up blogging - either on their own sites or with platforms like micro.blog, that offer more independence than Medium & Co.
Another issue created by social media is the prevalence of “algorithmic feeds”. We decided that the constant stream of input for our eyeballs should never end, so we built these complex systems to generate new content for us based on our interests.
But these are giant black boxes, and nobody really knows what signals go into them. Throw advertising, “fake news” and a couple of trolls into the mix, and you get the mess we all know now.
That’s why many people crave a more controlled reading experience on their own terms. Chronological, personal, relevant - a bespoke magazine of trusted sources. A curated feed.
One way to achieve something like that is through plain ol’ boring RSS. One more thing that was said to be dead but is growing in popularity again:
Who’s gonna read your personal blog because it has an RSS feed? I’m gonna read your personal blog because it has an RSS feed. pic.twitter.com/mtcyKhEVet
— Chris Coyier (@chriscoyier) January 7, 2020
Another possibility is to discover new content through human connections instead of algorithms. People we already know for their content recommend others in the same field, creating decentralized clusters of trusted information.
Website owners used to do this a lot in the days before search engines, by providing Blogroll Pages or forming Webrings with links to others in their cluster.
Webrings were a common way for people to connect their sites in the early web. To be a member of a webring, you had to embed a little widget on your site that contained a “forward”, a “backward”, and a “random” button. These buttons would then link to other sites in the ring.
BTW: If you want to host your own webring, I made this free Starter Kit for you.
Many independent creators are moving away from big “everyone’s on them” platforms back to private, more niche communities. New models for membership sites like Ghost’s “Members” feature enable creators to build communities on their content. People teach courses, self-publish books or provide APIs for specific topics.
Where the 90s had chatrooms and message boards, today there are tools like Discord or Twitch that help people with shared interests to connect with each other. These niche communities can then be a powerful userbase for independent businesses.
Of course the problem of monetization has existed from the very start of the web, and it’s still not easy today to earn money without splattering ads everywhere. But new standards like the Web Monetization API could be a very interesting solution, enabling creators to receive micro-payments for their content.
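At the time of writing, that proposal works by putting a payment pointer on your pages via a meta tag - something like this, with a placeholder pointer:
<!-- Web Monetization payment pointer (placeholder value) -->
<meta name="monetization" content="$wallet.example.com/your-pointer">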
I don’t know if all of these trends will really play out, or if we’re up for something completely different. I do think it’s a good idea to learn from the past though, because that is what keeps us moving forward.
So maybe the second 90s can be even better than the first. At least we’re done with NSYNC this time.
]]>Through my developer-centric filter bubble, I sometimes see the tech world react to times of crisis. Often our first instinct seems to be to turn to technology. Build something to fix this.
I get that if all you have is a hammer, everything looks like a nail. I’m guilty of this myself. That knee-jerk reaction might come from a genuine desire to help - and if code is what we know best, it’s understandable that we want to apply these skills here too.
But some problems can’t be solved with technology.
You can’t code away systemic racism, and you can’t design your way out of a human rights crisis.
No blockchain, no cloud and no A.I. will get us out of this.
There are problems that have to be solved with humans. They have to be solved in our laws, our culture, and ultimately our minds. It’s a long, hard, uncomfortable and sometimes violent process.
Technology can only help that process by taking a step back. By amplifying voices that would otherwise not be heard, and by providing tools for people to take action.
The web is an amazing tool in bringing us together. Yet some of the best and brightest minds of our generation are working on how to get more people to click on ads. Imagine what technology could be capable of if it focused all that energy on the problems in our communities instead.
There are examples of code being used for the greater good in this:
There’s many more I’m sure, but they all stand back behind the actual human beings in the streets, protesting for justice.
Code won’t bring us forward here. People will.
]]>And so every operating system, app and even some websites (mine included) suddenly had to come up with a dark mode. Fortunately though, this coincided nicely with widespread support for CSS custom properties and the introduction of a new prefers-color-scheme media query.
There’s lots of tutorials on how to build dark modes already, but why limit yourself to light and dark? Only a Sith deals in absolutes.
That’s why I decided to build a new feature on my site:
dynamic color themes! Yes, instead of two color schemes, I now have ten! That’s eight (8) better!
Go ahead and try it, hit that paintroller-button in the header.
I’ll wait.
If you’re reading this somewhere else, the effect would look something like this:
Nice, right? Let’s look at how to do that!
First up, we need some data. We need to define our themes in a central location, so they’re easy to access and edit. My site uses Eleventy, which lets me create a simple JSON file for that purpose:
// themes.json
[
{
"id": "bowser",
"name": "Bowser's Castle",
"colors": {
"primary": "#7f5af0",
"secondary": "#2cb67d",
"text": "#fffffe",
"border": "#383a61",
"background": "#16161a",
"primaryOffset": "#e068fd",
"textOffset": "#94a1b2",
"backgroundOffset": "#29293e"
}
},
{...}
]
Our color schemes are objects in an array, which is now available during build. Each theme gets a name, id and a couple of color definitions. The parts of a color scheme depend on your specific design; In my case, I assigned each theme eight properties.
It's a good idea to give these properties logical names instead of visual ones like "light" or "muted", as colors vary from theme to theme. I've also found it helpful to define a couple of "offset" colors - these are used to adjust another color on interactions like hover and such.
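In practice that ends up looking something like this once the colors are mapped to custom properties (more on that in a minute) - the selector is just an example:
/* use the "offset" variant of a color for hover and focus states */
a {
  color: var(--color-primary);
}
a:hover,
a:focus {
  color: var(--color-primary-offset);
}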
In addition to the “default” and “dark” themes I already had before, I created eight more themes this way. I used a couple of different sources for inspiration; the ones I liked best are Adobe Color and happyhues.
All my themes are named after Mario Kart 64 race tracks by the way, because why not.
To actually use our colors in CSS, we need them in a different format. Let’s create a stylesheet and make custom properties out of them. Using Eleventy’s template rendering, we can do that by generating a theme.css file from the data, looping over all themes. We’ll use a macro to output the color definitions for each.
I wrote this in Nunjucks, the templating engine of my choice - but you can do it in any other language as well.
/* theme.css.njk */
---
permalink: '/assets/css/theme.css'
excludeFromSitemap: true
---
/*
this macro will transform the colors in the JSON data
into custom properties to use in CSS.
*/
{% macro colorscheme(colors) %}
--color-bg: {{ colors.background }};
--color-bg-offset: {{ colors.backgroundOffset }};
--color-text: {{ colors.text }};
--color-text-offset: {{ colors.textOffset }};
--color-border: {{ colors.border }};
--color-primary: {{ colors.primary }};
--color-primary-offset: {{ colors.primaryOffset }};
--color-secondary: {{ colors.secondary }};
{% endmacro %}
/*
get the "default" light and dark color schemes
to use if no other theme was selected
*/
{%- set default = themes|getTheme('default') -%}
{%- set dark = themes|getTheme('dark') -%}
/*
the basic setup will just use the light scheme
*/
:root {
{{ colorscheme(default.colors) }}
}
/*
if the user has a system preference for dark schemes,
we'll use the dark theme as default instead
*/
@media(prefers-color-scheme: dark) {
:root {
{{ colorscheme(dark.colors) }}
}
}
/*
finally, each theme is selectable through a
data-attribute on the document. E.g:
<html data-theme="bowser">
*/
{% for theme in themes %}
[data-theme='{{ theme.id }}'] {
{{ colorscheme(theme.colors) }}
}
{% endfor %}
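By the way, getTheme is not a built-in filter - it’s a small custom filter that pulls a single theme out of the array by its id. A minimal version could look like this in the Eleventy config (a sketch, assuming the themes data is available as shown above):
// .eleventy.js
module.exports = function(eleventyConfig) {
    // find a single theme object by its id, e.g. themes|getTheme('dark')
    eleventyConfig.addFilter('getTheme', (themes, id) => {
        return themes.find(theme => theme.id === id)
    })
}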
Now for the tedious part - we need to go through all of the site’s styles and replace every color definition with the corresponding custom property. This is different for every site - but your code might look like this if it’s written in SCSS:
body {
font-family: sans-serif;
line-height: $line-height;
color: $gray-dark;
}
Replace the static SCSS variable with the theme’s custom property:
body {
font-family: sans-serif;
line-height: $line-height;
color: var(--color-text);
}
Attention: Custom Properties are supported in all modern browsers, but if you need to support IE11 or Opera Mini, be sure to provide a fallback.
It’s fine to mix static preprocessor variables and custom properties by the way - they do different things. Our line height is not going to change dynamically.
Now do this for every instance of color, background, border, fill … you get the idea. Told you it was gonna be tedious.
If you made it this far, congratulations! Your website is now themeable (in theory). We still need a way for people to switch themes without manually editing the markup though, that’s not very user-friendly. We need some sort of UI component for this - a theme switcher.
The switcher structure is pretty straightforward: it’s essentially a list of buttons, one for each theme. When a button is pressed, we’ll switch colors. Let’s give the user an idea what to expect by showing the theme colors as little swatches on the button:
Here’s the template to generate that markup. Since custom properties are cascading, we can set the data-theme attribute on the individual buttons as well, to inherit the correct colors. The button also holds its id in a data-theme-id attribute, we will pick that up with Javascript later.
<ul class="themeswitcher">
{% for theme in themes %}
<li class="themeswitcher__item">
<button class="themepicker__btn js-themepicker-themeselect" data-theme="{{ theme.id }}" aria-label="select color theme '{{ theme.name }}'">
<span class="themepicker__name">{{ theme.name }}</span>
<span class="themepicker__palette">
<span class="themepicker__swatch themepicker__swatch--primary"></span>
<span class="themepicker__swatch themepicker__swatch--secondary"></span>
<span class="themepicker__swatch themepicker__swatch--border"></span>
<span class="themepicker__swatch themepicker__swatch--textoffset"></span>
<span class="themepicker__swatch themepicker__swatch--text"></span>
</span>
</button>
</li>
{% endfor %}
</ul>
.themepicker__swatch {
display: inline-block;
width: 1.5em;
height: 1.5em;
border-radius: 50%;
box-shadow: 0 0 0 2px #ffffff;
&--primary {
background-color: var(--color-primary);
}
&--secondary {
background-color: var(--color-secondary);
}
&--border {
background-color: var(--color-border);
}
&--textoffset {
background-color: var(--color-text-offset);
}
&--text {
background-color: var(--color-text);
}
}
There’s some more styling involved, but I’ll leave that out for brevity here. If you’re interested in the extended version, you can find all the code in my site’s github repo.
The last missing piece is some Javascript to handle the switcher functionality. This process is a bit more involved than we might initially assume. We need to check the user’s system preference through the prefers-color-scheme media query. But crucially, we also need to enable the user to override that preference, and then store the selected theme choice for later.
I’ve omitted some stuff here for brevity - see the full script on Github for all the details.
// let's make this a new class
class ThemeSwitcher {
constructor() {
// define some state variables
this.activeTheme = 'default'
// remember whether localStorage is available (used in setTheme later)
this.hasLocalStorage = typeof Storage !== 'undefined'
// get all the theme buttons from before
this.themeSelectBtns = document.querySelectorAll('button[data-theme-id]')
this.init()
}
init() {
// determine if there is a preferred theme
const systemPreference = this.getSystemPreference()
const storedPreference = this.getStoredPreference()
// explicit choices overrule system defaults
if (storedPreference) {
this.activeTheme = storedPreference
} else if (systemPreference) {
this.activeTheme = systemPreference
}
// when clicked, get the theme id and pass it to a function
Array.from(this.themeSelectBtns).forEach((btn) => {
const id = btn.dataset.themeId
btn.addEventListener('click', () => this.setTheme(id))
})
}
getSystemPreference() {
// check if the system default is set to darkmode
if (window.matchMedia('(prefers-color-scheme: dark)').matches) {
return 'dark'
}
return false
}
getStoredPreference() {
// check if the user has selected a theme before
if (typeof Storage !== 'undefined') {
return localStorage.getItem("theme")
}
return false
}
}
// this whole thing only makes sense if custom properties are supported -
// so let's check for that before initializing our switcher.
if (window.CSS && CSS.supports('color', 'var(--fake-var)')) {
new ThemeSwitcher()
}
When somebody switches themes, we’ll take the theme id and set it as the data-theme attribute on the document. That will trigger the corresponding selector in our theme.css file, and the chosen color scheme will be applied.
Since we want the theme to persist even when the user reloads the page or navigates away, we’ll save the selected id in localStorage.
setTheme(id) {
// set the theme id on the <html> element...
this.activeTheme = id
document.documentElement.setAttribute('data-theme', id)
// and save the selection in localStorage for later
if (this.hasLocalStorage) {
localStorage.setItem("theme", id)
}
}
On a server-rendered site, we could store that piece of data in a cookie instead and apply the theme id to the html element before serving the page. Since we’re dealing with a static site here though, there is no server-side processing - so we have to do a small workaround.
We’ll retrieve the theme from localStorage in a tiny additional script in the head, right after the stylesheet is loaded. Unlike the rest of the Javascript, we want this to execute as early as possible to avoid a FODT (“flash of default theme”).
👉 Update: Chris Coyier came up with the term “FART” (Flash of inAccurate ColoR Theme) for this, which of course is way better.
<head>
<link rel="stylesheet" href="/assets/css/main.css">
<script>
// if there's a theme id in localstorage, use it on the <html>
localStorage.getItem('theme') &&
document.documentElement.setAttribute('data-theme', localStorage.getItem('theme'))
</script>
</head>
If no stored theme is found, the site uses the default color scheme (either light or dark, depending on the user’s system preference).
You can create any number of themes this way, and they’re not limited to flat colors either - with some extra effort you can have patterns, gradients or even GIFs in your design. Although just because you can doesn’t always mean you should, as is evidenced by my site’s new “Lobster Life” theme.
Please don’t use that one.
]]>This again addresses the over-reliance on powerful Javascript frameworks like React, even in cases where simple semantic HTML might be better suited for the task.
I’m currently putting my self-isolated weekend time towards side projects, and I thought this could be something I can tackle. So I built something new:
This is a static micro-site generated by Eleventy that can be used as an online résumé. The output is basically just a simple index.html file.
Features:
You can find the full source code on Github, along with instructions on how to set up and customize the project.
Many people are out of work due to the pandemic already, and even more might still lose their jobs as the world is headed towards a massive economic fallout. There is little we can do to stop that, but a tool like this might hopefully be a small help for people to find new work once the situation improves.
]]>Just received a shelter-in-place emergency alert with a web address for more information. Clicked the link. The site is down. All emergency sites should be static.
— Nicholas C. Zakas (@slicknet) March 17, 2020
To make things worse, natural disasters can also damage local network infrastructure, sometimes leaving people with very poor mobile connections.
I’ve written about the practice of publishing minimal “text-only” versions of critical news websites before and I think it makes a lot of sense to rely on the rule of least power for these things. When it comes to resilience, you just can’t beat static HTML.
Like so many others, I’m currently in voluntary quarantine at home - and I used some time this weekend to put a small boilerplate together for this exact usecase.
Here’s the main idea:
The site contains only the bare minimum - no webfonts, no tracking, no unnecessary images. The entire thing should fit in a single HTTP request. It’s basically just a small, ultra-lean blog focused on maximum resilience and accessibility. The Service Worker takes it a step further from there so if you’ve visited the site once, the information is still accessible even if you lose network coverage.
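The Service Worker logic itself is nothing fancy - at its core it’s a simple cache-first strategy, roughly along these lines (a simplified sketch, not the boilerplate’s actual file):
// sw.js - simplified cache-first sketch
const CACHE = 'emergency-site-v1'

self.addEventListener('install', event => {
    // cache the critical pages and assets on install
    event.waitUntil(
        caches.open(CACHE).then(cache => cache.addAll(['/', '/index.html']))
    )
})

self.addEventListener('fetch', event => {
    // serve from cache first, fall back to the network
    event.respondWith(
        caches.match(event.request).then(cached => cached || fetch(event.request))
    )
})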
The end result is just a set of static files that can be easily hosted on cloud infrastructure and put on a CDN. Netlify does this out of the box, but other providers or privately owned servers are possible as well.
You can find the full project source on Github as well as a demo site here.
I’m aware that not everyone, especially the people in charge of setting up websites like this, is familiar with things like Node or the command line. I want to keep the barrier to entry as low as possible.
Taking a hint from the excellent servicerelief.us project, it is possible to set up the template so that all configuration can be done via environment variables.
These are set in the Netlify UI when the site is first deployed, meaning a user would only need a free Github and Netlify account to get started - without ever touching a line of code or having to mess around with npm or Eleventy itself. The content editing can all be done through Netlify CMS, which offers a much more usable graphical interface.
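On the Eleventy side, picking up those environment variables can be as simple as a global data file - something like this (a sketch; the actual file and variable names in the template may differ):
// _data/site.js
// expose Netlify environment variables to all templates
module.exports = {
    name: process.env.SITE_NAME || 'Emergency Site',
    contactEmail: process.env.CONTACT_EMAIL || '',
    language: process.env.SITE_LANGUAGE || 'en'
}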
In the meantime, if you want to set up an emergency website and need help to get started, let me know!
I recently did a short talk about this project at an online meetup. You can watch it here:
]]>I also loved making flyers, posters and CD artwork for local bands - it’s actually what got me started in design and ultimately led me to build for the web. You see, I can’t really draw, so computers were my only way to create the images in my head. Making posters became a hobby of mine.
These days, I’m usually too busy to find time for that hobby. But I keep a collection of my favourite gigposters and once in a while, when an occasion arises, I get to do one myself.
I still do vocals in one band, and so fortunately that occasion comes once a year in the form of a Rage Against the Machine cover gig we play in my hometown. The event is called “Rage/Aid” because all of the proceeds are donated to charity.
TL;DR: Here’s the poster I did for that show, and how I made it.
The hardest part for me is coming up with a good idea. Much like a website, a gigposter is a mixture of information and art - and should be tailored to the general vibe of the band/show. That being said, there really are no rules as to which motifs fit which genre. You get to freely explore different ideas and concepts.
I usually start with a few crudely-drawn pencil sketches of possible motifs. Just shoot these out real quick and let your mind wander - you can care about making them look good later. You can get inspiration from anywhere: art, nature, architecture, movies… whatever captures your attention.
For this one, I liked the idea of referencing a famous painting, “The Son of Man” by René Magritte. It’s the one with the apple in front of a man’s face - you might know it.
I thought I could take the concept of the faceless, anonymous person and put a twist on it. I also knew I wanted fire in there somehow to symbolize Rage against the Machine’s anger and spirit of revolution, so I drew a lit match instead of the apple.
It really doesn’t have to be clever or deep or anything though, it’s a fucking poster, not an arts degree. 🧐 I just liked the visual and thought it would go well with the vibe, so that’s what I used.
As I mentioned earlier, I can’t draw for shit. There are some insanely talented poster artists out there that do it all by hand and I greatly admire their skill - but I have to rely on digital trickery to make my stuff look good.
So I took to a stock photo site to find something I could base my illustration on. After some searching, I came across this series of backlit faces that seemed like a nice fit.
I like to have the main motif as a vector drawing because that’s just easier to work with. I can tweak certain parts or recolor it later without too much trouble. Plus if I need it on the side of a bus someday, I can always scale it up. So my first step is usually to get the motif vectorized.
I opened the stock photo in Illustrator and began tracing the outline with the pen tool. I also separated the colors into three layers, from dark to light. This sort of thing is common in stencil or screenprint artwork, and I wanted to recreate that style. It works best on high-contrast images like this.
I played around with different filters and effects to give the silhouette shape more detail. The one I chose is the halftone filter: it transforms the shapes into thousands of “print dots”. The size and density of these dots then determine the lightness.
This breaks a bit with the stencil style, but I like how it blends the edges of the three color layers together, and it reminds me of old newspapers and billboards.
For the match, I googled for a random picture as a base, applied a threshold and vectorized it in Illustrator. The flame is just a doodle I made with the pen tool, with a few extra points and distortion added to make the edges more jagged. The zigzag filter can help with that.
Putting the main motif together already looks cool; the flame fits nicely inside the silhouette shape. Good enough for now - I’ll let that sit for a while and work on other stuff.
For the background, I switched to Photoshop as it’s pixel-based. It’s important to work in CMYK colors here and make sure the document is at least 300dpi, large enough for the intended poster size - it’s a pain to scale pixel artwork up later on.
I started with a flat color and then progressively layered other stuff on top to give it more detail. The base here was a bright red.
I then used a watercolor texture, a bit of speckle/noise and a grunge brush to make it look more eroded. I was going for a screenprint-like style, where the color often doesn’t distribute evenly across the paper and has these interesting imperfections. The nice thing about blending these layers together in Photoshop is that you can still easily tweak the base color afterwards and try out different color schemes.
Another trick I like is to give the artwork a “frame”, again to make it look a bit more handmade:
This is just an extra mask layer, where the sides are drawn with a paintroller brush that gives you these nice rough edges. These are small details, but they all add to the general look and feel of the poster.
Gigposters let you get really creative with type. There are some awesome pieces that use crazy custom letterforms and make them a part of the artwork itself. For my poster, I just wanted something simple that matched the illustration style.
I found this nice big brush font called “Generous” by Danish type foundry PizzaDude. It has broad strokes and rough edges that go well with the background, and work nicely as the display font. I paired it with the clean sans-serif Brandon Grotesque for the body copy.
There are some other pieces of information that just have to be on there, like the date and venue. Rather than doing one big text block though, I like to break these up a bit and play with ways to integrate them into the artwork.
I put the supporting act in a separate badge to make it stand out more, and I made a little cutaway in the frame to hold the venue logo.
Colors always look different on screen than in print. A piece of paper doesn’t glow, so they are usually a bit darker and less saturated in CMYK. To make sure they turn out right, you can proof your work before you send it off. If you know your color profile, Photoshop can simulate how colors will look in print.
A color profile is a bit of data that sets things like the color space, maximum ink application and other instructions for the printer. My local printer for example uses one called “ISO Coated v2 300%” (print companies will usually tell you which profile to use on their website). You can download and install these for free.
After everything is ready, I import the poster without all the type and vector elements into InDesign, then add them back in there. That way they’ll actually end up in the final PDF as vectors and are guaranteed to look sharp. InDesign also lets you set things like bleed and crop marks, which are sometimes required by the printer.
And here’s the whole thing put together (click for full PDF):
]]>At the beginning of 2019, I became a partner at Codista, the software studio where I’ve been working for some time now. Thomas, Luis and I now run the company as a trio, and building together has been great.
Besides working as a frontend developer on a number of challenging projects this year, I was also responsible for the rebranding of our own identity. There are some interesting new ideas in the works for our studio in 2020, and I’m excited to see them come to life.
Counting this one, I’ve written ten new posts in 2019. That’s more than last year, but I’d still like to increase that number.
The most popular posts were:
The feedback from the web community on these posts was amazing, and it never gets old to hear that anything you’ve made actually helped people in some way.
Like in 2018, all my writing was published on my own website. I’m increasingly fond of the IndieWeb and the principles behind it, and I want to continue striving for more independence. Eleventy and Netlify both have been very valuable improvements for my personal site this year, and I really like working with them.
I wrote two new talks in 2019 and had the chance to speak at a few community events.
First I was invited as a guest speaker to the Mozilla Developer Roadshow, where I had a really interesting “Fireside Chat” about CSS with HJ Chen. It was awesome meeting her and I’m very grateful for the opportunity to speak there.
Following my blog post about the topic, I gave a talk on the CSS Mindset at CSS-Minsk-JS in September and had a wonderful time. A big thank you to the organizers and attendees, who made me feel very much at home in Minsk.
A big part of this year was devoted to making our own Webclerks Conference happen in Vienna. It was the first time for me to be involved in the organization of such an event and I really had no idea how it would turn out, so I’m overjoyed that it went so well.
I also shared my experiences with the IndieWeb in a talk called “Rage against the Content Machine” there. I’d love to do that one again in the new year.
I’ve been fortunate enough to see many beautiful places on this planet now. I made a little map to keep track of them here.
This year, I visited Portugal, Georgia, Belarus, the Czech Republic and the Azores. My personal favourites were the ancient city of Tbilisi and the raw nature of São Miguel island, both fascinating places in their own regard.
2019 was also a source of frustration in many ways. From the ongoing global shift to right-wing populism to increasingly dystopian technology trends to the looming threat of climate change. I’ve had some issues with worrying and stress in my personal life as well.
But I’d like to think that the new decade can be an opportunity to learn from past mistakes and change things for the better. I’m going to try and focus on that going forward.
That’s all folks. See you tomorrow!
]]>
I have been to quite a few web development conferences as an attendee, and lately even as a speaker. But the organization of such events always eluded me. I of course noticed when an event was particularly well put together, but I always assumed that the people running the show had a large team, lots of experience and were just very talented organizers.
That’s why it felt so weird to suddenly find myself in that group. Clearly we’re just a bunch of people who thought it would be cool to invite some of our heroes and heroines to Vienna, with little prior experience in creating an event at that scale.
But then it worked! We’d somehow gotten ourselves an absolutely incredible line-up of speakers and a sold out venue.
Fast forward to last Monday, when months of work finally resulted in that one special day. Sleep had been rare in the days before, and the tension was high.
None of us really knew how this would turn out, so we just gave it our best shot. And the feedback so far from speakers and attendees alike has been great:
It's hard to believe that the @wearewebclerks conference was the first one ever by the organisers. It was slick, welcoming, friendly, and interesting. Thanks for asking me to speak! ♥️ #webclerks
— Charlie Don't Surf (@sonniesedge) November 26, 2019
The @wearewebclerks conference today was seriously awesome! Great talks, great organization and great people! And all of that in Vienna ♥️ Looking forward to the next year #webclerks pic.twitter.com/GHOqfgzsNB
— Lisa Gringl (@kringal) November 25, 2019
✅ brilliant speakers
✅ surprise acts
✅ live captions
✅ tasty food from "Speisen ohne Grenzen"
✅ water bottles to reduce waste
Congratulations to the organizers of a great #webclerks community conference pic.twitter.com/sOflfHkA1F
— Harald Atteneder (@urbantrout) November 25, 2019
This conference was a great example of getting it right from the beginning. Amazing speakers, inclusive, accessible and with all details taken care of. Congratulations to the team! I'm really looking forward to next year's event #webclerks pic.twitter.com/YlRfzyt5kD
— Magdalena Adrover (@madrovergaya) November 25, 2019
Here are some of my personal takeaways from the experience:
It really pays off to have a good tech team. We could not have asked for a better crew than ALC, they handled the event flawlessly by themselves. A huge thing to get off your shoulders as an organizer.
If you want to livestream an event on youtube, that apparently requires a “brand account”. These take 24 hours to approve, so it’s best to not discover this the night before the conference. (Thankfully Joschi from a11yclub helped us out there!)
The glass water bottles we put into everyone’s goodie bag were a big hit. People liked that they could refill anytime. Less waste, more hydration!
You can make your own rules! Not all tech confs have to follow the same recipe. In fact some of the little differences are what makes you stand out.
We’re not a faceless recruiting event, but a community of humans, and we wanted to be as inclusive as possible. That thought went into a lot of aspects of our conference from ticket pricing to info emails to food choices. Definitely something to keep.
I really have to highlight the insane amount of work that Manuel and Claudi have put into this event. There’s an internal Trello board with about a gazillion tasks on it, and most of those had their names on them. It is beyond me how they did it.
Daniel has been an absolute beast as well - he did all the designs, printed badges, flags, shirts, stickers - you name it. If you thought webclerks looked like a professional brand, it’s because of him.
We also had lots of help from awesome volunteers who dedicated their time and energy to our event. Jeannine, Kerstin, Michel, Anne and Gregor have been incredibly helpful and I’m so glad they joined our team.
I honestly can’t think of many industries where it would have been possible to put an event like this together from scratch. I believe the reason this worked at all is the incredible openness displayed in the web community.
Where international speakers come to a small local conference to share their knowledge. Where companies like mozilla support a relatively unknown event and fund inclusiveness. Where attendees pay it forward to enable underrepresented others to join.
I’m happy and proud to be a part of that community.
Thank you! 🎉
]]>First plans look promising. It’s really nice and modern. The old road had sidewalks and a bike lane, but these don’t really fit in with the new design and take too much time to build. Nobody on the planning committee knows any pedestrians or cyclists anyway, so the project doesn’t really focus on that.
Soon people get to work. Instead of regular concrete though, the construction crew decides to use a new cutting-edge synthetic asphalt. It was developed by a big construction company from the city that uses it to build a 12-lane highway in the nation’s capital - but they say anyone can use it.
Everyone in construction is very excited to work with this new material. You can do amazing stuff with it, and the team needs less time to get things done. They say this process will make the new road a lot better than the old one, and every modern construction crew does it this way now.
The new asphalt has quite a lot of friction though, so people will need a bit more horsepower to drive on it - and it also takes about five times more fuel than usual. Luckily everybody on the construction team earns a very nice salary and drives a big BMW, so that isn’t really an issue. They do a couple of test drives and it all works fine.
The road is almost finished and everybody is really happy with it. But it was expensive to build, and the town wants to get that money back in some way. They don’t really like to put up toll booths, as these tend to scare drivers off. Instead they make a deal with some of the bigger advertising companies to put up some cameras.
Every car that drives on the new road will be photographed and analysed, and the ad firms can then decide to put up billboards in the middle of the road that are perfectly tailored to each driver. That is a bit distracting and slows the cars down somewhat, but it will generate revenue for the town. Plus, all these photos of drivers will tell the planners more about how the road is being used.
Finally, the road is finished.
There are fewer people using it now. That’s ok though, the new road isn’t for everyone.
Some might even feel that the old road was better in a way, potholes and all. It didn’t have all the fancy new features, but it got them where they wanted to go.
But that’s the nature of progress - just how these things are done today.
It’s the future.
]]>Posting a new short “note” on my site currently requires me to commit a new markdown file to the repository on Github. That’s doable (for a developer), but not really convenient, especially when you’re on the go and just want to share a quick link. Twitter and other social media platforms literally make this as easy as clicking a single button, which makes it tempting to just post stuff straight to them. That’s why I wanted to improve this process for my site.
A quick Google search revealed that smarter people have already solved that problem. I came across this blog post by Tim Kadlec who describes adapting someone else’s link sharing technique for his (Hugo-powered) blog. That just left me the task of adapting it for my setup (Eleventy, Netlify) and customizing a few details.
The new link sharing basically has three main parts: a bookmarklet that grabs the current page’s data, a sharing form at mxb.dev/share where I can preview and edit the note, and a serverless function that commits the finished note to my site’s Github repository.
Here’s how they work together:
The button to kick things off is just a small bit of Javascript that takes the current page’s title, URL and optionally a piece of selected text you may want to quote along with the link.
It then sends these things as GET parameters to mxb.dev/share by opening a new window to it.
(function(){
// get link title
var title = document.getElementsByTagName('title')[0].innerHTML;
title = encodeURIComponent(title);
// get optional text selection
var selection = '';
if (window.getSelection) {
selection = window.getSelection().toString();
} else if (document.selection && document.selection.type != 'Control') {
selection = document.selection.createRange().text;
}
selection = encodeURIComponent(selection);
// generate share URL
var url = 'https://mxb.dev/share/?title='+title+'&body='+selection+'&url='+encodeURIComponent(document.location.href);
// open popup window to sharing form
var opts = 'resizable,scrollbars,status=0,toolbar=0,menubar=0,titlebar=0,width=680,height=700,location=0';
window.open(url,'Sharer',opts);
})()
The bookmarklet looks like this:
Share on MXB
…and can then be dragged to the bookmarks bar for quick access.
At mxb.dev/share, I’ve created a small preact app. It will take the GET params passed in via the URL and generate a live preview of the resulting note, so I know what the end product will look like.
There’s also a form that will be pre-populated with the values, which lets me include additional information and edit everything before posting.
The form also has fields for the Github username and security token, necessary for authentication. My password manager will fill those in automatically.
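Reading the incoming GET parameters is straightforward - the app can grab them with URLSearchParams and use them as the form’s initial state. Roughly like this (simplified):
// share form app (simplified)
// read the GET params the bookmarklet passed along
const params = new URLSearchParams(window.location.search)
const initialState = {
    title: params.get('title') || '',
    body: params.get('body') || '',
    url: params.get('url') || ''
}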
When I hit the submit button, the form will send the data along to another endpoint. I’ve built a serverless function to handle the processing, so I could theoretically send data from other sources there too and keep the posting logic in one place. Netlify Functions seemed to be a nice fit for this.
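The outer shape of that function follows the standard Netlify Functions signature: it receives the submitted data in the event body, hands it to the posting logic and reports back. A stripped-down sketch (assuming the form posts JSON; the real script does more validation):
// functions/share.js (sketch)
exports.handler = async event => {
    // only accept POST requests from the sharing form
    if (event.httpMethod !== 'POST') {
        return { statusCode: 405, body: 'Method Not Allowed' }
    }
    const data = JSON.parse(event.body)
    const response = await postFile(data)
    return {
        statusCode: response.ok ? 200 : response.status,
        body: response.statusText
    }
}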
Here’s the full script if you’re interested. It reads the posted data and generates a new markdown file from it, called something like 2019-08-11-amphora-ethan-marcotte.md:
---
title: "Amphora - Ethan Marcotte"
date: "2019-08-11T16:57:13.104Z"
syndicate: false
tags: link
---
...we've reached a point where AMP may "solve" the web's
performance issues by supercharging the web’s accessibility problem.
(via [@beep](https://twitter.com/beep))
[ethanmarcotte.com/wrote/amphora](https://ethanmarcotte.com/wrote/amphora/)
It will then use the Github API to post that file as a base64-encoded string to a predetermined location in the site’s repository (in my case the folder where I keep all my notes).
Here’s the core function responsible for that:
const postFile = async data => {
const { title, token } = data
const fileName = getFileName(title)
const fileContent = getFileContent(data)
const url = API_FILE_TARGET + fileName
const payload = {
message: 'new shared link',
content: Buffer.from(fileContent).toString('base64'),
committer: {
name: 'Max Böck',
email: 'hello@mxb.dev'
}
}
const options = {
method: 'PUT',
body: JSON.stringify(payload),
headers: {
'Content-Type': 'application/vnd.github.v3+json',
Authorization: `token ${token}`
}
}
return await fetch(url, options)
}
That’s pretty much it! After the file is committed, Netlify will kick in and re-build the static site with the new content. If I have marked the “syndicate to Twitter” flag, another script will then cross-post the link there. (More on that in Static Indieweb pt1: Syndicating Content).
One caveat of this technique is mobile use: Javascript bookmarklets are not as easily available in mobile browsers, which complicates the process again.
Thankfully Aaron Gustafson recently pointed out that it’s possible to define a “Share Target” for Progressive Web Apps. That means if your site is a PWA (it probably should be), you can add an entry like this to its manifest file:
// site.webmanifest
{
...,
"share_target": {
"action": "/share/",
"method": "GET",
"enctype": "application/x-www-form-urlencoded",
"params": {
"title": "title",
"text": "text",
"url": "url"
}
}
}
That little bit of JSON registers your site as an application that can share things, just like Twitter, WhatsApp and the others. So after I “install” my PWA (read: create a shortcut link on my device home screen), it shows up as an option in the native Android “share” dialog:
Selecting the "MXB" option will grab the current page title and URL and send them as GET args to my sharing form, just like the bookmarklet would on desktop. There's still a small bug in there where the URL will be sent as the text parameter, but that can be corrected with a bit of Javascript in the form app.
I’m quite happy with how this turned out, as it feels really simple and straightforward. One step closer to IndieWeb bliss!
]]>The original concept starts to look shallow or irrelevant, and the phrases sound awkward and repetitive. It just doesn’t feel good anymore.
I don’t think that’s unique to me - in fact I guess one of the reasons why many people would rather write a series of tweets than a longform blog post is that the expectations we set for ourselves are much higher with the latter. Posts need to be “done right” - there’s a greater risk of falling into the trap of perfectionism.
I know the anxiety of sharing something with the world. I know there is a pressure to match the quality we see elsewhere on the web. But maybe we should stop trying to live up to somebody else’s standards and focus on just getting stuff out there instead. Maybe our “imperfect” things are already helpful to someone. Maybe this shouldn’t be so hard.
Don’t get me wrong, I don’t mean we should all half-ass our writing. There are a lot of details to get right, and going the extra mile certainly pays off as it makes for better articles.
However at least for me, there is a point where a post is about 80% done, and I start to get in my own head:
“Maybe this isn’t as good as I thought it would be.”
“Seems like everyone already knows that.”
“This is missing something new and exciting.”
“I’m sure somebody already covered this way better.”
This is where posts die.
They get swallowed by these voices of doubt and end up collecting dust in a drafts folder as “maybe I’ll come back and finish this later” pieces. I’ve got about six of these currently, and it seems like I’m not the only one.
So now I’m trying a new approach to avoid this:
I’ll publish something as soon as I feel confident that all the important points I want to get across are there. I try to ignore the voice screaming “it’s not ready” just for long enough to push it online. Then I share the link on Twitter.
Right away, the act of making a post public forces me to read it again. And of course the minute it’s out there I’ll immediately notice a bunch of errors I didn’t spot before. It’s a law of nature or something. That’s ok though, I can just push a couple of fixes.
Doing a thing now were I publish stuff early to avoid "It's-not-ready-anxiety" and get it out there. Fixing shit later, sorry 🙃 pic.twitter.com/MYCY382Pn9
— Max Böck (@mxbck) June 6, 2019
Then there’s the feedback aspect. Publishing might either go largely ignored, or the post might get shared or commented on.
If it’s ignored, well - no harm done. It’s out there, that’s all that matters. But if it gains traction, that’s not only a sign that the post is interesting to people, it’s also a motivation boost to get me over that last 20%.
Most importantly though, publishing early is an exercise for my brain. It’s ignoring the feelings of self-doubt. It’s forcing myself to actively decide that something is good enough.
One of the benefits of writing on your own site is that you can always go back and edit stuff later.
Maybe you have a good idea for an additional paragraph that makes a certain point clearer. Maybe you find a better way to write a piece of code. Maybe somebody points out a thing you haven’t considered yet, and you want to include it. Well, why wouldn’t you? Nobody says blog posts should be set in stone. Improve it for the next reader who comes along.
You can even retroactively change the way your post is displayed on social media. For example, if you add an open graph image to your post after publishing it, you can use Twitter’s Card Validator to scrape the URL again, and all the links people may have already shared will update.
I know of some people doing this in web development, for example in open redesigns of their personal sites. I don’t know if anyone tried it with writing yet though. Maybe it’s a bad idea.
But for now, I like it.
I don’t know why CSS sparks so many different emotions in developers, but I have a hunch as to why it can sometimes seem illogical or frustrating: You need a certain mindset to write good CSS.
Now, you probably need a mindset for coding in general, but the declarative nature of CSS makes it particularly difficult to grasp, especially if you think about it in terms of a “traditional” programming language.
Robin Rendle makes a very good point in this CSS-Tricks Newsletter where he finds that CSS lives somewhere between rigid, logical systems like Math and flexible, adaptive systems like natural languages:
Other programming languages often work in controlled environments, like servers. They expect certain conditions to be true at all times, and can therefore be understood as concrete instructions as to how a program should execute.
CSS on the other hand works in a place that can never be fully controlled, so it has to be flexible by default. It’s less about “programming the appearance” and more about translating a design into a set of rules that communicate the intent behind it. Leave enough room, and the browser will do the heavy lifting for you.
For most people who write CSS professionally, the mindset just comes naturally after a while. Many developers have that “aha!” moment when things finally start to click. It’s not just about knowing all the technical details, it’s more about a general sense of the ideas behind the language.
This is true whether you write CSS-in-JS, Sass or plain vanilla stylesheets. The output will always be CSS - so knowing these concepts will be helpful regardless of your tooling choice.
I tried to list some of them here.
This seems obvious, given that the box model is probably one of the first things people learn about CSS. But picturing each DOM element as a box is crucial to understanding why things layout the way they do. Is it inline or block level? Is it a flex item? How will it grow/shrink/wrap in different contexts?
Open your devtools and hover over elements to see the boxes they're drawing, or use a utility style like outline: 2px dotted hotpink to visualize its hidden boundaries.
The Cascade - a scary concept, I know. Say “Cascade” three times in front of a mirror and somewhere, some unrelated styling will break.
While there are legitimate reasons to avoid the cascade, it doesn’t mean that it’s all bad. In fact, when used correctly, it can make your life a lot easier.
The important part is to know which styles belong on the global scope and which are better restricted to a component. It also helps to know the defaults that are passed down, to avoid declaring unnecessary rules.
Aim to write the minimal amount of rules necessary to achieve a design. Fewer properties mean less inheritance, less restriction and less trouble with overrides down the line. Think about what your selector should essentially do, then try to express just that. There’s no point in declaring width: 100% on an element that’s already block-level. There’s no need to set position: relative if you don’t need a new stacking context.
Avoid unnecessary styles, and you avoid unintended consequences.
Some CSS features can be written in “shorthand” notation. This makes it possible to declare a bunch of related properties together. While this is handy, be aware that using the shorthand will also declare the default value for each property you don’t explicitly set. Writing background: white; will effectively result in all these properties being set:
background-color: white;
background-image: none;
background-position: 0% 0%;
background-size: auto auto;
background-repeat: repeat;
background-origin: padding-box;
background-clip: border-box;
background-attachment: scroll;
It's better to be explicit. If you want to change the background color, use background-color.
CSS deals with a large amount of unknown variables: screen size, dynamic content, device capabilities - the list goes on. If your styles are too narrow or restrictive, chances are one of these variables will trip you up. That’s why a key aspect in writing good CSS is to embrace its flexibility.
Your goal is to write a set of instructions that is comprehensive enough to describe what you want to achieve, yet flexible enough to let the browser figure out the how by itself. That’s why it’s usually best to avoid “magic numbers”.
Magic numbers are random hard values. Something like:
.thing {
width: 218px; /* why? */
}
Whenever you find yourself tapping the arrow key in your devtools, adjusting a pixel value to make something fit - that’s probably a magic number. These are rarely the solution to a CSS problem, because they restrict styles to a very specific usecase. If the constraints change, that number will be off.
Instead, think about what you actually want to achieve in that situation. Alignment? An aspect ratio? Distributing equal amounts of space? All of these have flexible solutions.
In most cases, it's better to define a rule for the intent, rather than hard-code the computed solution to it.
For many layout concepts it’s imperative to understand the relationship between elements and their container. Most components are sets of parent and child nodes. Styles applied to the parent can affect the descendants, which might make them ignore other rules. Flexbox, Grid and position:absolute are common sources of such errors.
When in doubt about a particular element behaving different than you'd want it to, look at the context it's in. Chances are something in its ancestry is affecting it.
The number one mistake made by designers and developers alike is assuming that things will always look like they do in the static mockup. I can assure you, they will not.
Strings may be longer than intended or contain special characters, images might be missing or have weird dimensions. Displays may be very narrow or extremely wide. Those are all states you need to anticipate.
Always be aware that what you see is just one UI state in a bigger spectrum. Instead of styling the thing on your screen, try to build a "blueprint" of the component. Then make sure that whatever you throw at it won't break your styling.
When you set out to turn a design mockup into code, it’s often helpful to take inventory of the different patterns included first. Analyse each screen and take note of any concept that occurs more than once. It might be something small like a typographic style, or large like a certain layout pattern. What can be abstracted? What’s unique?
Thinking of pieces in a design as standalone things makes them easier to reason about, and helps to draw the boundaries between components.
A surprisingly large part of programming in general is coming up with good names for stuff. In CSS, it helps to stick to a convention. Naming schemes like BEM or SMACSS can be very helpful; but even if you don’t use them, stick to a certain vocabulary. You’ll find that certain component patterns come up over and over - but is it called a “hero” or a “stage”? Is it “slider” or “carousel”?
Establish a routine in how you name parts of your UI, then stick to that convention in all your projects. When working in a team, it can be helpful to agree on component names early on and document them somewhere for new team members.
All these things were important for me to understand, but your personal experience as to what matters most might be different. Did you have another “aha” moment that made you understand CSS better? Let me know!
Update: I did a talk about the CSS Mindset at CSS-Minsk-JS in September. There’s also a video recording available, if you prefer that.
It’s a blast from the past: In the 90s, sites about a common topic could join together in a central index. To be a member, you had to embed a little widget on your page that contained a “forward”, a “backward”, and a “random” button. These buttons would then link to the next or previous site in the ring.
Since the term "webring" is trademarked in the US, this needs another cool name. Know any? Please add it to this thread!
To keep the ring from getting spammed or flooded with trolls, it has to be curated. The project does that by hosting the member index on Github, in a simple JSON file. Admins can accept or decline pull requests from people who want to join the ring, after reviewing their sites. There’s also a Code of Conduct that every member has to follow in order to be part of the ring.
For people who are not technical enough to submit a pull request, there’s also a simple signup form (using Netlify forms) to send the admin your site’s info via email and let them add you.
I wanted to make this as easy as possible, so people can start linking their personal sites together straight away. So I made the boilerplate using Eleventy. After forking the codebase, the proud webring admin only needs to set a title and a bit of meta data.
Eleventy then generates a site like this that lists all the members, shows the Code of Conduct and the instructions on how to join.
You can deploy it to Netlify, a free static site host, with just a few clicks. Netlify also lets you either use one of their subdomains, or a custom one you own.
Members of the ring can copy a code snippet to embed a banner on their site. I borrowed a bit from Twitter’s embed widget here: the basic markup is just a link to the index, and the prev/random/next links. But if you also include the script tag, it will replace that with a custom web component, designed by the ring admin.
<webring-banner>
<p>Member of the <a href="https://webringdemo.netlify.com">An Example Webring</a> webring</p>
<a href="https://webringdemo.netlify.com/prev">Previous</a>
<a href="https://webringdemo.netlify.com/random">Random</a>
<a href="https://webringdemo.netlify.com/next">Next</a>
</webring-banner>
<script async src="https://webringdemo.netlify.com/embed.js" charset="utf-8"></script>
This will automatically show the title, member count, maybe a logo. And it can be edited from a central location. It might look something like this:
Member of the An Example Webring webring
Previous · Random · Next
If a member publishes an RSS feed on their site, they can add that to the ring as well: the index page will generate an OPML file, so people can subscribe to all members at once.
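For the curious: embed.js essentially just defines a small custom element that fetches the ring’s data and renders the banner. A rough sketch of the idea (assuming the index site exposes its data as a webring.json file - the boilerplate’s actual code may differ):
// embed.js (sketch)
class WebringBanner extends HTMLElement {
    async connectedCallback() {
        // fetch title and member count from the ring's index site
        const response = await fetch('https://webringdemo.netlify.com/webring.json')
        const ring = await response.json()
        this.innerHTML = `
            <p>Member of the <a href="${ring.url}">${ring.title}</a> webring (${ring.members.length} sites)</p>
            <a href="${ring.url}/prev">Previous</a>
            <a href="${ring.url}/random">Random</a>
            <a href="${ring.url}/next">Next</a>
        `
    }
}
customElements.define('webring-banner', WebringBanner)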
If you want to start your own webring, go ahead! Fork the repository on Github and follow the instructions there - It’s free and doesn’t take long!
“That shit’s unsafe”, they say (I’m paraphrasing), so they attach a complicated wall-mounted seat to the inside. When the ship launches, that seat starts to pick up heavy vibrations and violently breaks apart. Foster releases her seatbelt seconds before it kills her and ultimately finds that the design was perfect all along, enjoying the rest of the ride in smooth anti-gravity.
We assume that complex problems always require complex solutions. We try to solve complexity by inventing tools and technologies to address a problem; but in the process we create another layer of complexity that, in turn, causes its own set of issues.
Obviously not every problem has a simple solution, and most complex tools exist because of real usecases. But I think there’s a lot of value in actively questioning the need for complexity. Sometimes the smarter way to build things is to try and take some pieces away, rather than add more to it.
Static sites are on the rise again now, precisely because they are simple. They don’t try to manage serverside code with clever abstractions - they don’t have any. They don’t try to prevent security breaches with advanced firewalls - they get rid of the database entirely.
Some of the most important things in the world are intentionally designed “simple”. In any system, the potential for error directly increases with its complexity - that’s why most elections still work by putting pieces of paper in a box.
Developers are obsessed with the notion of “best practice”.
It implies that there is one correct way of doing things, and all other solutions are either imperfect or, at worst, “anti-patterns”. But the definition of best practice changes every time a new technology arises, rendering the previous solution worthless garbage (even though it still gets the job done).
There is an undeniable ego factor to the way we use technology in our projects. To show everyone else how clever we are, we come up with harder and harder ways to achieve our tasks. And of course they all solve specific problems - but that does not mean they are always the best solution, regardless of context.
It’s cool to use the latest and greatest tech; but we should always ask if our choices really benefit the user, or if we do it mostly for ourselves. After all, the “Developer Experience” is only a means to an end.
And if we’re talking DX - I’ll take simplicity over features any day.
]]>The real value of social media (apart from the massive ad revenue and dystopian data mining) is in the reactions we get from other people. The likes, reposts and replies - they’re what makes it “social”. To gain control over our own content, we need to capture these interactions as well and pull them back to our sites. In indieweb terms, that’s known as “backfeed”.
A Webmention is an open standard for a reaction to something on the web. It's currently in W3C recommendation status. When you link to a website, you can send it a Webmention to notify it.
It's comparable to pingbacks, except that webmentions contain a lot more information than a simple "ping". They can be used to express likes, reposts, comments or other things.
To make a site support webmentions, it needs to declare an endpoint to accept them. That endpoint can be a script hosted on your own server, or in the case of static sites, a third-party service like webmention.io.
Webmention.io is a free service made by indieweb pioneer Aaron Parecki that does most of the groundwork of receiving, storing and organizing incoming webmentions for you. It’s awesome!
To use it, sign up for a free account there using the IndieAuth process, then include a link tag in the head of your site:
<link rel="pingback" href="https://webmention.io/mxb.dev/xmlrpc">
<link rel="webmention" href="https://webmention.io/mxb.dev/webmention">
Cool. So that’s all very nice, but the real party is still over at [currently hip social network], you say. Nobody ever sends me any webmentions.
Well, while your platform of choice is still around, you can use a tool to automatically turn social media interactions into beautiful open webmentions. Bridgy is another free service that can monitor your Twitter, Facebook or Instagram activity and send a webmention for every like, reply or repost you receive.
So if you were to publish a tweet that contains a link back to your site, and somebody writes a comment on it, Bridgy will pick that up and send it as a webmention to your endpoint!
The resulting entry on webmention.io then looks something like this:
{
"type": "entry",
"author": {
"type": "card",
"name": "Sara Soueidan",
"photo": "https://webmention.io/avatar/pbs.twimg.com/579a474c9b858845a9e64693067e12858642fa71059d542dce6285aed5e10767.jpg",
"url": "https://sarasoueidan.com"
},
"url": "https://twitter.com/SaraSoueidan/status/1022009419926839296",
"published": "2018-07-25T06:43:28+00:00",
"wm-received": "2018-07-25T07:01:17Z",
"wm-id": 537028,
"wm-source": "https://brid-gy.appspot.com/comment/twitter/mxbck/1022001729389383680/1022009419926839296",
"wm-target": "https://mxb.dev/blog/layouts-of-tomorrow/",
"content": {
"content-type": "text/plain",
"value": "This looks great!",
"text": "This looks great!"
},
"in-reply-to": "https://mxb.dev/blog/layouts-of-tomorrow/",
"wm-property": "in-reply-to",
"wm-private": false
}
The beauty of webmentions is that unlike with regular social media, reactions to your content are not limited to users of one site. You can combine comments from Facebook and Twitter with replies people posted on their own blogs. You can mix retweets and shares with mentions of your content in newsletters or forum threads.
You also have complete control over who and what is allowed in your mentions. Content silos often only allow muting or blocking on your own timeline, everyone else can still see unwanted or abusive @-replies. With webmentions, you’re free to moderate reactions however you see fit. Fuck off, Nazis!
Once the webmention endpoint is in place, we still need to pull the aggregated data down to our site and display it in a meaningful way.
The way to do this depends on your setup. Webmention.io offers an API that provides data as a JSON feed, for example. You can query mentions for a specific URL, or get everything associated with a particular domain (although the latter is only available to site owners).
My site uses Eleventy, which has a convenient way to pull in external data at build time. By providing a custom function that queries the API, Eleventy will fetch my webmentions and expose them to the templates when generating the site.
// data/webmentions.js
// Node has no global fetch here, so we pull in node-fetch
const fetch = require('node-fetch')
const API_ORIGIN = 'https://webmention.io/api/mentions.jf2'
module.exports = async function() {
const domain = 'mxb.dev'
const token = process.env.WEBMENTION_IO_TOKEN
const url = `${API_ORIGIN}?domain=${domain}&token=${token}`
try {
const response = await fetch(url)
if (response.ok) {
const feed = await response.json()
return feed
}
} catch (err) {
console.error(err)
return null
}
}
The feed can now be accessed in the {{ webmentions }} variable.
Here’s the complete function if you’re interested. Other static site generators offer similiar methods to fetch external data.
Now that the raw data is available, we can mold it into any shape we’d like. For my site, the processing steps look like this:
// filters.js
const sanitizeHTML = require('sanitize-html')
function getWebmentionsForUrl(webmentions, url) {
const allowedTypes = ['mention-of', 'in-reply-to']
const hasRequiredFields = entry => {
const { author, published, content } = entry
return author.name && published && content
}
const sanitize = entry => {
const { content } = entry
if (content['content-type'] === 'text/html') {
content.value = sanitizeHTML(content.value)
}
return entry
}
return webmentions
.filter(entry => entry['wm-target'] === url)
.filter(entry => allowedTypes.includes(entry['wm-property']))
.filter(hasRequiredFields)
.map(sanitize)
}
In Eleventy’s case, I can set that function as a custom filter to use in my post templates.
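Registering it in the Eleventy config is a one-liner (assuming filters.js exports the function):
// .eleventy.js
const { getWebmentionsForUrl } = require('./filters')

module.exports = function(eleventyConfig) {
    // make the function available as a filter in templates
    eleventyConfig.addFilter('getWebmentionsForUrl', getWebmentionsForUrl)
}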
Each post will then loop over its webmentions and output them underneath.
<!-- webmentions.njk -->
{% set mentions = webmentions | getWebmentionsForUrl(absoluteUrl) %}
<ol id="webmentions">
{% for webmention in mentions %}
<li class="webmentions__item">
{% include 'webmention.njk' %}
</li>
{% endfor %}
</ol>
You can see the result by scrolling down to the end of this post (if there are any replies 😉).
Because static sites are, well, static - it’s possible that new mentions have happened since the last build. To keep the webmention section up-to-date, there’s an extra step we can take: client side rendering.
Remember I said the webmention.io API can be used to only fetch mentions for a specific URL? That comes in handy now. After the page has loaded, we can fetch the latest mentions for the current URL and re-render the static webmention section with them.
On my site, I used Preact to do just that. It has a very small (~3kB) footprint and lets me use React’s mental model and JSX syntax. It would probably also have been possible to re-use the existing nunjucks templates, but this solution was the easiest and most lightweight for me.
I essentially used the same logic here as I did in the static build, to ensure matching results. The rendering only starts after the API call returned valid data though - if anything goes wrong or the API is unavailable, there will still be the static content as a fallback.
// webmentions/index.js
import { h, render } from 'preact'
import App from './App'
...

const rootElement = document.getElementById('webmentions')

if (rootElement) {
  fetchMentions()
    .then(data => {
      if (data.length) {
        render(<App webmentions={data} />, rootElement)
      }
    })
    .catch(err => {
      console.error(err)
    })
}
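The fetchMentions call is hidden behind the ellipsis above. A minimal sketch of what it could look like, assuming webmention.io’s per-target query parameter and its jf2 response format:

// webmentions/fetchMentions.js (sketch)
const API_URL = 'https://webmention.io/api/mentions.jf2'

export default function fetchMentions() {
  // the canonical URL of the current post, without query string or hash
  const target = `${window.location.origin}${window.location.pathname}`

  return fetch(`${API_URL}?target=${encodeURIComponent(target)}`)
    .then(response => (response.ok ? response.json() : Promise.reject(response)))
    // the jf2 feed wraps the individual mentions in a "children" array
    .then(feed => feed.children || [])
}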
I also made an Eleventy Starter Template with basic webmention support, using some of the techniques in this post. Check it out!
There are of course still some missing pieces, most notably the ability to send outgoing webmentions to URLs linked to in your own blog posts. I might have to look into that.
Remy Sharp has recently published a very useful new tool that takes care of handling outgoing webmentions for you. Webmention.app is a platform agnostic service that will check a given URL for links to other sites, discover if they support webmentions, then send a webmention to the target.
You can use that service in a number of ways, including your own command line. If you host your site on Netlify though, it’s also very straightforward to integrate it using deployment webhooks!
My implementation was heavily inspired by Aaron Gustafson’s excellent Jekyll Plugin (link below), which goes even further with customization and caching options. If you’re running a Jekyll site, use that for almost instant webmention support 👍.
However, the main reason why people publish on Twitter / Medium or other platforms is that they can reach a much bigger audience there - everyone’s on them, so you have to be too. Publishing on a personal site can cut you off from those readers. That’s why it might be a good idea to automatically post copies of your content on these sites whenever you publish something new.
This practice is known as “POSSE” (Publish on your Own Site, Syndicate Elsewhere). It enables authors to reach people on other platforms while still keeping control of the original content source.
For the recent relaunch of my personal website, I wanted to embrace some of these ideas. I included a section called notes featuring small, random pieces of content - much like tweets. These notes are perfect candidates for syndication to Twitter.
My site is built with Eleventy, a static site generator based on node, and hosted on Netlify. Static sites are awesome for a variety of reasons, but interacting with other platforms typically requires some serverside code - which they don’t have.
Luckily though, Netlify provides a service called “Functions”, which lets you write custom AWS lambda functions without the hassle of dealing with AWS directly. Perfect! 🤘
The first step is to publish a machine-readable feed of the content we want to syndicate. That’s exactly what RSS feeds are for - but they’re usually in XML format, which is not ideal in this case.
For my own site, I chose to provide notes as a simple JSON object. I already have an atom feed for content readers, and JSON makes the note processing easier later on.
My feed looks something like this:
// notes.json
[
  {
    "id": 1,
    "date": "2018-12-02T14:20:17",
    "url": "https://mxb.dev/notes/2018-12-02/",
    "content": "Here's my first note!",
    "syndicate": true
  },
  {...}
]
All entries also include a custom syndicate flag that overrides the auto-publishing behaviour if necessary.
Now for the tricky part: we need to write a lambda function to push new notes to Twitter. I won’t go into detail on how to build lambda functions on Netlify, there are already some great tutorials about this:
Be sure to also check out the netlify-lambda cli, a very handy tool to test and build your functions in development.
To trigger our custom function every time a new version of the site is successfully deployed, we just need to name it deploy-succeeded.js. Netlify will then automatically fire it after each new build, while also making sure it’s not executable from the outside.
Whenever that function is invoked, it should fetch the list of published notes from the JSON feed. It then needs to check if any new notes were published, and whether they should be syndicated to Twitter.
// deploy-succeeded.js
// lambda functions run on Node, so we need a fetch implementation here too.
const fetch = require('node-fetch')

exports.handler = async () => {
  return fetch('https://mxb.dev/notes.json')
    .then(response => response.json())
    .then(processNotes)
    .catch(err => ({
      statusCode: 422,
      body: String(err)
    }))
}
Since we will have to interact with the Twitter API, it’s a good idea to use a dedicated helper class to take some of that complexity off our hands. The twitter package on npm does just that. We will have to register for a developer account on Twitter first though, to get the necessary API keys and tokens. Store those in your project’s .env file.
TWITTER_CONSUMER_KEY=YourTwitterConsumerKeyHere
TWITTER_CONSUMER_SECRET=YourTwitterConsumerSecretStringHere
TWITTER_ACCESS_TOKEN_KEY=12345678-YourTwitterAccessTokenKeyHere
TWITTER_ACCESS_TOKEN_SECRET=YourTwitterAccessTokenSecretStringHere
Use these keys to initialize your personal Twitter client, which will handle the posting for your account.
// Configure Twitter API Client
const twitter = new Twitter({
consumer_key: process.env.TWITTER_CONSUMER_KEY,
consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
access_token_key: process.env.TWITTER_ACCESS_TOKEN_KEY,
access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET
})
Right. Now we need to look at the notes array and figure out what to do. To keep it simple, let’s assume the latest note is a new one we just pushed. Since the JSON feed lists notes in descending date order, that would be the first item in the array.
We can then search twitter for tweets containing the latest note’s URL (we will include that in every syndicated tweet to link back to the original source). If we find anything, then it’s already been published and we don’t need to do anything. If not, we’ll go ahead.
const processNotes = async notes => {
  // assume the last note was not yet syndicated
  const latestNote = notes[0]

  // check if the override flag for this note is set
  if (!latestNote.syndicate) {
    return {
      statusCode: 400,
      body: 'Latest note has disabled syndication.'
    }
  }

  // check twitter for any tweets containing note URL.
  // if there are none, publish it.
  const search = await twitter.get('search/tweets', { q: latestNote.url })
  if (search.statuses && search.statuses.length === 0) {
    return publishNote(latestNote)
  } else {
    return {
      statusCode: 400,
      body: 'Latest note was already syndicated.'
    }
  }
}
Next, we need to prepare the tweet we want to send. Since our self-published note does not have the same restrictions that Twitter has, we should format its content first.
My implementation simply strips all HTML tags from the content, makes sure it is not too long for Twitter’s limit, and includes the source URL at the end. It’s also worth noting that Eleventy will escape the output in the JSON feed, so characters like " will be encoded to &quot; entities. We need to reverse that before posting.
// Prepare the content string for tweet format
// assuming the "entities" package (or any similar HTML entity decoder)
const entities = require('entities')

const prepareStatusText = note => {
  const maxLength = 200

  // strip html tags and decode entities
  let text = note.content.trim().replace(/<[^>]+>/g, '')
  text = entities.decode(text)

  // truncate note text if it's too long for a tweet.
  if (text.length > maxLength) {
    text = text.substring(0, maxLength) + '...'
  }

  // include the note url at the end.
  text = text + ' ' + note.url

  return text
}
When everything is done, we just need to send our note off to Twitter:
// Push a new note to Twitter
const publishNote = async note => {
  const statusText = prepareStatusText(note)
  const tweet = await twitter.post('statuses/update', {
    status: statusText
  })

  if (tweet) {
    return {
      statusCode: 200,
      body: `Note ${note.date} successfully posted to Twitter.`
    }
  }
}
Hopefully that all worked, and you should end up with something like this in your timeline:
I did some housekeeping over the holidays and switched my website to @eleven_ty and @Netlify !
— Max Böck (@mxbck) January 4, 2019
👉 https://t.co/oq0OyPyjRs
🎉 You can find the finished lambda function here.
To help people stay up to date on events like Hurricane Florence while keeping battery and data usage to a minimum, news platforms like CNN and NPR provide text-only versions of their sites:
We have a text-only version of our website for anyone who needs to stay up-to-date on Hurricane Florence news and keep their battery and data usage to a minimum: https://t.co/n2UDDKmlja
— NPR (@NPR) September 14, 2018
That’s a great thing. Here’s how it looks:
Text-only sites like these are usually treated as an MVP of sorts. A slimmed-down version of the real site, specifically for emergencies.
I’d argue though that in some aspects, they are actually better than the original. Think about it: that simple NPR site gets a lot of things right.
Most importantly, it’s user friendly. People get what they came for (the news) and are able to accomplish their tasks.
The only thing missing here might be a few sensible lines of CSS to set better typography rules. Those could still be inlined in the head though, easily coming in under the 14KB limit for the first connection roundtrip.
This is the web as it was originally designed. Pure information, with zero overhead. Beautiful in a way.
The “full” NPR site in comparison takes ~114 requests and weighs close to 3MB on average. Time to first paint is around 20 seconds on slow connections. It includes ads, analytics, tracking scripts and social media widgets.
Meanwhile, the actual news content is roughly the same. The articles are identical - apart from some complementary images, they convey exactly the same information.
If the core user experience can be realized with so little, then what is all that other stuff for?
Of course the main NPR site offers a lot more than just news, it has all sorts of other features. It has live radio, podcasts, video and more. Articles are preloaded via AJAX. It’s a much richer experience - but all that comes at a price.
I recently read this great article by Alex Russell, in which he compares Javascript to CO2 - in the sense that too much of it can be harmful to the ecosystem.
Javascript enables us to do amazing things and it can really enhance the user experience, if done right. But it always has a cost. It’s the most expensive way to accomplish a task, and it’s also the most fragile. It’s easy to forget that fact when we develop things on a high-speed broadband connection, on our state-of-the-art devices.
That’s why websites built for a storm do not rely on Javascript. The benefit simply does not outweigh the cost. They rely on resilient HTML, because that’s all that is really necessary here.
That NPR site is a very useful thing that serves a purpose, and it does so in the simplest, most efficient way possible. Personally, I’d love to see more distilled experiences like this on the web.
… “Well, this might work for a news site - but not every use case is that simple,” I hear you say.
True. The web is a text-based medium, and it works best with that type of content. But the basic approach is still valid in any other scenario:
Figure out what the main thing is people want from your site and deliver it - using the simplest, least powerful technology available. That’s what “the rule of least power” tells us, and it’s still the best strategy to make a website truly resilient.
Make it withstand hurricanes.
We don’t design sites for specific screen dimensions anymore, we make them responsive. We don’t assume ideal browsers and devices, we use progressive enhancement. When it comes to connectivity though, we still treat that as a binary choice: you’re either on- or offline.
Real connections are not that simple. Depending on your location, network condition or data plan, speeds can range from painfully slow to blazingly fast. The concept of “online” can be a drastically different experience for different users, especially on mobile.
What if there was a way to adapt websites based on our users’ connections, just like we do for varying display widths and browser capabilities? The Network Information API might enable us to do so.
This API is an editor’s draft by the WICG and currently available in Chrome. It can be accessed through the read-only property navigator.connection (MDN), which exposes several properties that provide information about a user’s current connection:
connection.type: Returns the physical network type of the user agent, as strings like “cellular”, “ethernet” or “wifi”.
connection.downlink: An effective bandwidth estimate (in Mb/s), based on recently observed active connections.
connection.rtt: An estimate of the average round-trip time (in milliseconds), based on recently observed active connections.
connection.saveData: Returns true if the user has requested “reduced data mode” in their browser settings.
connection.effectiveType: A combined estimation of the network quality, based on the round-trip time and downlink properties. It returns a string that describes the connection as either slow-2g, 2g, 3g or 4g; the exact thresholds for these categories are defined in the spec.
There is also an Event Listener available on the connection property that fires whenever a change in the network quality is detected:
function onConnectionChange() {
  const { rtt, downlink, effectiveType } = navigator.connection

  console.log(`Round Trip Time: ${rtt}ms`)
  console.log(`Downlink Speed: ${downlink}Mb/s`)
  console.log(`Effective Type: ${effectiveType}`)
}

navigator.connection.addEventListener('change', onConnectionChange)
👉 Be aware that all of this is still experimental. Only Chrome and Samsung Internet browsers have currently implemented the API. It’s a very good candidate for progressive enhancement though - and support for other platforms is on the way.
So how could this be used? Knowing about connection quality enables us to custom-fit resources based on network speed and data preferences. This makes it possible to build an interface that dynamically responds to the user’s connection - a “connection-aware” frontend.
By combining the Network Information API with React, we could write a component that renders different elements for different speeds. For example, a <Media /> component in a news article might output a simple placeholder when the user is offline, a low-resolution image on slow connections, a high-resolution image on 3g, and a full video on 4g.
Here’s a (very simplified) example of how that might work:
class ConnectionAwareMedia extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      connectionType: undefined
    }
  }

  componentWillMount() {
    // check connection type before first render.
    if (navigator.connection && navigator.connection.effectiveType) {
      const connectionType = navigator.onLine
        ? navigator.connection.effectiveType
        : 'offline'

      this.setState({
        connectionType
      })
    }
  }

  render() {
    const { connectionType } = this.state
    const { imageSrc, videoSrc, alt } = this.props

    // fallback if network info API is not supported.
    if (!connectionType) {
      return <Image src={imageSrc.hires} alt={alt} />
    }

    // render different subcomponents based on network speed.
    switch (connectionType) {
      case 'offline':
        return <Placeholder caption={alt} />
      case '4g':
        return <Video src={videoSrc} />
      case '3g':
        return <Image src={imageSrc.hires} alt={alt} />
      default:
        return <Image src={imageSrc.lowres} alt={alt} />
    }
  }
}
The above example makes our component a bit unpredictable - it renders different things, even when given the same props. This makes it harder to test and maintain. To simplify it and enable reuse of our logic, moving the network condition check into a separate higher-order component might be a good idea.
Such a HoC could take in any component we want and make it connection-aware, injecting the effective connection type as a prop.
function withConnectionType(WrappedComponent, respondToChange = false) {
  return class extends React.Component {
    constructor(props) {
      super(props)
      this.state = {
        connectionType: undefined
      }
      // Basic API Support Check.
      this.hasNetworkInfoSupport = Boolean(
        navigator.connection && navigator.connection.effectiveType
      )
      this.setConnectionType = this.setConnectionType.bind(this)
    }

    componentWillMount() {
      // Check before the component first renders.
      this.setConnectionType()
    }

    componentDidMount() {
      // optional: respond to connectivity changes (only if the API is there).
      if (respondToChange && this.hasNetworkInfoSupport) {
        navigator.connection.addEventListener(
          'change',
          this.setConnectionType
        )
      }
    }

    componentWillUnmount() {
      if (respondToChange && this.hasNetworkInfoSupport) {
        navigator.connection.removeEventListener(
          'change',
          this.setConnectionType
        )
      }
    }

    getConnectionType() {
      const connection = navigator.connection

      // check if we're offline first...
      if (!navigator.onLine) {
        return 'offline'
      }
      // ...or if reduced data is preferred.
      if (connection.saveData) {
        return 'saveData'
      }

      return connection.effectiveType
    }

    setConnectionType() {
      if (this.hasNetworkInfoSupport) {
        const connectionType = this.getConnectionType()
        this.setState({
          connectionType
        })
      }
    }

    render() {
      // inject the prop into our component.
      // default to "undefined" if API is not supported.
      return (
        <WrappedComponent
          connectionType={this.state.connectionType}
          {...this.props}
        />
      )
    }
  }
}
// Now we can reuse the function to enhance all kinds of components.
const ConnectionAwareMedia = withConnectionType(Media)
👉 This small proof-of-concept is also available on CodePen, if you want to play around.
which one of the two possible websites are you currently designing? pic.twitter.com/ZD0uRGTqqm
— Jon Gold (@jongold) February 2, 2016
It mocks the fact that a lot of today’s websites look the same, as they all follow the same standard layout practices that we’ve collectively decided to use. Building a blog? Main column, widget sidebar. A marketing site? Big hero image, three teaser boxes (it has to be three).
When we look back at what the web was like in earlier days, I think there’s room for a lot more creativity in web design today.
Grid is the first real tool for layout on the web. Everything we had up until now, from tables to floats to absolute positioning to flexbox - was meant to solve a different problem, and we found ways to use and abuse it for layout purposes.
The point of these new tools is not to build the same things again with different underlying technology. It has a lot more potential: It could re-shape the way we think about layout and enable us to do entirely new, different things on the web.
Now I know it’s hard to get into a fresh mindset when you’ve been building stuff a certain way for a long time. We’re trained to think about websites as header, content and footer. Stripes and boxes.
But to keep our industry moving forward (and our jobs interesting), it’s a good idea to take a step back once in a while and rethink how we do things.
If we didn’t, we’d still be building stuff with spacer gifs and all-uppercase <TABLE> tags. 😉
I went over to Dribbble in search of layout ideas that are pushing the envelope a bit. The kind of design that would make frontend developers like me frown at first sight.
There’s a lot of great work out there - here’s a few of my favorites:
I especially like that last one. It reminds me a bit of the “Metro Tiles” that were all the rage in Windows 8. Not only is this visually impressive, it’s very flexible too - I could see this working on a phone, a tablet, even on huge TV screens or in augmented reality, as suggested by the designer.
How hard is it to make something like this, given the tools we have today? I wanted to find out and started building a prototype.
I tried to approach this with real production constraints in mind. So the interface had to be responsive, performant and accessible. (It’s not required to be pixel-perfect everywhere though, cause you know - that’s not a real thing.)
Here’s how it turned out:
You can check out the final result on Codepen.
Since this is just for demo purposes, I did not include fallbacks and polyfills for older browsers. My goal was to test the capabilities of modern CSS here, so not all features have cross-browser support (read below). I found that it works best in recent versions of Firefox or Chrome.
Some of the things that made this interesting:
Unsurprisingly, the essential factor for the “Metro Tiles” is the grid. The entire layout logic fits inside this block:
.boxgrid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
  grid-auto-rows: minmax(150px, auto);
  grid-gap: 2rem .5rem;

  &__item {
    display: flex;

    &--wide {
      grid-column: span 2;
    }

    &--push {
      grid-column: span 2;
      padding-left: 50%;
    }
  }
}
The magic is mostly in the second line there. repeat(auto-fit, minmax(150px, 1fr)) handles the column creation responsively, meaning it will fit as many boxes as possible in a row to make sure they align with the outer edges.
The --push modifier class is used to achieve the design’s effect where some boxes “skip” a column. Since this is not easily possible without explicitly setting the grid lines, I opted for this trick: the actual grid cell spans two columns, but only allows enough space for the box to fill half the cell.
The original design shows that the section backgrounds and the tile grid move at different speeds, creating the illusion of depth. Nothing extraordinary, just some good old parallax.
While this effect is usually achieved by hooking into the scroll event and then applying different transform styles via Javascript, there’s a better way to do it: entirely in CSS.
The secret here is to leverage CSS 3D transforms to separate the layers along the z-axis. This technique by Scott Kellum and Keith Clark essentially works by using perspective on the scroll container and translateZ on the parallax children:
.parallax-container {
  height: 100%;
  overflow-x: hidden;
  overflow-y: scroll;

  /* set a 3D perspective and origin */
  perspective: 1px;
  perspective-origin: 0 0;
}

.parallax-child {
  transform-origin: 0 0;
  /* move the children to a layer in the background, */
  /* then scale them back up to their original size */
  transform: translateZ(-2px) scale(3);
}
A huge benefit of this method is the improved performance (because it doesn’t touch the DOM with calculated styles), resulting in fewer repaints and an almost 60fps smooth parallax scroll.
CSS Scroll Snap Points are a somewhat experimental feature, but I thought they would fit in nicely with this design. Basically, you can tell the browser to “snap” the scroll position to certain elements in the document when it comes into the proximity of such a point. Support is quite limited at the moment; your best bet to see this working is in Firefox or Safari.
There are currently different versions of the spec, and only Safari supports the most recent implementation. Firefox still uses an older syntax. The combined approach looks like this:
.scroll-container {
  /* current spec / Safari */
  scroll-snap-type: y proximity;

  /* old spec / Firefox */
  scroll-snap-destination: 0% 100%;
  scroll-snap-points-y: repeat(100%);
}

.snap-to-element {
  scroll-snap-align: start;
}
The scroll-snap-type tells the scroll container to snap along the y axis (vertical) with a “strictness” of proximity. This lets the browser decide whether a snap point is in reach, and if it’s a good time to jump.
Snap points are a small enhancement for capable browsers, all others simply fall back to default scrolling.
The only Javascript involved is handling the smooth scroll when the menu items on the left, or the direction arrows on top/bottom are clicked. This is progressively enhanced from a simple in-page-anchor link <a href="#vienna"> that jumps to the selected section.
To animate it, I chose to use the vanilla Element.scrollIntoView() method (MDN Docs). Some browsers accept an option to use “smooth” scrolling behaviour here, instead of jumping to the target section right away.
The scroll behaviour property is currently a Working Draft, so support is not quite there yet. Only Chrome and Firefox support it at the moment - however, there is a polyfill available if necessary.
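Putting those pieces together, the enhancement could be sketched roughly like this - feature-detecting smooth scroll support and otherwise falling back to the default anchor jump:

// (sketch) enhance in-page anchor links with smooth scrolling where supported
const anchors = document.querySelectorAll('a[href^="#"]')

Array.from(anchors).forEach(link => {
  link.addEventListener('click', event => {
    const target = document.querySelector(link.getAttribute('href'))
    if (!target) return

    // only take over if the browser understands smooth scroll behaviour
    if ('scrollBehavior' in document.documentElement.style) {
      event.preventDefault()
      target.scrollIntoView({ behavior: 'smooth' })
    }
  })
})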
While this is just one interpretation of what’s possible, I’m sure there are countless other innovative ideas that could be realized using the tools we have today. Design trends may come and go as they always have; but I truly think it’s worth remembering that the web is a fluid medium. Technologies are constantly changing, so why should our layouts stay the same? Go out there and explore.
Encapsulating pieces of UI this way makes it easier to compose larger systems, but it also hides the “bare bones” structure of an application. That’s not a bad thing - but I feel like it’s one of the reasons why people learning frontend today can get a distorted understanding of web development.
If you’re writing code for a browser, you’re writing HTML. That’s why it’s important to know your semantics, and choose the correct element for the task at hand.
Here are a few tips on how that can be done.
The most important rule when you’re trying to decide which element to use is: don’t rely on the visual appearance. Everything can be made to look like anything else.
Instead, choose elements based on behaviour and meaning. As a quick check, you can try to disable all CSS in your devtools and take a look at the page in the browser’s default style. Does it still make sense?
Here’s a little quiz: Imagine you’re building an App for we rate dogs™ that provides a searchable database of dog pics. What element would you use to build the <Tag /> component seen here?

In this case, clicking the tags leads you to another page, so they’re links. Easy.
OK, how about now?
Here, the tags are choices a user can make to select multiple values from a predefined set of options. So the underlying element is actually an <input type="checkbox">. The clickable part is the input label, and the actual checkbox is hidden with CSS.
It might be tempting to use the same <Tag> component for both situations. Have it render a neutral <span> and pass it an onClick function via props to handle the different behaviour.
But not only would that strip the component of its semantics, we would miss out on all the things the browser just does for free when we use the correct tag.
One of the strengths of components is their reusability, and the ability to configure them through props. So why not use that to our advantage?
By using the props supplied to our component, we can conditionally decide which HTML element to render. For example a <Tag href={url} /> could result in a link, while <Tag value={id} /> might render an input. The visual appearance could be the same in both cases, but the context alters the semantic meaning.
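As a rough sketch of that idea - the prop and class names here are made up for illustration - such a component could decide what to render like this:

// Tag.js (sketch)
import React from 'react'

const Tag = ({ href, value, name, children }) => {
  // an href means navigation, so render a real link
  if (href) {
    return <a className="tag" href={href}>{children}</a>
  }
  // a value means selection, so render a checkbox wrapped in its label
  return (
    <label className="tag">
      <input className="tag__input" type="checkbox" name={name} value={value} />
      {children}
    </label>
  )
}

export default Tag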
Most of the time, the element you’re looking for to trigger an arbitrary action is a <button>. Whenever you find yourself attaching an onClick handler to a <div>, think twice. Is that really the best choice?
The same goes for “empty” links that do not change the URL in some way: If you find something like this in your code:
<a href="#" onClick={someFunction}>
Make it a button instead.
If the thing you’re building isn’t supposed to look like a button, again - don’t rely on visual appearance here. “Clickable divs” are often only used because they come with no default styles of their own - but removing those from a button can be achieved in 3 lines of CSS.
By the way, it’s also perfectly valid for a button to wrap a larger block of content. It is not limited to just text or icons.
Say our dog rating app has an API. You request some data and it gives you something like this:
[
  {
    id: '216df16ca8b1',
    title: 'Shiba Inu',
    image: '/assets/img/shibainu.jpg',
    description: 'Such shibe, very dog. wow',
    url: '/doggos/shiba-inu'
  },
  {
    id: '5ea3621cf16',
    title: 'Alaskan Malamute',
    image: '/assets/img/malamute.jpg',
    description: 'The Malamutes floof is very sof.',
    url: '/doggos/alaskan-malmute'
  },
  {...}
]
Now your job is to transform that data into a card UI.
Most of the time when you want to map() an array of items to a JSX structure, the semantically correct way to do it is a list. Depending on the type of data, this could be an <ol> if the order is important (for example in a comments thread). If not, go with <ul>.
Here’s a useful pattern:
const CardList = ({ items, title }) => (
  <ul className="cardlist" aria-label={title}>
    {items.map(item => (
      <li key={item.id} className="cardlist__item">
        <Card {...item} />
      </li>
    ))}
  </ul>
)

<CardList items={doggos} title="Todays Doggos" />
Why is this better than simply returning an array of <Card />s?
By splitting this into two dedicated components, you can separate layout from content.
The container could be styled as a list, grid, or a slider - and it can dynamically change columns on different breakpoints. The Card component doesn’t have to care about its context. It can just be dropped into any wrapper and adopt its width.
Screenreaders will announce this as “Todays doggos, list, 5 items” or similar.
As of React v16, you can use <React.Fragment> (or the shorthand <>...</> if you feel like a 1337 hacker). This lets you return multiple sibling DOM nodes without having to wrap them in an unnecessary div. The Fragment does not render to anything tangible in HTML, and you don’t have to pass unique key properties to the elements it contains.
It’s awesome - use it.
return (
  <React.Fragment>
    <h1>Multiple Siblings without a wrapper!</h1>
    <MyComponent />
  </React.Fragment>
)
Not only is grid worth checking out, it’s also ready to be used in production, today. You know - on the real web.
So, what can we build with this? I’ve used grid on several projects now, and I found that it really makes building layouts a lot easier. I’ve put together a small demo here to show possible applications of CSS grids and how to make them work cross-browser.
👉 Only after the code? You can find the full demo on Codepen.
We’re going to build a pretty common layout for the backend of an application, where admins or editors can manage their content:
By looking at the design above, we can already imagine the underlying grid. Unlike “regular” websites, these admin screens often have a lot of fixed UI elements that span the entire viewport, and only the main content area is scrollable.
Defining the basic layout is pretty straightforward - we just need to set our rows and columns. Basically, the interface consists of four parts: a header, a navigation sidebar, the main content area and a footer.
$admin-header-height: 70px;
$admin-footer-height: 70px;
$admin-nav-width: 250px;

.admin {
  display: grid;
  height: 100vh;
  grid-template-rows: $admin-header-height 1fr $admin-footer-height;
  grid-template-columns: $admin-nav-width 1fr;
  grid-template-areas: "header header"
                       "nav    main"
                       "footer footer";
}
We can define the heights and widths using the grid-template-rows and grid-template-columns properties. The 1fr (= one fraction) in there is similar to flex-grow: it tells the browser to distribute any leftover space to that track, so the main content area takes up all the available room.
Finally, grid-template-areas is just a convenience rule that lets us give the parts of our grid more readable names. After doing that, we can assign all grid items to their position on the grid.
.header {
  grid-area: header;
}
.navigation {
  grid-area: nav;
}
// ...you get the idea.
Remember: The visual placement should generally follow the source order, to keep the document accessible.
We can nest another grid inside our main content area to display the dashboard. This will be a separate grid instance though, not connected to the main layout. (Sidenote: connected grids or “subgrids” are not yet possible, but the spec for it is already in development, and subgrids are likely to land with Grid Level 2).
Here’s a common design pattern where different statistics and widgets are displayed in a card grid:

This time, rather than explicitly defining our rows and columns, we’ll leave that open. We’ll just tell the browser how many columns we want, and to space them out evenly. When more items are placed on the grid, the container can just generate additional grid tracks on the fly. This “implicit” grid will accommodate any amount of content we may want to display.
💡 Pro Tip: By using a CSS custom property for the column count, we can easily switch from a 2-col to a 4-col grid on larger screens.
.dashboard {
  --column-count: 2;

  display: grid;
  grid-template-columns: repeat(var(--column-count), 1fr);
  grid-gap: 2rem;

  &__item {
    // per default, an item spans two columns.
    grid-column-end: span 2;

    // smaller items only span one column.
    &--half {
      grid-column-end: span 1;
    }

    // full-width items span the entire row.
    // the numbers here refer to the first and last grid lines.
    &--full {
      grid-column: 1 / -1;
    }
  }

  @media screen and (min-width: 48rem) {
    --column-count: 4;
  }
}
Yes, yes I know, we need to support IE11. We need to support older mobile browsers. That’s why we can’t have nice things.
Fortunately, it’s possible to build a Flexbox fallback and progressively enhance from there! The layout remains usable, and more capable browsers get all that grid goodness 👌.
We don’t even need a media query here, as the grid properties will simply override all flexbox definitions if they’re supported. If not, the browser will ignore them.
.admin {
  // define flexbox fallback first.
  display: flex;
  flex-wrap: wrap;

  // then add the grid definition.
  display: grid;
  ...

  &__header,
  &__footer {
    flex-basis: 100%;
  }

  &__nav {
    flex-basis: $admin-nav-width;
  }

  &__main {
    flex: 1;
  }
}
For the dashboard card grid fallback, things are slightly more complex. We have to account for the missing grid-gap property, so we’ll have to fake the spacing with margins and paddings:
.dashboard {
  display: flex;
  flex-wrap: wrap;

  // offset the outer gutter with a negative margin.
  margin: 0 -1rem;

  &__item {
    flex: 1 1 50%;
    // this will add up to a 2rem gap between items.
    padding: 1rem;
  }
}
Since these fallback gaps will mess with our layout if we do have grid support, we need a small reset to restore the original grid. Detecting support can be done using the @supports rule:
@supports (display: grid) {
  .dashboard {
    margin: 0;
  }
  .dashboard__item {
    padding: 0;
  }
}
👉 Check out the full demo on Codepen!
Designing loading states on the web is often overlooked or dismissed as an afterthought. Performance is not only a developer's responsibility - building an experience that works with slow connections can be a design challenge as well.
While developers need to pay attention to things like minification or caching, designers have to think about how the UI will look and behave while it is in a “loading” or “offline” state.
Perceived performance is a measure of how fast something feels to the user. The idea is that users are more patient and will think of a system as faster, if they know what’s going on and can anticipate content before it’s actually there. It’s a lot about managing expectations, and keeping the user informed.
For a web app, this concept might include displaying “mockups” of text, images or other content elements - called skeleton screens 💀. You can find these in the wild, used by companies like Facebook, Google, Slack and others:
Say you are building a web app. It’s a travel-advice kind of thing where people can share their trips and recommend places, so your main piece of content might look something like this:

You can take that card and reduce it down to its basic visual shapes, the skeleton of the UI component.

Whenever someone requests new content from the server, you can immediately start showing the skeleton, while data is being loaded in the background. Once the content is ready, simply swap the skeleton for the actual card. This can be done with plain vanilla Javascript, or using a library like React.
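A minimal vanilla Javascript sketch of that swap could look like this - the endpoint and card markup are made up for illustration:

// (sketch) fetch the data, then replace the skeleton with real content
const card = document.querySelector('.card')
card.classList.add('skeleton')

fetch('/api/trips/123')
  .then(response => response.json())
  .then(trip => {
    card.innerHTML = `
      <img class="card__image" src="${trip.image}" alt="">
      <h2 class="card__title">${trip.title}</h2>
      <p class="card__text">${trip.description}</p>
    `
    // removing the class hides the skeleton shapes again
    card.classList.remove('skeleton')
  })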
Now you could use an image to display the skeleton, but that would introduce an additional request and data overhead. We’re already loading stuff here, so it’s not a great idea to wait for another image to load first. Plus it’s not responsive, and if we ever decided to adjust some of the content card’s styling, we would have to duplicate the changes to the skeleton image so they’d match again. 😒 Meh.
A better solution is to create the whole thing with just CSS. No extra requests, minimal overhead, not even any additional markup. And we can build it in a way that makes changing the design later much easier.
First, we need to draw the basic shapes that will make up the card skeleton. We can do this by adding different gradients to the background-image property. By default, linear gradients run from top to bottom, with different color stop transitions. If we just define one color stop and leave the rest transparent, we can draw shapes.
Keep in mind that multiple background-images are stacked on top of each other here, so the order is important. The last gradient definition will be in the back, the first at the front.
.skeleton {
  background-repeat: no-repeat;
  background-image:
    /* layer 2: avatar */
    /* white circle with 16px radius */
    radial-gradient(circle 16px, white 99%, transparent 0),
    /* layer 1: title */
    /* white rectangle with 40px height */
    linear-gradient(white 40px, transparent 0),
    /* layer 0: card bg */
    /* gray rectangle that covers whole element */
    linear-gradient(gray 100%, transparent 0);
}
These shapes stretch to fill the entire space, just like regular block-level elements. If we want to change that, we’ll have to define explicit dimensions for them. The value pairs in background-size set the width and height of each layer, keeping the same order we used in background-image:
.skeleton {
  background-size:
    32px 32px,  /* avatar */
    200px 40px, /* title */
    100% 100%;  /* card bg */
}
The last step is to position the elements on the card. This works just like position:absolute, with values representing the left and top property. We can for example simulate a padding of 24px for the avatar and title, to match the look of the real content card.
.skeleton {
  background-position:
    24px 24px,  /* avatar */
    24px 200px, /* title */
    0 0;        /* card bg */
}
This works well in a simple example - but if we want to build something just a little more complex, the CSS quickly gets messy and very hard to read. If another developer was handed that code, they would have no idea where all those magic numbers are coming from. Maintaining it would surely suck.
Thankfully, we can now use custom CSS properties to write the skeleton styles in a much more concise, developer-friendly way - and even take the relationship between different values into account:
.skeleton {
  /*
    define as separate properties
  */
  --card-height: 340px;
  --card-padding: 24px;
  --card-skeleton: linear-gradient(gray var(--card-height), transparent 0);

  --title-height: 32px;
  --title-width: 200px;
  --title-position: var(--card-padding) 180px;
  --title-skeleton: linear-gradient(white var(--title-height), transparent 0);

  --avatar-size: 32px;
  --avatar-position: var(--card-padding) var(--card-padding);
  --avatar-skeleton: radial-gradient(
    circle calc(var(--avatar-size) / 2),
    white 99%,
    transparent 0
  );

  /*
    now we can break the background up
    into individual shapes
  */
  background-image:
    var(--avatar-skeleton),
    var(--title-skeleton),
    var(--card-skeleton);

  background-size:
    var(--avatar-size),
    var(--title-width) var(--title-height),
    100% 100%;

  background-position:
    var(--avatar-position),
    var(--title-position),
    0 0;
}
Not only is this a lot more readable, it’s also way easier to change some of the values later on.
Plus we can use some of the variables (think --avatar-size, --card-padding, etc.) to define the styles for the actual card and always keep it in sync with the skeleton version.
Adding a media query to adjust parts of the skeleton at different breakpoints is now also quite simple:
@media screen and (min-width: 47em) {
  :root {
    --card-padding: 32px;
    --card-height: 360px;
  }
}
Caveat: Browser support for custom properties is good, but not at 100%. Basically all modern browsers have support, with IE/Edge a bit late to the party. For this specific use case, it would be easy to add a fallback using Sass variables though.
To make this even better, we can animate our skeleton and make it look more like a loading indicator. All we need to do is put a new gradient on the top layer and then animate its position with @keyframes.
Here’s a full example of how the finished skeleton card could look:
Skeleton Loading Card by Max Böck (@mxbck) on CodePen.
💡 Pro Tip: You can use the :empty selector and a pseudo element to draw the skeleton, so it only applies to empty card elements. Once the content is injected, the skeleton screen will automatically disappear.
For a closer look at designing for perceived performance, check out these links:
TL;DR: Here’s the CodePen Demo of this post.
With the introduction of Service Workers, developers are now able to supply experiences on the web that will work even without an internet connection. While it’s relatively easy to cache static resources, things like forms that require server interaction are harder to optimize. It is possible to provide a somewhat useful offline fallback though.
First, we have to set up a new class for our offline-friendly forms. We’ll save a few properties of the <form> element and then attach a function to fire on submit:
class OfflineForm {
  // setup the instance.
  constructor(form) {
    this.id = form.id;
    this.action = form.action;
    this.data = {};

    form.addEventListener('submit', e => this.handleSubmit(e));
  }
}
In the submit handler, we can include a simple connectivity check using the navigator.onLine property. Browser support for it is great across the board, and it’s trivial to implement.
⚠️ There is however a possibility of false positives with it, as the property can only detect if the client is connected to a network, not if there’s actual internet access. A false value on the other hand can be trusted to mean “offline” with relative certainty. So it’s best to check for that, instead of the other way around.
If a user is currently offline, we’ll hold off submitting the form for now and instead store the data locally.
handleSubmit(e) {
  e.preventDefault();

  // parse form inputs into data object.
  this.getFormData();

  if (!navigator.onLine) {
    // user is offline, store data on device.
    this.storeData();
  } else {
    // user is online, send data via ajax.
    this.sendData();
  }
}
There are a few different options for storing arbitrary data on the user’s device. Depending on your data, you could use sessionStorage if you don’t want the local copy to persist beyond the current browser session. For our example, let’s go with localStorage.
We can timestamp the form data, put it into a new object and then save it using localStorage.setItem. This method takes two arguments: a key (the form id) and a value (the JSON string of our data).
storeData() {
  // check if localStorage is available.
  if (typeof Storage !== 'undefined') {
    const entry = {
      time: new Date().getTime(),
      data: this.data,
    };
    // save data as JSON string.
    localStorage.setItem(this.id, JSON.stringify(entry));
    return true;
  }
  return false;
}
Hint: You can check the storage in Chrome’s devtools under the “Application” tab. If everything went as planned, you should see something like this:
It’s also a good idea to inform the user of what happened, so they know that their data wasn’t just lost.
We could extend the handleSubmit function to display some kind of feedback message.
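A possible sketch of that extension, assuming the form markup contains a message element like <p class="form__message" aria-live="polite">:

// (sketch) a feedback helper on the OfflineForm class
showMessage(text) {
  const message = document.querySelector(`#${this.id} .form__message`);
  if (message) {
    message.textContent = text;
  }
}

// ...and in handleSubmit, report what happened to the stored data:
handleSubmit(e) {
  e.preventDefault();
  this.getFormData();

  if (!navigator.onLine) {
    const saved = this.storeData();
    this.showMessage(saved
      ? 'Looks like you\'re offline - your input was saved and will be sent once you\'re back online.'
      : 'Looks like you\'re offline, and your input could not be saved.');
  } else {
    this.sendData();
  }
}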
Once the user comes back online, we want to check if there’s any stored submissions. We can listen to the online event to catch connection changes, and to the load event in case the page is refreshed:
constructor(form) {
  ...
  window.addEventListener('online', () => this.checkStorage());
  window.addEventListener('load', () => this.checkStorage());
}
When these events fire, we’ll simply look for an entry in the storage matching our form’s id. Depending on what type of data the form represents, we can also add an “expiry date” check that will only allow submissions below a certain age. This might be useful if we only want to optimize for temporary connectivity problems, and prevent users from accidentally submitting data they entered two months ago.
checkStorage() {
  if (typeof Storage !== 'undefined') {
    // check if we have saved data in localStorage.
    const item = localStorage.getItem(this.id);
    const entry = item && JSON.parse(item);

    if (entry) {
      // discard submissions older than one day. (optional)
      const now = new Date().getTime();
      const day = 24 * 60 * 60 * 1000;
      if (now - day > entry.time) {
        localStorage.removeItem(this.id);
        return;
      }

      // we have valid form data, try to submit it.
      this.data = entry.data;
      this.sendData();
    }
  }
}
The last step would be to remove the data from localStorage once we have successfully sent it, to avoid multiple submissions. Assuming an ajax form, we can do this as soon as we get a successful response back from the server. We can simply use the storage object’s removeItem() method here.
sendData() {
  // send ajax request to server
  axios.post(this.action, this.data)
    .then((response) => {
      if (response.status === 200) {
        // remove stored data on success
        localStorage.removeItem(this.id);
      }
    })
    .catch((error) => {
      console.warn(error);
    });
}
If you don’t want to use ajax to send your form submission, another solution would be to repopulate the form fields with the stored data and then call form.submit(), or have the user press the button themselves.
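A rough sketch of that approach, assuming this.data maps input names to their stored values:

// (sketch) repopulate the form instead of sending an ajax request
restoreFields() {
  const form = document.getElementById(this.id);

  Object.keys(this.data).forEach((name) => {
    const field = form.elements[name];
    if (field) {
      field.value = this.data[name];
    }
  });

  // either submit right away, or let the user press the button themselves.
  // form.submit();
}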
☝️ Note: I’ve omitted some other parts like form validation and security tokens in this demo to keep it short; obviously these would have to be implemented in a real, production-ready version. Dealing with sensitive data is another issue here, as you should not store things like passwords or credit card data unencrypted on the device.
If you’re interested, check out the full example on CodePen:
Offline Form by Max Böck on CodePen.
A truly responsive website should adapt to all kinds of situations. Besides different viewport sizes, there are other factors to consider. A change in connectivity is one of them.
Earlier this week, I was sitting in a train on my way to speak at a local meetup. InterCity trains in Austria all have WIFI now, so I was doing some last-minute work on my slides online. Train WIFI being what it is though, the network wasn’t exactly reliable. The connection kept dropping every time we went through a tunnel or too many passengers were logged on.
This is quite a common scenario. People are on the move, network coverage can be poor, internet connections fail. Luckily, we can prepare our websites for this and make them more resilient by building them offline-first.
Offline support is awesome - however, your users might not be aware of these capabilities, and they shouldn’t have to be. In some cases they might not even know that they’ve gone offline. That’s why it’s important to communicate what’s going on.
Chances are not every part of your site will work offline. Certain things may not be cached, others may require server interaction. This is fine of course, but the interface should reflect that. Just like a responsive layout adapts to changes in viewport size, your offline-optimized site should adapt to changes in connectivity.
The key ingredients here are the offline event and the navigator.onLine property. By combining them, we can check for network changes and react accordingly.
Here’s an example of a simple connectivity check:
let isOffline = false;

window.addEventListener('load', checkConnectivity);

// when the page has finished loading,
// listen for future changes in connection
function checkConnectivity() {
  updateStatus();
  window.addEventListener('online', updateStatus);
  window.addEventListener('offline', updateStatus);
}

// check if we're online, set a class on <html> if not
function updateStatus() {
  if (typeof navigator.onLine !== 'undefined') {
    isOffline = !navigator.onLine;
    document.documentElement.classList.toggle('is-offline', isOffline);
    ...
  }
}
⚠️ Note: With the online event, there’s a slight possibility of false positives: A user might be connected to a network (which is interpreted as being online), but something higher up might block actual internet access. The offline event is a bit more reliable, in the sense that an “offline” user can be expected NOT to have access.
Now we want to display some kind of notification to offline users, so they know what’s going on. This can be done in a number of ways; however I would recommend using aria-live regions to make it accessible and have screen readers announce the connection change as well.
Using such a notification bar is pretty straightforward. First, define an element to display messages on your page:
<!-- notification container -->
<div
  class="notification"
  id="notification"
  aria-live="assertive"
  aria-relevant="text"
  hidden
></div>
The aria-live attribute tells screen readers to announce changes to this element. “assertive” means it will interrupt whatever it is currently announcing at the time and prioritize the new message. The aria-relevant tells it to listen for changes in the text content of the element.
You can extend the handler function from before to populate the notification area whenever you detect that a user has gone offline:
function updateStatus() {
  ...
  const notification = document.querySelector('#notification');

  if (isOffline) {
    notification.textContent = 'You appear to be offline right now.';
    notification.removeAttribute('hidden');
  } else {
    notification.textContent = '';
    notification.setAttribute('hidden', '');
  }
}
This is a very simple implementation - you can of course always get a bit fancier with an animated notification bar (or “toast message”). There are also some nice pre-made components for this.
If you’re reading this on my site, you can see a version of these notifications in action if you simply switch off your WIFI for a second.
Go ahead, I’ll wait.
If you’re somewhere else or your browser doesn’t support service worker / offline events, here’s how this could look:
Notifications are a good start, but it would be even nicer if we could give the user some visual indication of which parts they can actually use offline, and which not.
To do this, we can loop over all the links on page load and check their href against the cache. If they point to a cached resource (i.e. one that will work offline), they get a special class.
const links = document.querySelectorAll('a[href]');

Array.from(links).forEach((link) => {
  caches.match(link.href, { ignoreSearch: true }).then((response) => {
    if (response) {
      link.classList.add('is-cached');
    }
  });
});
Once the offline event fires, we toggle a class on the body and visually disable all links that aren’t cached. This should only apply to URLs, so we can ignore tel:, mailto: and anchor links.
.is-offline {
  /* disable all links to uncached pages */
  a:not(.is-cached) {
    cursor: not-allowed;
    pointer-events: none;
    opacity: .5;
  }

  /* ignore anchors, email and phone links */
  a[href^="#"],
  a[href^="mailto"],
  a[href^="tel"] {
    cursor: auto;
    pointer-events: auto;
    opacity: 1;
  }
}
Another way we might use this is to prevent users from filling out forms. Most forms pass data to the server and require a connection to work, so they won’t be very useful when offline.
What’s worse is that users might not know there is a problem until it’s too late: imagine filling out a lengthy form and finally hitting the submit button, only to find a network connection error page and all your inputs gone. That’s frustrating.
/* Disable Forms when offline */
.is-offline form {
  position: relative;
  opacity: .65;
  cursor: not-allowed;
  pointer-events: none;

  &::after {
    content: 'Sorry, you\'re offline.';
    display: block;
    position: absolute;
    top: 50%;
    left: 50%;
    transform: translate(-50%, -50%);
    color: #FFFFFF;
    background-color: #2D2D2D;
    padding: 1rem 2rem;
  }
}
That effectively disables every form on the page, indicating that this functionality is currently not available. Depending on what your form does, you might also consider applying these styles just to the submit button - that way a user could pre-fill the form (possibly even have it validated in JS), and then submit it once they come back online.
If you’re doing this, remember to suppress “submit on enter” as well, and make sure the user knows why submitting won’t work at the moment.
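One possible way to sketch that, reusing the notification element from before:

// (sketch) block form submissions while offline and explain why
const forms = document.querySelectorAll('form');

Array.from(forms).forEach((form) => {
  form.addEventListener('submit', (e) => {
    if (!navigator.onLine) {
      // this also catches "submit on enter"
      e.preventDefault();

      const notification = document.querySelector('#notification');
      notification.textContent = 'You appear to be offline - the form will work again once you reconnect.';
      notification.removeAttribute('hidden');
    }
  });
});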
UPDATE: I found a better way to handle this - by storing form submissions in localStorage and then checking for them once the connection comes back online. Read about it in “Offline-Friendly Forms”.
Turning a basic website into a PWA is not that hard and has a lot of real benefits, so I want to take a look at the three main steps necessary to achieve just that.
But first, let me address some common misconceptions:
1) Your thing does not have to be an “Application” to be a PWA.
A Progressive Web App can easily be a blog, a marketing site, a shop or a collection of cat memes. At its core, a PWA is just a way to optimize your code for better, faster delivery. You can - and should - take advantage of these new possibilities, regardless of your content.
Side note: the term “Application” in PWA is heavily debated, since some people feel it communicates the wrong idea. IMHO, it’s just a name - and these days it’s hard to define the difference between websites and “web apps” anyway.
2) Your thing does not have to be a Javascript-powered single page app.
Again, if you’re not running a cutting edge React-Redux SPA, that’s no reason to shy away from using this technology. My own site is just a bunch of static HTML based on Jekyll, and it’s still a perfectly valid Progressive Web App. If you run something on the web, it can benefit.
3) PWAs are not specifically made for Google or Android.
The beauty of it is that PWAs offer the best of both worlds - deep linking and URLs from the www, offline access, push notifications and more from native apps - while still staying completely platform-independent. No app stores, no separate iOS / Android codebases, just the web.
4) PWAs are ready and safe to use today.
Yup, the “P” stands for progressive, meaning everything about it can be viewed as an extra layer of enhancement. If an older browser does not support it, it will not break; it just falls back to the default: a regular website.
Turning your website into a PWA offers some serious advantages:
Even if you don’t expect your users to “install” your PWA (e.g. place a shortcut on their home screen),
there is still a lot to be gained by making the switch. In fact, all of the steps necessary to make a PWA will actively improve your website and are widely considered best practice.
A manifest is just a JSON file that describes all the meta data of your PWA. Things like the name, language and icon of your app go in there. This information will tell browsers how to display your app when it’s saved as a shortcut. It looks something like this:
{
  "lang": "en",
  "dir": "ltr",
  "name": "This is my awesome PWA",
  "short_name": "myPWA",
  "icons": [
    {
      "src": "/assets/images/touch/android-chrome-192x192.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ],
  "theme_color": "#1a1a1a",
  "background_color": "#1a1a1a",
  "start_url": "/",
  "display": "standalone",
  "orientation": "natural"
}
This is usually called “manifest.json”, and linked to from the <head> of your site:
<link rel="manifest" href="manifest.json">
Tip: You don't have to write that file yourself. There are different icon sizes for different systems, and getting everything right can be tedious. Instead, just make one 500x500 image of your app icon (probably your logo), and head over to Real Favicon Generator. They render all common sizes, provide meta tags and generate a manifest file for you. It's awesome.
Progressive Web Apps need to be served over a secure connection, so the HTTPS protocol is the way to go. HTTPS encrypts the data users send to your server and prevents intruders from tampering with their connection. Google also heavily favors sites on HTTPS these days and ranks them higher than non-secure competitors.
To switch to HTTPS, you will need an SSL certificate from a trusted authority. How to get them depends on your hosting situation, but generally there are two common ways to do it:
👉 If you operate your own server or have root access to one, check out LetsEncrypt. It’s a free, open and straightforward certificate authority that allows anyone to start using HTTPS. It’s quite easy to set up and is just as trusted as other authorities.
👉 If you’re on shared hosting, a lot of providers unfortunately won’t allow you the level of control you need to use LetsEncrypt. Instead, they usually offer SSL certificates for a monthly or annual fee. If you’re unsure how to get a cert, contact your hosting provider.
After you obtained a certificate, there might be some adjustments you need to make to your code so that all resources are fetched on a secure line. For more information about the process, read this detailed guide from keyCDN or check out Chris Coyier’s article if you want to migrate a WordPress site.
If everything goes as planned, you’ll be rewarded with a nice green lock icon next to your URL.
This is where the magic happens. A Service Worker is essentially a piece of Javascript that acts as a middleman between browser and host. It automatically installs itself in supported browsers, can intercept requests made to your site, and respond to them in different ways.
You can set up a new SW by simply creating a Javascript file at the root directory of your project. Let’s call it sw.js. The contents of that file depend on what you want to achieve - we’ll get to that in a second.
To let the browser know we intend to use this file as a Service Worker, we need to register it first. In your site’s main script, include a function like this:
function registerServiceWorker() {
// register sw script in supporting browsers
if ('serviceWorker' in navigator) {
navigator.serviceWorker.register('sw.js', { scope: '/' }).then(() => {
console.log('Service Worker registered successfully.');
}).catch(error => {
console.log('Service Worker registration failed:', error);
});
}
}
The scope parameter defines which requests the SW should be able to intercept. It’s a relative path to the domain root. For example, if you were to set this to /articles, you could control requests to yourdomain.com/articles/my-post but not to yourdomain.com/contact.
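For example, if you only wanted the worker to handle a blog section of your site, the registration could be narrowed down like this (just a sketch to illustrate the scope option - the path is made up):
// only intercept requests under /articles/ (hypothetical section)
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js', { scope: '/articles/' });
}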
There are a number of cool things you can do with Service Workers. One of them is the ability to cache your content, store it locally, and thus make it available when the user is offline. Even if they are online, this has a huge impact on page loading time, since requests can bypass the network completely and assets are instantly available.
Unlike with traditional browser caching, you can define a list of resources to cache when the worker is installed - so a user does not have to navigate to a page first for it to be cached. Here’s how that might look:
// sw.js
self.addEventListener('install', e => {
e.waitUntil(
// after the service worker is installed,
// open a new cache
caches.open('my-pwa-cache').then(cache => {
// add all URLs of resources we want to cache
return cache.addAll([
'/',
'/index.html',
'/about.html',
'/images/doggo.jpg',
'/styles/main.min.css',
'/scripts/main.min.js',
]);
})
);
});
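To actually serve those files from the cache, the worker also needs a fetch handler. Here’s a minimal cache-first sketch of that part (the general pattern, not tied to a specific site):
// sw.js
self.addEventListener('fetch', e => {
  e.respondWith(
    // answer from the cache if we have a match,
    // otherwise fall back to the network
    caches.match(e.request).then(cached => cached || fetch(e.request))
  );
});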
Tip: If you want to get started with offline-first quickly, I'd highly recommend using sw-precache. This is a tool made by the folks at Google that integrates with your existing Gulp or Grunt build process to generate the service worker file for you.
You can simply pass it a list of files and it will automatically track all changes, and keep your Service Worker cache up to date. Because sw-precache integrates into your site’s build process, you can use wildcards to precache all of the resources that match a specific pattern, like so:
import gulp from 'gulp';
import path from 'path';
import swPrecache from 'sw-precache';
const rootDir = 'dist'; // the folder your site is built into - adjust to your setup
gulp.task('generate-service-worker', callback => {
swPrecache.write(path.join(rootDir, 'sw.js'), {
staticFileGlobs: [
// track and cache all files that match this pattern
rootDir + '/**/*.{js,html,css,png,jpg,gif}',
],
stripPrefix: rootDir
}, callback);
});
Run this task in your build, and you’ll never have to worry about cache invalidation again!
For smaller, mostly static sites, you can have it precache every image, HTML, JavaScript, and CSS file. For sites with lots of dynamic content, or many large images that aren’t always needed, precaching a “skeleton” subset of your site often makes the most sense.
PS: For a deeper look into the subject of offline support, be sure to check out “The Offline Cookbook” by Jake Archibald.
The Chrome Lighthouse Extension is a testing tool to check Progressive Web Apps for their Performance, Accessibility and compliance with the PWA spec.
It tests your site in different viewports and network speeds, measures time to first paint and other performance factors, and gives valuable advice for areas that still need improvement. It’s a really good benchmark for websites in general.
You can either install the Lighthouse extension in the Chrome Web Store or use Chrome Canary, where it is included in the Devtools’ Audit tab by default.
Hopefully that gave you a quick overview on how to get started with PWAs. If you want to dive deeper, here are some good places to learn more:
I built this product slider as part of a wine shop I was working on in 2015, and since it's also featured in a case study here on my site, I had a couple of people asking me how the animation was done.
Well, it’s really quite simple – so here’s a quick rundown on how to make the bottles dance. You can see the actual live thing in action on one of the product pages here. Grab some Grüner Veltliner while you’re at it.
Markup is pretty straightforward, just your standard slider structure. A parent div and an ul with some list items. The real production version obviously has a little bit more going on, what with that fancy ratings popover and all. But for now, this should do the job:
<div class="slider">
<ul class="slider__content">
<li class="slider__item">
<a href="link/to/product">
<img src="image_of_bottle.jpg" alt="" />
</a>
</li>
<li class="slider__item">...</li>
<li class="slider__item">...</li>
</ul>
</div>
To make this into a slider widget, you will need some CSS and a bit of Javascript. I used a jQuery plugin called Flexslider for this one, and I like it a lot. But almost any other slider would work too. The only important part for this effect is a callback function that gets triggered before each sliding transition.
Flexslider does exactly that with its before method. You pass it the $slider variable (the parent element), and then apply a class to it that later controls the animation state. After the animation has finished, we need to remove that class again. My wiggle lasts about a second, so I put in a setTimeout for that duration (plus a little more for good measure).
$slider.flexslider({
//animation: 'slide',
//selector: '.slider__item',
//animationLoop: false,
//slideshow: false,
before: function($slider){
$slider.addClass('slider--shaking');
window.setTimeout(function(){
$slider.removeClass('slider--shaking');
}, 1200);
}
});
Next up is the actual CSS keyframe animation that makes the bottles swing from side to side. Mine looks like this:
@keyframes wiggle {
25% { transform: rotate3d(0, 0, 1, 6deg) }
50% { transform: rotate3d(0, 0, 1, -4deg) }
75% { transform: rotate3d(0, 0, 1, 2deg) }
100% { transform: rotate3d(0, 0, 1, 0deg) }
}
We tilt the items first right, then left, then right again, losing momentum in each turn to simulate the inertia a real bottle would have.
The rotate3d is there to force hardware acceleration, which makes for smoother animation performance. Also, be sure to include vendor prefixes for the transform - or, if you’re lazy like me, let Autoprefixer do that for you.
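If your build runs on Gulp, wiring that up might look roughly like this (a sketch assuming the gulp-autoprefixer plugin and a hypothetical styles folder):
const gulp = require('gulp');
const autoprefixer = require('gulp-autoprefixer');

gulp.task('styles', () => {
  return gulp.src('styles/**/*.css')   // hypothetical source folder
    .pipe(autoprefixer())              // adds vendor prefixes based on your browser targets
    .pipe(gulp.dest('dist/styles'));
});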
The last step is to apply the keyframe animation to your slider every time it gets triggered.
Two things are important here:
Define the transform-origin for each object. This will be the fixed point that anchors the animation; it corresponds to the center of gravity in the real world. For my bottles, that would be center bottom.
💡 PRO TIP: To make it seem more realistic, apply a little delay to every other bottle, so they don’t all wiggle in unison. A small offset in timing does the trick.
.slider--shaking .slider__item {
/* disable hover effects while transitioning */
pointer-events: none;
/* set up the wiggle animation */
animation-name: wiggle;
animation-duration: 1s;
animation-fill-mode: both;
/* set the 'fixed point' of the animation */
transform-origin: bottom center;
}
.slider--shaking .slider__item:nth-child(2n) {
/* slightly offset every other item */
animation-delay: .1s;
}
Aaand you’re done! Not much to it, but makes for a nice touch and a cool thing to show off. 🍾
]]>I started doing some pen-and-paper mockups and some concepts in Sketch, but the project details weren’t clearly defined yet, and things would change very frequently. I had to redo a lot of components, or modify them to reflect changes I’d made somewhere else. It didn’t feel efficient.
Essentially, I was just drawing pictures of an interface. A pixel canvas simply wasn’t the right medium for this.
So I decided to design in the browser and make a clickable dummy that I could use to rapidly prototype the UI. I wanted a way to try new directions and change stuff quickly, without having to do the same tasks over and over again.
I opted for simple static HTML.
Since the end product was going to be built in React, I thought about how to best get into a workflow that matched a component-based architecture, and design elements accordingly right from the start. This approach also had some other benefits that I discovered while refining my setup:
Sound good so far? Cool.
So how can we best go about doing this?
To build our static prototype, first we need a good templating language. My tool of choice here is Nunjucks, a powerful engine built by Mozilla. It integrates nicely with node-based build setups and is crazy extensible. But you could just as easily do this with Liquid, Handlebars, or the like. The only important thing to remember is that your choice of templating language shouldn’t impose a particular structure on you and should be flexible enough to handle anything you throw at it.
Most of these work in a very similar way: You define templates that contain “blocks”, which are dedicated areas in the markup that can then be extended by other templates, or populated with content.
The folder structure in my setup has three main parts:
📂 1) layout contains the basic templates. There is usually a base template that just holds the outermost html elements like <head> and <body> and loads the CSS and Javascript. You can then extend this base template to create other, more complex reusable layouts.
<!-- base.html (simplified) -->
<html>
<head>
<title>My Template</title>
</head>
<body>
{% block content %}{% endblock %}
</body>
</html>
See that {% block %} thing? That’s where you can inject other templates to get more refined:
<!-- layout-2col.html -->
{% extends "base.html" %}
{% block content %}
<div class="container">
<div class="row">
<div class="col-md-9">
{% block main %}{% endblock %}
</div>
<div class="col-md-3">
{% block sidebar %}{% endblock %}
</div>
</div>
</div>
{% endblock %}
📂 2) components is the folder for all the building blocks of your application. Basically anything that can be isolated and reused goes in here. This can be stuff like headers, menus, posts, user avatars … you get the idea. Files should be self-contained and named like the root class of the component.
<!-- post.html -->
<article class="post">
<h2 class="post__title">{{ post.title }}</h2>
<div class="post__excerpt">{{ post.content | truncate(50) }}</div>
</article>
The BEM naming scheme really comes in handy here, because you can properly namespace your components to avoid conflicts with other ones. It’s also good practice to have a separate SCSS partial for every component (_post.scss, _avatar.scss…).
Include your new component in other templates with {% include "post.html" %}.
You can of course also have things like loops and if statements, and pass data to your components:
<!-- variable {{post}} will be available inside post.html -->
{% for post in data %}
{% include "post.html" %}
{% endfor %}
📂 3) views is where all the different sub-pages of your app are defined. This could be stuff like index, detail or settings.
The templating system will look at the files in this folder and generate a matching HTML document for each of them, looking up all its dependencies (components and layouts) recursively.
The view files should ideally only arrange different components, and have very little to no markup of their own, to keep everything nice and DRY.
Designers (myself included) sometimes tend to make things “too pretty” to produce nice-looking mockups for the client.
In the real world however, things don’t always work that way. People will have long names with non-english characters. People will upload low-resolution or no images. People will break your carefully balanced typography rules.
And that’s OK - a good design should anticipate such problems and be flexible enough to handle them. By using more realistic data right from the start, it’s easier to think about these things.
Here’s where static HTML prototypes shine. One of their big benefits is the ability to easily incorporate any kind of mockup data into the UI. This means you can design your application with “real life” content in mind.
Mockup data generators like Mockaroo give you a simple interface to quickly produce demo data in any structure you like. Say you needed some sample users for your app:
Mockaroo lets you define your data as a collection of fields, and it has a field type for almost anything you can think of. You can generate text, images, bitcoin addresses - you name it. It can also give you a predefined percentage of random blank fields.
When you’re done, save your schema (in case it changes later), and download the mock data as a JSON file.
Finally, plug that into your prototyping setup like so:
//tasks/nunjucks.js
var demoUsers = require('../app/data/DEMO_USERS.json');
...
gulp.task('nunjucks', function(){
return gulp.src('app/views/**/*.html')
//this makes the data available to the templating engine
.pipe(data(function(){
return {
users: demoUsers
}
}))
.pipe(nunjucks())
.pipe(gulp.dest('dist'));
});
Whenever your data structure changes, just update the JSON. Your demo users are now available inside all components like this:
<!-- user.html -->
{% set user = users | random %}
<span class="user">
<img class="user__avatar" src="{{ user.image }}" />
<span class="user__name">{{ user.first_name }}</span>
</span>
When the time comes to move things over to the final development environment, it’s fairly simple to convert your components from static HTML to React. You can see by the variables contained in a file which props a component needs to receive. In many cases, you can simply copy-paste the HTML into a render() function as JSX. (Be sure to replace instances of class with className though).
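As a rough illustration, the post.html component from earlier might end up looking something like this in React (a sketch - the prop names and the truncation logic are assumptions):
// Post.jsx
import React from 'react';

class Post extends React.Component {
  render() {
    const { post } = this.props;
    return (
      <article className="post">
        <h2 className="post__title">{post.title}</h2>
        {/* the truncate filter from Nunjucks becomes plain string handling */}
        <div className="post__excerpt">{post.content.slice(0, 50)}</div>
      </article>
    );
  }
}

export default Post;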
👉🏾 In most React setups, it’s possible to colocate styles with their corresponding component, and have them in their own folder. I think it’s more convenient that way. By scoping styles strictly to their own partial, reusing the .scss files from your static prototype is also very straightforward.
I made a custom boilerplate based on Gulp using this approach (plus a few other goodies). It’s available on Github, feel free to use/extend it any way you want.
]]>I started making websites when I was still in school - I used to do fun little sites for local bands, events and other things. At some point I decided to do it professionally, registered my business and had my first real clients.
I’ve learned a lot since then, and I still do now. At the beginning of 2017, I did some thinking about where I wanted things to go for me.
I can still remember what it was like to build my first website. I had absolutely no clue how to do stuff, it was all trial and error. But going back and forth between blogs, tutorials and stack overflow, watching other people work, shamelessly copying bits and pieces - I improved.
The fact that I can just hit view source on any website and see how it’s made still amazes me.
Although I have a degree in web development now, I can honestly say that I learnt most of what I do by soaking up information available on the open web.
This is only made possible by lots of talented people who not only produce great work, but dedicate their time and energy to show others how to do it, too. I don’t know any other profession with such an open exchange of knowledge.
People from around the world actually work together on open-source projects, just to build something that others can use. Top developers in the field will share their latest findings publicly in carefully crafted tutorials and code examples on Github.
Think about how amazing that is - an entire industry where you can learn every last secret of the trade for free - all you need is dedication and an internet connection.
Twitter is great because the creator of JavaScript will help you with JavaScript within 7 minutes pic.twitter.com/3XrQZtTF7E
— Wes Bos (@wesbos) January 9, 2017
Coincidentally, this is also the only way people can keep up with all the new developments being made in this fast-paced industry. If we didn’t share, we’d stop moving.
I want to keep moving.
Oddly enough, I’ve always felt that writing or speaking about the web was more difficult for me than actually coding or designing things. It just doesn’t flow that well for me. However, I want to make an effort to change that.
👉 In 2017, I want to…
Sometimes it can be difficult to think of something worth sharing. There’s always a level of self-doubt involved.
"Somebody else has probably already written a better version of this anyway. Besides, you don't even really know what you're doing."
It’s easy to find reasons not to do it in the first place. But no matter what your skillset is, sharing your progress is always helpful. Even if you’re just starting out, a lot of beginners might be looking for exactly that first-steps perspective, where you can see questions that more experienced authors might not even consider.
But it’s not just about giving back to the community. A much more selfish motivation (at least for me) is that by teaching others, you become a better developer yourself.
"If you can't explain something simply, you don't understand it well enough."
I think teaching / talking / writing about stuff forces you to organize your thoughts.
You need to research the whys behind any given topic to be able to explain it to somebody else - and that, in turn, improves your own understanding as well.
I’ve come to find that I learn best when I can apply new techniques to an actual real-life project. That’s why I try to include something new in everything I work on, to get myself out of the comfort zone.
Finding good, interesting work isn’t always easy though. This is still my job and I’ve got bills to pay, so there’s a business side to it. But at the same time, doing the same “standard” projects over and over again won’t push me forward, and is ultimately not why I’m in this career.
👉 In 2017, I want to…
I’ve seen a lot of “technology fatigue” posts in the last year. And I get it - things are moving so damn fast that developers are tired of having to learn a shiny new framework every two months. People are annoyed that simple tasks can require 14 different tools now. They are also worried about being left behind.
The thing is though - underneath it all, the ingredients never change. It’s always HTML, CSS and Javascript. The fundamental building blocks are the same, they’re just expressed differently.
Part of the beauty of web standards is that they never truly break. I could look at that first website I made back then in a modern browser today, and it would still work. I could still access all the content (although it would be laid out in <TABLE>s and cluttered with janky GIFs).
A good knowledge of the fundamentals also makes it a lot easier to learn the new hot stuff, because that’s what’s actually under the hood.
So if there’s any safe horse to bet on in terms of learning web technology, it’s the basics.
👉 That’s why in 2017, I want to…
As with any type of resolution, accomplishing these goals will take some effort. I hope that by putting them up here, I’ll feel a little more motivation to actually follow through.
I have some changes coming up this year, and I’m excited to see where it will take me!
]]>I do this almost every year - not (only) because of my neverending quest to optimize the shit out of it, but because it’s a great way for me to try new things I want to learn on a “real life” project.
Although it’s a fairly simple site - basically just a small portfolio section, a blog and a contact form - it’s still a good exercise to see how different modern workflows can come into play.
So here is the way I did it in 2017. This might get a bit lengthy and technical, but hang in there.
TL;DR:
All source files are available on Github, if you’re interested.
While previous versions of this site were all built on WordPress, this year I finally decided to switch to a static site generator, Jekyll.
Jekyll blogs are typically run by developers or other tech-savvy people, as they require a bit of knowledge about tooling and setup, and posts are usually written in Markdown. It’s definitely harder to get started than with a 1-minute WordPress install, but it’s worth it:
Static files are faster, safer and more resilient than a database-driven site.
Plus using any sort of CMS always restricts you to doing things a certain way - and I wanted full control over the barebones HTML.
The out-of-the box Jekyll setup includes a development server and support for SCSS preprocessing.
However, I wanted a little more - so my first step was to build a custom boilerplate with Jekyll and Gulp to do the heavy lifting.
Design-wise, I’ve always been a fan of minimalism - so it’s no surprise that this year’s iteration turned out to look very clean and reduced again.
Content precedes design. Design in the absence of content is not design, it's decoration.
— Jeffrey Zeldman (@zeldman) March 5, 2008
Staying true to this premise, I focused more on the content, on good typography and readability; and I think it turned out nice. While I don’t think that every site should look as “boring” as this one - because I really enjoy the crazy creative things others come up with - for me, it was a good fit.
I read a lot about accessibility last year, most recently the highly recommended “Inclusive Design Patterns” by Heydon Pickering. It gave me some very valuable practical advice on the subject.
The biggest takeaway for me was to not treat accessibility as an add-on to further improve an existing website, but to design websites inclusively right from the start - trying to think of all the use cases outside of your own bubble first, to make a site everyone can use.
I believe that a good website should be able to handle almost any scenario you can throw at it, and still manage to provide content in a usable form. So for the relaunch, I wanted to incorporate this knowledge and push for a really flexible, accommodating design.
A few of those features include:
I have to admit that I had become a bit lazy with jQuery. Relying too much on the framework to do basic tasks made me dependent on it, and using jQuery for everything adds unnecessary bloat.
So as part of my ongoing effort to really get better at Javascript, I wrote everything in plain vanilla ES6 this time.
I found a few select micro-libraries to handle things like lazy loading or basic ajax requests, making sure to include just the absolute minimum. All of them were available via npm install and can be consumed as modules, first thing in the main file:
import FontFaceObserver from 'fontfaceobserver'; //font loading
import Blazy from 'blazy'; // lazy images
import NanoAjax from 'nanoajax'; //ajax
import Util from './lib/util'; //custom helpers
For some of the stuff that is usually provided by jQuery, I created a separate Utils.js file to import. Things like serializing form data or getting the parent DOM node of an element can easily be recreated as simple functions, and then called like Util.serialize().
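For illustration, such a helper module might look roughly like this (a sketch - the exact function names and helpers are assumptions):
// lib/util.js
const Util = {
  // turn a form's fields into a URL-encoded string (simplified version)
  serialize(form) {
    return Array.from(form.elements)
      .filter(el => el.name && !el.disabled)
      .map(el => `${encodeURIComponent(el.name)}=${encodeURIComponent(el.value)}`)
      .join('&');
  },

  // walk up the DOM to the closest parent matching a selector
  getParent(el, selector) {
    return el.parentElement ? el.parentElement.closest(selector) : null;
  }
};

export default Util;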
I also made an effort not to use a third-party plugin for the contact form, but to build as much as possible from scratch.
The final minified and gzipped JS file weighs in at just 8.4KB, quite small compared to the hefty 68KB I had before. Feels good 😎.
Since the site basically works fine without Javascript too, there’s really no reason to have it block the rendering. The webpack-generated main JS file bundle.js can therefore be deferred quite easily.
For a few resources that are not related to the function of the site itself, I’ve taken it a step further still. I used my favourite deferring snippet to make sure stuff like analytics, polyfills or the Twitter API are loaded last, when the site is already done and rendered.
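The general idea is to inject those scripts only after the window load event has fired - roughly like this (a generic sketch, not necessarily the snippet referenced above; the file names are made up):
// load non-critical scripts once everything else is done
function loadDeferred(sources) {
  sources.forEach(src => {
    const script = document.createElement('script');
    script.src = src;
    document.body.appendChild(script);
  });
}

window.addEventListener('load', () => {
  loadDeferred(['/scripts/analytics.js', '/scripts/polyfills.js']);
});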
Taking a page out of Github’s playbook, I used system fonts for the body copy (and emoji) 🍺.
They look great, support all languages and fit in nicely with the rest of the device UI. And best of all, they’re available without a single network request.
Here’s the full body font stack:
$body-font: -apple-system, BlinkMacSystemFont, "Segoe UI", "Roboto",
"Oxygen", "Ubuntu", "Cantarell", "Fira Sans", "Droid Sans",
"Helvetica Neue", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol";
Titles are set in Playfair Display, a font available on Google Fonts. I chose to self-host it, and provide fonts only in woff and woff2 formats. That’s not totally bulletproof, but more future-friendly.
And besides, no font-face syntax will ever be bulletproof, nor should it be.
For @font-face loading, I used Bram Stein’s FontFaceObserver to make sure users wouldn’t see a FOIT on page load. Titles fall back to the similar-looking Times New Roman until the font is ready.
A couple of additional tweaks here:
<link rel="preload" href="playfair-display-regular.woff2" as="font" type="font/woff2" crossorigin>
Once the font is cached, we can use it straight away. There’s no reliable way to detect this though - I settled on a cookie-like solution (I say “like”, because it actually uses session storage). The length of a browser session is a reasonable timespan to assume for a valid cache, so I set a flag when the font is first loaded.
The whole fontface observer code looks like this:
function loadFonts(){
if(sessionStorage.getItem('fontsLoaded')){
return;
}
const playfairObserver = new FontFaceObserver('Playfair Display', {});
playfairObserver.load().then(() => {
document.documentElement.classList.add('fonts-loaded');
sessionStorage.setItem('fontsLoaded', true);
}, () => {
document.documentElement.classList.remove('fonts-loaded');
});
}
Speed was a major factor in the relaunch process. It’s not that my blog is particularly heavy or gets that much traffic, but I’m super interested in performance optimization and wanted to see how far I could take things.
I tested using Webpagetest, Google PageSpeed and Lighthouse, looking especially at three metrics:
A good method to improve page performance is to try and render the initial view within the first network roundtrip (roughly the first 14KB of the response). To achieve this, a subset of the full CSS is inlined in the page head.
Determining exactly which styles are necessary to render the page at a given viewport is a little tricky and would be quite cumbersome, if one were to do it manually. Thankfully, the smart people of the internet (namely Google’s Addy Osmani) have developed a tool for that:
Critical is a node module with a gulp-friendly streaming API. It takes in a set of pages and a viewport width/height, then looks at these pages, extracts the styles needed to render them at that viewport, and injects those into the <head>.
Configuration is very simple:
const gulp = require('gulp');
const critical = require('critical');
const config = {
inline: true,
base: '_site',
minify: true,
width: 1280,
height: 800,
ignore: ['@font-face']
};
gulp.task('critical', () => {
return gulp.src('index.html')
.pipe(critical.stream(config))
.pipe(gulp.dest('_site'));
});
The task inserts the extracted styles in a <style> tag and includes a tiny script to load the full CSS after the page is done rendering.
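In spirit, that script does something like this (a simplified sketch in plain Javascript - the actual snippet Critical injects is a bit more elaborate, and the file name is just an example):
// the critical styles are already inline in the <head>;
// the full stylesheet is only requested once the page has finished loading
window.addEventListener('load', () => {
  const link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = '/styles/main.min.css';
  document.head.appendChild(link);
});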
To establish offline support, there’s no way around a Service Worker.
A Service Worker is essentially a piece of Javascript that sits between the client and the network,
to intercept requests and deliver assets, even without an internet connection. It’s a powerful thing.
A few requirements for this to work though:
There are different methods of letting SW handle network requests, you can find a great outline in The Offline Cookbook, written by Jake Archibald.
On my site, I opted for a pretty simple approach. Since it’s all static files, I can pre-cache the most important assets and top-level pages in a Service Worker to drastically reduce the amount of data and network requests necessary. When a client first hits the site, the SW installs itself and caches a list of resources with it, after the page has loaded.
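Roughly, that registration step looks like this (a sketch of the pattern - the real worker file also lists the assets to pre-cache):
// register the worker only after the page itself has finished loading,
// so pre-caching doesn't compete with the initial render
window.addEventListener('load', () => {
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/sw.js');
  }
});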
Now, once it is active, the SW can intercept all requests to files in its cache, providing them instantly on the next call.
Here’s what happens when the client navigates to another page:
Everything important is instantly available. Even when offline, these assets can be accessed.
Managing the Service Worker of course also means keeping track of what has changed, to replace deprecated assets with new versions.
To make this easy, I used a tool provided by the Chrome team called sw-precache.
It can be integrated in the build process to check for changes every time the site is deployed.
When it finds something has changed, it generates a new sw.js Service Worker file, which will replace the old one as soon as no one’s looking. You can simply pass it a set of files to watch, and never have to worry about cache invalidation again.
Here’s how it turned out. I’m pretty happy with it!
Alright, that’s it! Let’s see how much of this still holds up in 2018. See you then!
]]>Although I’m currently not looking for a (regular) job, I thought it would be interesting to try and answer as many as possible and see where I stand.
I think when you first start learning about web development, you cover all these basic principles, but once you get better and move on to more advanced problems, you don’t really think about them anymore. So it can’t hurt to revise that stuff every once in a while, right?
There are questions on HTML, CSS, JavaScript and general programming knowledge.
It was an interesting exercise because it got me thinking a lot about these fundamentals. If you’re interested, I published my answers on github.
Of course, some of these questions depend strongly on the context, and some are deliberately phrased to provoke a discussion. It’s a lot of questions, and I didn’t want to write a novel there (since, again, this was just for my own curiosity).
So my answers are far from perfect and in some cases barely scratch the surface of a topic. But hey, I’m not on trial here, so calm down 😉
There are some very talented people out there dedicated to poster art. Some of my personal heroes, for example, include DKNG Studios, Olly Moss and Kevin Tong. Go check em out if you have time, they’re all brilliant.
Here are a few of my own works that I made over the years:
There’s different types of sound for different tasks - for example, I like to do creative work with calm, relaxed acoustic stuff. On the other hand, some late-night coding sessions are best fueled by something with a little more drive - I like the Prodigy’s “The Fat of the Land”.
A number of options exist to provide music while working, the easiest being your own private MP3 collection. Everyone has one of those, but if you’re like me and spend a lot of time in front of the computer, your best playlists will sound dull after the 43rd rerun.
I recently switched to Spotify, which provides me with an endless stream of new songs and artists. Major drawbacks are the advertisements in between, and that it’s hard to find some of the more obscure tracks. I do like some of the predefined “mood” playlists for working though; there are quite a few of them called “Focus” or similar.
Another option is to let go of music and melody completely and switch to atmospheric sounds. There are a couple of good resources for that; my newest discovery is a free OS X app called Noiz.io. It runs on your Mac and lets you create your own ambient background sound mix. Choose from coffee house, light thunderstorms, a crackling fireplace - or maybe rolling waves at a beach? It’s really quite nice.
Another good ambient noise generator is defonic.com, a website with even more sources for you to choose from. You can mix your own custom background symphony.
]]>Most of these pictures were shot by my girlfriend, Tina.