Mo’ Pixels Mo’ Problems

Mobile devices are shipping with higher and higher PPI, and desktops and laptops are following the trend as well. There’s no avoiding it: High-pixel-density, or “Retina,” displays are now becoming mainstream—and, as you’d expect, our websites are beginning to look a little fuzzy in their backlit glory. But before we go off in the knee-jerk direction of supersizing all our sites, we must first identify the problems ahead and figure out the most responsible way forward—keeping our users in mind first and foremost.

The big problem: gigantic images

In an effort to stay ahead of the curve, many of us have begun the process of making our website designs “@2x,” quietly hoping @3x never becomes a thing. While a @2x image sounds like it would only be twice the number of kilobytes, it’s actually around three to four times larger. As you can see in my @1x vs. @2x demonstration, the end result is that photos or highly detailed compositions easily start bringing these numbers into the megabytes.
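
To see why, run the arithmetic: doubling both dimensions quadruples the pixel count. A 600×400 image holds 240,000 pixels, while its @2x counterpart at 1200×800 holds 960,000. Compression claws a little of that back, which is how the file lands at three to four times the size instead of a full four. (The dimensions here are illustrative, not taken from the demo.)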

“Why is this a problem?” I hear you ask. “Shouldn’t the web be beautiful?” Most definitely. Making the web a better place is probably why we all got into this business. The problem lies in our assumption of bandwidth.

In the wealthiest parts of the world, we treat access to high-speed broadband as a civil right, but for lots of people on this planet, narrow or pay-per-gigabyte bandwidth are real things. “Because it looks pretty” is not a good enough reason to send a 1MB image over 3G—or, god forbid, something like the EDGE network.

Even in our high-bandwidth paradises, you don’t have to look far for examples of constrained bandwidth. A visitor to your website might be using a high-PPI tablet or phone from the comfort of her couch, or from the middle of the Arizona desert. Likewise, those brand-new Retina MacBook Pros could be connected to the internet via Google Fiber, or tethered to a 3G hotspot in an airport. We must be careful about our assumptions regarding pixels and bandwidth.

Failed paths: JavaScript

I’ll just use JavaScript.

Everyone ever

JavaScript has solved a lot of our past problems, so it’s human nature to beseech her to save us again. However, most solutions fall short and end up penalizing users with what is commonly referred to as the “double download.”

Mat Marquis explained this, but it’s worth reiterating: in their quest to make the web faster, browsers have begun prefetching all the images in a document before JavaScript has access and can make any changes.

Because of this, solutions where high-resolution capabilities are detected and a new image source is injected actually cause the browser to fetch two images, forcing high-resolution users to wait for both sets of images to download. This double download may not seem overly penalizing for a single image, but imagine scaling it to a photo gallery with 100 images per page. Ouch.
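
To make the penalty concrete, here is a minimal sketch of the source-injection approach (the data-hires attribute is invented for illustration, not taken from any particular library):

<img src="small-1.jpg" data-hires="small-2.jpg" alt="Cat Dancing">
<script>
// By the time this runs, the prefetcher has already requested
// small-1.jpg, so high-PPI users pay for both downloads.
if (window.devicePixelRatio > 1) {
  var imgs = document.querySelectorAll("img[data-hires]");
  for (var i = 0; i < imgs.length; i++) {
    imgs[i].src = imgs[i].getAttribute("data-hires");
  }
}
</script>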

Other attempts exist, such as bandwidth detection, cookie setting, server-side detection, or a mixture of all three. As much as I’d like robots to solve my problems, these solutions have a higher barrier to entry for your average web developer. The major pain point with all of them is that they introduce server/cookie dependencies, which have been historically troublesome.

We need a purely front-end solution to high resolution images.

Sound familiar? That’s because high-resolution images and responsive images actually come back to the same root problem: How do we serve different images to different devices and contexts using the same HTML tag?

The solution: good ol’ fashioned progressive enhancement

Those of us involved in CSS and Web Standards groups are well acquainted with the concept of progressive enhancement. It’s important we stick to our collective guns on this. Pixels, whether in terms of device real estate or device density, should be treated as an enhancement or feature that some browsers have and others do not. Build a strong baseline of support, then optimize as necessary. In fact, learning how to properly construct a progressively enhanced website can save you (and your clients) lots of time down the line.

Here are the rules of the road that my colleagues at Paravel and I have been following as we navigate this tangled web of high-density images:

  • Use CSS and web type whenever possible
  • Use SVG and icon fonts whenever applicable
  • Picturefill raster graphics

Let’s talk a bit about each.

CSS and web fonts

CSS3 allows us to replicate richer visual effects in the browser with very little effort, and the explosion of high-quality web fonts allows us to build sites on a basis of rich typography instead of image collages. With our current CSS capabilities, good reasons to rely on giant raster graphics for visual impact are becoming few and far between.
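
For example, a button that once demanded a sliced background image can be drawn entirely in CSS and stays crisp at any density (a hand-rolled sketch, not taken from any of our demos):

.button {
  padding: 0.5em 1.5em;
  color: #fff;
  border-radius: 6px;                            /* rounded corners, no image */
  background: linear-gradient(#4a90d9, #2a70b9); /* gradient, no image */
  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.4);      /* drop shadow, no image */
}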

So the old rule remains true: If you can accomplish your design with HTML/CSS, do it. If you can’t accomplish your design with CSS, then perhaps the first question you need to ask yourself is, why not? After all, if we consider ourselves in the business of web design, then it’s imperative that our designs, first and foremost, work on the web—and in the most efficient manner possible.

Take a step back and embrace the raw materials of the web: HTML and CSS.

SVG and icon fonts

SVG images are XML-based vector paths originally designed as a Flash competitor. They are like Illustrator files in the browser. Not only are they resolution-independent, they tend to create extremely lightweight files (roughly determined by the number of points in the vector).
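
To give a sense of how lightweight, here is a complete SVG image, sketched by hand, that renders a crisp circle at any size or density:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <!-- one shape, a handful of bytes; size tracks points, not pixels -->
  <circle cx="50" cy="50" r="40" fill="#d33" />
</svg>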

Icon fonts (like Pictos or SymbolSet) are essentially collections of vector graphics bundled up in a custom dingbat font, accessible through Unicode characters in a @font-face embedded font. Anecdotally, we at Paravel have noticed that tiny raster graphics, like buttons and icons, tend to show their awkwardness most on higher-resolution screens. Icon fonts are a great alternative to frustrating sprite sheets, and we’ve already begun using icon fonts as replacements whenever possible.
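
Wiring one up looks like any other @font-face embed. A rough sketch follows (the font name, file, and code point are invented here; real sets like Pictos and SymbolSet document their own):

@font-face {
  font-family: "Icons";                  /* hypothetical icon font */
  src: url("icons.woff") format("woff");
}
.icon-search:before {
  font-family: "Icons";
  content: "\e001";                      /* the set’s “search” glyph */
}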

Support for @font-face is great, and basic SVG embedding support is nearing ubiquity—except for ye old culprits: older versions of IE and Android. Despite this, we can easily begin using SVG today, and if necessary make concessions for older browsers as we go by using feature detection to supply a polyfill or fallback, or even using newfangled projects that automate SVG/PNG sprite sheets.
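
Such a feature test might look like the following sketch (Modernizr ships an equivalent check):

<script>
// Rough SVG support test; older IE and Android fail it
var svgSupported = !!(document.createElementNS &&
  document.createElementNS("http://www.w3.org/2000/svg", "svg").createSVGRect);
if (!svgSupported) {
  // Swap each SVG image for a PNG fallback; getElementsByTagName is
  // used because the browsers failing the test are the oldest ones
  var imgs = document.getElementsByTagName("img");
  for (var i = 0; i < imgs.length; i++) {
    if (/\.svg$/.test(imgs[i].src)) {
      imgs[i].src = imgs[i].src.replace(/\.svg$/, ".png");
    }
  }
}
</script>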

There are cases where these formats fall short. Icon fonts, for instance, can only be a single color. SVGs are infinitely scalable, but scaling larger doesn’t mean more fidelity or detail. This is when you need to bring in the big guns.

Picturefill raster graphics

No stranger to this publication, the <picture> element, put forth by the W3C Responsive Images Community Group, is an elegant solution to loading large raster graphics. With <picture>, you can progressively specify which image source you want the browser to use as more pixels become available.

The <picture> element is not free from hot drama, and also has a worthy contender. The image @srcset attribute, notably put forth by Apple, is based on the proposed CSS property image-set(), designed for serving high-resolution assets as background images. Here’s a sample of the proposed syntax (presented with my own personal commentary):

<img alt="Cat Dancing" src="small-1.jpg"
srcset="small-2.jpg 2x,  // this is pretty cool
large-2.jpg 100w,       // meh
large-2.jpg 100w 2x     // meh@2x
">

As a complete responsive images solution, @srcset has a bothersome microsyntax and is not feature-complete (i.e. it has custom pixel-based h & w mystery units and does not support em units). But it does have some redeeming qualities: In theory, the @srcset attribute could put bandwidth determination in the hands of the browser. The browser, via user settings and/or aggregate data on the speed of all requests, could then make the best-informed decision about which resolution to request.

However, as the spec is written, @srcset is simply a set of suggestions for the browser to choose from or completely ignore at its discretion. Yielding total control to the browser makes this web author cringe a little, and I bet many of you feel the same.

Wouldn’t it be nice if there were a middle ground?

Noticing the strengths of the @srcset attribute, the Responsive Images Community Group has put forth a proposal called Florian’s Compromise, which would blend the powers of both @srcset and the <picture> element.

<picture alt="Cat Dancing">
<source media="(min-width: 45em)" srcset="large-1.jpg 1x, large-2.jpg 2x">
<source media="(min-width: 18em)" srcset="med-1.jpg 1x, med-2.jpg 2x">
<source srcset="small-1.jpg 1x, small-2.jpg 2x">
<img src="small-1.jpg">
</picture>

No doubt, the <picture> syntax is more verbose, but it is extremely readable and doesn’t use the confusing “100w” shorthand syntax. Expect things to change going forward, but in the meantime, we’re currently using the div-based Picturefill solution from the Filament Group, which we find is easy to use and requires no server architecture or .htaccess files. It simply polyfills the <picture> element as if it existed today.
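
For reference, the div-based markup looks roughly like this (consult the Filament Group repository for the exact, current syntax):

<div data-picture data-alt="Cat Dancing">
  <div data-src="small-1.jpg"></div>
  <div data-src="med-1.jpg" data-media="(min-width: 18em)"></div>
  <div data-src="large-1.jpg" data-media="(min-width: 45em)"></div>
  <!-- fallback for browsers without JavaScript -->
  <noscript><img src="small-1.jpg" alt="Cat Dancing"></noscript>
</div>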

Under the hood, our previous demonstration used two instances of the original Picturefill to swap sources as the browser resized. I’ve made some quick modifications to our demo, this time combining both @1x and @2x sources into one Picturefill demo with the newer syntax.

Experimental technique: the 1.5x hack

Another thing we’ve been doing at Paravel is playing with averages. Your mileage may vary, but we’ve noticed that high-resolution screens typically do a great job of getting the most out of the available pixels—as you can see in this @1.5x experiment version of our demo:

Size     Small    Medium   Large
@1x      37kb     120kb    406kb
@1.5x    73kb     248kb    835kb
@2x      120kb    406kb    1057kb

If you don’t have a high-resolution screen, you can increase your browser zoom to 200 percent to simulate how compression artifacts would look on one. The @1x image clearly has the worst fidelity on high-resolution screens, and the @2x image definitely has the highest fidelity. The @1.5x version, however, fares nearly as well as the @2x version, and has a payload savings of about 20 percent. Which would your users notice more: the difference in fidelity or the difference in page speed?

Ultimately, the usefulness of the @1.5x technique depends on the situation. Notably, it does penalize the @1x user, but maybe there’s an even happier middle ground for you at @1.2x or @1.3x. We currently see the “just a bit larger” method as a viable solution for getting a little more out of medium-importance images without adding another degree of complexity for our clients. If you’re in a situation where you can’t make drastic changes, this might be a great way to gain some fidelity without (relatively) overwhelming bloat.
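
Mechanically, there is nothing exotic about the technique: export the asset at 1.5 times its display dimensions and let the markup scale it down (the filename here is made up):

<!-- Exported at 1200×900 but displayed at 800×600, so each CSS pixel
     carries 1.5 device pixels’ worth of image data -->
<img src="photo-1200.jpg" width="800" height="600" alt="Cat Dancing">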

Above all: use your brain

Recently, while redesigning Paravel’s own website, we learned to challenge our own rules. Since we have talented illustrator Reagan Ray on our team, our new site makes heavy use of SVG. But when we exported our most beloved “Three Amigos” graphic, we made a quick audit and noticed the SVG version was 410kb. That felt heavy, so we exported a rather large 2000×691 PNG version. It weighed in at just 84kb. We’re not UX rocket scientists, but we’re going to assume our visitors prefer images that download five times faster, so that image will be a PNG.

Just use your brain. I’m not sure our industry says this often enough. You’re smart, you make the internet, and you can make good decisions. Pay attention to your craft, weigh the good against the bad, and check your assumptions as you go.

Be flexible, too. In our industry there are no silver bullets; absolute positions, methods, and workflows tend to become outdated from week to week. As we found with our own website, firmly sticking to our own made-up rules isn’t always best for our users.

Doing right by users is the crux of front-end development—and, really, everything else on the web, too. Pixel density may change, but as the saying goes, what’s good for the user is always good for business.

About the Author

Dave Rupert

Dave Rupert is the lead developer at Paravel, a three-man web design and branding agency located in Austin, Texas. He, along with Chris Coyier of CSS-Tricks, co-hosts the weekly podcast Shop Talk Show.

29 Reader Comments

  1. “Above all: use your brain”. Love it!

    It’s interesting that in nearly every other facet of our industry we’re pushing for device/context-agnostic solutions, but with images we’re going the other direction. Damn pixels!

    I’ve been using SymbolSet for icons and it’s a brilliant solution.

  2. Nice overview Dave, thank you.

    Loving the emphasis on Progressive Enhancement and general common sense 🙂 I too feel that people jump on JS solutions a bit too quickly.

    You touch on this, but it might be worth pointing out that creating `@2x` images does _not_ result in images that are 4x the filesize. Image compression is smarter than that.

    Re: bitmap images, you already mention your `1.5x` trick, but you do not mention another simple, common-sense approach that shaves off even more of the filesize: simply reducing the image _quality_.

    Thomas Fuchs has written a great little blog post on this: http://mir.aculo.us/2012/08/06/high-resolution-images-and-file-size/

    E.g., one can simply create a JPG image that is about 1.5-2x as large as commonly needed and _reduce the image quality drastically_ (from, say, `60` to `30`). Then you _scale the image down in your HTML_.

    Because ‘retina’ screens have such high density, the JPG artifacts are barely visible, and the filesize will remain quite small because of the reduced quality/compression.

    YMMV, of course, so it should be tested but it might be another simple client-side solution for some (most?) bitmap images.

    I am not completely sure I agree with your statement _”We need a purely front-end solution to high resolution images.”_

    What do you think of services such as http://www.resrc.it/ or RESS in general?

  3. I tried to use icon sets in my most recent project, but the page load slowed significantly, to the point that there is always a visible jink in the page as the icon loads on test systems (in all browsers tested: IE9/Chrome/FF).

    Using images seemed to be a better solution as they loaded instantly.

    Obviously I did not have the advantage of a truly scalable solution, but as I was using few icons the bandwidth requirements were similar and the performance of the page was far more important than minor artifacts on hi-res screens.

  4. @davidhund You’re exactly right. Image compression gets pretty great, and it’s truly not linear. But that greatly depends on the type of image. I tried a variety of scenarios (Obama, Ronald Reagan, a cat, etc.), and doubling the resolution usually landed the filesize in the 300%+ range.

    I suppose that statement could be massaged over time with more data. That would be an interesting test case to build.

    re: Quality – I’ve heard musings of the “Quality Hack” that you described for serving images, but haven’t seen a blog post or demo outlining it. I’d be hesitant to give anyone a Quality: 30 graphic, but I bet it totally depends on the situation. You should write a blog post on that and teach the world, please!

    I like ReSRC and Sencha.IO, but I’m hesitant to use anything that doesn’t document its targeting strategy (is it UA sniffing? Setting a cookie or something?). When targeting fails, it fails really badly (see Basecamp Mobile, etc.). Additionally, I’m hesitant to use a third-party service (beyond a CDN) for something as simple as images. That said, I think dedicated “media boxes” might become a part of the front-end web stack in the future, due to the increasing complexity there.

  5. I quite like the “use your brain” aspect of the article. Too many times, too many people simply use the “whatever other people use” method without understanding the impact(s) elsewhere.

    I would like to have read, though, that the Picturefill you’re referring to in the article, which you’ve started using, does in fact *require* JavaScript to function properly.
    Absolutely, there are no server techniques required in its usage, but it seems a bit misleading to advocate this technique without at least mentioning that it requires JavaScript, especially given the JS paragraph at the beginning of the article.

    Cheers.

  6. @pitchfordD Good point. I should have made that more clear.

    Worth noting: Picturefill goes to great lengths to have a proper no-JS fallback; it’s truly a progressive enhancement. Also, the goal of Picturefill is to actually “proto-fill” (maybe not a word) the Picture element as if it existed in HTML today. I’d love to not use JS to poly/proto-fill this problem.

  7. @Dave Rupert Re: the Quality trick: I assume you’ve taken a look at that Thomas Fuchs blog post? He links to a (minimal) example. @pornelski (of ImageOptim fame) also mentions the technique: http://imageoptim.com/ipad.html

    You are completely right though: this needs more testing, and lowering the quality for supersized bitmap images might not be appropriate in all scenarios.

    Good points re: server-side services. Wondering what you have in mind when you speak of ‘media boxes’. Many CMSs already have some built-in (Just In Time) image scaling etc., so building a ‘_personal CDN_’ could be a possibility…

  8. Unfortunately for everyone involved with SVG and vector graphics, Illustrator’s output in this format is terribly unoptimized. For one thing, it insists on using _3 decimal places_ for every coordinate number, while most images would look great with _2 significant digits_. That alone would be a significant saving.

    Fixing this would take a bit of work. Luckily, for most images, using SVG Cleaner (http://qt-apps.org/content/show.php/SVG+Cleaner?content=147974) is more than enough. Perhaps with more interest and activity, SVG tools will be improved further. For example, there is no tool in existence that would make it easy to create simple SVG animations without resorting to programming them by hand.

  9. @luminarious If you can make and popularize an SVG tool like that, you can have some of my money. If you open source it, you’ll have my respect forever.

    Considering that 99% of all SVGs used on the web will be exported directly from Illustrator, it’d be great to have that problem solved soon. There are lots of smart people at Adobe, so perhaps lobbying for a “simplified version” on export in Illustrator would be a net win for the internet. Or, you could build that SVG-Smush tool then get acquired by Adobe and then be a millionaire! We all win.

  10. @pauldewouters Picturefill doesn’t address bandwidth, but in its current state the Picture element would hand over 1x/2x determination to the browser via @srcset. Hope that makes sense.

  11. Pretty damn useful. I have been looking for something that tackles this issue.

    The only nitpick I have is that the script should check whether the src attribute matches the data-src before it sets the value. I had Firebug open as I was resizing the browser and saw a lot of useless requests.

  12. My experience with SVG is that it’s not as rosy and well-supported as most of these kinds of articles make out.

    Want your SVG to be flexible in a responsive design? You will run into sizing problems in iOS Safari.

    Want to make your SVG a clickable link? The most reliable way is to absolutely position an anchor tag over it – which is not the most semantic.

    Want to use an SVG background with “background-size: cover”? Want to use an SVG as a border-image? Browsers handle these very inconsistently, if at all.

    In my opinion, these bugs are the reason that SVG isn’t more widely used, not web developers’ unwillingness to use it.

  13. Dave – Regarding SVG exports from Illustrator, I’ve always done a 2-step process:

    First, simplify the artwork in Illustrator as much as possible: that means un-grouping, merging shapes, removing extra points, etc. Time-consuming depending on the graphic, but worth it.

    THEN you can do an SVG export, but be sure to open it up in a text editor and do another cleanup pass for anything that looks strange.

  14. It’s a nice article, but I feel it doesn’t deal with the issue that was actually questioned in the first place: that of people with devices that are out of sync with their environment’s bandwidth. We can’t focus on front-end solutions if we ignore that someone may have a large device on a slow network, or a small device on high-speed WiFi.

    I have written more about it here: http://niaccurshi.blogspot.co.uk/2012/09/responsive-images-no-solution-yet.html

  15. I downloaded the JPGs from the POTUS example and they’re all set to 300 DPI and not 72 DPI.

    I’m curious if there’s a reason behind that?

  16. What are your feelings about this technique? Adaptive Images by Matt Wilcox: http://adaptive-images.com/

    From what I see, it has a significant PRO of being able to work on existing sites without having to change any markup. So this could be used to upgrade existing sites without having to spend a lot of time working on them. Also, you only have to have one version of the image and the script takes care of the multiple sizes for you.

    The CONs of course are that it requires PHP and uses the server fairly heavily. I’m not sure how this would impact page load times. But it sounds like a very interesting solution.

  17. Fantastic assessment of the current situation and some useful avenues to explore.

    As an aside, if you run that threeamigos.png image through ImageOptim (a handy OS X wrapper for PNGOUT, OptiPNG and AdvPNG to losslessly brute force PNG filesize down) it saves another 10KB and comes down to 74KB.

  18. @skeg64 SVG is trickier than it seems, but basic, “no-frills” embedding is more-or-less supported across modern browsers. You should totally write a blog post with a support chart, though, or put it out on a site somewhere and submit it to https://github.com/h5bp/lazyweb-requests

    @splatcollision You should blog about that! The internet needs more education, less Top 5 posts. I think the “Art of Massage and SVG” would be a great blog post title! 🙂

    @John Bertucci That’s due to the source (TIFF) file’s resolution. I wanted to work with totally uncompressed images as a source just to eliminate the resolution variable there. If I can find time, I’ll do 72ppi versions and see what the difference is.

    @ryangiglio I like Adaptive Images, I totally do. And for existing sites like you mentioned, with years of content behind them (like Chris Coyier’s CSS-Tricks), I think it’s a great solution. I’ve heard (*the following is hearsay*) that there are CDN issues with it, which won’t work for high-scale sites. That said, Matt Wilcox is a smart guy and very involved in the RWDimages process.

    @jaikdean Bonus savings!

  19. Hey Dave,

    Although I agree with most of the article, the ‘three amigos’ example SVG you posted is rather misleading and, as @luminarious and @splatcollision pointed out, far from what it could be.

    You see, what you guys did is take whatever .ai you had and export it from Illustrator as is. Also, because the source was rather polluted with dozens of overlapping paths and points, the output ended up being huge as well.

    I’ve been spending rather painful amounts of time recently diving into SVG optimisation and here’s what I learnt:

    1. merge your shapes and make as many compound masks as you can to reduce the amount of SVG tags.
    2. before saving: Object > Path > Simplify (even @ 99 precision you can kill a lot of points)
    3. when saving, reduce decimal points from 3 to 1.
    4. Reuse, reuse, reuse. Learn your SVG before you start using it. Just by cloning and transforming your corner shapes rather than redrawing them, I went down to 290KB. Now if you do the same for some other shapes (e.g., those circular frames) you can drop another chunk of redundant data.
    5. Optimise output by removing unnecessary garbage, tags, whitespace, styles, etc.
    6. Minify.
    7. Always serve it gzipped – either automatically or manually as a .svgz file.

    To prove these points – here’s a very rough optimisation result I achieved mostly with regex and Illustrator automation, hence a few bits and pieces may differ: https://dl.dropbox.com/u/27213/threeamigos-lite.svg. When you serve it gzipped it’s only 72KB. If you reorganised the whole file, I imagine you could easily get down to 50KB.

    I hope that helps anyone who wants to dive into SVGs.

  20. I started my site when I was in Nepal, so I had no choice but to use extremely small file sizes, as anything else would take too long to upload and the internet connection would cut out before it finished. Now that I’m back in Europe, I’ve actually continued using only small-sized and highly compressed photos, mainly out of a realization that many people the world over would never get to see most of the photos were they too large.

    I wouldn’t mind having larger images though, so I’ll have to try a few of your workarounds and see if they work for me.

  21. Hi Dave,

    Umm… I have a JavaScript solution that really does prevent the image from downloading prior to selection, and it lets you use the picture element (or img srcset syntax) without the need for divs, etc. (although it does require noscript).

    https://github.com/davidmarkclements/Respondu

    Just thought you’d like to know..

    It does have a few dirty little secrets, but if you want to avoid the double download, you gotta do what you gotta do.

    Best regards,

    Dave

    p.s. it’s a work in progress, but it does work across all modern browsers

  22. SVG has been an awful long time in coming and, while I’m very much in favour of it and have been using it in web-based stuff since about 2006, I can tell you that it is far from optimised. Performance in most browsers apart from Opera was dreadful until a few years ago, and you need either plugins or compatibility libraries (Raphael et al.) for Internet Explorer, which I think is only adding support with version 10. Printing was often almost impossible. Also, as it’s XML-based, it’s inherently inefficient; I’d love to see a JSON version, which would be both smaller and faster to parse. Support for the format between authoring tools is also terrible: try working with the same file in, say, Illustrator, OmniGraffle, Inkscape, and DrawPlus. Nevertheless, it really is the bee’s knees for logos, claims, etc., and can’t be beaten for anything that gets printed. This is the big lacuna in your article: almost everything you say also applies to print media.

    But I hate all the front-end-only kludges for dealing with different pixel densities. Backend support, preferably in the file formats, is essential. The “prefetching” behaviour of browsers could easily be modified by the browser makers or even put in the users’ hands. If you want small bitmaps of great quality, take a look at WebP. I think mod_pagespeed now supports it if the browser does. It’s the best diet you can put your photos on.

  23. re 12:41 pm on September 25, 2012 by Dave Rupert

    Dave, I wonder if 99% of SVGs are exported from Illustrator, as the format is quite popular with Wikipedia nowadays. So whenever, say, a blogger or private website owner uses their pictures, chances are it’s an SVG, which is handy because of the license (mostly Creative Commons). Private owners should outnumber web designers by far, and most of Wikipedia’s SVGs should be creations of open-source software.

  24. Your SVG threeamigos.svg isn’t 400kb, but 140kb.

    It can be served compressed like HTML (it’s XML, after all), or even better, pre-compressed as .svgz.

  25. What’s the difference between using the symbol packs at $29 to $50 versus free text symbols with individual CSS for the font style (e.g., Hebrew David, or Times New Roman Latin 1)? It looks like a lot of expense for something a stunned bunny can do for free. If you have the pack, you can just type in them, or what? The sites don’t explain.
