
By Alex Maughan – Designer

17 August 2014

Art and technology


In the documentary Tim’s Vermeer Tim Jenison, a complete newbie to painting, paints Johannes Vermeer’s The Music Lesson based on a real-life reproduction of the painting’s composition, which he painstakingly constructs himself in a warehouse. He does this using an optical setup of lenses and mirrors built on the foundations of what many believe artists like Vermeer used back in the day.

Jenison believes he’s re-discovered the means by which Vermeer was able to create his unbelievable, photographic-like paintings. His method, although requiring creative ingenuity to think up and build, as well as huge amounts of patience and perseverance, transforms the act of painting itself into a mechanical and objective process – it transforms the human doing the painting into a non-subjective machine.

Whether or not his painting is proof that Vermeer’s long-acclaimed genius is actually mechanically replicable is not something I feel a need to explore. Rather, my interest lies more in our culturally emotive reactions to this debate. The very strong feelings so many of us have against the idea of reducing art to a technologically driven and mechanical process are significant (to me, at least).

Tim’s documentary, along with various books that advance the theory of artists using optical devices, is criticised for missing the point of Art: that by focusing on technological trickery, one naively misunderstands what it is that makes Vermeer one of history’s most celebrated artists.

So what does it lack? The film implies anyone can make a beautiful work of art with the right application of science. There is no need for mystical ideas like genius. But the mysterious genius of Vermeer is exactly what’s missing from Tim’s Vermeer. It is arrogant to deny the enigmatic nature of Vermeer’s art.

I think the Art-loving outcry emanates from how we choose to define and, in turn, place cultural importance on Art, with a capital A.

Art is a brilliantly elusive concept. There’s much confusion and flexibility around its definition, yet most of us have very emotive and philosophical affiliations with it. It seems to transcend ideas around aesthetics, and scoffs condescendingly at the proposal that it is merely form without function. Art is a prevailing means of commentary and expression. Most of all, it is regularly seen as the end result of genius. Because of this, only a special few are celebrated from one generation to the next.

I really don’t think I’m imagining the very real rift that consequently exists in the minds of many, whereby Art is seen as a special human-only artefact, while technology is the clever, but otherwise ugly, Frankenstein child of our increasingly mechanised drudgery. The two are considered by many as immiscible, or rather that technology detracts from the true essence of Art; that it somehow robs Art of its expression, diminishing its human commentary.

In The Story of Art, E.H. Gombrich writes,

There really is no such thing as Art. There are only artists. Once these were men [sic] who took coloured earth and roughed out the forms of a bison on the wall of a cave; today some buy their paints, and design posters for hoardings; they did and do many other things. There is no harm in calling all these activities art as long as we keep in mind that such a word may mean very different things in different times and places, and as long as we realise that Art with a capital A has no existence.

There are three important things to take from this very popular introduction to Art (I’m persisting with the capital A, for reasons soon to be explained):

  1. The only thing that defines something as Art is whether a human artist produced it.
  2. There are many different definitions and these are mutable over time and geography.
  3. Art with a capital A does not exist, apparently.

Points 1 and 2 reaffirm my preceding assumptions.

Point 3 is where things become wonderfully contradictory. You see, Gombrich has chosen to denounce Art with a capital A because it brings with it too many intellectual pitfalls, which he’d rather avoid. By saying Art doesn’t exist, he avoids having to define it. This is fine, I guess, but as I’ll now try to explain, it is, I’m afraid, a contradictory cop-out.

It is impossible for Art not to exist. If there were only art with a lowercase a, then any person who calls herself an artist should be celebrated as one. I’m personally okay with this, but it seems the Art world, including old Gombrich, isn’t (my own emphasis added):

Praise is so much duller than criticism, and the inclusion of some amusing monstrosities might have offered some light relief. But the reader would have been justified in asking why something I found objectionable should find a place in a book devoted to art and not to non-art, particularly if this meant leaving out a true masterpiece.

Okay, so if I’m following all of this correctly, it seems there are and there are not things worthy of being called ‘art’. I’m sorry, but this is when art starts to eat all of its greens and grows into a big and healthy Art – it most certainly comes into existence when you say there is ‘art’ and ‘non-art’.

Gombrich’s hugely circulated book on the subject operates on the epistemological foundation that his subject matter can only be addressed in light of the humans who produce it and that, as humans, we define it differently over time and space. This democratic foundation quickly crumbles under contradictory conservatism, however, because he clearly reaffirms well-known Art-type criticisms by deeming only some productions as being worthy of this categorisation.

Logically speaking, one is left confused by a book framed as an introduction to a select group of celebrated people who produce something that supposedly does not exist, but has simultaneously existed in various forms since troglodytes started vandalising their caves.

Considering Gombrich’s work is such a huge bestseller (the copy I own at home is a whopping 15th edition), is it safe to assume his framing of, and thinking around, Art is shared by a good many of us? I venture to think so. Most of us seem to share this judgemental and contradictory understanding of Art, even if we try to democratise the way we talk about it at times. I think the reason is that it fundamentally comes down to us using Art as a way to elevate ourselves; a means by which we point at our human uniqueness in relation to other animals and, in more recent times, as a cultural device to argue for our value over machinery and the ever-booming ingenuity of technology.

What is a bit concerning is that by creating this dichotomy between Art and technology, we seem to do so with the assumption that technology is anti-human. The harsh criticisms of Tim Jenison’s painting of a Vermeer fail to recognise that here is a man of glorious creativity and talent, who is by all accounts unbelievably dedicated to any undertaking he pursues. What he achieves is just as worthy of celebration in my mind as Vermeer’s unbelievable talent.

There are Vermeers everywhere. Some use paint, some use hammers, some use keyboards – but all of them use technology in some way or form. Something as simple as a paintbrush is technology. There’s nothing more human than the technology we invent and use to do great things.

As long as we don’t lose sight of the inherent humanity in our technology (which we unfortunately do from time to time), I think we’ll be okay. Art can continue to be mysterious, uplifting, and judgemental, but so can technology and it will continue to be involved in the production of Art, whether or not Art snobs like it or choose to admit it.

6 August 2014

A node package and Grunt workflow


At work I’ve been faced with the problem of maintaining front-end code that is shared across different applications worked on by different teams of developers. In addition to redesigning and rebuilding a consumer-facing e-commerce-type website, there’s a significantly more hefty operational side of the business, made up of various kinds of admin interfaces that have either yet to be developed or are in need of some serious TLC.

So, there are currently two general collections of front-end code that need to be managed across a number of different apps: the new consumer-facing and the more admin-type collection of front-end components.

Some background on the actual code

Having written 99.9% of all this front-end code myself, these collections have followed a modular approach that I strongly believe in. The CSS is a compilation of component-driven Sass modules, while the JavaScript is made up of various object literals, which contain reusable variables and methods that are as concise in their purpose as possible. Each object is assigned a jQuery 1 global namespace, so its methods can be called from anywhere within the application as and when needed. Some of these objects implement custom or 3rd-party plugins – the latter of which are kept to a strict minimum to reduce 3rd-party code bloat.
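
As a rough illustration of that pattern (a minimal sketch with made-up names like siteUI and productGallery, not the actual production code), each component object hangs off a single global namespace and exposes a couple of concise, reusable methods:

    // A single global namespace holds the component objects.
    var siteUI = window.siteUI || {};

    // One component-centric object, with methods kept as concise in purpose as possible.
    siteUI.productGallery = {
        init: function ($el) {
            this.$el = $el;
            this.bindEvents();
        },
        bindEvents: function () {
            // Delegated handler, so thumbnails added later still work.
            this.$el.on('click', '.thumb', function (event) {
                event.preventDefault();
                // Swap the main image, lazy-load a larger asset, etc.
            });
        }
    };

    // Called from anywhere in the application, as and when needed:
    // siteUI.productGallery.init($('.product-gallery'));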

The problem

The problem was this: how do I keep working on this front-end code in a fast, iterative manner without having to manually update code from one application to the next, and without constantly worrying about whether changes in the context of one app would end up breaking another?

I frequently remind myself that the code I sometimes spend a bit too much time agonising over ultimately exists for one purpose only: delivering an enjoyable interface to a user in the best way possible. The preciousness or cleverness of my code setup should not dictate decisions around the user experience design. This means the approach has to stay flexible enough to accommodate fast design iterations.

If it becomes clear that a certain change or addition will improve the user experience, I don’t want that change or addition to be sidelined or delayed because it doesn’t fall neatly in line with a code-centric workflow or some misplaced sense of code conservation. I need to be able to break away from my nice, clean framework when a more optimised interface design requires it, but I need to do this without sullying the cleanliness of the fundamental design principles behind that framework, and without it having breaking consequences elsewhere. I need the shared tools necessary to make these changes cleanly if possible, or, if not, I need to be able to create some technical debt that is easy to quarantine from one day to the next, as well as from one application to the next, but can still be deployed immediately for its specific use. I can then go back to this slightly faster and looser code, work out whether it is in fact technical debt 2, and, if so, look into repaying that debt by working it more cleanly into the base framework.

Beyond this need for design flexibility, there’s also the need to make more considered, code-centric changes that improve the base framework, but to be able to do this selectively from one application to the next. This means updates can remain flexible to both my own and other developers’ priorities, and sensitive to any app-specific requirements that should temporarily delay or even skip certain updates. The aim is to do all of this while still only making each update to a single, distributed code repository.

So, tell us code monkey, how?

First, I worked on how best to divide up the code. What I settled on was the following:

  1. A core framework
  2. A framework specific to the consumer site
  3. A framework for admin interfaces

Both 2 and 3 are dependent on the core (1). Pretty simple, nothing too fancy about that. What was a little less simple is how I hoped to work with these repos, and how I hoped other front-end devs (if we ever found any to join the company) would work in tandem on them across different applications without crossing swords. I didn’t want to be challenged to a duel to the death every time I pushed changes to the origin. The idea of being stabbed in the face at a very efficient rate of 60fps by another front-end developer scares me.

What about Bower?

Talking through this problem with a smart and helpful senior developer in the team, his suggestion was to have a look into Bower. This made complete sense, as Bower is a tool for automating the management of front-end components and their relative dependencies. We were already using it for 3rd-party components. One could convert those 3 repositories mentioned above into versioned Bower components, and then each app can download the correct version it requires (which in turn downloads the correct version of the core framework it requires).

So I got stuck in, all excited-like. Unfortunately, the more I played around with Bower in relation to these repos, the more Bower felt wrong for this. Bower is perfect for 3rd-party stuff, as hooking up to the browser-ready distribution files of other people’s generic components is perfectly suited to how one normally uses a 3rd-party component.

One naturally installs Bower components into a publicly accessible directory, because these components are front-end in nature and because you should be using the components as they have been automatically provided – why else would you be using a dependency automation tool like Bower if you weren’t? You would concat and minify for production, but these components would be public files all the same.

This, in my mind at least, means that you start breaking your assumptions around Bower components as soon as you start installing source files that require server-side compilation (such as Sass and SVG files and maybe, who knows, CoffeeScript files in the future).

The key difference here is that these repos are intended to be used during front-end development, not simply included or implemented. The Bower workflow kind of assumes the latter. What we really needed was a server-side module manager for front-end development. So the same very smart developer suggested Node Packaged Modules (NPM) instead, and he set up a private registry on a local development server for me to start playing with.

I’ve been using the NPM approach (with a strong process dependency on Grunt) for a few sprints now, and it feels like a great recipe thus far. Below is some more detail about the approach.

The solution: NPM & Grunt sitting in a (dependency) tree…

Using a private NPM registry had its problems. It couldn’t be accessed outside of the office network (I work a lot from home after hours, especially when a brainwave strikes me), but most importantly it caused problems with the continuous integration build process on the staging server. In short, there were network security issues with the build process. So we instead switched to having the NPM config fetch the packages directly from Github, using git tags for versioning. The same concept as before, except instead of publishing to an NPM registry, I just push the packages to their repos on Github, tagging them with a version number. This, in effect, removed a step for me, as I would have maintained a git repo in addition to the NPM registry anyway.
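
In package.json terms, the consumer-facing package ends up pinning its core dependency something like this (a sketch with hypothetical org, repo, and version names, not our actual config):

    {
      "name": "consumer-ui",
      "version": "0.3.1",
      "dependencies": {
        "core-ui": "git+https://github.com/our-org/core-ui.git#v1.2.0"
      }
    }

npm resolves the #v1.2.0 committish to the matching git tag, so bumping the tag on Github is all the ‘publishing’ that’s needed.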

Firstly, let’s take a look at the structure of the 3 repositories in question (1, 2, and 3 above – core, consumer-facing, and admin-type UIs).

Each repo contains the following:

  • An images directory with uncompressed and fully editable SVGs
  • A styles directory made up of .scss files organised into:
    • modules (global component-centric modules, each in a separate file to make it easier to cherry pick the ones you want)
    • sections (bespoke styling for certain pages or sections, also in separate files based on the page or section)
    • ie (IE specific styles which follow Paul Irish’s recommended approach to IE overrides)
  • A scripts directory made up of a collection of component-centric JavaScript objects, with reusable methods (once again in separate files).
  • An NPM config (package.json)
  • A Bower config for 3rd-party components (bower.json)

The core repo doesn’t have a ‘/styles/sections’ directory, as it never directly implements any particular page or interface. Other than this, all repos follow the same file and folder signature. The core repo contains most of the modular .scss and .js files, with the consumer-facing and admin repos extending these with their own global (i.e. section-agnostic) modules as well as their own section-specific styling. Unlike the consumer-facing repo, the admin repo’s sections refer more to types of sections or page areas, as opposed to actual pages, as it is more generic in nature.

The admin repo introduces some different design patterns, both macro and micro in scale, as well as bringing in some more heavy data-orientated components and edit controls.

The consumer-facing repo has some global elements specific to it, while introducing a lot more page-specific styling and breakpoints to squeeze out the best design possible for our customers across a huge continuum of screen sizes and browser capabilities.

You could ask the question, Why create a repo for the consumer UI, if it is only applied to one application? That’s a mighty fine question. I’m starting to like you. Well, the answer is two-fold. Firstly, this repo can be used for other consumer-facing offerings, which have been proposed for the future. Secondly, this repo can be automatically pulled into some form of externalised documentation if need be, used to codify and communicate design patterns in a centralised place for the whole company. In other words, for the creation of a pattern or UI library of sorts that can play an important part in bolstering organisational memory – the thing so crucial to creating cohesive user experiences.

Each app then includes the admin or consumer-facing packages in its NPM config, pointing specifically to the version (via the git tag) it requires. Those versions, in turn, are dependent on a specific version of the core (as defined in their own NPM configs). An app can pull in both the admin and consumer-facing packages and more (as does our actual consumer-facing app, since it has backend admin pages). Because each package is kept in its own directory within the node modules directory, each with its own version of the core package, you can continue to keep them separate for different sections of the app.
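
For an app that pulls in both packages, the NPM config would look roughly like this (again with made-up names and versions):

    {
      "name": "consumer-app",
      "dependencies": {
        "consumer-ui": "git+https://github.com/our-org/consumer-ui.git#v0.3.1",
        "admin-ui": "git+https://github.com/our-org/admin-ui.git#v0.5.0"
      }
    }

Each of those packages declares its own pinned core dependency, so npm nests a copy of core inside each package’s node_modules, which is what keeps the two versions separate.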

All you need to do is run ‘npm install’ and wazam! you have the packages you require, with the correct version of core automatically pulled in. In addition to this, you can add a ‘postinstall’ declaration to your packages that kick-starts a process of your choosing. So we used this to kick-start the installation of 3rd-party Bower components specific to each front-end package. This means that by simply running ‘npm install’ in the app, we get all our necessary front-end packages installed along with their Bower components, without having to commit these components to the package repositories themselves.
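
The relevant part of each package’s config is just a one-line script declaration (assuming Bower is available and a bower.json sits alongside package.json):

    {
      "scripts": {
        "postinstall": "bower install"
      }
    }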

Now the second very pertinent question you may ask is, Does this mean you have to keep pushing commits to your package repos in order to see your changes reflected in the app you’re currently working on? I’m really starting to like you now, because that is indeed a great question. The answer is npm link, which allows you to effortlessly symlink your local working copies of these packages straight into an app’s node_modules directory, meaning any updates to your local copy are reflected immediately in the app. Can I have a what what? Pretty sweet, huh? Once you’re done working on a specific task, feature, bug fix, etc., simply update your version number and commit it with a matching git tag version. 3
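
In practice (with the same hypothetical names as above), the linking is a two-step affair:

    # In the package's working copy: register a global symlink
    cd ~/dev/consumer-ui
    npm link

    # In the app: point its node_modules entry at that symlink
    cd ~/dev/consumer-app
    npm link consumer-ui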

How do all these things get compiled?

Now that we have all the front-end source files we need to start doing some rapid, kick-ass design implementations on a specific app, how do we actually gather and compile these files in an efficient manner? The answer is Grunt. I’d been using Grunt for a while, but for me it has really come into its own with this particular workflow.

With the source files now available within the node modules directory, I use Grunt to automate the following, in order (a simplified Gruntfile sketch follows the list):

  1. Copy any browser-ready scripts (3rd-party components and my own package scripts) to a public location to be included in the document. I also copy all SVG source files from each package into an assets folder specific to the app in question. Seeing as all the packages follow the same directory structure, this automation is pretty painless to set up and maintain, and appropriate nesting can be maintained (vis-a-vis different versions of core etc.).
  2. I then run an SVG minifier to reduce file sizes, which are bloated by all sorts of meta information and redundancies (generally from something like Adobe Illustrator). The savings are in excess of 60%, so the juice is definitely worth the squeeze.
  3. With the optimised SVGs in place, I then run grunticon, which is a wonderful plugin that generates all the necessary SVG CSS data and classes (according to my own file organisation and class naming specifications), and then uses phantomjs to create fallback PNGs for browsers that don’t support SVG. Can I have a HellsYeah!?
  4. I then do an initial compile of the Sass brought in from the packages, including the SVG data. We now have fully scalable and retina sharp images and all of them are loaded with a single http request, ready and waiting in the cache for every page thereafter. Load times after the initial load (which is pretty darn quick itself) are super-lightning quick. 4
  5. I then have some uglification setup to keep an eye on combined and minified file sizes of scripts. This will only really come into play when we start live deployments, though.
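
To make the above a bit more concrete, here is the promised stripped-down sketch of the Gruntfile. Task targets, plugin choices, and paths are illustrative rather than a copy of the real config, and it assumes grunt-contrib-copy, grunt-svgmin, grunt-grunticon, grunt-contrib-sass, and grunt-contrib-uglify are installed:

    // Gruntfile.js – simplified sketch of the pipeline described above.
    module.exports = function (grunt) {
        grunt.initConfig({
            copy: {
                scripts: {
                    files: [{
                        expand: true,
                        cwd: 'node_modules/consumer-ui/scripts/',
                        src: ['**/*.js'],
                        dest: 'public/js/consumer-ui/'
                    }]
                },
                svgs: {
                    files: [{
                        expand: true,
                        cwd: 'node_modules/consumer-ui/images/',
                        src: ['**/*.svg'],
                        dest: 'assets/svg-src/'
                    }]
                }
            },
            svgmin: {
                dist: {
                    files: [{
                        expand: true,
                        cwd: 'assets/svg-src/',
                        src: ['**/*.svg'],
                        dest: 'assets/svg-min/'
                    }]
                }
            },
            grunticon: {
                icons: {
                    files: [{
                        expand: true,
                        cwd: 'assets/svg-min/',
                        src: ['*.svg'],
                        dest: 'public/css/icons/'
                    }]
                }
            },
            sass: {
                dist: {
                    files: { 'public/css/main.css': 'styles/main.scss' }
                }
            },
            uglify: {
                dist: {
                    files: {
                        'public/js/main.min.js': ['public/js/**/*.js', '!public/js/main.min.js']
                    }
                }
            }
        });

        grunt.loadNpmTasks('grunt-contrib-copy');
        grunt.loadNpmTasks('grunt-svgmin');
        grunt.loadNpmTasks('grunt-grunticon');
        grunt.loadNpmTasks('grunt-contrib-sass');
        grunt.loadNpmTasks('grunt-contrib-uglify');

        // Same order as the numbered steps above.
        grunt.registerTask('build', ['copy', 'svgmin', 'grunticon', 'sass', 'uglify']);
    };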

So that’s that in a very wordy hat. I could write a more fine-grained collection of posts breaking down the thinking behind certain things later, but the idea here was to share a top-level approach to what I think is quite a nice code management setup and workflow.

Notes:

  1. Using jQuery is a somewhat unavoidable and pragmatic choice, especially within a team.
  2. Technical debt is sometimes hard to define with front-end code – but this is a whole post in itself, which I may address at some other time.
  3. I’ve yet to start using proper versioning standards (around breaking changes etc), but I will start doing that once we get closer to actual go-live deployment. Right now I’m the only one working with these packages, so it currently isn’t a problem if the version number lacks that kind of implicit information.
  4. I am busy working on splitting the image data stuff out from the main CSS and using yepnope.js to conditionally load it at the end of the document (based on SVG support) to increase initial load speed. But this is my own pedantic like-to-do right now, as the combined CSS is still surprisingly small (with all image data used throughout the consumer site, as an example, currently sitting at only 70kb).

2 August 2014

In a word, Human


For the last few years, I’ve found myself using the word human with great regularity. I don’t speak of people or users; I instead harp on about a human. Some guy I’ve never met. The reason is a simple one: I worry over an ever-looming disconnect between the things we design and build, and the humans for whom these things are intended.

This is not to say that this disconnect is a ubiquitous totality. It has an ebb and flow; it gathers significance in one place while narrowing (and even disappearing) in another, in a complex and multitudinous way across the tech industry. Either way, this disconnect is something that needs to be fought. The crowbars that cause it to widen need to be railed against.

I believe this disconnect is levered open by two kinds of social crowbars:

  1. A technophile minority dictating to a technophobe majority
  2. A fundamentalist and morally bankrupt belief in capitalism

The technophilic dictatorship

We need to understand that black magic is alive and well in the modern era. It takes the form of computers (handheld, desktop, and the general internet of things). Digital technology is now the leading religion, and all us tech folk are the power-wielding clergy. Our Latin version of the bible is instead written in code, jargon, and an overwhelming sense of self-importance and impatience.

User experience design is the exorcist.

Successful user experiences happen when producer whims get sidelined, and primary focus is placed on that little bit of common sense shared by the majority of humans using your product or service. It’s when you empathetically put yourself in the most common psychological flow of a human using a piece of technology they know almost nothing about.

The above doesn’t happen as often as it should because we live in a world where, every day, millions of browser-indifferent humans use a web of computers and software created and maintained by a bunch of browser snobs. Most people don’t care about the elegant simplicity of some programmer’s data model. They don’t care about how clever that programmer is. They just want to get through it. They don’t care about how the pistons fire under the hood. They just want a steering wheel and an automatic gearbox. They’ve bought the car to get to work, to take their children to school. They haven’t bought it to marvel at its engineering. They’re too tired for that.

In all our technophilic cleverness and snobbery we sometimes forget about this, or we stubbornly choose to ignore it. Our fragile egos want to broadcast how clever we are to our peers.

For some time now I’ve been feeling a bit down about user testing. I’ve been unable to shake the feeling that the specific conclusions drawn from it are mostly the result of artificial data and interpretative folly. Despite this, I believe user testing continues to be valuable in the design process. It may not be great for making very specific usability decisions, but it is invaluable in reminding the more obnoxious of us of the divide between makers and users; it regularly highlights the technological disconnect created by aloof cleverness. It breeds empathy.

If ego-driven cleverness in technology is left unchecked, it starts us down the road of technological evangelism, whereby technology becomes endowed with value in its own right, irrespective of its value to the humans who are forced to use it. In one of my favourite posts of the year, Zeynep Tufekci laments the inhumanity of caregiver robots 1:

In my view, warehousing elderly and children—especially children with disabilities—in rooms with machines that keep them busy, when large numbers of human beings around the world are desperate for jobs that pay a living wage is worse than the Dickensian nightmares of mechanical industrialisation, it’s worse than the cold, alienated workplaces depicted by Kafka.

It’s an abdication of a desire to remain human, to be connected to each other through care, and to take care of each other.

The author says we will be fine with machines because look how much we stare into our phones! What a misunderstanding! That is the opposite of what most are doing on the phone: as I’ve long argued, it’s a desperate desire to remain connected to each other, in a world that makes this hard through suburbs, long work-hours and other physical barriers.

Tufekci’s great lament leads me nicely onto my second social crowbar, excessive capitalism.

The fundamentalist belief in capitalism

While excessive technophilia can contribute to a human/technology disconnect through the misplaced elevation of technology to such a degree that it loses sight of its original human purpose, the same happens when the evaluation of success is ruthlessly and wholeheartedly linked to business success within the capitalistic ecosystem. Business success, in pure capitalistic terms, is monetary excess. Nothing else. This is not to say that all profitable businesses lack value beyond the profits they reap, but morally and ethically questionable things are regularly done because the only real metric used to adjudge business value is monetary profit.

If you’re an Ayn Rand fan, you may argue that capitalism is self-regulating, in that if you do morally questionable things your bottom line will inevitably suffer for it. This may be true in some cases (certainly not all), but even if it is true it doesn’t change the fact that such a system creates a vapid yardstick which many do use to motivate inhuman decisions.

Just like technological evangelism, profit evangelism switches the focus away from humans. Instead of creating human livelihoods, the focus becomes a game of who can be the wealthiest irrespective of who or what it harms along the way. This harm can be very small (in the form of user frustrations caused by news sites that break their stories over multiple pages simply to game their “engagement”, which they can then sell to advertisers), or it can be much more significant – the type of harm where our health and that of our environment is compromised, and where human life is lost.

This schema for evaluating success is why thinking like this seems to be the norm in the marketing world:

The only reason to build a website is to change someone.

The goal is profit, therefore we are expected to bend humans to our monetary will, instead of bending our monetary will to service the needs of those pesky humans.

Drop those crowbars

As designers and caretakers of a vast technological landscape, we really need to not lose sight of who this technology is for and why it supposedly exists in the first place. Technology is supposed to serve human needs and make people’s lives easier, and businesses exist in order to provide products, services, and livelihoods for these creative and hardworking humans. These are the goals. Money and clever tech are simply by-products of these goals. Do a solid for your fellow humans by not forgetting this.

Notes:

  1. In response to a New York Times piece by Louise Aronson, The Future of Robot Caregivers.