Saturday, 25 July 2015

Structured thinking is a competitive advantage

The process of design calls for a combination of investigation, strategic thinking, design excellence and project management skills.

But regardless of the nature of your client and the complexity of their project, the process should remain the same.

By breaking a project down into distinct phases, each with defined beginning and endpoints, you create logical breaks for review and decision making.

Reinventing the process each time in order to cut costs can create substantial risks to the project, and negate any long-term benefits.

Larger firms may follow a controlled and documented process, such as PRINCE2, whilst smaller agencies may have a simple five-stage plan, but in either case having a structured process appropriate to the task provides competitive advantage by:
  • assuring that a proven method is being used to achieve business results;
  • sharing the understanding of the time/cost/quality required;
  • creating trust and confidence in the project team;
  • positioning project management as smart, efficient and cost-effective;
  • building credibility for the proposed creative solutions; and
  • setting and managing expectations for the process.

However it is expressed, the design process can be seen as just a more complex version of the simple ‘story hill’ that is taught in primary school.

You need a beginning, a middle and an end, and within that you need to ask what needs to happen (and in what order) and how are things resolved?

But because the process is just the process, you still need a creative spark, intuition or leap of faith to bring it to life.

Having a structure in place lessens the background ‘noise’ and creates the space in which creative thinking can thrive.

Wednesday, 13 May 2015

Google Mobile App UX Principles

I do like a good UX framework, and Google’s Mobile App UX Principles document uses practical examples to demonstrate how to improve the user experience of apps. The effectiveness of user optimisation strategies is illustrated using metrics such as app performance and user conversion on both Android and iOS platforms.

Adopt, Use, Transact, Return
In designing an app, you need to work hard to meet the expectations of users who are becoming accustomed to high quality apps that deliver usable, robust, and sometimes delightful user experiences.

Investing time and effort in creating, testing and optimising services can have a significant effect on how ‘sticky’ your app becomes.

The basics that need to be addressed include optimising conversion, avoiding interrupting users, and not forcing them to think about things that should be simple. Google expresses this as a four-stage ‘Adopt, Use, Transact, Return’ framework.


Adopt - Remove roadblocks to usage  
Remove all roadblocks to usage - and adoption - of your mobile app. Get users into the content as quickly as possible, so that they can use, assess and experience its value to them.

First impressions count, and a splash screen gives you a short but vital window to engage a user in your proposition. But, never make users wait.

Tips, help or an onboarding sequence should only be employed if really necessary - so as not to interrupt users - but when used appropriately at key decision points, tips and help can guide the user through their initial experience and adoption.

Use - Make conversion decisions simple  
Enable people to use your app in the way that suits their needs. A clear structure combined with an excellent search facility using a range of methods, from keyword to product scanning and image search, will help users find what they want quickly and easily, satisfy their needs and drive conversion.

Transact - Provide the ultimate in convenience
Help users progress through each checkout stage with minimal effort, and with sufficient reassurance, to convert without hesitation.

Return - Self service, engagement and delight
Be useful, engaging and delightful in order to retain customers and encourage loyalty. Mobile apps are the most appropriate touchpoint for repeat interactions and frequent transactions, for customers and members already loyal to a brand, and for mobile-first use cases that couldn’t exist without unique smartphone capabilities such as rich, contextual data. These users are more likely to return if an app provides an engaging experience.


What not to do


Do not mimic UI elements from other platforms 
Design for each native mobile platform - Android and iOS - because each has its own capabilities and visual language.

Do not use underlined links 
Avoid using text with underlined links, which are part of the web / browser / page model, and not part of the app / screen model. Apps use buttons, not links.

Do not take users to the browser
Keep users in-app at all times, to maintain their spatial geography and to optimise conversion.

Do not ask users to rate your app too soon after downloading it
Avoid interrupting users by asking them to rate your app if they’ve only recently downloaded it or only used it a few times. Instead, wait until they prove to be repeat users; they’ll be more likely to rate your app favourably and provide more informed feedback.

Tuesday, 21 April 2015

Mobile-friendly

My portfolio website at www.robertlevison.co.uk passes Google's mobile-friendly website test. Yay!


"Mobile friendliness" will affect how prominently websites appear in Google search results pages from 21 April 2015.

A page is eligible for the “mobile-friendly” label if it meets the following criteria as detected by Googlebot:
  • Avoids software that is not common on mobile devices, like Flash
  • Uses text that is readable without zooming
  • Sizes content to the screen so users don't have to scroll horizontally or zoom
  • Places links far enough apart so that the correct one can be easily tapped
Google provides a Mobile Friendly Test developer tool so you can see if your website is mobile-friendly.
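Some of these criteria can be approximated programmatically. As a rough sketch (in no way Googlebot’s actual evaluation), a page that sizes content to the screen normally declares a responsive viewport meta tag, which can be detected with Python’s standard library:

```python
from html.parser import HTMLParser

class ViewportChecker(HTMLParser):
    """Detects a responsive viewport meta tag - one signal (of many)
    that a page sizes its content to the screen."""
    def __init__(self):
        super().__init__()
        self.has_viewport = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "viewport":
            # 'width=device-width' sizes content to the viewport,
            # avoiding horizontal scrolling on small screens
            if "device-width" in (attrs.get("content") or ""):
                self.has_viewport = True

def is_viewport_friendly(html: str) -> bool:
    checker = ViewportChecker()
    checker.feed(html)
    return checker.has_viewport
```

Running something like this over your homepage markup gives a quick first check before turning to Google’s own tool.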

Friday, 16 January 2015

Are Annoyingly Literal Headlines Set In Title Case Optimised For SEO?

You can find them across the web: headlines written for search engines rather than readers.

Online magazines like DesignTaxi and news aggregator sites such as BuzzFeed and Huffington Post use strangely formulaic headlines, typically including a keyword, a proper noun, a verb, and an adjective whilst avoiding simple connectives. It’s English, but not as we know it. In SEO terms the language is optimised to add ‘value’ to each headline.

But in writing for robots, you just get robotic headlines.

It’s hard to imagine classic newspaper headlines such as the Sun’s 1982 headline ‘GOTCHA’ having the same impact as ‘Royal Navy Stealth Submarine Sinks Argentinian Cruiser in South Atlantic’.

Probably the best (worst?) example is the Daily Mail Online, where the inclusion of multiple keywords in the headline means the headlines have become almost as long as the stories themselves. It's clickbait in its purest form. The logical conclusion of this process is that the headline becomes the story, just a shrieking top-line opinion seeking an instinctive knee-jerk reaction from the comment trolls.

Surely we can write better than this.

The point of SEO is to provide sufficient context for search engines to rank the story as high as possible in the search results, relative to the value of the content.

Whilst search engine algorithms are constantly being tweaked, it’s generally accepted that an editor can improve the page ranking of a story by crafting the relationship between the headline, page title and meta description.

As well as describing the story, the title needs to include a proper name and a likely keyword that the reader might be using in their search (towards the front of the headline if possible). The page title can expand on the headline, for instance using a full name when the headline uses a shorter, well-known shorthand (e.g. Diana / Diana, Princess of Wales), whilst the meta description can include more detail for the ‘snippet’ displayed underneath the link in the search results. All three elements should aim to match the words that users are likely to use in their search, and these search-optimised keywords should also be included in the opening paragraph of the story.
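These relationships can be sanity-checked mechanically. The sketch below is illustrative only - the length limits are common industry guidance rather than fixed rules, and the function and its thresholds are my own invention:

```python
# Common industry guidance (not hard rules): ~60 visible characters
# for a page title, ~156 characters for the search-result snippet
TITLE_LIMIT = 60
DESCRIPTION_LIMIT = 156

def check_search_metadata(headline, page_title, meta_description, keywords):
    """Return warnings for the headline / page title / meta description trio."""
    warnings = []
    if len(page_title) > TITLE_LIMIT:
        warnings.append("page title may be truncated in the results")
    if len(meta_description) > DESCRIPTION_LIMIT:
        warnings.append("meta description may be truncated in the snippet")
    for kw in keywords:
        # keywords should appear in the headline, ideally near the front
        if kw.lower() not in headline.lower():
            warnings.append("keyword '%s' missing from headline" % kw)
    return warnings
```

A subeditor’s checklist, in other words, reduced to a handful of comparisons.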

Thinking more widely about the utility of the headline, keeping it within 156 characters so that it displays in full in the search results makes it easier to circulate on social networks, and including a personal pronoun also improves the chances of readers sharing your story.

(There are of course other factors in SEO, such as unique links to the story and referring links from the story, but these are not necessarily part of the headline construction).

In 2009, usability expert Jakob Nielsen introduced the concept of writing short, snappy, SEO-friendly headlines that “…must be absolutely clear when taken out of context” and cited the BBC’s website as a best-practice example of headline-writing, “…offering remarkable headline usability.”

Nielsen claimed that BBC headlines have the following characteristics:
  • Short, typically 5 words or less
  • Information-rich
  • Include keywords
  • Understandable, even out of context 
  • Predictable/match for reader expectations
On the other hand, headlines from viral sites are usually the complete opposite:
  • Long, sometimes to the point of being rambling and incoherent
  • Emotion-rich
  • Few or no keywords
  • Typically non-contextual
  • Use shock or emotional language
And whilst there is value in using searchable terms, the results can be lost in translation.

The late advertising and copywriting genius, David Ogilvy, said that "On average, five times as many people read the headline as read the body copy.”

The point of a headline is to draw the reader into a story that they might not otherwise have read. The skill of the web subeditor is in knowing their audience so well that they can add their editorial tone of voice to the headline, whilst still capturing the imagination of the reader.

And if you can turn your headline into a pun, then so much the better.

The Scottish Sun’s ‘Super Caley go ballistic Celtic are atrocious’ is held up as one of the all-time classic newspaper headlines.

And, although no one knew it at the time, it’s SEO-friendly.

Tuesday, 23 December 2014

The Power of Structure

Creating structure is one of the key tasks for designers, but if we are creators of structure, what sort of structures are we creating?

Architecture influences the way we move through physical space. We create places for reflection and zones for action within perceptual boundaries and physical constraints.

Information architecture performs the same function, creating virtual spaces – patterns – whose purpose is communicated through space, form, colour, image, typography and behaviour.

These structures define entrances and exits to spaces where we engage in actions in both real and virtual worlds. Well-designed spaces declare their purpose and encourage us to interact, to perform and to create.

The visual structure we build into our designs affects the way people see them. Is our visual hierarchy working so that readers find what they need, and in the right order? Are elements appropriately weighted so that their relationships are clear? Do people gravitate toward the most important information on the page, or are there elements that distract? Can our audience clearly see what to do next?

We begin to nudge the user experience by developing a conceptual structure that describes a consistent visual language. Our primary goal must be clarity. Does this graphic help to illustrate the idea, or make it more confusing? Communicating through words and images influences the way we think about things, and over time, becomes part of our brand.

Social structures influence the way we interact with others and set out the opportunities for social interaction. This area is one that designers have only just begun to investigate. Can you poke people? Favorite something they did? Engage with a brand? How is reputation managed? Are you able to import or export your relationships, and (more importantly) does it make sense to do so?

Of course these structures do not function in isolation. They overlap, intermingle, and co-exist. As designers we need to recognise the most appropriate patterns, and how to use them in our designs.

Build well.

Thursday, 20 November 2014

Design by algorithm

Logos that change based on external variables

The tension between the desire for uniformity and the need for originality has provided a rich seam for branding agencies to exploit.

The idea that the essence of visual design can be expressed via a universal set of rules has a rich history, from the Greek golden section via Vitruvian Man, compositional techniques, and the typographic grids of modernist typography.

But whilst brands can be monolithic or flexible, their visual expressions remained fixed until the 1980s, when the introduction of desktop publishing made it possible to produce designs that change based on external variables.

Some brands might need to show diversity of service or product, while others see flexibility as a crucial competitive advantage. So for those organisations that have evolution written into their essence, a dynamic identity provides an exciting and relevant structure for brand expression.

NAI
A radical scheme for the NAI (Netherlands Architecture Institute) by Bruce Mau provided many distorted, out of focus logos that allowed for flexibility and experimentation. Soon after, the Tate Gallery took the NAI’s lead and introduced an ever-changing logo for its ever-changing displays (courtesy of Wolff Olins).

Less successful was Abbey National’s 2003 ‘soft and fuzzy’ rebrand, ditched when Santander acquired the bank, but Wolff Olins returned to the idea of flexible brands with a more controlled iteration in PwC’s device-friendly identity, where a set of translucent rectangles flex and change depending on their usage.

PwC’s flexible branding
However, logo selection is often made from a tightly controlled master set rather than from dynamically created marks.

Now, the use of the algorithm has enabled the rise of tailored design, where application of a consistent set of rules to a dynamic data set produces a unique output - design expressed as art.

A recent example of this genre is MIT Media Lab’s development of its flexible identity. Created by Pentagram, and based on the same grid as its predecessor, its aggressive pixelated letterforms create an uncompromising set of marks with echoes of Wim Crouwel’s New Alphabet.

It’s not a beautiful logo, but as the visual expression of the Media Lab’s multiple research groups at the core of its academic structure, it fits.

Lockups of two characters within the grid allow for almost every possible letter combination: “an algorithm”, explains Pentagram, “will generate all the possible solutions for any given group acronyms in the future.”


This visual language sets the tone for a highly flexible range of applications and future permutations of the identity that will have the same look and feel without having to be the same.

In a more sensitive use of the pixel-block style, Norwegian design studio Snøhetta has designed the obverse of Norges Bank’s new banknotes. The design, based on the boundary where sea, shore and sky meet, renders images from the Norwegian coastal landscape in a Minecraft-like pixellated form, with the degree of distortion related to a ‘windspeed’ that increases with each denomination.

On the 50 kroner note the wind is weak, so the boundary between sea and coast is rendered in calmer short, square shapes; while on the 1,000 kroner note the wind is strong, creating longer, stretched-out forms that allude to rolling breakers and windswept trees.




But whilst Snøhetta uses the idea of windspeed to create the pixel distortion, the execution is static. A 2010 scheme for Nordkyn from Oslo’s Neue Design Studio, also using data based on the feed from the Norwegian Meteorological Office, produces a new logo dynamically for every application.



http://horizons.dandad.org/
Although not strictly speaking a logo, D&AD’s 2013 Annual used a similar methodology to create ‘identities’ that reflected the global spread of winners at the D&AD Awards. The algorithm creates a unique composition based on longitudinal and latitudinal location data, with colours chosen by time, and meteorological data used to determine the hue.
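The general technique - a fixed rule applied to variable data - is easy to sketch. The toy example below (not D&AD’s actual algorithm) maps location to hue and time of day to lightness, so the same inputs always produce the same mark:

```python
import colorsys

def mark_colour(lat, lon, hour):
    """Toy data-driven identity: position sets hue, time of day sets
    lightness. The rule is fixed; the data supplies the variation, so
    the same inputs always yield the same mark."""
    hue = ((lat + 90) / 180 + (lon + 180) / 360) % 1.0   # position -> hue
    lightness = 0.25 + 0.5 * (hour % 24) / 24            # time -> lightness
    r, g, b = colorsys.hls_to_rgb(hue, lightness, 0.8)   # fixed saturation
    return tuple(round(c * 255) for c in (r, g, b))
```

Every award-winner’s location and time then generates its own colour, yet all the marks visibly belong to one system because the rule never changes.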

Where an entry lacks a suitable data feed to produce dynamic data, use of a picker to sample random colours from an image can provide the necessary random variable.

ITV colour picker
Similarly, ITV’s rebrand created the opportunity to tailor the colour palette of the logo using key colours and tones from the programme being promoted, so popular entertainment gets a vibrant palette, whilst the logo can take on a more sombre appearance when the programming (or news) requires it.

As well as colour, shape can have an influence. Sagmeister’s identity for Casa da Música needed to echo the exuberance of the architecture because ‘as we studied the structure, we realized that the building itself is a logo’.

Casa da Música dynamic logos
The essence of the brand identity was to illustrate the many different kinds of music performed, through an algorithm that paired colours sampled from a composer’s image with different facets of the building. Depending on the music, the logo changes its character and works dice-like by displaying different planes and hues.

Sound can also be used as the dynamic element.

Precedent’s work for the Leeds College of Music, using a tool created by Karsten Schmidt, allows staff and students to create their own visual identity by inputting visualisations of their music, generating unique sound signatures for use in graphic applications.

Arguably, those dynamic designs that incorporate a random element into the algorithm achieve a more aesthetically pleasing result, though this undercuts the principle that a universal rule is being applied.

But because many audiences will only see a single iteration of a dynamic identity system, it follows that if any individual variant is weak, the overall identity suffers. For the overall brand to be successful, the pieces need to equal the whole.

So the key question to ask of any dynamic identity is whether it accurately expresses the brand in all its executions.


Friday, 17 October 2014

Colour management

And so to Fedrigoni Paper on Clerkenwell Road to attend a presentation on colour management by designer Andy Brown and colour consultant Paul Sherfield of the Missing Horse Consultancy.

The Print Handbook
Andy is the author of The Print Handbook, a pocket guide to help designers get the best from their print projects, and I’d met Paul a couple of years ago when he was contracted by COI to help improve our studio colour management.

As responsibility for pre-press has shifted from printers to the design studio, creatives find themselves required to colour manage their open or PDF/X output files, but have little idea of how to work within a structured colour-managed environment. Paul’s mission is to help designers improve their pre-press workflow by explaining the benefits of a colour management policy, where it is needed and how it is applied.

Colour management provides a unified environment for handling colours, where a common colour reference is used at each step of production, from photography through design, plate making and printing.

It aims to unify the image throughout the entire production process by using the profiles of the various devices to manage colours.

Comparison of some RGB and CMYK colour gamuts
Basically, you have two colour models: RGB for optical devices and CMYK for output devices. RGB has a wider gamut, whilst CMYK ‘clips’ the available colours into a space that can be rendered by an output device.

However, neither is an absolute – both RGB and CMYK gamuts are device dependent, so the space you work in depends entirely on the input device and the intended output device.
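To see why CMYK ‘clips’ RGB, it helps to look at the naive, device-independent conversion formula. Real colour management converts via ICC device profiles rather than this arithmetic, but the sketch shows how the four process inks relate to the three additive channels:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK conversion (0-255 in, 0.0-1.0 out). Real colour
    management converts via ICC device profiles rather than a formula;
    this only illustrates how the four process inks relate to RGB."""
    if (r, g, b) == (0, 0, 0):
        return (0.0, 0.0, 0.0, 1.0)            # pure black: key ink only
    r_, g_, b_ = r / 255, g / 255, b / 255
    k = 1 - max(r_, g_, b_)                    # key (black) component
    c = (1 - r_ - k) / (1 - k)                 # remaining cyan
    m = (1 - g_ - k) / (1 - k)                 # remaining magenta
    y = (1 - b_ - k) / (1 - k)                 # remaining yellow
    return (round(c, 3), round(m, 3), round(y, 3), round(k, 3))
```

The formula maps every RGB value to some ink combination, but the reverse is not true: many saturated RGB colours have no CMYK equivalent the press can actually reproduce, which is where profiles and rendering intents come in.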

Problems of colour perception also arise because the designer is looking at a monitor that generates colour in RGB, a local colour proof is typically produced in CMYK on an inkjet printer (which deposits ink on the surface of the paper), whilst the commercial printing process (which is also working in CMYK) presses ink into the substrate. (Printed material reflects light, so colours also look different depending on the lighting environment!)

To achieve the best colour fidelity, you therefore need to align your input, editing and output devices, from camera through to press, so they are all working in a common colourspace (independent of any device) so that the various colours can match as closely as possible. This is the basic principle of colour management.

Translation between devices is achieved using International Colour Consortium (ICC) profiles. Based on Apple’s ColorSync engine, ICC profiles are the accepted means of maintaining the consistency of colour files when transporting them between the originator/creator, publisher and printer.

The ICC profiles manage colour between different devices, ensuring that the correct rendering intent is maintained.

The standard rendering intent for printing in North America and Europe is the Relative Colorimetric method. This compares the extreme highlight of the source colourspace to that of the destination colourspace, and shifts all colours accordingly. Out-of-gamut colours are shifted to the closest reproducible colour in the destination colourspace.
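A much-simplified sketch of that idea, with colours as channel triples in the 0-1 range (real rendering intents operate on CIELAB values via ICC profiles, and the white-point handling is considerably more sophisticated):

```python
def relative_colorimetric(colour, src_white=1.0, dst_white=0.95):
    """Sketch of relative colorimetric rendering: scale every channel so
    the source white point lands on the destination white point, then
    clip anything still out of gamut to the nearest boundary value."""
    scale = dst_white / src_white
    return tuple(min(max(ch * scale, 0.0), dst_white) for ch in colour)
```

In-gamut colours shift slightly with the white point; out-of-gamut colours collapse onto the gamut boundary, which is why heavily saturated images can lose detail in print.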

So how do you go about setting up a colour-managed environment?

Working backwards from your commercial printer, find out what colour profile your printer is using for the intended press and paper stock, and ask how they would prefer your open or PDF/X files set up.

Outputting to a PDF/X format retains your embedded colour profile so that your printer knows the colour intent.

For example, in the UK most printers working to ISO 12647-2 will use Adobe’s ‘Coated FOGRA39 (ISO 12647-2:2004)’ profile, which ships with later versions of Adobe CS and is used by the colour settings file ‘Europe Prepress 3’. Colour profiles created in this way will prove to be repeatable and maintain their colour fidelity on both litho and digital presses - and are therefore preferable to custom profiles.

Your local proofing device should be set up to use the same profile.

In Adobe Bridge, set your CS colour settings to ‘Europe Prepress 3’.

Calibrate your monitor into the same working space – although note that as monitors warm up, the perceptual colour will change slightly, so for colour-critical work, monitors should be re-calibrated at regular intervals.

Finally, ensure that all images have an ICC profile embedded - if not, a generic RGB colourspace, such as sRGB or Adobe RGB (1998), will be assigned when the file is first opened or imported. (If you open a document with an embedded colour profile that doesn’t match the working space profile, in most cases the best option is to preserve the embedded profile, because it provides consistent colour management.)

sRGB is recommended when you prepare images for the web, because it defines the colour space of the standard monitor used to view images on the web.

Adobe RGB is recommended when you prepare documents for print, because the Adobe RGB gamut includes some printable colours (cyans and blues in particular) that can’t be defined using sRGB.

So, having aligned the colour profiles of all the devices in your workflow, you can design in a colour-controlled environment, knowing that what you see really is what you’re going to get.