


Top tips on HTML, jQuery, JavaScript, CSS & more

Welcome to The Annual

The World Wide Web is nothing short of magical. It connects people regardless of location and opens up a whole new world of knowledge, entertainment and community. For the creatively minded, this platform of endless possibilities is incredibly exciting, but it can also be daunting and challenging. With so many options, styles, skills, and languages, it can be easy to lose sight of the bigger picture. In this edition of Web Design Annual, we’ve collated only the best and the most helpful features and tutorials to separate the wheat from the chaff, and present you with an exhaustive collection of the latest in web design. We’ve covered everything from mobile browsing to virtual reality, and provided tutorials for HTML, CSS, jQuery and much, much more. Whether you’re an experienced coder by profession or an enthusiastic hobbyist, a specialised designer with an eye for the perfect interface or still figuring out your skillset, you’re bound to find a tutorial that tests your expertise in this book. So fire up your PC and let the pages of this book show you the infinite possibilities of web design.


Imagine Publishing Ltd Richmond House 33 Richmond Hill Bournemouth Dorset BH2 6EZ  +44 (0) 1202 586200 Website: Twitter: @Books_Imagine Facebook:

Publishing Director Aaron Asadi Head of Design Ross Andrews Production Editor Sanne de Boer Senior Art Editor Greg Whitaker Assistant Designer Alexander Phoenix Photographer James Sheppard Printed by William Gibbons, 26 Planetary Road, Willenhall, West Midlands, WV13 3XT Distributed in the UK, Eire & the Rest of the World by Marketforce, 5 Churchill Place, Canary Wharf, London, E14 5HU Tel 0203 148 3300 Distributed in Australia by Network Services (a division of Bauer Media Group), Level 21 Civic Tower, 66-68 Goulburn Street, Sydney, New South Wales 2000, Australia, Tel +61 2 8667 5288 Disclaimer The publisher cannot accept responsibility for any unsolicited material lost or damaged in the post. All text and layout is the copyright of Imagine Publishing Ltd. Nothing in this bookazine may be reproduced in whole or part without the written permission of the publisher. All copyrights are recognised and used specifically for the purpose of criticism and review. Although the bookazine has endeavoured to ensure all information is correct at time of print, prices and availability may change. This bookazine is fully independent and not affiliated in any way with the companies mentioned herein. Web Design Annual © 2015 Imagine Publishing Ltd ISBN 9781785461798

Part of the

bookazine series

The Contents


8: HTML & CSS animation


16: Web of Things


122: Your best CSS ever

26: HTML & CSS patterns

130: Create a colour swatch tool with Vibrant.js

34: Customise maps with the Google Places API

134: Alter page element colour

40: Create virtual reality panoramas

138: Build custom layers with CSS

44: Build iOS-style web applications with Framework7

142: Expert guide to web 3D

48: Deploy your web apps to Heroku

54: Develop a web app quickly with Lucee

150: Interactive 3D game with WebGL

154: Image-based pop-up menus

156: On-click popup tooltips

60: Atom

158: On-click transitions

66: Create desktop applications with Electron

160: Latest CSS4 selectors

70: Use NW.JS to develop desktop applications

164: Circular on-hover animation

76: Robust JavaScript code

166: Flick background image

82: Make a playlist with Last.fm

88: Complete guide to Git

168: Slide down on scroll menu

94: 20 Best GitHub Projects

170: The importance of typography

100: Build a friendly bot to enhance your Slack group

178: Animate type & text

106: Create API schemas with Swagger

112: Future HTML

182: UX design

188: Enhance UX with Hover CSS





What’s up with web animation?

Ask any two designers what they think of animation, and you’ll get six opinions. The web still hasn’t quite recovered from the Flash years, when loading and splash pages stressed pre-broadband modems and made users wait for extravagant doodles. For designers, Flash was a brilliant way to rack up the billable hours. It wasn’t so popular with visitors, who either learned to click on the ‘Skip animation’ button as soon as it appeared, or were left wondering why they’d just spent two minutes watching a cartoon rocket land in a giant vector-graphic cheesecake. Of course Flash is still around, but it doesn’t seem to be winning new fans. Today, most clients and creative directors are going to look strangely at anyone who suggests a splash screen, whether it uses Flash or a more recent technology. That could be because modern animation has calmed down and tried to make itself more of a team player. Instead of being all about the technology or the designer, animations have become more about the design. So what is animation for? It’s easy to make a site where everything moves all the time, but visitors will hate it. It’s more useful to think of animation as a power tool you can use to enhance your site’s production values, help users find their way to the content they’re looking for, and

emphasise the stories you want the site to tell. If the animation isn’t helping out, don’t use it. You can make exceptions for showcase and demo sites where you’re exploring a new technology. But for general public access, animation should always be able to justify its existence. As the technology has improved, it has stopped being the limiting factor for motion design. You can use animation frameworks and plain CSS3 to do almost anything you want in 2D. 3D isn’t quite as developed, but it doesn’t have as many clear use cases, and most sites work fine with 2D animation, perhaps with a few understated 3D accents. The challenges have more to do with clever design, creativity and – most of all – effective communication. Animation isn’t so much about moving divs as about moving visitors. If it isn’t doing that, you may need to rethink it. To make life easy, we’ll pull out some of the recent UI and UX design trends so you can look at them in more detail. There’s no need to start with a blank editor and an equally blank expression. Modern animation design doesn’t mean starting from scratch on every project. A few hints and suggestions go a long way, as does some knowledge of what the rest of the industry is doing. If you’re ready to be moved, read on…

The SmartWater site has some charming animated interactive clouds that reinforce the brand’s message

Shane Mielke Creative director at

CSS and GreenSock animations are amazing ways to spice up your front-end builds. Pixi.js and three.js give us 2D and 3D WebGL/Canvas animation playgrounds. In the right hands all of these tools can work magic. We’re just missing a standardised timeline animation IDE like we had in Flash that will output clean, lightweight code.

The state of browser support

[The per-browser support table did not survive extraction; the technologies compared were:]

CSS3 keyframes

CSS3 animation

jQuery 1.X animation

jQuery 2.X animation

Other JavaScript frameworks

WebGL [1]

[1] Requires compatible graphics hardware [2] Partial


The Web Design Annual

Animation in Action: Dreamteam
The use of web animation needs to be considered and have value. Alternatively, it can be something to adore and be admired. DreamTeam by Polish creatives BrightMedia sits very much in adore and admire. The homepage animation reveals itself with a simple straight line before blossoming into a fully fledged animation. The fun doesn’t stop at the homescreen; scroll down and watch more smart design unveil itself.

All for show
The opening animation has no real purpose other than to engage and excite the user. The moment the animation starts, the user is hooked. An attention-grabbing design is guaranteed to give the creators attention right across the web design community.

The creatives behind DreamTeam are Polish agency BrightMedia. Keep up with their latest work via their website, or check their Twitter @brightmediapl for the latest updates.

UI animation
Simple navigation animation is used to enhance the overall site. To reinforce the common purpose of the site, each menu item has a rollover effect. A solid white background eases in with a subtle animation and immediately draws the user’s attention.

The technologies & tools

Keep it simple with CSS3
Knowing how to work with CSS3 animations is a core skill now. CSS3 animations are split into two related tag groups. The transform tags move things around the screen, with simple support for 3D. The animation and keyframe tags control the movement. But there are downsides, including some of the usual CSS gotchas. It’s easy to make animations that almost work, but start or stop in the wrong place, or don’t quite loop as you want them to. And CSS3 simply isn’t very smart. You can link animations to events in a basic way, and you can chain and link animations to create complex effects as well. But it’s difficult to make animations respond to surrounding content.
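As a rough sketch of how the animation and keyframe tags fit together, here is one way to assemble a @keyframes rule programmatically. The helper name `makeKeyframes` and the `slide-in` animation are our own illustrative choices, not part of any library mentioned in this article.

```javascript
// Hypothetical helper: build a CSS @keyframes rule as a string, ready
// to inject with CSSStyleSheet.insertRule() in a browser.
function makeKeyframes(name, stops) {
  // stops: an object mapping offsets ("0%", "100%") to declarations
  const body = Object.entries(stops)
    .map(([offset, decl]) => `  ${offset} { ${decl}; }`)
    .join("\n");
  return `@keyframes ${name} {\n${body}\n}`;
}

const rule = makeKeyframes("slide-in", {
  "0%": "transform: translateX(-100%)",
  "100%": "transform: translateX(0)",
});

// In a browser you might then do:
// document.styleSheets[0].insertRule(rule);
```

An element would then use the rule with `animation: slide-in 0.5s ease-out;` in its style.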

JS & jQuery: the missing link
JavaScript and jQuery can fill in the features that are missing from CSS3. Do you need to make sure an element won’t fall off the bottom of the screen, or cover something important when the user reveals it? JavaScript is the solution.
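A minimal sketch of that idea, using invented names and pure arithmetic. In a real page the inputs would come from getBoundingClientRect() and window.innerHeight; here they are plain numbers so the logic stands on its own.

```javascript
// Keep a revealed element inside the viewport: if its bottom edge would
// fall off the bottom of the screen, slide it up just enough to fit.
function clampToViewport(top, height, viewportHeight) {
  const overflow = top + height - viewportHeight;
  if (overflow > 0) {
    return top - overflow; // pull it back up by the overflowing amount
  }
  return Math.max(0, top); // also stop it escaping off the top
}

// e.g. a 200px-tall tooltip revealed at y=500 in a 600px viewport
// gets moved up to y=400 so it stays fully visible.
```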


There’s one catch. CSS3 runs efficiently inside the browser with superfast precompiled code. JavaScript animation is compiled on the fly, and it’s not nearly as efficient. So if you do anything complicated you’ll kill the battery life on mobile, and heat up the processor on desktops and laptops until the fans kick in. Raw JS is a powerful option with a lot of creative potential. But you will need to handle it with a lot of care.

Working with frameworks
JavaScript wouldn’t be JavaScript without more libraries than a human brain can remember. Luckily only a handful are popular at a time. A few years ago MooTools’ FX Morph, Transition and Tween classes were widely used. Yahoo’s YUI library also found favour, inside and outside

Yahoo – especially in the form of AlloyUI, which merged it into Bootstrap. Now jQuery has taken over most of the load with its .animate function, which includes easings, durations, and the ability to animate any numerical CSS property. But that’s not enough for some projects. For more control, you can work with more advanced frameworks such as paper.js, Raphael.js and Processing (there are two web versions). For those who want a Flash-like interface, there’s also the paid-for GreenSock framework. The big advantage over basic CSS3/HTML is support for vector graphics and a simplified animation loop that saves you from dealing directly with timers. Which should you choose? Processing is the most sophisticated, and supports pixel-level manipulations – although they’re buggy in processing.js – videos, webcams and sound. It works with HTML5 Canvas tags,



HTML5 technology
Taking a peek at the source code reveals a surprisingly slim page. It is the HTML5 Canvas element where all the hard work is done. The animation claims all the glory while the standard HTML quietly creates the crucial elements needed to navigate and guide users around the site.

Web animation API: a new solution?

And there’s more
The homescreen is undoubtedly the centrepiece of the DreamTeam site, but complementary and more constructive animation is incorporated into the site too. Simply scroll down the page to see how subtle user interface animations are introduced into the site design.

so it’s good for big animated backgrounds and digital art. However it’s not so ideal for making UI elements and moving them around. Raphael and paper are simpler, and concentrate more on vector graphic design with a hint of animation. GreenSock is much used by corporates and adds useful functions that simplify CSS animation. They’re all worth looking at, because knowing what’s out there can spark new ideas for existing designs.

Motion emotion
Don’t forget that many CSS tags – including older pre-CSS3 tags – can be animated. So animation can mean creating opacity fades, animated borders and text decoration.

WebGL and 3D animation
Waiting in the wings is 3D animation and rendering. This has a lot of potential, but it’s not quite there yet. The WebGL standard is a simplified version of the OpenGL 3D graphics programming API used in high-poly commercial games. This sounds like a good thing. But not all platforms support all features, and some older hardware barely supports WebGL at all. So you can’t rely on it. And it’s hard to make it look awesome. Gamers are used to high-poly rendering with advanced lighting effects, and WebGL can’t match that. Finally, it’s hard to use. The three.js framework (threejs.org) makes it more accessible, and there are plenty of demos to learn from (look out for the work of mrdoob – you can follow him on Twitter @mrdoob). But it’s still a couple of levels up from plain CSS. Is it worth it? For plain vanilla UI design, no. For more experimental projects, it’s certainly worth exploring to see what’s possible.

Visualising data
If you’re working with data visualisation, the go-to framework is d3.js. d3 is a monster that chews on data and spits it out in almost any shapes you can imagine and a few you probably can’t. It’s immensely powerful, but also has a steep learning curve. If you can hack it though, it’s ideal for making and animating UI elements, especially if you’re using them to display quantitative data. To help you get started, the d3 site has a huge selection of demos and examples. Don’t expect instant results, but if you can spare a week or two to get familiar with it, it will definitely take your data vis skills up a couple of levels.

If you’ve looked at native apps, you’ll know the web doesn’t have anything quite like the slick and streamlined native animation frameworks built into iOS and Android. The W3C’s web animation API is an attempt to fix this. It’s not a drop-in replacement for mobile animation, so don’t expect to make elements glide or fade more easily than in the past. For better or worse, the W3C committee have gone in a different direction. The current proposal bundles the existing CSS3 animation features, extends them to allow simpler DOM element animation and adds some welcome extras, including support for a timeline and for play state management. Keyframes also get an upgrade, so you can do more with them. It’s also going to be possible to start, stop, restart, and pause animations using JavaScript code, which will fix some of the limitations of CSS3 animation. If you’re thinking this sounds a little like Flash – it does. Or at least, the timeline and keyframe features do. When web animation becomes widely supported it’s going to become easier to chain animations, to create animated effects by flip-booking SVG files, and to make animations respond to external events. So it’s better to think of it as an animation management system, and not so much as a new set of canned effects you can drop into your pages with almost no code. The current API proposal has issues. One big problem is lack of synchronisation. You can make events play together on the same timeline, but it’s hard to guarantee that animations on separate timelines will remain synchronised across a page. Another problem is complexity. The API proposal tries to do so much that it’s not a model of elegance and clarity. It’s possible to create complex animations with it, but it’s not going to win awards for being easy to use. This is good news for designers with good code skills, who will continue to be in demand.

But perhaps it’s not so good for the state of web animation in general – although it’s likely that as soon as the spec is finalised, it’s going to be wrapped into a friendlier and simpler framework so more people can use it without reaching for the paracetamol. Whatever the limitations, the API is the most exciting thing to happen to animation since CSS3. It should be ready for commercial use within a year or two. Currently it’s ‘being considered’ by Microsoft. Chrome’s developer builds include it, and Firefox has a not-quite-there implementation. Older and more obscure browsers will play catch-up, as usual. If you want to know more, check out
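To get a feel for the keyframe-and-timeline model the API is built around, here is a toy sampler in plain JavaScript. The names and structure are our own simplification for illustration, not the W3C API itself.

```javascript
// Toy keyframe sampler, loosely in the spirit of the Web Animations
// model: keyframes are { offset (0..1), value } pairs, and sample()
// returns the linearly interpolated value at time t on a timeline.
function sample(keyframes, t, duration) {
  const progress = Math.min(Math.max(t / duration, 0), 1);
  // Find the pair of keyframes that bracket the current progress.
  for (let i = 1; i < keyframes.length; i++) {
    const a = keyframes[i - 1];
    const b = keyframes[i];
    if (progress <= b.offset) {
      const local = (progress - a.offset) / (b.offset - a.offset);
      return a.value + (b.value - a.value) * local;
    }
  }
  return keyframes[keyframes.length - 1].value;
}

// An opacity track: fade in over the first half, dip slightly after.
const fade = [
  { offset: 0, value: 0 },
  { offset: 0.5, value: 1 },
  { offset: 1, value: 0.8 },
];
```

Pausing and restarting in this model is just a matter of freezing or rewinding `t`, which hints at why timeline control fixes some of CSS3's limitations.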



Make animation work for the user In a world after Flash, the point of animation is to enhance the user experience without distracting or annoying your users. It’s not quite true that animation should be unnoticeable – sometimes you want something that stands out. But it should never clash with the rest of the site design, it should never draw attention to itself without a good reason, and it should always provide a clear user benefit. Modern motion design has split into three main areas. UI sweeteners add a hint of eye candy to plain vanilla UI elements to raise production values without beating visitors over the eyeballs with designer awesome.

Staying focused

The aim is to make your site look slicker, smoother and glossier, and generally more sophisticated and authoritative. A little CSS3 or JavaScript can do a lot of good, but if the user is more likely to remember the motion than the content you may want to rethink your design strategy – especially if you start veering towards the sketchy end of town with insane animate-everything excess, and overly bouncy, distracting image carousels and sliders. These are the modern equivalent of the animated GIFs that haunt the ancient underworld of amateur web design.

The next UX group are the attention-getters. When you want a user to focus on one point in the web sales pitch, add some content-related animation to make that element stand out. These elements are the descendants of the old splash pages, but they’ve been toned down for modern sites so they don’t overwhelm visitors. They’re very popular on Bootstrap sites, where one item out of two or three has added movement and maybe tells a short story. Typically they’re spread over a third or a quarter of the page, and they’re more cute than cinematic. The animation works a bit like a tiny video that dramatises the point of the element it decorates, like the visual equivalent of an <important> tag. It highlights something you want visitors to remember.

At the top of the animation tree are full-blown infographics. The genius of motion design means you can make infographics interactive (see page 48 for our tutorial). This often works better than leaving users passively looking at the screen as an animation plays through. It’s a huge win for all kinds of education and training sites, where you can build a simulation and help users learn about a topic by interacting with it. But animation isn’t an obligatory part. Adding a simple splash or bounce tells the user they’re in a new part of the site, and starts a new chapter in your site’s story. With careful tuning you can make the motion

Speed up jQuery
jQuery isn’t fast, and .animate is even slower. To make your animations faster and more efficient, try Velocity.js (julian.com/research/velocity) for a drop-in .animate replacement.
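For context on what .animate is actually doing per frame: jQuery's default easing is a simple cosine ramp usually called 'swing'. This standalone version is a sketch of the idea rather than jQuery's actual source, with our own helper `tweenValue` showing how an easing curve drives a tween.

```javascript
// "Swing"-style easing: p runs from 0 (start) to 1 (end of the tween),
// and the cosine curve accelerates in and decelerates out.
function swing(p) {
  return 0.5 - Math.cos(p * Math.PI) / 2;
}

// A tween is then just a lerp driven by the eased progress.
function tweenValue(start, end, elapsed, duration) {
  const p = Math.min(elapsed / duration, 1);
  return start + (end - start) * swing(p);
}
```

Every animation frame, a library like jQuery or Velocity evaluates something like `tweenValue` for each animated CSS property and writes the result back to the element's style.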

Edwin Europe edwin-europe.com Check the Denim fit guide to see the 360-degree view: animation with the wow factor.

Vimeo Cameo Subtle, simple and effective animations that work on different levels on different devices.



a thing of beauty that stands on its own. Make sure that you keep it short though…

Make UI engaging
Why spice up a UI with animation? Too much twitching and blinking can give users a migraine. But just the right amount of animation can make the difference between a boring site and one that users will keep coming back to. In outline, there are three kinds of UI animations. Highlighters decorate existing content to suggest an affordance. The most obvious examples are link decorations and pop-ups. Getting links right is always a challenge. A subtle mouseover underline animation can help draw attention to a link without making the rest of the page look busy and link-heavy. Another example is form error notifications. Have you ever pressed Submit on a site and then wondered why nothing happened? Site designers realised it was helpful to highlight mistakes in red, so users can see problems immediately. But sometimes this is too subtle. You can use animation to draw attention to problems by making incorrect elements move a little, as well as changing colour. Apple’s ‘You got that wrong, so this text box looks like it’s shaking its head’ is the classic example here.

Skeuomorphisms help make the site feel more physical. The aim here is to use visual metaphors to suggest physical objects. Often, just a hint of physicality is enough for more weight and presence.

Attention seekers are the final UI group. They provide important stand-out features that can’t be ignored, so they’re hard to get right. Examples include ‘FILL IN OUR FEEDBACK FORM’ pop-ups, but you can also find them scrolling up from the bottom of the page on news sites to offer breaking news. Attention seekers tend to annoy users, so consider using animation to make them less distracting. Make pop-ups appear at the side instead of the middle of the viewport, and put breaking news in a window. The animation should always help the user, not distract them.

Catch the dragon catchthe Car manufacturer Peugeot combine video, VR and animation to create a breathtaking experience.




Create a pulsating circle
Hello Monday technical lead Torben Dalgaard Jensen reveals how they created the effects. The Revelator website is built on the idea that you only need to use one platform if you want to run a music business. We wanted to showcase this idea by leading the user through an animated story that breaks the features into simple steps. Below we’ll explain how you can create the type of animation we used for the Promote feature. We’ll use trigonometry to create the pulsating effect and write it entirely in JavaScript. GreenSock TweenMax is used for the tweening, and we will be writing a JavaScript object instance for the circle so that we can preserve modularity and readability.

1. Initial setup First, we create a container and an array to hold the circles, then centre the container within the window. We also write some stub code that we will revisit later.

2. Create DotCircle.js We will go through the methods for the stub code for the DotCircle in the next steps. We calculate the distance in degrees between each dot and create a radius object that we will use to tween the position of each individual dot. We are using 7px as the value for distance – you can play around with this value to create a circle with fewer or more dots in it.

3. The init method Here we create each individual dot. We need to convert the degrees to radians and then calculate the initial position using trigonometry. We find it easier to work with degrees than radians, but this step can be skipped if you do all your calculations in radians.
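The maths from steps 2 and 3 can be sketched like this. The variable names are our own, not Hello Monday's production code, but the calculations are the ones described: angular spacing from the 7px dot distance, then degrees-to-radians conversion and trigonometric placement.

```javascript
// Step 2: with dots spaced ~7px apart along the circumference,
// the angular distance between neighbouring dots is:
function degreesBetweenDots(radius, spacing = 7) {
  const dotCount = Math.round((2 * Math.PI * radius) / spacing);
  return 360 / dotCount;
}

// Step 3: convert degrees to radians, then place the dot with trig.
function toRadians(degrees) {
  return degrees * (Math.PI / 180);
}

function dotPosition(angleDeg, radius, centerX, centerY) {
  const a = toRadians(angleDeg);
  return {
    x: centerX + radius * Math.cos(a),
    y: centerY + radius * Math.sin(a),
  };
}
```

Tweening the `radius` argument (as TweenMax does in the tutorial) and recomputing each `dotPosition` every frame is what produces the implode/pulse motion.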

4. Implode and pulse These methods tween the radius value and update the position of the dots (see the next step). In this example they keep triggering each other when the tween completes, creating the pulsating effect.

5. Update Now we update the position of the dots based on the new radius value we are tweening, delaying them incrementally to create the staggered effect.

6. Ready to rock! Going back to the main script – we now create three instances of the DotCircle with incrementing radius, then start the animation, again using a delay to stagger them. For the full code in this tutorial, make sure that you check out FileSilo.
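Step 6's setup can be sketched as pure data. The base radius, step and stagger values below are illustrative, not those used on the live site; the point is simply the incrementing radius and the incremental delay.

```javascript
// Build the configuration for the three concentric DotCircles:
// each circle is a bit larger than the last, and each starts its
// pulse a bit later so the circles appear to ripple outwards.
function circleConfigs(baseRadius, radiusStep, staggerSeconds, count = 3) {
  const configs = [];
  for (let i = 0; i < count; i++) {
    configs.push({
      radius: baseRadius + i * radiusStep,
      delay: i * staggerSeconds,
    });
  }
  return configs;
}

const circles = circleConfigs(60, 30, 0.15);
// Each config would then be passed to a DotCircle instance, with the
// delay handed to the tweening library when the animation starts.
```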

Torben Dalgaard Jensen Technical lead at hellomonday.com

We wanted to create a visual representation and give a sense of the excitement and relief musicians feel when they release their music and see the mood of their fans rise.



Animations: keep it simple
Interviewing Julian Shapiro
Animations have varying levels of expressiveness and personality. The most colorful kind – like 3D flips, bounces or elastic easing – can be a great way to make an interface more fun, but since they’re so distinct it’s easy for them to be visually fatiguing. It’s generally better to stick with simpler animations for things that people interact with frequently and save the more playful ones for areas that are used more seldom. Another rule of thumb I use is to show slow and hide fast. For example, I might run a half-second intro animation for a modal dialog but hide it without any animation at all. The rationale here is that you’re more willing to watch something you requested animate in than you are waiting for something you dismissed to animate out.

Hakim el Hattab

Q. Which web animation technologies currently excite you and why? CSS transitions and animations are the technologies that I care most about. Combined they are flexible enough to achieve the effects I want and it’s been great seeing them gain broad support so quickly.

Q. The use of web animation in any project needs to be carefully considered. What advice would you give to designers and developers? When working on a web app keep in mind that excessive and lengthy interface animations can reflect negatively on the app as a whole. If animations are too slow the app itself is perceived as sluggish. If there are too many things animating too frequently it won’t feel reliable and robust. Keep animations brief and remember that not everything needs to animate.

Q. Web animation has vast potential. How do you see it evolving over the next couple of years? I expect tooling to get a lot better and I know that browser vendors are making good progress towards this. Working with animation is a visual process often requiring many iterations of number tweaking and previewing. If friction can be removed in that workflow we’ll see higher quality animations. The Web Animations API is looking promising too. There’s certainly a need for animations that can be controlled more explicitly via a script than what’s currently possible with CSS transitions and animations.

Designer and developer

What next for animation? In the past, web design was split into epic splash animations and relatively small and trivial UI tweaks, with a few sliders in the middle. What happens when the web animation API becomes more widely used and designers can work with more complex animations? Some designers will stampede back towards epic splash pages. Even though loading times and browser speeds make splash pages more feasible than they were in their heyday, it’s likely they’ll remain a sideline. The trends in current design are clear: they’re all about integrating animation with content – or rather, about enhancing content with tasteful and relevant animation. Parallax scrollers, mini-videos and animated icons do this already, with varying degrees of success. For examples of possible future trends, run a search for the animation tag on Dribbble. Most examples use the same contemporary flat, minimal, cartoon-style design language common on the web, but add an animated twist to highlight a feature or make it more memorable. Eventually some icons will be animated as a matter of course. Balance is key. Too much movement is confusing for users. Animated GIFs were the bane of the early web, and animated icons could easily become the modern equivalent. But they probably won’t, because designers are more experienced now, and there’s more of a trend towards minimal design, where designers keep removing elements until they’re left with the bare essentials. If those essentials happen to include some movement that can justify its existence by telling a story about some content or highlighting the affordances of an icon or other UI element, that could potentially make the web much more creative for everyone.

Find out what Velocity.js author Julian Shapiro thinks is going to happen next on the animated web. Read his comments about the difference between good and bad animation design, learn about the imminent speed and efficiency bump, discover how too much creativity can be a bad thing, and why education matters more than ever in motion design.

Rachel Nabors’ UX Guide This excellent slide deck introduces the latest trends in animated UX design from the perspective of a trainer and designer. You may not necessarily agree with all the points – especially the one about the return of splashy flash screens. But the rest is definitely well worth reading for a creative and inspiring overview.

Val Head’s Videos Not so much about the future, but you may as well get a good guide to the present state of the art before you aim any higher – this collection of tutorial videos covers all the CSS3 animation bases, and includes an impressive collection of design examples. There’s only about half an hour here in total, but it’s certainly time well spent, especially if you’re just starting out.

Narayan Prusty’s Web Animation Tutorial If API specs make you dizzy you can find some simple code examples of the Web Animation API here. There’s a handy comparison with traditional CSS3 animation, so that you can see how to move from the old API to the new one. It’s not going to win any design awards, but it’s a good place to start if you’re feeling lost and confused about where animation will go in the future.





Web of Things The IoT revolution has only just begun. The Web of Things is here to brush aside proprietary protocols and help bring web standards to the party

// Web of things

AdaFruit on IoT Limor Fried might be a media darling, but her blog nevertheless tends to contain all kinds of interesting morsels somehow related to the Internet of Things.

Daniel Rosenstein @IoTDan A Microsoft expert and a must-follow for anyone who uses .net technology in IoT.

IoT to WoT

Olimex @Olimex This Bulgarian veteran manufacturer of developer boards is an essential resource.

2015: 5 billion connected things in use. 2020: 25 billion connected things in use. It is estimated that by 2017, IoT will have 20 billion connected devices, compared with smartphones’ 7 billion connections.

The paper published at dguinard-fromth-2010.pdf can be considered one of the first pieces of research outlining the way from IoT to WoT.

The IPv6 protocol means that there will be 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses, compared to IPv4’s 4.3 billion.

IoT slideshow Hosei University in Japan has an impressive lecture slideshow (Wcis.k.hosei. Lecture11.pdf) with many application examples of IoT in action.

Atmel Corporation @Atmel

Atmel is a fierce Microchip competitor and a global leader in microcontrollers.

Microchip Technology @MicrochipTech Microchip can be considered the go-to source for reliable microcontrollers.

Hack a Day You never know what to expect visiting hackaday.com. The team does an excellent job at collecting all kinds of information which might interest tinkerers and developers.

The IoT is the next frontier of distributed computing and ambient intelligence, and as such is meant to change the way we live, work and play. However, IoT is still perceived by most people as ‘just’ a framework of things connected and controlled by a smart device. A more comprehensive view sees IoT as the bridge between the digital and physical world, the space where the natural boundary between the two becomes blurred and new things happen. The Web of Things is the additional application layer which implements a broader view, allowing links with the web and web data, content and services. Dizmo fully embraces the Web of Things, its concept and programming technologies. It provides the infinite whitespace where the objective of the Web of Things, to extend the reach of the IoT and to simplify its full implementation, can be achieved.

Luigi Mantellassi, CMO at Dizmo


When Kevin Ashton was working at Procter & Gamble, he recognised that RFID technology could simplify the ever-confusing factory floors. It takes but one look at the vast factory halls of an aeronautic company – be it Tupolev, Bisnovat or Ilyushin – to see, as Howard Moon would put it, a microcosm of creative chaos. Kevin Ashton used the term Internet of Things to tie his idea into the dot-com bubble. His suggestion was simple: provide the individual containers with some degree of intelligence to enable process optimisation. For example, a bakery could know that a particular type of product, ie a chocolate cake, is running low; an email could then be sent to inform waiters that pushing sales of chocolate cake is not sensible due to the lack of stock. Miniaturisation has enabled chip vendors to produce ever-smaller microcontrollers which can subsist on amounts of power that would have seemed obscene ten years ago. Back then, an MCU draining ten milliamps was considered frugal. Today, this would be excessive. For individuals, the Internet of Things can bring both benefits and problems. For example, insurance companies now deploy health-measuring devices which are to be worn around your wrist at all times. The data collected by these systems is transmitted to a central server, where your GP can analyse it in order to determine whether you are behaving in a healthy and sane way. Thus, a binge-drinking party might be cut short by a call from your friendly GP reminding you that you already have hypertension – whether this is a good or bad reminder will need to be decided by each individual, of course. This kind of scenario is becoming more common, and we will look at a selection of topics connecting this upcoming field of technology to our everyday lives.

Turning IoT into WoT HOW THE WEB OF THINGS NETWORK MAKES USE OF WEB STANDARDS Ever since the OSI model was specified, networked systems have tended to be broken down into individual layers. The Web of Things can best be described as the application layer for the Internet of Things. The IoT looks at the technological side of things that may not have looked techy before: a coffee machine which communicates via a proprietary protocol is a perfectly valid IoT application, for example. In the WoT the situation is a bit more complex. Normally, Web of Things solutions are based on standardised web technologies. This means that the individual devices tend to be addressable via a URL.

Moving our aforementioned coffee machine to the WoT would require the implementation of a standardised communication protocol: be it low-level or a more general and developer-friendly interface such as JSON via REST. One interesting extension to the topic involves real-time systems: HTTP is badly suited for this because of its relatively complex handshake architecture. Developers can solve this problem by repurposing the streams of media protocols such as RTP/RTSP. XMPP or WebSockets may also see some use here.
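The JSON-via-REST option can be made concrete with a small sketch. Everything here – the device model, the property names, the handler shape – is invented for illustration; a real Web of Things deployment would sit behind an HTTP server rather than a plain function:

```javascript
// Hypothetical WoT resource: a coffee machine exposed as JSON state.
const coffeeMachine = {
  id: 'coffee-1',
  properties: { waterLevel: 0.8, status: 'idle' }
};

// REST-style dispatch: GET returns the JSON representation of the
// device, PUT merges new property values into its state.
function handleRequest(device, method, body) {
  if (method === 'GET') {
    return { status: 200, body: JSON.stringify(device.properties) };
  }
  if (method === 'PUT') {
    Object.assign(device.properties, body);
    return { status: 204, body: '' };
  }
  return { status: 405, body: '' }; // method not allowed
}
```

Wiring `handleRequest` into Node's built-in `http` module – or swapping the transport to WebSockets for the real-time case – would not change the resource model itself.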

Microsoft on IoT Open com/iot in order to treat yourself to a selection of all things IoT and Redmond. You might not believe it, but Microsoft is a significant IoT player.

Arduino When working with Arduino planars, creating an Internet of Things system can be as easy as hacking together a web interface and a basic hardware driver


When looking for IoT solutions, always keep in mind that your customers’ brains are firmly tuned to the good old radio station going by the name WIFM. In case any of you are not aware of the meaning of this acronym: it stands for ‘What’s In it For Me?’. Successful IoT projects tend to be the ones which provide monetary or comfort value to their customers: humans are, by nature, not masochists and want a bang for their buck.

Arduino and friends Picking a microcontroller is a daunting task. Manufacturers provide an insane pile of different models: large vendors such as Microchip sell thousands of different controllers. Purchasing a chip is but a small part of the solution: all kinds of additional hardware are required in order to get the MCU up and running. This might not be a problem for experienced electrical engineers, but if you don't know the difference between a resistor, a transistor and a thyristor then taking this approach is not sensible. A variety of third-party vendors provide so-called evaluation, or development, boards. Initially intended for electrical engineers evaluating a new chip type, eval boards provide simple access to various peripherals. An Italian researcher called Massimo Banzi was among the first to figure out that eval boards could also be used as single-board computers. His

Cheap as chips When purchasing electronic components, local retailers tend to be ideally suited for prototypes. Once large-scale manufacturing starts, importing your components from China provides significant savings. One good website to start out with is AliExpress: be aware that shipping can take up to a month and that American Express credit cards are not accepted.

Arduino series has since established itself as the dominant player in amateur embedded development today.


Apple Watch One of the latest and most popular smart devices to enter the consumer market

RasPi This single-board computer can be used to connect to all kinds of items. What about a doorbell that sends a text, makes a call and captures video of who’s at your door?


What’s WoT? THE WORK THAT DIFFERENTIATES THE TWO NETWORKS Having started out with the use of RFID, the Internet of Things can best be considered the lower levels of the Web of Things. IoT-related applications are confined to the state information contained in real-world objects. This must be made available to software actors and other devices, which then act accordingly. For example, a shipping container could be equipped with an RFID tag containing its ID information and a microcontroller measuring the weight of the contained iron ore. An intelligent steel plant could then parse this data in order to determine whether a second ore smelter needs to be fired up. Even though the distinction between IoT and WoT is blurry at best, at the current state of play it is safe to consider lower-level work the domain of IoT development. In practice, developers should not be too concerned with the two terms: just choose the one which sounds best.
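The steel-plant example boils down to a few lines of code. The tag IDs, weights and the capacity threshold below are invented for illustration:

```javascript
// Telemetry as reported by the RFID-tagged containers (made-up values).
const containers = [
  { rfid: 'C-001', oreKg: 18000 },
  { rfid: 'C-002', oreKg: 22500 },
  { rfid: 'C-003', oreKg: 9000 }
];

// Fire up a second smelter once the incoming ore exceeds
// what a single smelter can process.
function needSecondSmelter(readings, smelterCapacityKg) {
  const totalKg = readings.reduce((sum, c) => sum + c.oreKg, 0);
  return totalKg > smelterCapacityKg;
}
```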

The biggest strength of the Arduino platform is its standardised expansion slot. Known by the name of Shield, it has two rows of sockets. Expansions are plugged into these, and the compiler architecture ensures that driver deployment isn't an issue in most instances. Due to the ubiquity of the Italian process computer family, a large ecosystem of Shields has cropped up. Your Arduino can be connected to wired Ethernet, Wi-Fi or Bluetooth LE – it is but a question of selecting the correct Shield and plugging it into the planar. In addition to that, a large group of more or less efficient combination planars has seen the light of day. The super-pricey Arduino Yún combines the AVR-based Arduino with a Linux-based minicomputer which has a Wi-Fi transmitter. A circuit connected to an Arduino

The Internet of Things can be considered the pinnacle of the convergence trend started by smart TVs, smartwatches and similar devices. Human life would be made easier if the things surrounding us were provided with intelligence of their own. Imagine a washing machine which completes work on sensitive shirts only when its owner is on their way home. There's also the old adage of the fridge which automatically orders more Red Bull as stock dwindles. For developers, unlimited opportunities await in this field. Take a look around you and we're sure you will see an endless deluge of dumb devices begging to be made a little smarter. Programming their embedded firmware is, of course, one way to profit from this trend. This, however, requires significant amounts of highly technical knowledge. Fortunately, the frontend of most devices needs to look great: an area where web designers' creative experience will be of unlimited value to manufacturers and developers alike. To cut a long story short: IoT and WoT are buzzwords which will make many people really, really rich. Web developers can profit in two ways: first, endless opportunity awaits in the creation of user interfaces for devices. Second, accessing IoT and WoT peripherals adds a new layer of context sensitivity to your systems.

Yún can link into a local Wi-Fi network with minimal effort. Third parties provide similar boards with embedded RFID or Bluetooth Low Energy radios. Arduino is but one possible system. The Raspberry Pi has recently seen quite a bit of use in the IoT space. Boards from the Bulgarian consulting company Olimex have also made significant inroads. Once your solution has reached production readiness, bundling it with a process computer can be expensive due to space and material cost constraints. Many single-board computers have an open hardware design, letting you embed their circuits into a planar of your own. This is especially true of the Arduinos, except for the Yún: they are based on stock AVR processors that run on simple breadboards.

Many single-board computers have an open hardware design, letting you embed their circuits into a planar of your own

When working on small-scale solutions, the added cost of an evaluation board tends to be offset by the cost of getting boards and sensors manufactured. In this case Microsoft's Gadgeteer can be an attractive alternative – it provides a set of prespecified components ranging from displays to cameras and a wide array of other sensors. Their most significant benefit is that they can be connected using a standardised interface: building up your IoT system is as easy as connecting the elements to one another.

The IP address problem Radical proponents of the Internet of Things propose that, at some point, every device in the world should be intelligent. Sadly, this does not work well with the existing IP protocol, as it was developed to provide about 4.3 billion addresses. This was more than enough for the tiny network of its day – the explosive growth of the internet has since led to a phenomenon known as IPv4 address exhaustion. The price of individual IP addresses has been on a permanent climb ever since.
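The gap between the two address spaces is easy to verify: IPv4 addresses are 32 bits wide, IPv6 addresses are 128 bits, so the counts come straight out of the exponents:

```javascript
// BigInt is required: 2^128 vastly exceeds Number's safe integer range.
const ipv4Addresses = 2n ** 32n;   // 4,294,967,296 - roughly 4.3 billion
const ipv6Addresses = 2n ** 128n;  // about 3.4 x 10^38

// Even with ten billion people, IPv6 leaves an absurd surplus per head.
const perPerson = ipv6Addresses / 10_000_000_000n;
```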




Smart LED bulbs

Run tracking Music can transform a boring jog into a meditative experience. Apple promoted its iPhone 3GS via cooperation with Nike which has since been expanded across the entire product portfolio. The idea is relatively simple: a small sensor is embedded into a space below the sole of your pair of sneakers. While running, data is transmitted to the iPhone – it uses this data for profiling. More advanced systems could perform real-time exercise intensity tracking: overtraining is the bane of amateur and professional joggers.

Turning lights on requires you to walk to a nearby switch in the dark. Belkin's WeMo system wants to break this never-ending cycle by providing a smart adapter which is plugged into a mains outlet. A companion application connects to the adapter, permitting you to turn the attached peripheral on and off as needed. In addition to that, smart LED bulbs are sold – their colour hue can be adjusted dynamically. Finally, third parties are permitted to license the standard for use in their own products.

Fulfilling garbage potential

Mobile Internet of Things blog

Overflowing trash cans are an eyesore. Sadly, emptying them every day puts an unnecessary drain on community resources. Bigbelly solves this problem by providing intelligent garbage cans. Each Bigbelly can contains a small solar cell. It provides energy for a small wireless transmitter which returns fullness information to a cloud service. Municipal administrators use this information to optimise garbage collection: the waste collection service no longer needs to check which cans are empty. Furthermore, valuable information on citizen habits can be collected by analysing the amount of garbage produced.

In the media, Network Address Translation (NAT) is considered one way to mitigate the problem. In practice, however, it does not really work – routing multiple devices of the same type through a router can become a nightmare to set up, and furthermore makes the use of a unique port number difficult. The solution is a new generation of the IP protocol which goes by the name of IPv6. It has been around for ages but, sadly, adoption has been rather slow so far. The biggest barrier to widespread adoption of IPv6 is that IPv4 hosts cannot contact their more technologically advanced brethren directly. In the case of the Internet of Things, this should not be too much of a problem. If all sensors and their controllers are based on IPv6, there are no legacy systems which need to be supported in the first place. As of writing, most providers are able to handle IPv6 traffic. In practice, developers rarely need to worry about this – keep it in the back of your head though, in case something goes really, really wrong or some pesky VC asks you about it.


Dr. Florian Michahelles’s blog can be accessed via florian-michahelles. It contains links to all kinds of material which might be interesting to IoT devs.

Due to widespread adoption… [Bluetooth LE] is a winner – it could even thrive without the IoT itself


Harness the power

We name-checked Bluetooth LE earlier. The technology – called Bluetooth Smart when trading in the consumer space – shares its name and frequency hopping with its famous ancestor. Other than that, it can be described as a complete redesign of the original shortwave wireless service. Bluetooth LE fixes many previous problems. The discovery process, which once took more than ten seconds, can now be accomplished in less than a second. The availability of the GATT profile ensures that developers and device manufacturers don't need to bother themselves with creating, specifying and certifying Bluetooth profiles.

Bluetooth LE works on a relatively simple software model based on characteristics. A characteristic is an attribute of a device: it could be the colour of an LED or the amount of energy left in the battery. Further information on each characteristic can be contained in descriptors, and both can be wrapped up in services. The GATT profile acts as a transport layer around these concepts: you use it to read and write characteristics and descriptors. Implementing Bluetooth LE by hand is a painful and futile task: a variety of companies provide ready-made modules which can be attached to a process computer of choice via SPI or I2C. LE is far superior to proprietary protocols like the ones used in the once-popular LPRS easyRadio family. Due to the widespread adoption of

// Web of things

WoT community W3.ORG/COMMUNITY/WOT If the official Web of Things Interest Group is too expensive, the WoT Community Group might be an alternative. It was first organised in 2013, and was responsible for the highly successful workshop in Berlin which spawned its larger brother. Sadly, its activities have since started to slow down – no updates have been posted since June 2014. Nevertheless, joining the remaining 160 members of

the WoT Community Group might be an interesting way to start connecting yourself to the W3C. Linking up is as easy as visiting their website. Fill out the form, and you are all set to go – if there still is anything taking place, that is. You can also look for local events. In most big cities, one or more Internet of Things meetups take place – joining one is a cheap way to find out more.

WoT Interest Group W3.ORG/WOT/IG After the success of the Web of Things workshop held in Berlin in 2014, the W3C started a Web of Things Interest Group. The group will be responsible for defining standards: the topics discussed will range from simple things such as scripting language subtypes to more complex issues such as data encoding, metadata formats and the protocols used for communication. Being a member of this group permits you to provide

input to these standard drafts. It can also serve as a first-class opportunity for networking, as the current chairs are from Siemens and Intel and it wouldn't hurt to have these contacts. Sadly, joining the Web of Things Interest Group is not cheap for an amateur developer. To join the W3C you will need to pay the minimum annual fee of €1,950 (£1,395) for two years. After that, the fees will quadruple and will stay in the range of about €8,000 (£5,725) a year.

Bluetooth LE and the highly cooperative way the Special Interest Group handles licensing, this technology is a winner – it could even thrive without the IoT itself.
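The service/characteristic/descriptor hierarchy described above can be modelled in miniature. This is a toy JavaScript sketch of the concepts only – the UUIDs, names and values are invented, and a real device would be accessed through an actual GATT stack:

```javascript
// Toy model of the GATT hierarchy: services wrap characteristics,
// and each characteristic carries a value plus optional descriptors.
class Characteristic {
  constructor(uuid, value, descriptors = {}) {
    this.uuid = uuid;
    this.value = value;
    this.descriptors = descriptors;
  }
}

class Service {
  constructor(uuid, characteristics) {
    this.uuid = uuid;
    this.characteristics = new Map(characteristics.map((c) => [c.uuid, c]));
  }
  // GATT-style access: read and write characteristics by UUID.
  read(uuid) { return this.characteristics.get(uuid).value; }
  write(uuid, value) { this.characteristics.get(uuid).value = value; }
}

const lamp = new Service('aa00', [
  new Characteristic('aa01', 'red', { description: 'LED colour' }),
  new Characteristic('aa02', 87, { description: 'Battery level (%)' })
]);
```

Reading the battery level or recolouring the LED then becomes `lamp.read('aa02')` and `lamp.write('aa01', 'blue')` – which mirrors how GATT reduces every device interaction to reads and writes of attributes.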

Sensors... By and large, single-board computers are a relatively stupid bunch. They can read digital signals, while a much slower A/D converter samples analogue voltages with moderate accuracy. When performing measurements of real-world dimensions, the first step usually involves analysing the type of quantity to be measured. If you are dealing with a voltage, the situation is easy – you need to check if the A/D converter can sample it directly or if amplification or dampening is needed. If the dimension at hand is not of an electronic type, a transducer is needed. These can be grouped into two categories: dumb and smart ones. The classic example of a dumb transducer is an LDR or an NTC: they are resistors which change their value in relation to the measured dimension. Using them requires the amplification or dampening of the

voltage, which must then be digitised. Due to the nonlinear characteristic of most transducers, a complex linearisation is required in order to yield useful values. This problem can be bypassed by using smart transducers. They communicate with your single-board computer via a defined protocol: some simply provide parallel data, while others can be addressed via SPI or I2C. Using these industry-standard bus protocols simplifies sensor integration: in the case of an Arduino, accessing the peripherals is as easy as invoking the correct library functions. The datasheets of smart transducers tend to contain one or more sample circuits, which explain how to condition the incoming signal for maximum accuracy.

Diana Kupfer @dianakupfer Veteran media manager who was recently assigned to manage a large IoT focus group. Retweets loads of valuable messages.

...and actors

Reading values is but part of the solution. Acting on collected information usually requires quite a bit of power: the GPIO ports of the average MCU are not strong enough to provide the kind of current needed to move motors, switch relays or do other funny things. Well, the transistor comes to the rescue here: this component can amplify small currents, thereby increasing the current drive capability. Connecting components via a transistor is not always the ideal solution though: when you are switching mains or even higher voltages, galvanic separation is often desired, and this can be achieved via the use of a relay. Relays are an electromechanical component which combines a coil and a magnetically driven switch. Chasing a current through the coil creates a magnetic field, thereby moving the switch to the closed position. This lets the load current flow over the switch, which is not connected



Enriching lives EXPERT INSIGHT

Brad Fry Director of Strategy & Insights at Folk By the end of 2015 it is predicted that there will be 4.9 billion connected things in use. By 2020 it is estimated to reach 25 billion. Where do you think the IoT's greatest potential lies? At the micro level the greatest potential for us as end users is in convenience. There are so many ways our

technology enriches and simplifies our lives, but we need to take too many manual actions to benefit from this. When everything is connected, things will just work: no buttons to tap or things to log. Moving to the macro scale, the volume of data collected by billions of connected devices can help profoundly. It will help improve weather models, redesign traffic systems and accurately diagnose illness earlier. The Web of Things is an attempt to use existing and well-known web standards rather than specific manufacturer standards to help simplify the creation

As long as we have compatible endpoints, we can probably relax about the transport protocols used by the devices themselves

electrically, to the relay's coil. Sadly, relays can pose multiple issues. First of all, the coil can create magnetic fields, which might not be entirely acceptable in some circumstances. Secondly, the actual switching process can cause sparks, which can ignite combustible atmospheres. Finally, the coil needs quite a bit of current – use a transistor to amplify, and don't forget the kickback protection diode. Relays are limited to switching on and off. In many cases, a more granular level of control is required. This can be achieved via D/A converters: they take a digital value and convert it into a variable voltage or a variable current. A common real-world task is motor driving: be it a big, beefy cable winch lifting a crate of arms, or a tiny motor moving a print head across a rail.

Run an engine

Motors come in an incredible variety of types: entire books have been written cataloguing each of the subtypes. When dealing with the IoT, you usually have to use one of three different types. Stepper motors can be considered fine motor skills wizards: they are extraordinarily accurate and move by a fixed number of degrees when provided with an impulse. When combined with an external gear shift, amazing levels of accuracy can be achieved. DC motors are on the other side of the equation. Their rotation speed depends on the load and the voltage: the only way to ensure a reliable RPM involves measuring it continuously and adjusting drive voltage accordingly. Exerting very large forces requires the use of AC motors. They come in the forms of the synchronous and the asynchronous motors – sadly, discussing them in further

Martin Woolley @bluetooth_mdw Extraordinarily active developer advocate working for the Bluetooth SIG. A must-follow due to his diverse range of topics.


of IoT applications. How quickly and how effective do you think they will be? We've been through similar standards wars in the past: with browsers, video codecs and even network protocols (anyone remember Novell? I actually liked IPX). They have all eventually converged into a standard (although Bluetooth still has a way to go). Modern web standards now rely on compatible endpoints – dumb pipes and smart hubs – so the lack of standardisation may be less of an issue. Apple has made a big play towards acting as an intermediary in connected devices (HealthKit and HomeKit). As long as we have compatible endpoints, we can probably relax about the transport protocols used by the devices themselves. With more and more connected devices joining the IoT party, what will designers and developers need to consider when creating UIs for applications?

detail would exhaust the space available. The asynchronous one is especially tricky, as it has still not been fully modelled: when working on an application with an asynchronous motor, you should not be ashamed to seek help from professionals.
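The measure-and-adjust loop described for DC motors can be sketched in a few lines of JavaScript. The motor model, gain and numbers below are invented for illustration – a real controller would read an encoder and drive a PWM output:

```javascript
// Toy plant: rpm rises with drive voltage, a constant load drags it down.
const rpmOf = (volts) => 300 * volts - 100;

// Proportional correction: raise the voltage when the motor runs too
// slowly, lower it when it runs too fast.
function adjustDrive(targetRpm, measuredRpm, driveVolts, gain = 0.002) {
  return driveVolts + gain * (targetRpm - measuredRpm);
}

let volts = 0;
for (let i = 0; i < 50; i++) {
  volts = adjustDrive(1500, rpmOf(volts), volts);
}
// rpmOf(volts) has now settled very close to the 1500 rpm target.
```

With this gain each iteration shrinks the error by a constant factor, so the loop converges quickly; pick the gain too high and the same loop oscillates instead, which is exactly why real motor controllers are tuned carefully.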

The power bank Power supply remains the Achilles' heel of the Internet of Things: when a sensor runs out of energy, it cannot transmit any further data. Users have shown themselves to be extraordinarily hesitant to change the battery of their gadgetry frequently. If you want to keep your clients happy, try to aim for battery lifetimes that range from six months to almost infinity. This obviously disqualifies some battery types – NiCd and NiMH cells suffer from a relatively fast rate of self-discharge. This means that internal leakage slowly drains the energy stored inside: if the cell sits on a shelf for a few months, it might end up empty. Modern MCUs provide low-power standby modes: when no data is to be

// Web of things

Design has been moving from complex to simple for the last decade and that will continue. We're going from the traditional engineering-led approach of making something that does everything to user-centric things that do a few jobs brilliantly. The challenge for developers is to make smarter AI to figure out what I want before I know it myself, like Google Now. All the big strides being made in technology today are in this area: just look at Timeful (recently purchased by Google), a calendar application with artificial intelligence, or Transit (just imagine what they can do with all this Internet of Things data). For my life with connected devices, voice is the interface of choice. As if being able to turn my heating on from bed wasn't lazy enough, I'm looking forward to the day I can declare “Hey Siri, turn the heating on” – just need to wait on HomeKit!

The future THE COMMON PROTOCOL TRENDS Ever-rising marketing costs ensure that devices will get smarter over time: selling customers a service, rather than a gadget, increases loyalty and provides a continuous revenue stream. LG sold a combination of smart TV and mobile phones in the hope that if clients have a TV set which works only with LG phones, they are likely to opt for the handset too. As LG is a giant in the whiteware business, we can assume this trend will continue. The IoT will standardise itself: be it by the manufacturers establishing a common protocol, or by a third party selling a cross-platform framework.

collected, hibernation occurs. Sadly, this is but part of the solution. In practice, a field of technologies which goes by the name of energy harvesting promises some solutions for this very problem. Energy harvesting can take more classic or more modern forms. The most traditional form, by far, is a solar cell: it can collect a small amount of energy from artificial light, and much more power once provided with sunlight. This, however, is but a small part of the available options: Peltier elements can be used to generate energy from differences in ambient temperature, while more exotic systems use bloodflow or high-frequency radiation emissions to collect small amounts of energy. These will then let the attached system eke out its lifespan. Designing energy harvesting circuits requires a high level of knowledge in electronics. Fortunately, various manufacturers provide ready-made integrated circuits which require you to link them with an energy source and a temporary storage medium: the rest of the work is completed inside the integrated circuit.
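The six-months-to-infinity lifetime target mentioned earlier is easy to sanity-check with a duty-cycle estimate. The figures in the example are illustrative assumptions, not measured values:

```javascript
// Average current of a duty-cycled sensor node, then battery life in days.
function batteryLifeDays(capacityMah, activeMa, sleepMa, activeFraction) {
  const averageMa =
    activeMa * activeFraction + sleepMa * (1 - activeFraction);
  return capacityMah / averageMa / 24; // mAh / mA = hours; / 24 = days
}

// A coin cell (~220 mAh), 10 mA when transmitting, 5 uA asleep,
// active 0.1% of the time: comfortably beyond a year.
const days = batteryLifeDays(220, 10, 0.005, 0.001);
```

The same arithmetic shows why sleep current dominates the design: with the radio on permanently the same cell would last under a day.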






26: HTML & CSS patterns

34: Customise maps with the Google Places API
40: Create virtual reality panoramas
44: Build iOS-style web applications with Framework7
48: Deploy your web apps to Heroku
54: Develop a web app quickly with Lucee
60: Atom
66: Create desktop applications with Electron
70: Use NW.js to develop desktop applications
76: Robust JavaScript code
82: Make a playlist with Last.fm
88: Complete guide to Git
94: 20 best GitHub projects
100: Build a friendly bot to enhance your Slack group
106: Create API schemas with Swagger
112: Future HTML


















• Help to standardise your code

• Consist of a suite of code snippets

When writing code to a defined pattern it is easier to maintain consistency, even across a large-scale project. This consistency is vital, as it enables you to be confident about the quality and stability of your solution. Well-structured code is much easier to test, debug and fix; for example, a pattern that separates data, logic and presentation enables each area to be tested independently. This means that any issues within a code solution can be identified and eradicated with very little debugging effort. As you gain experience in working with standardised approaches and patterns, you will also find that estimating projects becomes more accurate.

Code snippets provide reusable functionality that is already tested and proven within other projects, stopping us from writing the same or similar solutions time and again. Design patterns provide us with a structure to work within, a set of rules if you like, that help us make consistent decisions when writing code. Code snippets themselves can fit within a range of patterns and, when considering larger reusable sections of code, you should bear in mind how these fit with any patterns you may be working with. Don't expect a design pattern to provide you with answers; it will instead give you the means to find the answers more efficiently.

• Provide consistent UI behaviours

• Negate the need for user testing

When considering the user experience of a digital interface you will be thinking of patterns without even realising it. Every user holds a certain level of expectation for the behaviours or steps through a user journey. These expectations are generally based on repetition and in turn help us to establish standards over time. There is no one UI pattern that suits all cases, but by ensuring that a platform does not switch between patterns, your users will not get confused. This is really important in order to design and build digital products that not only work, but work well for users.

When thinking about UI patterns, we are generally making sure that any journeys or interactions are considered and based on proven approaches we know users understand and expect. Therefore, it is easy to assume that our platform is fine for users without any form of user testing. This is a risky assumption to make: users across different demographics, industries and even in differing environments can and will expect differing behaviours. Even if you are building a solution for users you have built for previously, it is important to remember that these user experience standards have evolved over time and will continue to do so.

It is very easy to concentrate solely on the exact requirements and agreed functionality you have defined with a client. In doing so, you can be certain that you will design and build something that your client is very happy with, making the project a success. However, what happens when your client changes their mind, or you are requested to extend or adapt some functionality as part of a subsequent project? This can incur a lot of refactoring or code rewrites, but by coding with patterns you can cater for future extensibility and build something you are happy with.

The use of design patterns spans from providing an overarching structure to an application to more granular patterns for small, decoupled pieces of functionality. Most projects encompass a range of design patterns, separated by UI and functional, and one pattern may contain other, smaller patterns. For example, AngularJS is based on an MVC application pattern which separates your Model, View and Controller; however, your controller can contain more granular JavaScript patterns such as Constructor or Module.

Diverse patterns

Due to the range of design patterns, there isn't one master pattern that can be defined or found. This is a good thing, because if this wasn't the case, coding could get quite boring very quickly.

• Enable reuse and maintainability

• Provide one approach for all projects

WHAT CAN PATTERNS DO FOR YOU? You may think that design patterns don't apply to you or your work, and that they are only relevant to people who work on large-scale projects or in sizable development teams. However, it is very rare, even if you actually are working alone, that you only work with your own code, and that's because design patterns are in use in pretty much every piece of open source code that we will find ourselves using. If these technologies and frameworks had not been produced with core underlying design patterns then they would be extremely difficult to work with, understand and maintain. In short, a plugin or framework with no design pattern will not have much longevity within the community. Even if you are building in isolation, with no use of other open source code, it is likely you will need to come back to your code at some point in the future. We have all experienced code that was written some time ago

and noticed some core underlying differences when comparing it to how we write code today. If we consider design patterns when we write our code, the technical debt in that code will not cost us nearly as much time when we come back to it one or two years down the line. These problems can be largely eradicated by understanding the design patterns and approaches that were used, making it much easier for us to get back into developing within an old code solution. If you do tend to work with other developers, design patterns can be a useful means to communicate how an application has been built, or even how it is planned to be built. This can really save time in getting multiple people up to speed on the same project, while still maintaining consistency and ensuring stability within the final product.

Design patterns can be a useful means to communicate how an application has been built or even how it is planned to be built


Design patterns have been around for a long time. Realising how we have already been using them and then taking advantage of the consistency they can provide is essential in building modern web applications. Luke Guppy, Frontend director

THE DIFFERENT PATTERN TYPES UI patterns These are common design solutions to functional interfaces. They are predominantly defined by understanding our users and their interface expectations and interpretations. We provide the most efficient way for a user to complete a task, or provide multiple ways to get the same result.

Persuasive patterns These leverage content to persuade the user to take a specific journey or make the decision we want. This could be achieved by reinforcing a product’s quality using social proof or creating empathy for a cause. They’re particularly powerful in eCommerce as they bring emotion into a user’s decision-making process.

JavaScript patterns Within front-end development, JavaScript provides complex functionality, and this means the benefits of using design patterns are far more obvious. JavaScript patterns help to break down your scripts into smaller, more manageable pieces.
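As a brief illustration of breaking scripts into smaller pieces, here is a minimal sketch of the classic module pattern mentioned later in this feature (the `counter` name and its API are invented for this example):

```javascript
// Sketch of the module pattern: an immediately invoked function
// returns a public API while its internals stay private.
var counter = (function () {
  var count = 0; // private - not reachable from outside the module

  return {
    increment: function () {
      count += 1;
      return count;
    },
    reset: function () {
      count = 0;
    }
  };
})();

counter.increment(); // returns 1
counter.increment(); // returns 2
```

Only `increment` and `reset` are exposed; the `count` variable itself cannot be tampered with from elsewhere in your application.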

Application patterns Application builds that run predominantly within the browser are now prevalent across the web. They enable almost instant feedback and to achieve this, a number of MV* patterns have been adopted by app frameworks. These provide much-needed separation of concerns.
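To make that separation of concerns concrete, here is a tiny framework-free sketch (all the names are illustrative, not taken from any particular framework): the model holds the data, the view only renders, and the controller only contains logic.

```javascript
// Model: the information held within our application
var model = { greeting: 'Hello' };

// View: presentation only - turns model data into HTML
function view(m) {
  return '<p>' + m.greeting + '</p>';
}

// Controller: functional logic only - it never builds markup itself
var controller = {
  setGreeting: function (text) {
    model.greeting = text;
  }
};

controller.setGreeting('Hi there');
view(model); // '<p>Hi there</p>'
```

Because the three parts only meet through well-defined touch points, each one can be tested or swapped out on its own, which is exactly what the MV* frameworks formalise.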


THE BENEFITS OF UI PATTERNS The main benefit of using design patterns when building user interfaces is that they ensure you take advantage of tried and tested solutions to common design problems. More often than not there will be a range of established approaches to choose from, and in making this decision make sure you assume nothing about your users. Only with a good understanding of your users’ goals and expectations can you be sure of building the best solution. UI patterns define more than just interactive elements. Arguably the most important aspect is clear visual communication to users – this is achieved through common, recognisable iconography. A good, simple example is the humble external link icon: using it sets the expectation that you will be leaving the site, avoiding the confusion caused when users only find this out after following the link. Confusion makes users feel uncomfortable, and when uncomfortable they will tend to exit the user journey you have set out for them.

Common UI problems There are so many UI problems we solve all the time. Reusing these approaches is probably the simplest form of design pattern, as we all do this as part of the design-and-build process as standard.

• Form validation We have all experienced error messages on forms that either seem to be displayed too late, in the wrong location

or are in odd groups, and not all at once. The things you will need to consider are: positioning of error messages – alongside the relevant field is the most common user expectation; timing of messages – don’t validate fields before the user has finished editing them, and decide whether it is best to validate them one at a time, in groups or all together; and don’t wait for the user to submit the form before revalidating any changes. Above all, make sure the messages are clear and concise.
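A minimal sketch of the timing advice above – validate a field once the user has finished with it and return a clear, concise message. The field name, regular expression and wording are all invented for this example:

```javascript
// Deliberately simple email check, for illustration only
function isValidEmail(value) {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}

// Returns an error message for a field, or null when it is valid.
// In a real form this would run on the field's blur event rather
// than on every keystroke, and again whenever the value changes.
function messageFor(field, value) {
  if (field === 'email' && !isValidEmail(value)) {
    return 'Please enter a valid email address';
  }
  return null;
}

messageFor('email', 'bob@example');     // 'Please enter a valid email address'
messageFor('email', 'bob@example.com'); // null
```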

• Searching and filtering When you have a lot of information for a user to view, consider whether they need to get to something specific as quickly as possible, or whether they need to explore and curate the data. If the former, the user will expect to be given all the tools to run an extensive search straight away. Otherwise, it can be daunting for a user if you ask them too many questions at the start of an exploratory journey. In this case you will find that simpler search tools, followed by additional filter options alongside the results, will prove more successful.
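The ‘simple search first, filters afterwards’ approach can be sketched as two composable functions (the data set and field names here are invented):

```javascript
var places = [
  { name: 'Web Cafe', type: 'cafe' },
  { name: 'Design Hub', type: 'office' },
  { name: 'Web Works', type: 'office' }
];

// Step one: a simple, forgiving text search
function search(items, term) {
  return items.filter(function (item) {
    return item.name.toLowerCase().indexOf(term.toLowerCase()) !== -1;
  });
}

// Step two: optional filters the user can apply alongside the results
function filterByType(items, type) {
  return items.filter(function (item) {
    return item.type === type;
  });
}

filterByType(search(places, 'web'), 'office'); // [{ name: 'Web Works', type: 'office' }]
```

Because each step returns a plain array, filters can be chained in any order, which suits an exploratory journey.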

Responsive patterns Adjusting layouts across screen sizes and providing clear interactions for both mouse and touch inputs make responsive patterns essential. The two main aspects of responsive builds that benefit from well-defined patterns are layouts and navigation.

Confusion makes users feel uncomfortable, and when uncomfortable they will tend to exit the user journey you have set out for them


WHAT ARE PERSUASIVE DESIGN PATTERNS? Persuasive design has been used successfully within advertising and marketing for years. Loss aversion is probably the most common form; it is based on the premise that people prefer to avoid losing something over gaining something – for example, just look at sales or deals that have a limited time period.

• Layout patterns Two good examples of layout patterns, and probably the most common, are ‘fluid to stack’ and ‘column drop’. There is a huge number of patterns in use, and in most projects a range of patterns will be needed to support the range of information required by the platform. The ‘fluid to stack’ pattern relies on a fluid grid which scales down with the screen and requires the content of each column to scale down with it. A breakpoint is then defined to stack all columns when appropriate. The ‘column drop’ pattern requires less reduction in scale of column contents but uses more breakpoints: one to stack each column (from the right) below the other columns when the width is limited.
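Stripped of its styling, the ‘column drop’ behaviour boils down to choosing a column count per breakpoint. The widths below are illustrative rather than prescriptive (in production this logic would normally live in CSS media queries):

```javascript
// Pure function mapping a viewport width in pixels to a column count
function columnsFor(width) {
  if (width >= 960) { return 3; } // full three-column layout
  if (width >= 600) { return 2; } // right-most column has dropped
  return 1;                       // everything stacked
}

columnsFor(1024); // 3
columnsFor(700);  // 2
columnsFor(320);  // 1
```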



To appeal to a user’s sense of pride, simply reinforce a decision that has been made – you can do this through your social media content. You can also make use of imagery to position a product or service within an aspirational setting.

Using content which gradually builds up empathy can be a useful pattern when the key objective is to gain donations from users. To succeed in using empathy, it is vital to establish this emotion prior to asking for any commitment from the user.

The modular approach

Any responsive pattern you adopt will be created with HTML and CSS, but have you ever thought about how you approach writing it? The fully modular CSS pattern removes the reliance on a specific HTML structure to achieve your responsive interface. Instead of targeting HTML elements, or relationships between them, it relies on an extensive range of CSS classes which can be applied to any element. Each class must have a specific purpose and usually only consists of a few properties. These classes can be grouped to achieve the desired layouts across screen sizes. This approach requires a lot of planning and consideration upfront. It can be difficult to define all the required CSS declarations, and it is tempting to breach the pattern for a quick solution – but if you do, your CSS can become very confusing. This approach is relied upon by responsive frameworks such as Bootstrap: they have done the hard work for you and established a definitive, well-defined range of modular classes for building responsive sites.

A modular CSS pattern does come with its own drawbacks. To support a full website build the CSS will be extensive, and in assigning separate classes for each desired feature they will litter your markup. If you take a fully modular approach, avoid adjusting the effect of classes based on their position within your HTML. If you do start to overrule properties in this way, you can quickly lose the benefits of the modular system altogether and leave future development as mainly guesswork.

Navigation patterns

Providing clear functionality on smaller screens is a problem solvable by navigation patterns. Any solution will be partly dependent on the amount and size of the navigation links. The most common solution is a ‘toggle menu’: when the navigation links no longer fit across the screen, they are replaced with a menu which, when clicked, slides the links down from the top of the screen. From a visual aspect it is also common to use the ‘burger’ icon to represent a menu link. Another solution, adopted and popularised by the Facebook app, is a ‘slide-left menu’. This behaves in a similar fashion but slides the page to the right and the menu in from the left. This approach often incorporates swipe gestures to hide and show the menu. When it comes to multilevel navigation solutions there is still a wide range of approaches in use; as with many more complex interactions, it takes time for users to become accustomed to them.

Stick to your guns

Once you have decided on the design patterns that you have set the boundaries and rules for, try to avoid any temptation to deviate from these design patterns for any short-term gains.

Maintaining your HTML

If you are precious about maintaining clean HTML within your projects but want to use modular CSS, you can separate your modular CSS files from your HTML structure-based declarations and bring them together with a CSS precompiler. It would be advisable to refrain from overruling modular classes – start with your modular CSS and when it needs adjusting move it to your main CSS files. This gives you confidence that any modular declarations will have the build effect you expect.

• ‘Fluid to stack’ Uses a fluid grid with the content scaling down on smaller screens

• ‘Column drop’ Uses more breakpoints to stack columns below other columns

EVOLUTION OF USER EXPECTATION There are standard approaches that we all find ourselves building time and again; these are tried and tested UI solutions that users are familiar with. It is vital to ensure that with any interactions or journeys, the user can understand the functionality and process with very little effort. This ensures that users are comfortable and feel in control – nobody likes to feel they don’t know how to do something. These approaches, although tried and tested, do not last forever. Advances in technology can change how we think about interactions. Take the transition from the humble desktop mouse to touchscreens as an example: users used to expect hover states and click events; now they expect the ability to swipe, pinch and rotate – with an indication of force and velocity expected too. It is not only technology that changes users’ expectations of an interface: on mobile devices, trends are set by the implementation of both the iOS and Android interfaces. Many solutions and visual cues accepted across the internet today as standard were first seen in these mobile operating systems. Facebook is another key player in the evolution of user interaction; any software application that covers such a broad range of users has the inherent influence to change the way we build.



TAKE CONTROL WITH FRONT-END PATTERNS Technical solutions with JavaScript It is very easy to get lost in your code and end up with an unrecognisable mess of functions, objects and variables that you know work, but you’re not quite sure how. Design patterns can stop this from happening as they are proven solutions to common technical problems that give you some clear rules to adhere to when you’re developing. If you work with JavaScript libraries and frameworks you will already know many of these patterns, possibly without even realising it.

The constructor pattern This is a very simple pattern but is essential when building a large piece of software, as it gives control over an object’s structure and the creation of new instances of those objects. This ensures that you can be confident of what to expect from a given object when using it across your application. The constructor is a function that receives a defined list of arguments and extends itself with properties and/or methods based on those arguments. In JavaScript calling a function with the ‘new’ prefix creates an isolated instance of that object, based on the defined structure.

function Person(firstname, lastname, age) {
  this.firstname = firstname;
  this.lastname = lastname;
  this.age = age;
  this.fullname = firstname + " " + lastname;
}

var bob = new Person("Bob", "Robson", "27");
console.log(bob.fullname); // Bob Robson

The prototype pattern JavaScript is a prototype-based language; this means that every object is based on an original prototype. As seen in the last example, the ‘Person’ function is the original object and ‘bob’ is based on it. If the original object, or any created from it, is extended using the prototype method, the same extension will be applied to any objects based on this original prototype. If you have used a polyfill to provide new methods to object types such as arrays, those polyfills are leveraging this pattern to extend all array objects. Without the prototype pattern we wouldn’t be able to use new JavaScript techniques until browsers implemented their own solutions.

Person.prototype.getInitials = function () {
  return this.firstname.split("")[0] + this.lastname.split("")[0];
};
bob.getInitials(); // BR

Without the prototype pattern we wouldn’t be able to use new JavaScript techniques

Structural patterns

The JavaScript patterns so far have been creational; structural patterns deal with the composition of objects and how these compositions relate to each other. Structural patterns ensure that objects can be related, and when one is changed this change is realised across the platform. This enables us to manage object-based code in small chunks and be confident that extending or adapting one composition will not unexpectedly impact the rest of our application.

Behavioural patterns

These patterns are concerned with the communication between objects across our platform. When we want one object to affect another in a defined manner, these patterns come into their own. A good example would be the Observer pattern: this consists of an object that holds and maintains a list of other objects as an array. This array can then be given functionality to compare, change and notify specific listed subjects, or even all of them. This pattern can be useful when building listed data items that can affect one another, for example a league table or leaderboard.

Start off simple

If you are new to working with design patterns then the list of approaches can be daunting. Start simple and use patterns for real coding problems before moving on to the next one.

JavaScript syntax

All JavaScript patterns to this point have been concerned with what our code is actually doing and how it is doing it. Patterns within your code syntax are equally beneficial for large-scale projects; these would constitute a set of rules outlining anything from capitalisation of namespaces to where and when variables, public or private, are declared. By defining clear guidance on your syntax structure, other developers will find it much quicker to familiarise themselves with any code.

Application patterns

These are very popular in web development today. Also known as MV* patterns, they provide us with a host of JavaScript application frameworks including Angular, Knockout, Ember and Backbone. They separate the concerns of different sections of an application, making these distinct sections independently testable, interchangeable and reusable. There are three different MV* patterns – MVC, MVVM and MVP – and all three separate presentation (our HTML), data (the information held within our application) and logic (our functional code). This gives us great flexibility in how we build and also lets us create much larger-scale software solutions in JavaScript with ease. From a UI perspective these patterns, and in turn any frameworks that use them, have given us two-way binding of data within our interfaces. This gives users instant responses to their actions and has really influenced web applications today.
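The Observer pattern described in the behavioural patterns discussion can be sketched in plain JavaScript. The `Subject` object and its method names are invented for this example:

```javascript
// An object that holds and maintains a list of other objects (here,
// callback functions) and can notify some or all of them.
function Subject() {
  this.observers = [];
}

Subject.prototype.subscribe = function (fn) {
  this.observers.push(fn);
};

Subject.prototype.unsubscribe = function (fn) {
  this.observers = this.observers.filter(function (item) {
    return item !== fn;
  });
};

Subject.prototype.notify = function (data) {
  this.observers.forEach(function (fn) {
    fn(data);
  });
};

// Usage: a leaderboard listens for score changes
var scores = new Subject();
scores.subscribe(function (update) {
  console.log(update.player + ' now has ' + update.points + ' points');
});
scores.notify({ player: 'Bob', points: 27 });
// prints "Bob now has 27 points"
```

The subject never needs to know what its observers do with the data, so a league table, a chart and a notification badge could all subscribe to the same score changes independently.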


Brad Frost

JavaScript Live

John Resig

Henrik Joreteg






An engineer at Google, creator of Yeoman and a leading voice in all things JavaScript, open web tooling and web applications. Addy is currently working on Polymer and is also the author of the excellent book JavaScript Design Patterns.

An extremely experienced and well-respected web designer/developer and consultant with some great insights into UI designs and patterns. Brad has worked on some excellent web tools and various resources such as Pattern Lab.

This is a great Twitter account for keeping up to date with the latest community news and events in the industry, as well as approaches, techniques and frameworks. Make sure you sign up to their email newsletter so you never have to miss a thing!

One of the creators of jQuery and an expert in all things JavaScript (including being the author of Pro JavaScript Techniques), John is an essential expert to follow. He has been working with and sharing his thoughts on design patterns for years.

The president of &yet, a JavaScript developer and the creator of AmpersandJS. Henrik has some great ideas about how applications can be built to truly benefit people in general. He also provides updates on his latest shipping releases.




Bootstrap /

Semantic UI /

A wealth of tried and tested UI components based on a modular CSS pattern.

This responsive UI framework uses semantic, easy-to-interpret language.

Kendo UI /

Angular JS /

Benefit from a wide gamut of UI and app patterns with this extensive toolset.

Arguably the MVC application framework of choice in web development today.

Ampersand JS /

Knockout JS /

A fully modular JavaScript development approach – use exactly what you need.

An MVVM data-binding framework – separate your data from your presentation.




Start with the problem: a design pattern should never be created without good reason. You must have a problem that doesn’t already have an established solution. To ascertain if there are already solutions out there, you need to review other patterns that may suffice. This could even mean trying some code out before making a decision. A new pattern without a distinctly different purpose will just crowd the landscape and make it more difficult for others to make informed decisions for any of their problems.

Once you have concluded that there are not any sufficient patterns that solve your problem, break the problem down into as small a piece as possible. Then you can consider if and how these small separate concerns may need to relate or communicate with each other. A good way of visualising this is by simply sketching out the structure. Each part of your pattern should be able to be independently tested, and as you have broken your problem down you may find some smaller, existing design patterns solve some of your smaller problems. Again, ensure that you reuse approaches if possible.

Don’t leave your design pattern in your head – it is no use to anyone left in there. When you go through the process of trying to find a suitable design pattern or patterns, you will begin to appreciate how important good communication of a pattern is. Firstly outline the problem you are solving, then list any existing patterns that may have come close to solving it. This helps others to make a faster decision: if you have already gone through a review of other possible solutions, why not share that information?

Now clearly define the rules or scope of the pattern; be sure to include what your pattern isn’t for as well as what it is. These rules can be a simple bulleted list of dos and don’ts. Then provide a good but simple example of your pattern’s usage; with more complex patterns you may need more than one example. If your pattern encompasses a range of smaller ones, it may be worth documenting it in that way.

Once you have created a pattern and are happy with its structure, purpose and usage, release it into the wild. This can be achieved by selecting a small group of colleagues or peers and asking them to review and question your solution and decisions. You will find that over time, and with good quality feedback, your pattern will gradually develop into something transferable to other projects and developers, which is the core purpose for it anyway. Collective wisdom is so much more powerful than that of the individual – collaborate with your peers and challenge others to really interrogate your pattern.

Taking critique

This process may be difficult, as it isn’t easy to see your own creations come under such scrutiny, but by doing so you will be confident in the reliability and suitability of the final end product. No pattern or framework that has fully established itself in today’s landscape came about by one person alone. You cannot afford to be too precious about it, but do remember that if any decisions come down to opinion, the final say is yours because you own it.

REMEMBER TO KEEP THINGS CREATIVE It is possible to take the use of design patterns too far: you can become too reliant on them and find yourself producing the same solutions over and over again. Technology and people change, and therefore both what is possible and what is expected changes. This means that a proven solution may become old-fashioned, and if you have become heavily reliant on it you could quickly fall behind. Good, efficient production is essential within a web development project, but writing code is a creative discipline. Remember there will always be more than one way to solve any UI or technical problem. We all need to keep our minds open to different approaches, techniques and new design patterns – it is our collective responsibility as developers to shape the digital landscape we are all part of. One of the best ways to do this is through collaboration, whether that is discussing approaches and techniques with colleagues, or taking the time to read blogs, review new ideas and follow key players on Twitter. Even if you see a very small failing in a pattern, or have just the beginnings of a new idea, by enlisting the help of others we will be able to get to the ‘best’ solutions quickly and move on to the next big problem. If we repeat too many aspects of our builds it will tend to stifle our own creativity, and we risk the web becoming a very monotonous place.

Clearly define the rules or scope of the pattern, be sure to include what your pattern isn’t for as well as what it is



JavaScript Jabber

A great resource of UX insight and a host of free eBooks including Web UI Design Patterns.

An expansive repository of training material including an introduction to JavaScript design patterns.

A weekly podcast covering all things JavaScript. A good source of what is new in JS development.




Tabs to accordions


Content that’s displayed in a group of tabs in your desktop layout can be easily switched to use an accordion when the viewport becomes narrower. This way the content stays nicely structured and it remains useable as well.


This is an aspect of web development which is easy to overlook. With any UI pattern be sure to consider the semantics of your markup and the context given. Testing your platform with screen readers is useful because you can validate how accessible the platform is.

Column dropping


You may have a grid with a certain number of elements on each row. Use media queries to reduce the number of elements on each row as the user’s viewport becomes narrower, and allow the content inside each block to remain consistent.


Usability is at the heart of any user interface pattern, but remember that this will evolve over time as users and technologies change. Also bear in mind how useable an interface is when you are developing applications that may have lots of data to represent.

Shifting labels



Labels are often sat next to their respective inputs when the viewport is wide enough. When you lose space, sit them directly above their inputs. The relationship between field and label is maintained and precious screen space is saved.


Ash Robbins

Lead frontend developer at Redweb

Photoshop CC

With the introduction of artboards to Photoshop CC 2015, it is now simpler to represent multiple layouts, states and behaviours within one single PSD file. This function is most effective when you need to communicate a responsive pattern with colleagues and clients.




When working with behavioural design patterns you will predominantly be concerned with the communication between parts of your application. Any communication of data must always be properly secured ensuring the safety of users’ data within your platform.


This is a behaviour-driven testing framework for your JavaScript builds. It does not require a DOM or other JavaScript frameworks. When combining JavaScript patterns and building large-scale applications, a good testing tool is vital to maintain and ensure the stability of your code.



This is an online, interactive web prototyping tool, which enables you to quickly visualise possible UI patterns and collaborate with others in finalising solutions. You can even collaborate in real-time, get direct comments and feedback, and sync your files with version histories available.

JavaScript Design Patterns

CSS Secrets

User Testing

A comprehensive read covering a huge range of JavaScript design patterns, their uses and flaws.

A book by Lea Verou, really useful for getting to grips with modern CSS techniques.

An online system for getting your new design patterns user tested and getting real world feedback.



Customise maps with the Google Places API

Use Street View and Google Maps to create a more immersive map experience that takes up less bandwidth


// Customise maps with the Google Places API


Google Maps celebrated its tenth birthday in 2015. Ten years of sustained effort has resulted in the de facto map application both on the web and on mobile devices. Part of its success is due to the ability for people to embed maps within their own sites. This quickly made its interface familiar, and it is now ubiquitous on contact pages the web over. The Google Maps JavaScript library offers more functionality than simply a map and a marker. The Places API can be used for four request types: nearby, text, radar and place details. We’ll be covering all but radar, which is used to return a large list of places within a radius but without much detail. Nearby is similar, as it takes a radius and specific types of places, but provides more detail. A text search is useful for organic searching.

We’ll use the Places API in a few ways that aim to give you a solid understanding of how to use it and ideas to enhance your sites. First, we’re going to build a simple text box that will allow users to see nearby places without a map. Then we’ll build a split map and Street View interface which will update as the user pans and provide information on the markers they click on.

2. Write markup and link scripts Within our index.html file we’ll add a few divisions which will hold the Google Map, the Street View and an information area. To include the Places API from Google Maps, a query parameter of libraries is added. We’re using version 3.21; a changelog can be found at

1. Create index.html Start by writing some HTML linking normalize.css and our own stylesheet. The custom CSS floats both .map-areas and sets them to 50 per cent of the available width. To prevent images within Google Map windows from being too wide we set a max-width on them by targeting .gm-style-iw img.

Change the map’s look If your website has a specific visual style then you can use custom overlays and icons to visually integrate your unique style by going to


Depending on the libraries you specify, Google Maps dynamically includes what you need. We’re using the experimental version

Top left: Once the Places API has returned nearby web places we display them in this dropdown

Top right: When the user selects a place we get its details using the Places details service, which we then show here

3. Attribute to Google When using the Places API without a map you need to attribute it to Google. A ZIP file is provided from with a range of images for different resolutions and contrasts. You should also reference Google’s Privacy Policy and Terms of Use in a footer.

<img src="images/powered-by-google-on-white.png" alt="Powered by Google.">

4. Setup and initialisation Create a file called ‘places-dropdown.js’; this is where we’ll write the module which will list nearby places in a select box. jQuery isn’t included in this project, but Google Maps provides a convenient abstraction for adding event listeners. It’s simple to use but more verbose than jQuery’s version.

(function (google) {
  'use strict';
  var Places;

  function initialise () { /* next step */ }

  google.maps.event.addDomListener(window, 'load', initialise);
})(google);

If geolocation is available, the browser will request access to it and provide the user’s coordinates.

var center = new google.maps.LatLng(51.5072, -0.1275); // London

if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(function (position) {
    center = new google.maps.LatLng(position.coords.latitude, position.coords.longitude);
    setMap(center);
  }, function () {
    setMap(center); // fall back to the default if the user declines
  });
} else {
  setMap(center);
}

5. Start the Places service setMap won’t be called straight away – we’ll call it once we know whether we have the user’s location or not. We’re not displaying a map, but a virtual element is still needed to pass to the Places service. This element only exists in memory, so access is much faster than one in the DOM.

function setMap (center) {
  var div = document.createElement('div');
  var map = new google.maps.Map(div);
  Places = new google.maps.places.PlacesService(map);
  getTextPlaces(center, 'web');
}

6. Create the form tag Still within the initialise function, we need to work out where the centre of the map should be. By default we’ve set it to the latitude and longitude coordinates for London. If the user’s browser supports geolocation, we use their position instead.

7. Perform a text search One of the searches that Places lets you perform is a simple text search. We’re passing ‘web’ so it’ll look for all places that have tagged themselves as ‘web’. The radius is in metres so in this case, one kilometre. The textSearch service requires a callback which receives the data.

function getTextPlaces (location, query) { var request = {

Top left: The site asks for the user’s location (if it’s available) and centres the map on them

Top right: Once the map tiles have loaded, the idle event triggers the service to make a request and drop the markers

Right: Clicking a marker opens an information window with some details about that place; you can put any HTML inside


// Customise maps with the Google Places API

The great-circle formula in JS The zoomToRadius function is pretty interesting but also very maths-based. It uses a formula called the ‘great-circle distance’. It’s not quite 100 per cent accurate, but it gives us adequately useable results. To get the radius we need to know the centre and top-right bounds (latitude and longitude) of the map. We then have a distance between the centre and the top right. To convert this into a circle we use radians, dividing each latitude and longitude by 57.2958 (or 180/Math.PI). Finally we convert this distance from miles to metres by multiplying it by the number of metres in a mile, which is 1609.344. A thorough explanation of this formula can be found at mathworld.wolfram.com/GreatCircle.html.
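The boxout’s formula can be sketched as a standalone function. This is an illustrative reconstruction using the spherical law of cosines, not the tutorial’s own zoomToRadius code:

```javascript
// Great-circle distance between two points on the Earth's surface.
// Inputs are in degrees; the result is in metres.
function greatCircleDistance(lat1, lng1, lat2, lng2) {
  var toRad = Math.PI / 180; // equivalent to dividing degrees by 57.2958
  lat1 *= toRad; lng1 *= toRad;
  lat2 *= toRad; lng2 *= toRad;

  var miles = 3959 * Math.acos( // 3959 = Earth's mean radius in miles
    Math.sin(lat1) * Math.sin(lat2) +
    Math.cos(lat1) * Math.cos(lat2) * Math.cos(lng2 - lng1)
  );
  return miles * 1609.344; // metres in a mile
}

// London to Paris comes out at roughly 342 kilometres
greatCircleDistance(51.5072, -0.1275, 48.8566, 2.3522);
```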

    location: location,
    radius: 1000,
    query: query
  };
  Places.textSearch(request, callback);
}

8. Alphabetise the results The callback is passed the results and status respectively. Now check that the status is okay. If it is, then create a select element and a new array of alphabetised results (they come unordered) using localeCompare. This ensures that a precedes b, and so on.

function callback (results, status) {
  if (status === google.maps.places.PlacesServiceStatus.OK) {
    var select = document.createElement('select');
    var sortedResults = results.sort(function (a, b) {
      return;
    });
    /* next step */
  }
}

9. Add options to dropdown Now the results are alphabetised, loop through them and create a new ‘option’ element for each one and set the index as the value. As we go along we append them to the select element. After all the option elements have been appended as children to the select element it is time to add it to the page.

for (var i = 0; i < sortedResults.length; i++) {
  var option = document.createElement('option');
  option.value = i;
  option.innerHTML = sortedResults[i].name;
  select.appendChild(option);
}
document.body.appendChild(select);

10. Build up details to show Within the closure – so that sortedResults is in scope – we’ll make use of Google’s event management again. This time we will listen for changes to the dropdown and perform a getDetails search. That then returns a formatted address, telephone number, website, images and many other useful properties.

google.maps.event.addDomListener(select, 'change', function (event) {
  Places.getDetails(sortedResults[], function (details, status) {
    var info = document.getElementById('info');
    info.innerHTML = '';
    info.innerHTML += '<p>' + details.adr_address;
    info.innerHTML += '<p>' + details.formatted_phone_number;
    info.innerHTML += ? '<a href="' + + '">' + + '</a>' : '';
  });
});

11. Create map-street.js We’ve demonstrated that the Places API can be used to create a list of options without having to display a full map which is useful when you have limited bandwidth or don’t want to give much prominence to a map. We’ll move on to a split map/street view app so create a new file, ‘map-street.js’.

(function (google) {
  var map, panorama, infowindow, StreetView, Places,
    TYPES = ['book_store', 'art_gallery',


    'cafe'];
  google.maps.event.addDomListener(window, 'load', initialise);
  /* next step */
})(google);

12. Initialise StreetViewService Similar to before, the initialise function is executed when the page is loaded. At this point the StreetViewService can be instantiated. This service provides all of the methods and properties to interact with street views.

function initialise() {
  StreetView = new google.maps.StreetViewService();
  var center = new google.maps.LatLng(51.5072, -0.1275); // London
  /* next step */
}

13. Get the user’s location To obtain the user’s position we use ‘getCurrentPosition’. This is asynchronous, so it requires a callback function for us to handle the outcome. You can optionally require high accuracy by passing an options object, and set a time-out if you like.

if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(function (position) {
    center = new google.maps.LatLng(position.coords.latitude, position.coords.longitude);
    setMap(center);
  });
} else {
  setMap(center);
}

14. Set up the map and Places To show the map we give it a reference to #map and centre it on the coordinates calculated above. A different type of event listener is used here: a map listener, not a DOM listener. The idle event fires when the map stops loading new tiles, eg after first load or when panning.

function setMap (center) {
  map = new google.maps.Map(document.getElementById('map'), { center: center });
  Places = new google.maps.places.PlacesService(map);
  infowindow = new google.maps.InfoWindow();
  google.maps.event.addListener(map, 'idle', function () {
    getNearbyPlaces(map.getCenter());

Manage variables There are many global variables in map-street.js which can become hard to maintain. In larger applications it’s a good idea to use getters and setters to avoid accidentally overwriting them.


  });
}

15. Configure Street View Carrying on within setMap, we’ll create a Street View panorama in a similar fashion to the map. First off, start by passing it a reference to an element and some configuration options. Then to link them, call setStreetView with the panorama.

var panoramaOptions = { position: center };
panorama = new google.maps.StreetViewPanorama(document.getElementById('pano'), panoramaOptions);
map.setStreetView(panorama);

16. Find nearby places The zoomToRadius function (explained in the great-circle sidebar earlier) converts what the user can see into a radius, so we get enough results to fill the map. Google allows you to search for various ‘types’ of places, which are documented in the Places API reference.

function getNearbyPlaces(location) {
  var request = {
    location: location,
    radius: zoomToRadius(),
    types: TYPES
  };
  Places.nearbySearch(request, callback);
}

17. Handle results from search When the search request is resolved it fires ‘callback’. First, check that the results are useable by checking for the ‘OK’ status. We’re going to create a marker for each result. Although not the most efficient way to iterate, forEach is used for its readability and conciseness.

function callback(results, status) {
  if (status === google.maps.places.PlacesServiceStatus.OK) {
    results.forEach(createMarker);
  }
}

18. Create the marker Google Maps makes it simple to add new markers; we just have to tell it which map to associate the marker with, and so on. To add the marker within the Street View, use the code below and replace ‘map’ with ‘panorama’.

function createMarker(place) {
  var mapMarker = new google.maps.Marker({
    map: map,
    position: place.geometry.location,
  });
}

19. Listen to marker clicks When a user clicks a marker they expect something to happen, usually additional information. To facilitate this, use addListener with the two markers we’ve created for the main map and Street View panorama. All events are listed in the Maps API reference.

google.maps.event.addListener(mapMarker, 'click', showInfoWindow);
google.maps.event.addListener(panMarker, 'click', showInfoWindow);

20. Show information window The window for each marker will show: if the place is open or closed, an image of the place, and if the place is well reviewed or very well reviewed. By setting the infoWindow’s content to null first, the content of the last one clears while details load for this location.

var showInfoWindow = function () {
  var open = place.opening_hours && place.opening_hours.open_now;
  var msg = open ? 'is open' : 'is closed';
  infowindow.setContent(null);, this);
  /* next step */
};
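The open/closed logic is easy to exercise with canned data. The helper below is our own sketch (not part of the tutorial’s code) mirroring the first two lines of showInfoWindow, fed a plain object in place of a live Places result.

```javascript
// Sketch of the open/closed message logic from showInfoWindow.
function openMessage(place) {
  // opening_hours may be absent entirely, so guard before reading open_now
  var open = place.opening_hours && place.opening_hours.open_now;
  return open ? 'is open' : 'is closed';
}
```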

21. Get details on a place The most comprehensive request that Places gives you access to is ‘getDetails’. This takes a single place (as you received it from a previous search) and gives a wealth of information on that establishment. Build up a string with HTML to append to the info window to show this data.

Places.getDetails(place, function (details, status) {
  var html = '<p>' + + ' ' + msg + '.' + '</p>';
});

22. Display a place’s image If any photos are available we’ll display the first one. The process is a little unusual because you have to call a getUrl method rather than read a property. You pass this method a maximum width and height for the image so it’s optimised for the dimensions you’ll display.

if ( {
  var url =[0].getUrl({ maxWidth: 400, maxHeight: 200 });
  html += '<img src="' + url + '">';
}

23. Show a review breakdown If there are any reviews available then we’ll show a simple breakdown to indicate how well reviewed this place is. First, add all of the ratings together (using reduce, which adds the previous number to the current one). Then divide that total by the number of reviews to find the mean.

if ( {
  var total = (previous, current) {
    return previous + current.rating;
  }, 0);
  var mean = total /;
  /* next step */
}

24. Review scores The mean can now indicate review status. A tweak is needed as types are separated by underscores, not spaces (eg art_gallery) so replace these. Once the data’s calculated, set the window content to the HTML string.

var type = details.types[0].replace('_', ' ');
if (mean >= 3 && mean < 4.5) {
  html += '<p>This ' + type + ' is well reviewed.</p>';
} else if (mean >= 4.5) {
  html += '<p>This ' + type + ' is very well reviewed.</p>';
}
infowindow.setContent(html);
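The review-score logic combines into a small pure function that can be tested without the API. The helper below is a sketch of our own, using a hard-coded reviews array and type string in place of a live details object.

```javascript
// Standalone sketch of the review-score logic from steps 23 and 24.
function reviewSummary(reviews, type) {
  // Sum every rating, then divide by the count to get the mean
  var total = reviews.reduce(function (previous, current) {
    return previous + current.rating;
  }, 0);
  var mean = total / reviews.length;
  var label = type.replace('_', ' '); // eg art_gallery -> art gallery
  if (mean >= 3 && mean < 4.5) {
    return 'This ' + label + ' is well reviewed.';
  } else if (mean >= 4.5) {
    return 'This ' + label + ' is very well reviewed.';
  }
  return '';
}
```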

25. Set the Street View panorama To set the Street View panorama to the place selected, use getPanoramaByLocation. This takes a location, a radius in metres and a callback. The radius is how far to look for matching panoramas and isn’t always accurate. If we get one, update the panorama with setPano and its ID.

StreetView.getPanoramaByLocation(place.geometry.location, 50, function (data, status) {
  if (status === google.maps.StreetViewStatus.OK) {
    panorama.setPano(data.location.pano);
  } else {
    console.log('No street view panorama found');
  }
});


Top right: Street View panoramas look impressive but don’t take much more effort to set up than a traditional Google map
Top left: Clicking on a marker within the Street View triggers the same behaviour as clicking on the map interface
Bottom: Markers are displayed within the Street View because they’ve been linked to the map on the left



Create virtual reality panoramas Bring an immersive VR experience to the browser and Google Cardboard with the help of three.js


// Create virtual reality panoramas


The web is for everyone, but sometimes technology arrives that creates a barrier to this. Virtual reality is one of those barriers. Unless you have a very expensive device like an Oculus Rift or Samsung Gear you can’t access VR content. However, if you have a smartphone then you can buy an inexpensive virtual reality viewer, like Google Cardboard, for the price of a film (between ten and twenty pounds). These low-tech devices are little more than a cleverly constructed piece of cardboard with two lenses. They take the left and right images on a screen and trick your eyes into believing it’s a single image. Its simple design means it’s easy to replicate and cheap to manufacture. You’ll learn how to use three.js to create a 360-degree video panorama that the user can look around with a device and Google Cardboard – all within the browser! Much of the terminology we’ll use comes from the videogames world. As we’re using three.js, there’s much more interactivity that you can add to this. It allows you to add any 3D object into the world, and using ray casting (to detect what the user is looking at) you could trigger interactions within it. We’re going to keep it simple by introducing the basic concepts and provide a world for the user to look around and be immersed within.

1. Create index.html Start by creating a blank page with a video element and a container. The video element’s behaviour is controlled by attributes, in this case we’ve told it to play automatically and loop indefinitely. The more sources you provide, the greater the number of browsers that will be able to play it.

<div id="canvas"></div>
<video id="video" autoplay loop>
  <source src="videos/panorama.mp4" type="video/mp4">
  Your browser does not support the video element.
</video>

2. Reference script files Much of the heavy lifting has been done for us in the form of three.js plugins. The official ones that we have made use of are StereoEffect, DeviceOrientationControls and OrbitControls. Lastly, paper.js will glue it all together.

<script src=" ajax/libs/three.js/r71/three.min.js"></script>
<script src="js/third-party/threejs/StereoEffect.js"></script>
<script src="js/third-party/threejs/DeviceOrientationControls.js"></script>
<script src="js/third-party/threejs/OrbitControls.js"></script>
<script src="js/paper.js"></script>

3. Create paper.js Most three.js files have three lifecycle functions: init, render and animate. Within paper.js, we’ll kick off by declaring some variables which will be accessed by more than one of these.

(function() {
  'use strict';
  var camera, scene, renderer;
  var effect, controls;
  var element, container;
  var videoTexture, videoMesh;
})();

4. Clock and initialise The clock is a simple but extremely handy utility. This is useful for us because when animating or moving around, we need to know how fast to animate and how much time has passed since the app started or was last called.

var clock = new THREE.Clock();
init();

5. Start initialising scene The renderer is the heart of outputting pixels to the page. The element is the DOM element of the Canvas. The scene is the space that everything is put into. Just like a movie, digital scenes contain objects, lights and cameras.

function init() {
  renderer = new THREE.WebGLRenderer();
  element = renderer.domElement;
  container = document.getElementById('canvas');
  container.appendChild(element);
  scene = new THREE.Scene();
}

6. Create sphere geometry For the panorama, create a sphere and look at the inside of it. To make the sphere we generate its geometry – its mathematical 3D representation. Then apply a matrix, inverting it so the outside plane is on the inside.

var sphere = new THREE.SphereGeometry(500, 60, 40);

Develop with Unity Google Cardboard provides a Unity SDK and demo which you can use for free. Unity can export to WebGL (experimentally) and has community support. Unity has an easy-to-use interface for scenes and so on.


Adding the video mesh to the sphere makes it a bit more obvious as to the effect that we’re striving for. Flip -1 to 1 in makeScale to see this
Top left: The video is going to be ‘projected’ on the inside of this sphere. The camera’s currently placed outside but will be moved inside to pan around
Top right: The three.js site contains helpful documentation including code examples and a breakdown of the terminology used for arguments passed to methods


sphere.applyMatrix(new THREE.Matrix4().makeScale(-1, 1, 1));

7. Play video on devices On iOS and Android, videos can’t be played automatically; the user needs to interact with the page first. This could be linked to a button, or, as in this case, any click on the page will trigger the video to play. On iOS the video will be paused as you pan around it.

var video = document.getElementById('video');
function bindPlay () {;
  document.body.removeEventListener('click', bindPlay);
}
document.body.addEventListener('click', bindPlay, false);
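bindPlay removes itself after its first call. The same one-shot idea can be captured generically; this once helper is our own sketch, not part of the tutorial’s code.

```javascript
// Generic sketch of the self-removing-listener pattern: wrap any
// handler so its body can only ever run once.
function once(fn) {
  var called = false;
  return function () {
    if (called) { return; }
    called = true;
    return fn.apply(this, arguments);
  };
}
```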

8. Create a video texture The video texture maps the video to the sphere. Create the texture by passing it the video element, then set the min filter to linear as the size is unlikely to be a power of 2 (eg 16 by 16). The material describes the appearance of the object. The basic mesh will show as a flat polygon.

var videoTexture = new THREE.VideoTexture(video);
videoTexture.minFilter = THREE.LinearFilter;
var videoMaterial = new THREE.MeshBasicMaterial({ map: videoTexture });
videoMesh = new THREE.Mesh(sphere, videoMaterial);

9. Camera effects The stereo effect works by passing the renderer to it and rendering everything out twice, but slightly offset. This gives the illusion of depth and VR its appeal. To see the scene we need to place a camera within it. In this case, the perspective camera is used for a first-person view.

effect = new THREE.StereoEffect(renderer);
camera = new THREE.PerspectiveCamera(95, 1, 0.001, 700);

10. Set camera’s position The perspective camera takes arguments in the following order: field of view, aspect ratio, depth to start rendering objects (near), and depth to stop rendering objects (far). Once created, positioning the camera is as simple as setting its 3D coordinates: x, y and z.

camera.position.set(100, 100, 100);
scene.add(camera);

Black screen Some built-in command-line servers (like PHP and Python on OS X) don’t serve video files correctly. MAMP or similar programs are able to handle them though.

Top left: The DeviceOrientationControls adds its own event listener to deviceorientation. As the device moves, it updates its alpha, beta and gamma values
Top right: Finally our Cardboard app comes together with the stereo effect plugin duplicating and offsetting the rendered video; it’s very effective viewing
Right: Firefox provides excellent tools for debugging what’s happening in Canvas elements. It captures a frame and you can scrub through to see changes

11. Add controls for mouse Next up we’ll add orbit controls. This allows you to click and drag to look around, which is useful for debugging when not on a device. We then set the starting position of the controls to the same position as the camera.

controls = new THREE.OrbitControls(camera, element);
controls.rotateUp(Math.PI / 4);
  camera.position.x + 0.1,
  camera.position.y,
controls.noZoom = true;
controls.noPan = true;

12. Change controls If the environment that our code is running in fires the device orientation event then instead of using orbit controls it’ll switch to using device orientation controls. This means users can simply rotate their device to look around instead of tapping and dragging.

function setOrientationControls(e) {
  if (!e.alpha) {
    return;
  }
  controls = new THREE.DeviceOrientationControls(camera, true);
  controls.connect();
  controls.update();

13. Remove event listener Once the controls are set to use device orientation, we won’t want to reinstantiate those controls every time the event is fired. To fix this, remove setOrientationControls at the bottom of the previous function.

window.removeEventListener('deviceorientation', setOrientationControls, true);
}

14. Device orientation The device orientation event is fired when the accelerometer detects a change in the device’s orientation. We’re interested in the z axis, otherwise known as alpha (beta and gamma are x and y respectively). Alpha goes from 0 to 360. In the init function we’ll add the listener for device orientation.

Content for your panorama One of the most challenging parts is finding good content for 360-degree panoramas. A number of apps exist for smartphones (like Google Street View or Microsoft Photosynth) which take multiple pictures and stitch them together. While these are often free and easy to use, automatic stitching is hard to ensure and there are often misaligned objects. To get true 360-degree content you need dedicated hardware, like a rig of GoPros or the RICOH THETA. The strange appearance of these images is known as an equirectangular projection. It distorts the northern and southern parts of the image and has two distinct fisheye distortions. Google Images has a few of these, but they are rarely royalty-free.

window.addEventListener('deviceorientation', setOrientationControls, true);

15. Add sphere to scene The final part of init adds the video mesh, the culmination of the sphere and video texture, to the scene. Attach a resize handler to ensure that browser resizing doesn’t look strange, and kick off animate.

scene.add(videoMesh);
window.addEventListener('resize', resize, false);
animate();

16. Resize function Resize is responsible for making sure the aspect ratio is maintained when resizing the window (or when going from portrait to landscape). Then the renderer and stereo effect are updated with the new width and height.

function resize () {
  var width = container.offsetWidth;
  var height = container.offsetHeight;
  camera.aspect = width / height;
  camera.updateProjectionMatrix();
  renderer.setSize(width, height);
  effect.setSize(width, height);
}

17. Update function The update function calls resize and updates the controls with a new delta from the clock. The delta is the number of seconds since the clock’s getDelta method was last called. It’ll be called by the animate function, which will be written shortly, and invoke getDelta.

function update (dt) {
  resize();
  controls.update(dt);
}

18. Render function The render function outputs everything to the screen. ‘Effect’ here is the stereo imaging and sets up the left and right images which behave as separate cameras. Internally it then uses the renderer (which we provided much earlier) to output to the Canvas.

function render () {
  effect.render(scene, camera);
}

19. Animate function Animate keeps our panorama responsive to movement. At each frame it calls update and render. Crucially, it also calls itself as a requestAnimationFrame callback. This ensures that the camera is perpetually updated.

function animate () {
  requestAnimationFrame(animate);
  update(clock.getDelta());
  render();
}

20. Fullscreen ahead Earlier we referenced a full-screen function which is triggered when the user clicks the Canvas element. This should be straightforward but there are many vendor-prefixed versions that we need to accommodate. The first of the if statements is the standard, non-prefixed one and we cascade down Microsoft, Mozilla and WebKit.

function fullscreen () {
  if (container.requestFullscreen) {
    container.requestFullscreen();
  } else if (container.msRequestFullscreen) {
    container.msRequestFullscreen();
  } else if (container.mozRequestFullScreen) {
    container.mozRequestFullScreen();
  } else if (container.webkitRequestFullscreen) {
    container.webkitRequestFullscreen();
  }
}

21. Finishing touches Through the goodwill of the developer community, the technology exists to place 360-degree panoramas in the browser. There are currently limitations on playing back video within iOS which need to be addressed but as an interim, an image-only panorama can be used.

#canvas {
  position: absolute;
  top: 0;
  bottom: 0;
  left: 0;
  right: 0;
}
#video {
  position: absolute;
  left: -9999em;
}
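To see what the animate loop receives each frame, THREE.Clock’s getDelta can be imitated with plain timestamps. makeClock below is a hypothetical stand-in of our own (milliseconds in, seconds out), not the real three.js implementation.

```javascript
// Toy stand-in for THREE.Clock: getDelta returns the seconds elapsed
// since the previous call, which animate feeds to controls.update.
function makeClock(startMs) {
  var last = startMs || 0;
  return {
    getDelta: function (nowMs) {
      var dt = (nowMs - last) / 1000; // milliseconds to seconds
      last = nowMs;
      return dt;
    }
  };
}
```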



Build iOS-style web applications with Framework7 Use Framework7 to put together a quick app for iOS with all the looks of native iOS but with the ease of HTML5



// Build iOS-style web applications with Framework7


Ever since the first iPhone hit the market eight years ago, mobile has become the dominant platform for displaying webpages and web apps. Since then web developers have been working tirelessly to emulate the look and feel of native iOS apps in HTML5 and JavaScript. Then iOS 7 happened – and it had a brand-new look and feel; all of those libraries and designs that first appeared due to the creation of the first iPhone suddenly looked ancient in the face of the new flat designs that Apple started pumping out. If users notice they’re using a web app, they inherently think it’s lesser than its native cousins, so everything web devs had used to make their lives easier over the years needed a serious overhaul – and fast. Enter Framework7, a seriously comprehensive set of libraries and assets that lets you put together a modern iOS-like app in no time at all. No more fishing around for icon images or manually animating transitions between different views – we get all of that out of the box. In this tutorial, we’re going to make a simple app that shows off some of London’s most famous tourist attractions.

1. Set up the project Getting Framework7 started is a little tricky; we’re going to have to do some housecleaning before we can start writing code. Before we grab Framework7, create a new folder for your project. Create an empty file called ‘index.html’ in your new folder and create a folder structure like the following:

- Project Folder (root)
  - assets

2. Get Framework7 There are a couple of ways to get Framework7 – you can grab it from GitHub if you like – but we’re going to use Bower to grab everything we need. Open up a terminal window and cd into your new project folder. Next run the following command:

$ bower install framework7

3. Copy project files A new folder will appear called bower_components. Go to bower_components/framework7/dist and copy all of the contents of the dist folder (all of the files inside dist, not the actual dist folder itself) to the root of your project folder, then delete the bower_components folder as we won’t need that again.

4. Clean up The files we’ve copied into our project folder are a skeleton app for Framework7 and we’re going to use it as the basis for our own project. Delete the about.html, form.html and services.html files. Then open index.html, scroll down to line 10 and make the following changes:

// From this
<link rel="stylesheet" href="css/framework7.min.css">
// To this
<link rel="stylesheet" href="css/framework7.ios.min.css">

5. Strip index.html A lot of the stuff that we will be making use of is in index.html, but there’s also a lot of stuff that we’re not going to need. To save confusion we’re going to start from a blank slate. Delete everything from index.html between the <body></body> tags.

6. Strip Core.js Head into the js/ folder and open my-app.js for editing. Delete everything already in the file, add the following in its place and save.

var myApp = new Framework7({
  init: false
});
var $$ = Dom7;
myApp.init();

With the myApp variable we’ve created a new instance of Framework7 to use. The $$ variable is the Framework7 Dom7 library that handles DOM manipulation – it’s like jQuery, but lighter. Finally, myApp.init() begins our app.

7. Display in a view Almost everything that is displayed in a Framework7 app sits within a view. By adding classes to certain HTML elements, we tell Framework7 how to treat them. With classes, Framework7 enables us to create navigation bars, pages and toolbars. We’re going to use the first two.

8. The classes have it In Step 6, we have a number of divs with classes. On the right elements in the right order, Framework7 uses these classes to manipulate user interactions and animate elements. class="navbar" tells Framework7 to use that div as a wrapper for the navigation bar, class="navbar-inner" becomes the holder for the text content and class="center sliding" aligns our text to the centre and adds a sliding animation when the view is changed.
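The markup being referred to isn’t printed here; the fragment below is our best-guess sketch of a minimal Framework7 view built only from the classes named in this step and the '.view-main' selector used in step 12 (the title text comes from step 10).

```html
<!-- Hypothetical sketch of the view/navbar structure described above -->
<div class="views">
  <div class="view view-main">
    <div class="navbar">
      <div class="navbar-inner">
        <div class="center sliding">Landmarks of London</div>
      </div>
    </div>
  </div>
</div>
```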

F7 is Retina-ready Straight out of the box, F7 comes with a great Retina-ready icon set as well as styles that let you get on with your markup and application logic. No more tweaking CSS margins, paddings and so on trying to make everything look just right.


Using Gulp and Bower, you can build a custom F7 library that only has the modules you need, but that’s painful for what we’re doing, so just copy the ‘kitchen sink’
Top left: Bower makes getting Framework7 an absolute cinch. Simply open terminal, put in the command and Bower grabs everything we need from the GitHub repo
Top right: The item-media class on a <div> element lets us add little icons to the items in our list


9. Preview our work Framework7 relies heavily on AJAX for most actions that affect the content of the DOM. This means we need to run a local server to view our project files. On most systems that come with a Python installation, you can set up a quick static file server with:

$ python -m SimpleHTTPServer 8080

10. Content with content So far, our app does little. It’s a view with a navigation bar that says ‘Landmarks of London’ and not much else – we need content. Grab the resources from FileSilo and move the landmarks.json file to the project root. This contains all of the information we need to build a list of options that our users can click. Next move the contents of the assets folder to the assets folder you made earlier.

11. Create our first page To put content in a view, we need a ‘page’ to insert it into. Think of a ‘view’ like a book cover that binds everything and a ‘page’ as a page. Add the following after the closing tag of <div class="navbar">. With data-page, we give our page a name we can access.

<div class="navbar"> ... </div>
<div class="pages navbar-through toolbar-through">
  <div data-page="index" class="page">
    <div class="page-content">
      <div class="list-block">
        <ul id="places"></ul>
      </div>
    </div>
  </div>
</div>

Framework7 Android An iOS app on an Android system would look out of place, wouldn’t it? Don’t worry, Framework7 has a Material Design version for Android too. Same code, but with different styles.

Top left: With $$.getJSON, retrieve and parse a JSON file with all the information we need to build our page. Of course, this can be any JSON output
Top right: Using D7 (F7’s DOM manipulation library) we can generate and insert the HTML we’ve created from our JSON file
Right: Our app is pretty bare-bones, but we now have a foundation to build great things on


12. Register our view Before we can manipulate our view, we need to register it with a Framework7 instance (myApp variable). In my-app. js add the following just before myApp.init():

var mainView = myApp.addView('.view-main', {
  // Because we use fixed-through navbar we can enable dynamic navbar
  dynamicNavbar: true
});
myApp.onPageInit('index', function (page) {
  console.log("main-view initialised");
});

Now when myApp.init() is called our view-main is registered and initialised, and then a callback is called with the onPageInit listener. We can now write code to populate a list of places from landmarks.json.

13. Populate our page with data To make a list of places a person can go to, we’re going to GET landmarks.json and use its info to populate our page. Back in my-app.js amend myApp.onPageInit() call so it looks like this:

myApp.onPageInit('index', function (page) {
  $$.getJSON("landmarks.json", function (data) {
    console.log(data);
  });
});

Now if you refresh and look at the console, you’ll see an object with all of the data to build our list. Remember that $$ is like the Framework7 equivalent of jQuery, and a lot of its functions follow similar naming conventions.

14. Add HTML Now that we have the landmarks.json file we can put together HTML elements based on the data we have. Inside the callback for getJSON, add the following:

myApp.landmarks = data.landmarks;
var place, list = "";
for (var x = 0; x < myApp.landmarks.length; x += 1) {
  place = data.landmarks[x];
  list += '<li class="landmark-link">\
    <a href="#" class="item-link" data-landmark="' + + '">\
      <div class="item-content">\
        <div class="item-media"><img src="' + place.icon + '" width="29"></div>\
        <div class="item-inner">\
          <div class="item-title">' + + '</div>\
        </div>\
      </div>\
    </a>\
  </li>';
}
$$('#places').html(list);
});

For each landmark an <li> element is created and added to the #places <ul>.

15. Assign click events Now we need to be able to tap elements. Using the landmark-link class, assign a click event to every list item.


All-encompassing libraries Web projects, or apps written with web technologies, are notorious for becoming unmaintainable after a fair amount of development. Part of this is due to the glue code that has to be written to tie together various JS libraries that do things better or quicker than you might have done. With F7 and other similar frameworks, you don’t have to write the glue that holds everything together; you just write the way the framework dictates you should to achieve the result you want. By making developers follow the convention of a framework that does close to everything, rather than using multiple libraries to do the same thing, other developers who use the same framework can quickly pick up where you left off and maintain your code, and vice versa.

...
$$('#places').html(list);
$$('.landmark-link').on("click", function (evt) {
  // Clever clicky code goes here...
});

16. Clever code When we click an item in our list, we need to find out which one we clicked. On the <a> tag we added in our $$.getJSON we created a data-landmark data attribute. We can get that value and then work through every landmark stored to get the appropriate information needed to create another page.

...
$$('.landmark-link').on("click", function (evt) {
  var clickedLandmarkName = $$('a')[0].getAttribute('data-landmark');
  for (var y = 0; y < myApp.landmarks.length; y += 1) {
    if (myApp.landmarks[y].name === clickedLandmarkName) {
      break;
    }
  }
});

17. F7 templating engine When we tap our list item, we want to make a new page with all of our information. Multiline strings are a pain, so we're going to use Framework7's template library – we just need to activate it. Find the $$ variable in my-app.js and change the code to this:

var $$ = Dom7,
    $T = Template7;
var landmarkTemplate = $$('#landmark-template').html(),
    compiledTemplate = $T.compile(landmarkTemplate);

if(myApp.landmarks[y].name === clickedLandmarkName){
  var info = {
    placeName: myApp.landmarks[y].name,
    placeInfo: myApp.landmarks[y].info,
    placeImage: myApp.landmarks[y].image
  };
  var compiledHTML = compiledTemplate(info);
  break;
}

18. Create a template Template7 works a lot like Handlebars: by using {{[VARIABLE_NAME]}} we can replace chunks of code with values that we pass in. In the FileSilo files you downloaded earlier, there is a landmark_template.html file. Copy its contents and add it to the end of index.html – just before the <script> tags we already have there.

<!--Insert landmarks_template.html here-->
<script type="text/javascript" src="js/framework7.min.js"></script>
<script type="text/javascript" src="js/my-app.js"></script>
</body>

19. Use our template Now that we have a template, we will go on to using it. In my-app.js we were looping through all of the landmarks we knew about to find the information for the list item we clicked. Once we find it, we can use the data to generate HTML by compiling our template. We can do this by passing an object with the variables that we used in the template ( the {{ }} ).
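The {{ }} replacement described above is, at its heart, simple string substitution. This sketch is not Framework7's actual Template7 implementation – just a minimal illustration of the idea of compiling a template into a function you call with data:

```javascript
// Minimal sketch of {{placeholder}} substitution -- an illustration
// of the idea only, not Template7's real implementation.
function compileTemplate(source) {
  // Return a function that swaps each {{name}} for data.name
  return function (data) {
    return source.replace(/\{\{\s*(\w+)\s*\}\}/g, function (match, key) {
      return data[key] !== undefined ? data[key] : '';
    });
  };
}

var template = compileTemplate('<h1>{{placeName}}</h1><p>{{placeInfo}}</p>');
var html = template({ placeName: 'Big Ben', placeInfo: 'A famous clock tower' });
console.log(html); // <h1>Big Ben</h1><p>A famous clock tower</p>
```

Compiling once and calling the resulting function many times is the same pattern Template7 follows, which is why the tutorial stores compiledTemplate rather than re-parsing the template for every tap.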

20. Render the template Now, compiledHTML contains the HTML for our new page, but we've not added it to the DOM yet. We could add it with $$([ELEMENT]).html(compiledHTML), but that won't register the page with our Framework7 instance. Instead, to get all the back-button and animation goodness we can use F7's built-in router. This will render our content and animate it appropriately.

...
var compiledHTML = compiledTemplate(info);
mainView.router.loadContent(compiledHTML);
break;
}
...

21. Round-up Now we’ve put together a fully interactive, native-like prototype app in no time at all. We’ve used views, explored the part that classes play in F7 and rendered valid HTML with F7’s template engine T7. Remember, F7 can do so much more, so get exploring!



Deploy your web apps to Heroku Set up SSH keys, spin up an instance and deploy to the cloud with command-line Git


// Deploy your web apps to Heroku


We all enjoy making things every now and then. This can take any form – a linter, an image converter or even a web service that sends you a picture of a cat whenever someone hits a button. Often made for ourselves, sometimes these tools pick up traction and other people start to use the thing you've made; you've solved a problem that other people have, congratulations! But this success has brought a problem of its own: how are you going to keep your servers up with all of these new people using your shiny thing?

Shared web hosting is designed to be used by a couple of hundred, maybe even a few thousand people at the same time. It's not going to be able to stand up to something going viral – at least not for long, and that's assuming your web project is relatively small. If you've made something that's, let's say, 2MB in size, it doesn't seem like too much for your provider to handle, does it? But what if 10,000 people access your site in one day? That's 20GB in bandwidth you've got to provide! Shared hosting is not going to cut the mustard. You could provision a server and make it bigger as the site gets bigger, but that means you've got to select a provider, access the server, configure it for your app and, more often than not, scale it manually. This is not good enough.

This is where Heroku comes in. Heroku is a cloud platform that delivers your app for you; it scales whenever you like and it has Node, Ruby, PHP and Python built in and ready to go! This tutorial is going to walk you through setting up a Heroku account, getting SSH keys in order, configuring a Node.js app for deployment and then firing it off to Heroku to be run.
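The back-of-the-envelope bandwidth maths is easy to sanity-check. A quick sketch, using the article's example figures of a 2MB site and 10,000 daily visitors:

```javascript
// Rough bandwidth estimate: page size multiplied by daily visitors.
var pageSizeMB = 2;        // example figure from the article
var dailyVisitors = 10000; // example figure from the article
var dailyBandwidthGB = (pageSizeMB * dailyVisitors) / 1000;
console.log(dailyBandwidthGB + 'GB per day'); // 20GB per day
```

Scale either number up by a factor of ten and it's clear why shared hosting buckles under a viral hit.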

1. Create an account Head on over to and then simply click the Sign Up button to create a new account. Check your inbox now for an email confirming that you want to sign up, and create a strong password. If you're worried about Heroku costing you something, fear not: there's a free tier for you to experiment, explore and learn from. This free tier is the one that we'll be using in this tutorial.

2. The dashboard Once you've logged in to your dashboard, it's here that we can create, manage, deploy and destroy our apps with the help of the Heroku platform. If you click your email address in the top left of the dashboard, you'll see a dropdown with the option Manage Account. Don't click it yet, but take note of where it is located as we'll be using it in a little while.

3. The Heroku Toolbelt In development, the Terminal is a powerful tool. Heroku has created a Toolbelt to help us get the most out of Heroku through the command line. Most of this tutorial will be done in the CLI (command-line interface), so head to, select the right Toolbelt for your operating system and simply follow the instructions to install it.

you can read on and follow what they say. SSH keys are user credentials that don't involve a password. Instead, they use public-key cryptography to provide a secure way of authenticating a user to a service, and we're going to do just that with Heroku. This will spare you having to type in your password every single time you want to do something.

6. Create an SSH key In the terminal, enter the following with the email address you used to sign up to Heroku:

ssh-keygen -t rsa -b 4096 -C "[[YOUR@EMAILADDRESS.COM]]"

A new private/public key pair will be generated for you. You'll be asked if you want to store the keys anywhere specific; just hit Enter and accept the defaults until you're asked to enter a passphrase. Choose something secure but memorable – it's not easy to change this one!

7. Enable the SSH keys To get your system to actually use your new SSH keys, we will need to add them to the SSH agent. After that we can add them to Heroku. Enter the following command; you'll be asked for the passphrase you used when

4. Log in with the Toolbelt Once the Toolbelt has installed, you should now open your terminal (or the equivalent for your operating system) and simply enter:

Heroku’s tiers

heroku login

Heroku has four pricing tiers: Free, Hobby, Standard and Performance – each one is more powerful and more expensive to run than the last. We're using Free, but if you're deploying something small that you want to run 24/7 (the free tier has to spend six hours offline daily) then Hobby is only $7 (£4.60) monthly per instance.

5. Heroku and SSH keys In the next few steps we will go over creating SSH keys on a Unix-based system (Linux or OS X). If you're familiar with this process, you can just skip over them and generate the SSH keys as you like; otherwise,


Signing up is easy. Heroku is for single users and businesses alike, but don't be put off by them asking for a business name – you don't need one Top left

From the dashboard, all things Heroku can be found and controlled. Take some time to look around and try out the various options Top right

The Heroku Toolbelt lets us control every aspect of Heroku and our apps from the CLI. It’s the most efficient way to interact with Heroku


creating the new keys. The SSH key will then be ready to be used by your computer and it will also have been copied to your clipboard.

eval "$(ssh-agent -s)" && ssh-add ~/.ssh/id_rsa && pbcopy < ~/.ssh/id_rsa.pub

8. Add the SSH key Remember that Manage Account button we told you to take note of earlier? It's time to press it. Head back to the Heroku dashboard ( will redirect you there if you're still logged in) and then click the Manage Account button. Now just scroll down until you see SSH Keys, click edit and then paste the SSH key into the input (the last command we entered copied it to our clipboard for us).

9. Alternative method If all that GUI goodness is not your way of doing things, you can upload the SSH key to Heroku with the command-line interface. Simply head back to your terminal and enter:

heroku keys:add

Work with Procfiles When you're working on a project with multiple team members, it's not always a dead cert that everyone is going to push to Heroku with the same provisioning settings. By using a Procfile and including it in your shared Git repo, you can be certain that every time a team member pushes a deployment to Heroku, it will run on what it's meant to.

Top left

Once you have your SSH key, adding it is simple. Just click on Manage Account and scroll down until you see the option. Click the tab and then paste your key in Top right

Heroku won't cost you money straight off the bat, but if someone gets hold of your account it could cost you dearly. Strong passwords are a must Right

Our app is just a simple static page being delivered by our Node.js app with a simple CSS animation, but gosh, doesn’t it look friendly?


You’ll be asked if you want to upload your key to Heroku. Hit ‘Y’ for yes.

10. Create a new project On your Heroku dashboard there is a + sign in the top-right corner. Click it now to create a new Heroku app. You’ll then be given a form asking you to enter a name and whether you want your server to be in the US or Europe. If you don’t enter a name, one will be generated for you, so it’s up to you whether you want to give your app a name.

11. Prepare for deployment If you have a project ready that you'd like to test deploying to Heroku, cd to the root folder of that project now. Otherwise, there is a demo project that you can grab from our FileSilo downloads. Once you've made your way to your chosen project folder, enter the following commands:

git init
heroku git:remote -a name_of_your_heroku_app

When those commands have completed, we will have created a new Git repo with our Heroku app as a remote.

// Deploy your web apps to Heroku

Dyno types Heroku has a bunch of tiers, but these tiers comprise dyno types, which are the kinds of machines (specs, if you will) that you can run. When scaling your services, you can choose which of these dynos you want to use, as each use case may require different kinds of services. For example, let's say your web service just got some overnight fame on one of the social networks we all love to loathe, and your standard instance is now starting to buckle under the pressure – you need to scale now! But even with the hundreds of hits you're getting, your standard instance is still holding its own. Scaling up to a performance instance would almost certainly solve your problem, but it's also probably overkill, as this amount of traffic might not last that long.

With this, we can use SSH and Git to deploy our apps to Heroku with a Git commit.

12. Our demo app Our app is a simple Node.js/Express server, delivering files in our 'public' folder to whomever browses to /. One thing to note is the port binding on line 7 of demoServer.js: 'process.env.PORT || 8080'. process.env.PORT lets Heroku choose the port it wants to make our app available on, as it can change per deploy or instance.
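That port binding is just a fallback expression: use whatever the environment provides, or default to 8080 locally. A minimal sketch of the pattern, wrapped in a function so it's easy to see both cases:

```javascript
// Resolve the port to bind to: Heroku sets PORT in the environment
// for each dyno; locally nothing is set, so fall back to 8080.
function resolvePort(env) {
  return env.PORT || 8080;
}

console.log(resolvePort({ PORT: '5000' })); // '5000' -- platform-assigned
console.log(resolvePort({}));               // 8080   -- local default
```

Hard-coding the port (as the demo deliberately does with '15 ||') breaks on Heroku, because the platform decides which port your dyno must listen on.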

13. Deploy to Heroku We can also deploy with a Git commit. Now that we have Heroku set up as a remote Git repository, by pushing new code to it, Heroku’s build process will be triggered and will deploy our app for us. Let’s give it a go now:

git add .
git commit -m "[[YOUR COMMIT MESSAGE]]"
git push heroku master

You'll see Heroku building, testing and then finally deploying our app to its platform. If all has gone well, you'll see something like:

Launching... done, v1
remote: https://your_heroku_app.herokuapp.com/ deployed to Heroku

14. Debugging errors Except you probably won't see that everything has gone right, because we've deliberately added an error. So how do we debug errors on our Heroku instances? If a Heroku build fails, it will refuse to deploy the broken code in favour of previously working code. But that only works if it's a compile-time error. If the error is a run-time error, it won't break until after it's deployed. To find out where the error may lie, you can enter 'heroku logs --tail' and you'll see the latest entries for your Heroku server.


15. Fix the error

16. Deploy the fix

If you look at our logs, the error that was deliberately made is simple: our Node.js app is trying to listen on a port reserved by the system. Now just open demoServer.js and change line 7 so that it reads:

Now that we’ve fixed our app, we can try and redeploy it to Heroku. There’s no special procedure for this, simply commit the modified files for a Git push and then push to the Heroku origin.

// Old
// port = 15 || process.env.PORT || 8080;
// New
port = process.env.PORT || 8080;

Heroku origins Heroku is not the spring chicken you might think it is – it’s been ticking along, in one form or another, since 2007! One of the first ‘cloud platforms’, originally, Heroku was designed as a platform for quickly deploying Ruby projects that people wanted to share. Back in 2007 (and some might argue that little has changed since) Ruby projects were


git add .
git commit -m "Fixed server port issue"
git push heroku master

notoriously hard to move from development to production. Heroku was designed to be a one-stop shop for getting your latest and greatest Ruby-based works out into the world. As time moved on, so too did developers' language preferences. Over the years Heroku made performance tweaks and added the ability to deploy projects using almost any technology you can think of.

// Deploy your web apps to Heroku

instances, or perhaps you needed to execute a different command to run your Node app on a Heroku server (like installing and running Bower or Gulp, for example). These are the functions that the Procfile is for. Think of a Procfile as a package.json for Heroku instances. It lives in the root of your Heroku project folder, is called Procfile, and whenever you deploy your app to Heroku, the commands inside it are the commands that will be used to build and run your shiny new service. Instead of the default command that runs our server, demoServer.js (as defined in our package.json), you can have the following execute in its place.

web: npm install bower && bower install && node demoServer.js

17. Scale your web service By default, Heroku will start your apps running on the 'web' tier of its service. This is the least powerful of the tiers available, but it's also free. Let's say you have an overnight hit on your hands – how do you scale your app to handle all of these new people wanting to use the great thing you've made? It's actually pretty simple, you can just enter:

heroku ps:scale web=2

...in your command-line interface. That will spin up two web-scale instances of your Heroku app and will split the users between them – thus, each server will handle half of the load.
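Heroku's router handles the splitting for you, but the "split between them" behaviour is essentially round-robin dispatch. A toy sketch of the idea (not Heroku's actual router):

```javascript
// Toy round-robin router: hand each incoming request to the next
// instance in turn, so two dynos each take roughly half the load.
function makeRouter(instances) {
  var next = 0;
  return function route(request) {
    var instance = instances[next];
    next = (next + 1) % instances.length;
    return instance;
  };
}

var route = makeRouter(['web.1', 'web.2']);
console.log(route('req A')); // web.1
console.log(route('req B')); // web.2
console.log(route('req C')); // web.1
```

Add a third instance to the array and each dyno's share drops to a third – which is exactly what 'heroku ps:scale web=3' would buy you.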

18. Procfiles and you Let’s say that you wanted to deploy an app to Heroku and you didn’t want to worry about the provisioning of

Heroku hardware With Heroku, we don't have to provision machines or set up environments, so a great deal of hard work must have gone into building an infrastructure. It may surprise you to learn that Heroku doesn't have its own hardware – it runs on AWS (Amazon Web Services), which, in a way, makes Heroku a kind of API for the Amazon web platform. This is known as the PaaS (platform as a service) business model.

Code library These are some typical modules that you would probably use in a Node app. Any module included in your package.json will also be installed by Heroku when you deploy your project.

Our demo app is a simple, static webpage. This line says that whenever someone hits the '/' path of our web domain, every file and resource that we access will come from the 'public' folder in our project.

This is a simple Express route. If you want to process some information or give a specific resource a special URL, you can do that here. If you go to your Heroku app address and add '/hello' to the end, you'll see "Hi!".

var express = require('express'),
    app = express(),
    http = require('http'),
    fs = require('fs'),
    querystring = require('querystring'),
    request = require('request'),
    port = 15 || process.env.PORT || 8080;

for(var _ = 0; _ < process.argv.length; _ += 1){
  if(process.argv[_] === "--port" || process.argv[_] === "-port" || process.argv[_] === "-p"){
    if(process.argv[ _ + 1 ] !== undefined){
      port = process.argv[_ + 1];
      break;
    }
  }
}

app.listen(port);

app.use(express.bodyParser());
app.use(express.static(__dirname + '/public'));
app.use('/images', express.static(__dirname + '/images'));
app.use(express.cookieParser());

app.all('*', function(req, res, next) {
  res.set('Access-Control-Allow-Origin', '*');
  res.set('Access-Control-Allow-Methods', 'GET, POST');
  res.set('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type');
  next();
});

console.log("Server started.\nAvailable on localhost:" + port);

app.get('/hello', function(req, res){
  res.send("Hi!");
});



Develop a web app quickly with Lucee Say hello to the Lucee language, the latest CFML engine, and quickly build a powerful dynamic application


// Develop a web app quickly with Lucee


To some, CFML (ColdFusion Markup Language) may not mean anything. To others it may be perceived as an out-of-date language that is no longer used. In fact CFML is 20 years old this year – it was born the same year as PHP and JavaScript. If you haven't yet experienced building a CFML application, then now is your chance to do so. Lucee, a completely open source CFML engine, was released in January 2015. As a fork of an existing CFML engine, Lucee did not have to start from scratch and had a fantastic core engine to build upon.

In this tutorial we will download and install an Express edition (self-contained) version of Lucee 5, which includes all of the latest additions to the code syntax and language. We will develop a simple example application using the new Lucee dialect and file extensions, and we will then show an example of how to interact with the application scope for persistence. We will use Lucee's member functions to check for variables within certain scopes and the lengths of arrays. We will also look into creating RESTful API resources using Lucee and mapping components to match REST paths. We will see how to manage sending arguments in the URL as well as dealing with both GET and POST requests to the API.

1. Download Lucee The Lucee language is available for us to download from a number of platforms in stable and beta versions. Just head on over to to do just that. We will be selecting Express version 5 to use in this tutorial, which is a version that will work across all operating systems. Now you can just unpack the archive file and use your terminal or command prompt to start up the server.

./startup.sh

or for Windows:

./startup.bat

return true; }

2. Default page and admin After starting up the Lucee engine you will be able to access the server in your browser via http://localhost:8888. The default welcome page that appears here will have some useful links to guide you through the language, and it will also have two important links that will take you right through to the server and web administrators too.

5. Directory index Create a new file called 'index.lucee' in the directory. This will be the default index page for our application. To start we'll simply request a data dump of everything available within the application scope (created from the application configuration file). You can use the tag format or script syntax to do this.

<:script>
  dump(application);
</:script>

3. Application component Create a new file in the root of the new directory called 'Application.lucee'. This component will hold the application-specific variables and configuration for this sub-app. Create the name using a hashed version of the current path with a suffix appended to help ensure that it is unique.

component {
  this.name = hash(GetDirectoryFromPath('./')) & '_procrastination_app';
}

4. Run onApplicationStart

6. Task model definition Create a new directory called 'model', and inside of this create a new file, 'task.lucee'. This new component defines the properties for our model, setting the type, name and default value attributes for each. The getters and setters will be created for us thanks to the accessors=true attribute.

component accessors="true" {
  property type="numeric" name="id";
  property type="string" name="title" default="";
  property type="string" name="content"

The application framework will support multiple event-driven functions which we are then able to interact with. OnApplicationStart will run the very first time that the application instantiates. Here we will then define two variables that are assigned to the application scope, and these are made available for us to access throughout the entire application.

Lucee 5 beta In this tutorial we have been using the latest version of Lucee 5, still in beta at time of writing but almost production ready. The new Lucee dialect code format and file extensions are only available in version 5.

boolean function onApplicationStart() {
  application.tasks = [];
  application.completeTasks = "";


CFML server management and configuration is a relatively simple process thanks to Lucee and its server and web admin contexts Top left

Open your terminal or command prompt window and start the Lucee Express edition engine using the platform-appropriate start-up script Top right

Our model properties and the accessors=true attribute generates all of the GET and SET methods we need to populate our model, shown here as a dumped object


    default="";
  property type="boolean" name="complete" default="false";
}

CFML from the CLI CommandBox is a stand-alone native command line interface tool written in CFML with package management and much more. It also generates embedded Lucee servers for your local development work. You can find out more by going to


Lucee has been open and available to the community since its release and welcomes pull requests and community interaction Middle

Full documentation for the language tags and functions are available from the Lucee site, complete with all attribute variations and options Right

Lucee is available to download for Windows and Linux-based operating systems, in full installer versions or Express editions


7. HTML layout Open index.lucee and define the base HTML layout for the application. We’re using Bootstrap for structure here. Note the output tags wrapping the entire template. These will deal with any string wrapped by # characters as a variable for translation.

<:output>
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Procrastination List</title>
  <link rel="stylesheet" href="css/bootstrap.min.css">
</head>
<body>
  <script src="ajax/libs/jquery/1.11.3/jquery.min.js"></script>
  <script src="js/bootstrap.min.js"></script>
</body>
</html>
</:output>

8. Form definition Create a new form now that will post back to the index.lucee page. This will have a text input field with the name attribute set to title and a textarea field with the name set to content. The submit button will have the name new_task which we will use to identify a submission. Then finally, the variable #strFlash# will be

// Develop a web app quickly with Lucee

The application framework Throughout this tutorial we have made extensive use of the application scope to persist variables throughout the application. It's not advised to use that scope as a data store (except for this example application, as it saved on setting up a database connection) but it is incredibly useful for persistence and sharing objects, variables and certain bits of data. We created an Application.lucee file, which could also have been an Application.cfc (ColdFusion Component). This is a special file for CFML projects and enables a lot of event-driven method processing. Make sure that you check out the great introduction to the application framework in Lucee for more information

used to display any errors that appear.

<div class="container">
  <h1 class="page-header">The Procrastination List</h1>
  #strFlash#
  <form method="POST" action="index.lucee" class="form-horizontal">
    <div class="form-group">
      <div class="col-sm-offset-1 col-sm-11">
        <button type="submit" name="new_task" class="btn btn-default">Save Task</button>
      </div>
    </div>
  </form>
</div>

9. Script block Create a new script block at the top of the index.lucee file. Create a default strFlash variable with an empty value. An if statement will check for the existence of the new_task button in the form scope (available after a form submission). If it exists we can then process the submitted values accordingly.

<:script>
  strFlash = "";
  if ( structKeyExists(form, 'new_task') ) {
  }
</:script>

10. New model instantiation If both the title and content values are present in the form submission then we will instantiate a new task

model, and we will then populate the properties with the values and insert that component into the task array using the built-in arrayAppend function. If either the title or content is missing then we'll create the warning message for output.

if ( len(form.title) && len(form.content) ) {
  task = new model.task(
    title=form.title,
    content=form.content,
    id=arrayLen(application.tasks)+1
  );
  arrayAppend(application.tasks, task);
} else {
  strFlash = '<div class="alert alert-warning">Please provide a title and task


detail.</div>';
}

11. Display the list Below the header, we will now add some code to detect if we have any records saved in the array. arrayLen is a built-in CFML function that checks the length of the provided array. If the array holds items, the following HTML panel and table will be displayed to the user.

<:if arrayLen(application.tasks)>
<div class="panel panel-default">
  <table class="table table-bordered table-hover">
    <thead>
      <tr>
        <th colspan="2">&nbsp;</th>
      </tr>
    </thead>
    <tbody>
    </tbody>
  </table>
</div>
</:if>

12. Loop over collection Place a new form between the table body tags to post back to index.lucee. Within this form loop over the tasks array from the application scope. The checkbox id and value hold the task id and the conditional statement will populate the checkbox if already marked as complete.

<:loop collection=application.tasks item="task">
<tr class="<:if (application.tasks[task].getComplete())>success</:if>">
  <td>
    <input type="checkbox" name="task"

Introducing the Lucee dialect Lucee 5 contains an updated enhanced language framework and syntax that is known as the Lucee dialect (or occasionally referenced as LuceeLang). The dialect is based on the original concepts of CFML. As a relatively new engine it continues to support CFML and the ecosystem around it, which includes compatibility support for existing versions of CFML engines including Adobe’s ColdFusion engine. As well as helping to maintain backwards-compatible syntax for existing applications it also wants to help drive the CFML environment and update the syntax and available methods and functions. This is where the Lucee dialect comes into play. Find out more about this dialect by going here:


// Develop a web app quickly with Lucee

      id="check[#application.tasks[task].getId()#]"
      <:if (application.tasks[task].getComplete())>checked</:if>
      value="#application.tasks[task].getId()#" />
  </td>
  <td>
    <p>#application.tasks[task].getTitle()#<br />
      <span class="small">#application.tasks[task].getContent()#</span>
    </p>
  </td>
</tr>
</:loop>

13. Update the tasks Include the following buttons before the closing form tag in the table. Once an update has been processed and any tasks are complete, check the length of the application scope completeTasks list variable. If it holds data, display the button to clear those items from the list.

<tr>
  <td colspan="2">
    <input type="submit" name="update_tasks" class="btn btn-default" value="Update Tasks" />
    <:if listLen(application.completeTasks)>
      <a href="/app?clearcomplete=true" class="btn btn-danger">Clear Complete Tasks
        (#listLen(application.completeTasks)# / #application.tasks.len()#)</a>
    </:if>
  </td>
</tr>

= strTasks; }

15. Clear completed tasks If the user submits the button now to clear the completed tasks, the conditional statement will check for the existence of the clearcomplete value that was sent in the URL. This will then loop over all tasks and, once it has found the matching task using the ID, it will go on to delete it from the array before sending the user back to the index page.

if ( structKeyExists(url, 'clearcomplete') ) {
  for( task_id in listToArray(application.completeTasks) ) {
    for( task in application.tasks ) {
      if ( task.getId() EQ task_id ) {
        arrayDelete(application.tasks, task);
      }
    }
  }
  application.completeTasks = "";
  location('/app', false);
}
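The same delete-by-id pass reads naturally in JavaScript too. This is a sketch of the logic only, not Lucee code:

```javascript
// Remove every task whose id appears in the list of completed ids --
// the same outcome as looping and deleting matches one by one.
function clearComplete(tasks, completeIds) {
  return tasks.filter(function (task) {
    return completeIds.indexOf(task.id) === -1;
  });
}

var remaining = clearComplete([{ id: 1 }, { id: 2 }, { id: 3 }], [2]);
console.log(remaining); // [ { id: 1 }, { id: 3 } ]
```

Filtering into a fresh array sidesteps the classic pitfall of deleting from a collection while iterating over it.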

16. Creating REST resources Lucee offers an incredibly easy way to create and generate RESTful resources, written using the familiar component structure. Create a new directory called 'rest' within your project, inside of which we will place all of our components. Create a file called 'random.lucee' with the following code. The restpath attribute dictates the URL that this component will respond to when we call the API.

component restpath="/random" rest="true" { }

14. Handle updates

17. Initial GET Request

Back at the top of index.lucee we will add another conditional statement within the script block. This conditional statement will check for a form update submission and it loops over the tasks array, setting each to incomplete. After that, it will then loop over the submitted tasks (using the checkbox ids) and set those tasks to complete.

Now we will define the routes and responses for each API resource within the open component block. First let’s create a new remotely accessible function that responds to a GET request. This simple example returns a random number nested inside of a JSON response. The function name has no direct impact on the resource, but ideally it should relate to the purpose of the function.

if ( form.keyExists('update_tasks') ) {
  strTasks = "";
  for( task in application.tasks ) {
    task.setComplete(false);
    if ( structKeyExists(form, 'task') ) {
      if ( listContains(form.task, task.getId()) ) {
        task.setComplete(true);
      }
      strTasks = form.task;
    }
  }
  application.completeTasks

remote any function getRandomNumber() httpmethod="GET" {
  return {'random_number': randRange(1,1000000)};
}

18. Manage URL parameters Create a new GET resource function in the component to accept a single argument or parameter. The restargsource attribute dictates that the argument will be part of the path, and we define the argument value as sent_value. This response uses CFML's built-in dateTimeFormat() method to handle formatting the current time returned from the now() function.

remote any function returnResponse(string sent_value restargsource="Path")
  httpmethod="GET" restpath="{sent_value}" {
  return {
    'request_made': dateTimeFormat(now(), 'dd/mm/yyyy HH:NN'),
    'value': arguments.sent_value
  };
}
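Mapping a restpath placeholder onto a function argument is the same trick most web routers use: turn the pattern into a regular expression and capture the placeholder values. A toy matcher in JavaScript (illustrative only – Lucee does this for you):

```javascript
// Match a concrete URL against a route pattern like '/random/{value}'
// and pull the placeholder values out as named arguments.
function matchRoute(pattern, path) {
  var names = [];
  var regexSrc = pattern.replace(/\{(\w+)\}/g, function (m, name) {
    names.push(name);       // remember each placeholder's name
    return '([^/]+)';       // capture one path segment in its place
  });
  var match = path.match(new RegExp('^' + regexSrc + '$'));
  if (!match) return null;  // path doesn't fit the pattern
  var args = {};
  names.forEach(function (name, i) { args[name] = match[i + 1]; });
  return args;
}

console.log(matchRoute('/random/{value}', '/random/42')); // { value: '42' }
console.log(matchRoute('/random/{value}', '/other/42'));  // null
```

Note that captured values arrive as strings, which is why frameworks let you declare an argument type to coerce or validate them.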

19. Handle POST requests Create another remote method to handle our POST requests. The restargsource attribute will be set to Form, expecting the parameter posted_value in that submission. Return the submitted value, the current datetime value and the entire arguments scope, which could be used for API response debugging.

remote any function postResponse(string posted_value restargsource="Form")
  httpmethod="POST" {
  return {
    'request_made': dateTimeFormat(now(), 'dd/mm/yyyy HH:NN'),
    'you_submitted': arguments.posted_value,
    'form_data': arguments
  }
}

20. Admin configuration The REST resources need to be defined and mapped in the Lucee administrator. Log in to the web context admin and select Archives & Resources > Rest. Create a new mapping virtual name (use 'api' for brevity) and point the physical path to the directory location of the components.

21. Test POST submission

Create a test Lucee file in the project root to test the API. Create a new script block and make an HTTP POST request to the local API address. At least one of the httpparam lines, set as a form field, should be the value expected by the method. Finally, dump out the cfhttp result for visual debugging.

<cfscript>
http url="http://localhost:8888/rest/api/random/" method="POST" {
    httpparam name="posted_value" value="Lucee is powerful" type="formField";
}
dump(cfhttp);
</cfscript>



A customisable text editor for the 21st Century. Find out how to make it what you want

// Atom: the hackable text editor


Atom is fully documented, ranging from an in-depth manual for everyday users of the editor to a detailed look at the editor's internals for those who wish to change the very core of the editor itself.

Packages are used to expand the feature set of the editor. They can be searched for, configured, installed and upgraded, all from within the Atom editor itself through a simple interface.

The core keyboard shortcut to learn for Atom is Cmd/Ctrl+Shift+P. It toggles a small window called the Command Palette, which contains every Atom command available and displays each command's shortcut if it has one.
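Under the hood, palette filtering of this kind is usually a fuzzy match: the characters you type must appear in the command name in order, but not necessarily next to each other. A toy sketch of such a filter, purely illustrative and not Atom's implementation:

```javascript
// Toy command-palette fuzzy filter: a query matches a command when its
// characters appear in the command name in order (case-insensitive).
function fuzzyMatch(query, command) {
  var q = query.toLowerCase(), c = command.toLowerCase(), qi = 0;
  for (var ci = 0; ci < c.length && qi < q.length; ci++) {
    if (c[ci] === q[qi]) qi++;   // consume the next query character
  }
  return qi === q.length;        // matched when every character was found
}

function filterCommands(query, commands) {
  return commands.filter(function(cmd) { return fuzzyMatch(query, cmd); });
}
```

Typing 'tgl' would, for instance, still surface 'Toggle Command Palette', which is why a few characters are usually enough to reach any command.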

By default, the editor sports a dark theme. If this isn’t to your liking though, you can select lighter alternatives for the UI and syntax. There’s also the option of searching for more themes from the Atom community.

GitHub's Atom editor, built for the web

BUILT ON THE WEB TECHNOLOGIES FOR MAKING WEB-BASED APPS

The Atom editor originally started life back in 2008, as a side project from one of GitHub's founders, Chris Wanstrath. Atom was born from his desire to create an editor that was flexible, easy to customise and built on top of web technologies. It wasn't until a few years later though, in 2011, that Atom was picked back up by GitHub and subsequently taken on as an official project, and that's when work on it really started to progress. Work went on behind the scenes and, whilst that happened, web technology in general improved to the point where it was viable to release the project publicly as a beta version in early 2014. A year and a few months later, in June 2015, version 1.0 was released.

The editor is built upon a range of familiar web technologies. At the heart of GitHub's Atom is Electron, which uses Chromium as a base (the open source core of Google's Chrome browser) and provides rendering of HTML, CSS and JavaScript as a stand-alone desktop application. The JavaScript is much more powerful than the JavaScript that can be found in web browsers, as it takes advantage of the Node.js project to provide a powerful API for accessing the filesystem, networking and much more. This also includes the possibility of using the over 170,000 (at the time of writing) modules that are provided through Node's 'npm'.

All of this effort has created a cross-platform editor, which at this point includes all of the normal features that you'd expect to find, such as syntax highlighting, autocomplete, searching files, multiple view panels, multiple cursors, project support and much, much more. The appearance of the editor is very flexible as well, with the overall look and the syntax highlighting both supporting themes, of which there are almost 800 already available to use. There is also support for packages, with over 2,500 available for expanding the editor in a multitude of ways.

Markdown support: Atom's Markdown support enables you to create and preview Markdown-based docs. Cmd/Ctrl+Shift+M splits the editor view with a side-by-side of your Markdown and the HTML.

David Boyer Senior software developer for NHS Wales Informatics Service

“The ability to have an editor built using familiar technologies makes it possible for me to customise it to my needs. It enables me to take full advantage of the JavaScript power within the editor.”



JavaScript With Atom being built on top of JavaScript, you’d expect it to be well supported. Packages such as linter and linter-jscs can add code analysis tools (linters) to catch common issues or mistakes. Turbo-javascript provides a raft of commands and snippets to make writing ES6 JavaScript quicker.

PHP PHP doesn’t miss out on further support through packages. To complement the built-in syntax highlighting, php-cs-fixer can be installed to maintain your coding standards. For Atom’s autocomplete feature, install autocomplete-php and standard PHP functions will list as you type.

CSS

CSS gets the standard level of support from Atom with interesting packages from the community. 'Pigments' detects colour declarations and displays them as a background to the text, and linter-css checks for common mistakes. Support is also available for preprocessors like Less.

HTML Emmet wraps a popular tool to enable writing HTML as CSS selectors, being expanded into actual HTML by simply tabbing after typing your selector. Html-entities will handle encoding and decoding of special characters, autoclose-html helps with closing your tags as you type and various linters can check the HTML correctness.

How to theme Atom

ATOM'S APPEARANCE IS STYLED BY CSS AND THE COMMUNITY HAS PRODUCED OVER 700 THEMES TO CHANGE ITS APPEARANCE

Themes in Atom are built upon the Less (lesscss.org) library, which is a JavaScript-based preprocessor for CSS. Less adds variables, mixins, functions and various other syntax features to CSS that can help to make it considerably more manageable for you. This library has enabled the Atom community to build up a large number of themes which you can

easily install and configure, all from within the Atom editor itself. These themes come in two different types, syntax and UI, but they're pretty straightforward: syntax themes are focused on the area that displays your code (or other file types), whereas UI themes alter the appearance of the whole editor interface.



Finding an Atom theme is easy. All you have to do is proceed to the Atom theme site and you'll be presented with a directory of published themes. This collection includes some which have been selected to be featured, the newest and most recently updated, and lists of those which have trended (by popularity) over the previous day, week or month. As mentioned previously, a theme will be either for the UI or the editor syntax. Some themes can be more playful, like one inspired by Batman, or you can have a more serious Material Design-based theme.

Installing

Once you've found a theme to your liking, simply go back to your Atom editor and access File>Settings>Install from the menu. Here you can enter the name of the theme that you wish to install, being sure to select Theme instead of Packages, or search for other themes. Simply click the install button and Atom will retrieve it.

Selecting

Within the File>Settings>Themes menu, you'll find two drop-down lists of the UI themes and syntax themes that you have installed, with the currently used ones selected for you. Selecting another theme will apply it to the editor, and some themes also allow customisation, presenting a cog item for you to access their settings. The menu can also provide a list of themes that you have already installed, with the ability to remove them if desired.

Chromium developer tools: You can take a peek under the hood of Atom by using Opt/Alt+Ctrl+I or View>Developer>Toggle Developer Tools to explore the HTML and access JavaScript.

Go

As a language developed by Google in 2007, Go has received some decent attention recently. Go-plus provides access, from within the editor, to the powerful toolset that Go provides: gocode to power the autocomplete, gofmt to tidy up your code, goimports to add and remove imports automatically, and golint to check for common code issues, as well as building or testing code.

A bigger, better Atom


Package installation

While Node.js uses JavaScript, which has already been detailed, there are additional packages available that provide even deeper support. For example, Node-debugger hooks into the debug support that Node has with Atom and displays the active line and allows execution control.

Under the menu File>Settings, you’ll find a Packages item. This provides a list of all the packages you have installed and ones that were included with Atom by default (core packages). Here you can access any settings a package may have or disable/uninstall them. For installation, proceed to the Install section


ATOM IS A VERY EXTENSIBLE EDITOR THANKS TO PACKAGES With the package count for Atom exceeding 2,500, there is no shortage of interesting additions you can add to your editor.

which provides a search input box. Type the name of the package you wish to install or keywords for something you would like to do. You’ll then be presented with a list providing a description, download count, version, link to the webpage of that package and an installation button.

Access the packages While this primarily depends upon the way the package was written to work, there are a few places that a package will present itself. The menu at the top

// Atom: the hackable text editor

of the Atom editor contains a Packages item that packages can provide actions under. They will also usually add themselves to the Command Palette, so just use Cmd/Ctrl+Shift+P to bring up the palette and then type the name of the package to filter the list and display the commands for it. Finally, keyboard shortcuts are usually created, but these will be unique to each package and best found by reading the documentation for it.

Discovering packages

Going back to the Install area for packages within Atom, you should see an area listed for Featured Packages. This is a list featuring some of the favourite packages from the Atom community. If you check out the Atom website you will find lists of packages that have been trending over the past day, week and month. This is a fantastic place to find out which packages are suddenly becoming popular with other developers in the field. Finally, the Atom website also has a blog which contains posts doing a new package roundup. They pick several new packages that have interesting features and showcase them with fuller descriptions and screenshots.

Customising shortcuts

You can view keyboard shortcuts via File>Settings>Keybindings. This will display a searchable list and also provide a link to the file in which you can define your own.

Package performance

If Atom is starting to feel sluggish or takes a long time to open, it's worth considering the packages that you have installed recently. A package that Atom includes by default is called Timecop and is accessed via the Packages menu. It displays the loading and activation time of each installed package.

Check your code

USE LINTING TO AVOID BUGS OR BAD PRACTICES

Having your code checked as you type can save a lot of time that would otherwise be lost to common mistakes. Atom has a package called linter that provides the base package for checking a range of languages. Linter-jshint uses the JSHint project to perform an analysis of your code. JSHint will highlight syntax mistakes that would prevent your code from executing, flag suspicious usage of JavaScript that is bad practice, and even check basic code styling. These checks are all configurable and, once installed, issues are highlighted and listed at the bottom. Another package to complement JSHint is linter-jscs. JSCS concerns itself with the style of JavaScript code, helping to ensure that you code consistently. Companies like Google and Airbnb have produced JavaScript style guides which JSCS can help enforce.
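To make the idea of a lint rule concrete, here is a toy rule in the spirit of JSHint's eqeqeq warning. Real linters parse the source properly; this sketch, purely for illustration, only pattern-matches lines:

```javascript
// Toy lint rule: flag lines using loose equality (== or !=) rather
// than === or !==. Real linters build a syntax tree; this sketch just
// scans each line with a regular expression.
function lintLooseEquality(source) {
  var warnings = [];
  // Two chars from [=!] followed by '=', not part of ===, !==, <=, >=
  var loose = /(^|[^=!<>])[=!]=([^=]|$)/;
  source.split('\n').forEach(function(line, i) {
    if (loose.test(line)) {
      warnings.push({line: i + 1, message: 'use === or !== instead of == or !='});
    }
  });
  return warnings;
}
```

A linter package wires output of this shape into the editor, highlighting the offending line and listing the message at the bottom of the window.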


Working with Grunt

USING A PACKAGE TO TAKE CONTROL OF GRUNT TASKS

Grunt is a very useful tool for configuring sets of tasks that assist with the work you're doing with your application. Through the use of a configuration file (Gruntfile.js) and various Grunt plugins you can perform a variety of tasks: optimising images, checking code quality, running a server, filesystem actions (copy, move), converting ES6 JavaScript to ES5 and much more. If you're new to Grunt, their website has a great tutorial that shows you how to set up an example Gruntfile. After you're done running through this you'll have some tasks available for Atom.

Through installing the grunt-runner package into Atom, you can gain access to an in-editor UI for controlling your Grunt tasks. Using the Command Palette (Cmd/Ctrl+Shift+P) you can bring up the UI by typing 'grunt panel', which then provides you with a button for firing off Grunt tasks. When you click the start button, grunt-runner will display a list of the tasks found within your Gruntfile, and this will enable you to either type the name, click to select or navigate using the arrow keys. It then fires off the tasks and logs the output to the UI, providing feedback as to the tasks being executed.

Atom Editor @AtomEditor: The Twitter account of the Atom editor. Allows you to keep up to date on each release and blog post, and highlights useful packages.
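For reference, a Gruntfile is just a JavaScript module exporting a configuration function, and grunt-runner lists whatever tasks it registers. A minimal hypothetical example — the task choice and file paths here are placeholders, not from the tutorial:

```javascript
// Hypothetical Gruntfile.js: registers a single 'uglify' task plus a
// 'default' alias, both of which grunt-runner would list in its panel.
module.exports = function(grunt) {
  grunt.initConfig({
    uglify: {
      build: {
        src: 'src/app.js',        // placeholder input file
        dest: 'dist/app.min.js'   // placeholder output file
      }
    }
  });
  // The plugin providing the 'uglify' task
  grunt.loadNpmTasks('grunt-contrib-uglify');
  // Running 'grunt' with no arguments runs this alias
  grunt.registerTask('default', ['uglify']);
};
```

Any task registered this way, whether a plugin task or an alias, shows up by name when grunt-runner reads the file.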

How to create a syntax theme FOUR SIMPLE STEPS TO DISPLAYING CODE

1. Theme package files

2. Dev Mode Atom

Atom provides a built-in method for generating the initial files required. These can be generated with Packages>Package Generator>Generate Package Syntax Theme. Enter a path and generate the files.

It’d be useful to see theme changes when saving a file. To do this set Atom to use your theme with File>Settings>Themes>Syntax Theme and open the theme folder via View>Developer>Open in Dev Mode.

.variable {
  color: lighten(green, 20%);
  text-decoration: underline;
  border: 1px solid green;
  padding: 2px;
}

3. Less CSS

A form of CSS known as Less is used by Atom; it accepts normal CSS as well as providing some very useful additions. Three of the core Less files for your theme are contained within the 'styles' subdirectory. So now, let's change how variables are styled by opening up base.less and finding .variable.


cd ~/github/my-theme-syntax
apm publish minor

4. Share the theme

Once you're happy with the syntax theme that you've created, you may decide that it's worth sharing it with others online so that more people can reap the benefits. A command-line tool provided by Atom, called apm, is used for this very purpose. Before you share, though, it's important to make sure that the information contained in the package.json file is correct, so do read through the documentation first before going on to share the theme with others.

// Atom: the hackable text editor


Minimap Adds a thin additional column to the editor window that provides a preview of the full file contents. This can then make it easier to scroll to specific code.

Git-plus Source control is important and this package provides access to Git for that purpose, all from within Atom so you don’t have to leave the editor.

File-icons

Improves the filetree view and other areas within Atom by assigning colourful icons to different file types, making it easier to see the type at a glance.

Merge-conflicts

When merging branches in Git, conflicts can occur with the changes. This package provides an in-editor UI: a simpler way to resolve those conflicts.


Customisation is at the heart of Atom and as we’ve already detailed, there are plenty of packages available to extend the features of Atom. These packages are written using a well-documented API which can control the editor’s various aspects.

The package generator

With Atom's package generator, used earlier to create a syntax theme, we can create the base files needed for a normal package. Select Packages>Package Generator>Generate Atom Package and enter a location for the files.

Contribute to packages: It's worth checking if a package exists through existing Gist packages. You may be able to contribute any changes you require, if the author is willing to accept them.

var request = require('request');
var shell = require('shell');
var activate = function() {
  atom.commands.add('atom-workspace', {
    'my-package:create': function() { create(); },
  });
};

package exists through Gist packages. You may be able to Now our main code, which grabs contribute any changes you Dependencies the selected text in the editor require, if the author is Within the package.json alter the window, submits to GitHub as a Gist willing to accept them. activationCommands, which tells Atom and opens posted code in your browser. var create = function() { how to execute our package and the var editor = atom.workspace. dependencies that include a module we’ll require. They’re written in CofeeScript, but plain JavaScript can be used.

"activationCommands": { "atom-workspace": "my-package:create" }, "dependencies": { "request": "^2.6.0" }

Menus and shortcuts

iMDone: Looks throughout your project's code for comments marked TODO, FIXME and others. It then takes this information and provides a kanban-style board.

Beautify code

CLEAN UP CODE MESS

Sometimes code you encounter may not be in a very readable state, either through the way it was written or having been put through a minimisation process to reduce file size. This is where atom-beautify comes into play. While it won't reverse any obfuscation, it will work its way through the code and insert new lines, spaces and indentation to try and make every line readable again. You can do this through either the Packages>Atom-Beautify menu, using Cmd/Ctrl+Alt+B, or calling up the Command Palette through Cmd/Ctrl+Shift+P and typing 'Beautify'.
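The essence of what a beautifier does — reinstating the line breaks and indentation a minifier removed — can be sketched very crudely. This toy function is only an illustration of the idea; atom-beautify's real formatters handle strings, comments and full language grammars:

```javascript
// Extremely naive pretty-printer: breaks after { ; } and re-indents.
// Ignores strings, comments and most real syntax; illustration only.
function naiveBeautify(code) {
  var out = [], depth = 0, line = '';
  code.replace(/\s+/g, ' ').split(/([{};])/).forEach(function(tok) {
    tok = tok.trim();
    if (!tok) return;
    if (tok === '{') {
      out.push('  '.repeat(depth) + line + ' {');  // open block, indent
      depth++;
      line = '';
    } else if (tok === '}') {
      if (line) { out.push('  '.repeat(depth) + line); line = ''; }
      depth = Math.max(0, depth - 1);              // close block, dedent
      out.push('  '.repeat(depth) + '}');
    } else if (tok === ';') {
      out.push('  '.repeat(depth) + line + ';');   // one statement per line
      line = '';
    } else {
      line = tok;
    }
  });
  if (line) out.push('  '.repeat(depth) + line);
  return out.join('\n');
}
```

Running it over a one-line snippet such as 'function f(){var a=1;return a;}' spreads the body over indented lines, which is the visible effect the package produces on minified files.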

Within lib/my-package.js we can initialise the package. Request will be used within our package, and shell is part of Atom that we will use to open a link in the browser. Then we register our command with Atom.

Our new package will be accessed by right-clicking in the editor and through a keyboard shortcut. These are set within the following files:

// keymaps/my-package.json
{"atom-workspace": {"ctrl-alt-o": "my-package:create"}}

// menus/my-package.json
{"context-menu": {
  "atom-text-editor": [{
    "label": "Create Gist",
    "command": "my-package:create"
  }]
}}

Git and Atom USING GIT-BASED SOURCE CONTROL Since GitHub is behind the Atom editor, you’d expect it to support Git in some way. By default, Atom provides a useful insight into changes you’re making to a file when compared to what Git has recorded. Along the side of the editor window, lines in the file will be highlighted green if they are completely new lines or yellow if considered a modification of a line. Via the

getActivePaneItem();
  var selection = editor.getLastSelection();
  var options = {
    url: '',
    headers: {
      Accept: 'application/vnd.github.v3+json',
      'User-Agent': 'Atom Package Gister',
    },
    json: true,
    body: {public: true, files: {}},
  };
  options.body.files[editor.getTitle()] = {
    content: selection.getText(),
  };
  return request.post(options, function(err, resp, body) {
    shell.openExternal(body.html_url);
  });
};

module.exports = {activate: activate, create: create};

Command Palette or Packages>Git Diff, you can access a list of these modifications and jump between them. If you're hosting your code on GitHub, the built-in 'Open on GitHub' package will provide many useful shortcuts. The Atom community has also put together packages like Git-Plus, useful for providing access to certain Git commands from within Atom, avoiding the need to switch to a terminal to commit changes. Merge-conflicts provides a UI within the editor to deal with conflicting changes straightforwardly when merging code branches in Git.
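Conceptually, the gutter colours described above come from classifying each line of the working file against the last committed version. A deliberately naive sketch of that classification — real diff tools use proper algorithms (such as Myers' diff) and can also distinguish modified lines:

```javascript
// Naive gutter-status sketch: a working-copy line is 'new' if it does
// not appear anywhere in the committed version, else 'unchanged'.
// Illustration only; real diffs track positions and modifications.
function gutterStatus(committedLines, workingLines) {
  var committed = new Set(committedLines);
  return workingLines.map(function(line) {
    return {line: line, status: committed.has(line) ? 'unchanged' : 'new'};
  });
}
```

An editor would then paint each 'new' entry green in the gutter beside the corresponding line.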



Create desktop applications with Electron

Use HTML, CSS and JavaScript to create cross-platform desktop applications


// Create desktop applications with Electron


GitHub have created an open source project called Electron (formerly Atom-Shell) that combines Chromium, on which Google's Chrome is built, and Node.js, a JavaScript environment designed for building applications. The result enables the building of cross-platform desktop applications using HTML, CSS and JavaScript. Companies such as Microsoft, Facebook, Slack and more have started to make use of Electron.

Electron applications normally start out with a single JavaScript file which is executed initially. The code within it has control over the main process, can create and control new application windows and listen for important events. Each window that is created can be set to render your UI through the use of HTML, CSS and JavaScript. The JavaScript executed within a window can also access special Electron APIs and the Node.js API.

The Chromium part of Electron is kept up to date with the latest releases, enabling use of the latest HTML/CSS features and emerging JavaScript standards. The Node.js environment is also well maintained, to the point where it's currently using a fork of Node.js called io.js, which is capable of supporting ES6 JavaScript syntax and can make use of npm registry modules.

1. Prebuilt Electron binaries Compiling Electron from scratch isn’t required for building applications. The project makes available precompiled versions of Electron for multiple operating systems, which can be easily installed using Node.js’ npm package manager (available via

$ npm install -g electron-prebuilt

2. Test Electron Electron should now be available on your system. This can be confirmed by running the ‘electron’ command,

which will display a standard message for the application – just ignore this message for now and close the window.

$ electron

    window = null;
  });
});

3. Use package.json

5. Test the app

Package.json is required for Electron to know which file to execute for the application. It will also store the name of your application and your version, and ‘npm’ will add the details of any modules that you install and use. In this tutorial, the application will be used for editing Markdown-based files.

To enable us to test the application so far, an HTML file can be created containing some simple HTML for now. After adding the code below, execute the command 'electron ./' within the same directory as the package.json file created earlier.

{
  "name": "wdMarkdown",
  "main": "index.js",
  "version": "0.1.0"
}
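Conceptually, the launcher only has to parse this file and read its main field, with index.js as the conventional Node-style default. An illustrative sketch, not Electron's actual resolution code:

```javascript
// Sketch of how an Electron-style launcher could resolve the entry
// script from package.json text. Purely illustrative; the fallback
// mirrors Node's conventional 'index.js' default.
function resolveEntry(packageJsonText) {
  var pkg = JSON.parse(packageJsonText);
  return pkg.main || 'index.js';
}
```

Feeding it the package.json above yields 'index.js', which is exactly the file Electron executes first for this application.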

4. The application and index.js The index.js file is the core of the application and will create our first application window to display the application. Depending on the type of application being developed, this file will usually contain the core of your application and do all the heavy lifting required. By using the ‘app’ object, we can find out when Electron is ready, and a browser window (appearing as a normal application window) can then be created.

<html><body>
  <h1>Hello World, from Electron</h1>
</body></html>

6. Install Bower Our application has a few dependencies for the UI that will be constructed. By installing Bower, these dependencies can be easily retrieved along with anything else they may require and Bower could be used to keep them up to date.

$ npm install -g bower

7. Bootstrap framework In order to have something to display, a UI needs to be created using HTML and CSS. As a base theme,

var app = require('app');
var BrowserWindow = require('browser-window');
var window = null;
app.on('ready', function() {
  window = new BrowserWindow({width: 800, height: 600});
  window.loadUrl('file://' + __dirname + '/root/index.html');
  window.on('closed', function() {

Desktop applications

For fuller and more complex applications, consider using npm modules for databases, network communications and more, plus a framework for your UI from the likes of React, AngularJS or Backbone.


By using the view menu we can have the editor switch between viewing the original Markdown content and a rendered HTML version of the content as a preview

Top left: Here is the final UI HTML page. Its primary purpose is to include all the JavaScript files needed to construct the UI, Bootstrap for the appearance and our main JavaScript file

Top right: Here is what the application will look like when editing a Markdown document. The tab shows the title of the file, and the editor contains the raw Markdown content


Bootstrap can be useful as it provides many useful components and styles. More advanced applications may take advantage of AngularJS or React as well. By using Bower to retrieve a copy of Bootstrap we'll also have a copy of jQuery included, as it's required.

$ bower install bootstrap

8. Markdown rendering The app will be used for editing Markdown files. Markdown is a widely used markup language used for writing formatted text in a plain text file. It can then be processed by a library such as ‘Marked’ into HTML.

$ bower install marked
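To give a feel for the kind of transform Marked performs, here is a toy converter that handles just two Markdown constructs; the real library implements the full grammar:

```javascript
// Toy Markdown-to-HTML converter: handles only "# heading" lines and
// **bold** spans, to illustrate what a library like Marked does.
function miniMarkdown(src) {
  return src.split('\n').map(function(line) {
    var h = line.match(/^#\s+(.*)$/);
    if (h) return '<h1>' + h[1] + '</h1>';          // heading line
    return '<p>' + line.replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>') + '</p>';
  }).join('\n');
}
```

In the app itself this conversion step is delegated to Marked, whose output is then injected into the preview pane.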

issue with jQuery being unsure what kind of environment it is operating within when inside Electron.

11. Styling The Markdown editor being constructed will provide an edit area for altering the content and a preview of the HTML generated. As the app will be kept simple, the editor and preview areas will fill the application, minus an area for tabs to switch between files.

.editor, .preview {
  position: absolute !important;
  top: 58px;
  right: 0;
  bottom: 0;
  left: 0;
  z-index: 5;
}
.preview {
  display: none;
  overflow: auto;
  z-index: 10;
  background-color: white;
}

9. Edit Markdown with Ace The Ace editor project is a feature-packed code editor that’s written in JavaScript and can be used within web pages. For this tutorial, we will be utilising Ace editor for editing the Markdown files as it can provide us with things like line numbering, syntax highlighting and other very useful features.

$ bower install ace-builds

10. Include Bootstrap and JS Next, make use of Bootstrap, Ace editor and Marked by including them in our index.html file. Make note of the alternative way jQuery is included, which is due to an

Debugging Electron: To debug any window you have created using Electron, call the .openDevTools() method on the instance of that window. This will open a developer tools window.

Top left: Microsoft have recently released their own editor, built on top of Electron, called Visual Studio Code

Top right: Atom is GitHub's programming editor and is where Electron started out as part of the project

Right: The final version of the editor, showing the multiple tabs and some Markdown formatted content


14. FileNew function To create a file or handle opening an existing one, our UI needs to create a new tab and pane, as well as set a relevant title and content. We will use FileNew for this.

12. Make the tabs

Bootstrap provides the styling and JavaScript required to provide the tabbed interface used for the application. Each tab will represent a file and clicking between them will display a related editor or preview. For now, we only need very basic HTML as we'll build the tabs and related panes using JavaScript.

<div role="tabpanel">
  <ul class="nav nav-tabs" id="fileTabs"></ul>
  <div class="tab-content" id="filePanes"></div>
</div>

13. root/app.js


var ipc = require('ipc');
var files = {}, rollingId = 0;
var tabs = $('#fileTabs'), panes = $('#filePanes');
var ActiveTab = function() {
  return tabs.find('.active a').attr('href').slice(1);
};

This file will hold our main UI code. Using the ipc module, it can subscribe and publish events for communication with the main JavaScript process, that Electron executes (./index.js). We’ll need to identify files edited and store additional information, plus references to the tab and pane areas of the UI. Also, we will need a small function to grab the ID of the currently active tab.

15. Use Ace

With the tab, pane and content now ready, it's time to fire the Ace editor into action. Then we can store some information for easier referencing and switch to the newly created tab.

16. Menu events Now we execute the FileNew function once, present a new file tab on opening, and then listen for menu events sent from the main application process. The menu events, which we’ll create soon, are for creating a new file and opening an existing one.

FileNew();
ipc.on('file-new', FileNew);
ipc.on('file-open', FileNew);

17. Menus At this point, our Electron app should start up and present a fresh new tab to edit content in. However, we can’t open any more or save anything we create. So in this case, a menu is required and it’ll be built within the main ./index.js file. The ‘fs’ and ‘path’ modules from Node.

// Create desktop applications with Electron

Packaging applications

Through Electron's 'crash-reporter' module, you can configure your application to let users submit crashes to a website under your control. Your application code can also be archived into a more protective ASAR file before being distributed. This requires use of the 'asar' command installed through npm: 'npm install -g asar'. Electron currently only supports auto-updating on OS X through its 'auto-updater' module, but there's no reason why you couldn't implement a check against a URL for newer versions of your app and then notify the user. GitHub provides prebuilt versions of Electron at atom/electron/releases. Your app code can be combined with these releases and redistributed.

js will be used for handling files. ‘Dialog’ and ‘menu’ provide opening/saving dialog boxes and an application menu bar.

/* Below 'window = null' */
var fs = require('fs'), path = require('path');
var ipc = require('ipc');
var dialog = require('dialog'), Menu = require('menu');

/* Within "app.on('ready')" */
var SendEvent = function(name) {
  return function() { window.webContents.send(name); };
};
var template = [
  {label: 'File', submenu: [
    {label: 'New', click: SendEvent('file-new')},
    {label: 'Open', click: OpenFile},
    {label: 'Save', click: SendEvent('file-save')},
    {label: 'Save As', click: SendEvent('file-save-as')},
    {label: 'Close', click: SendEvent('file-close')},
    {type: 'separator'},
    {label: 'Quit', click: function() { app.quit(); }}
  ]},
  {label: 'View', submenu: [
    {label: 'HTML/Markdown', click: SendEvent('view-toggle')}
  ]}

];
Menu.setApplicationMenu(Menu.buildFromTemplate(template));

18. Open a file Within the ./index.js file, we can now add a function (already referred to in our menu) to show a dialog (provided by Electron), open the file and pass the data back to our UI for display. At this point, executing our app will provide us with a menu where we can open a new file and have it displayed in our editor.

19. Pass file data Saving a file requires a few more steps. When the save/ save as menu item is clicked, we send an event to the UI. The UI holds the data that we’re going to save, so it must listen for those events and send the data back to the main process for us to display a dialog and create/update the related file.

20. Write a file Our UI is sending a file-save event back to our main process (./index.js), providing the reference ID, content and path (new files don’t have one). We’ll need to listen for this event, saving the file directly if a path is provided or prompting for a location (save-as) first before creating or overwriting a file with the content.

ipc.on('file-save', function(event, data, id, type, filepath) {
  if (filepath) {
    return fs.writeFile(filepath, data, function(err) {

      if (err) return console.error(err);
    });
  }
  dialog.showSaveDialog(window, {
    filters: [{
      name: 'Markdown',
      extensions: ['md', 'markdown']
    }]
  }, function(filepath) {
    if (filepath && filepath.length > 0) {
      fs.writeFile(filepath, data, function(err) {
        if (err) return console.error(err);
        window.webContents.send('file-saved', id, path.basename(filepath), filepath);
      });
    }
  });
});

21. Rendering Markdown is great for quickly producing text and indicating how it’s formatted, but it’s more useful when you can see how it looks when it’s rendered. Our menu contains an option to switch to a rendered view so our UI needs to listen for it, grab the editor content, pass it through ‘marked’ for conversion and output it for display.
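Stripped of the DOM work, the toggle amounts to one piece of state per file plus the conversion step. A sketch, with a hypothetical render callback standing in for the marked library:

```javascript
// Sketch of the view-toggle logic: each file tracks whether it is
// showing raw Markdown or rendered HTML. 'render' is a stand-in for
// the marked library, not part of the app's real code.
function makeViewToggle(render) {
  var mode = 'markdown';
  return function toggle(source) {
    mode = (mode === 'markdown') ? 'html' : 'markdown';
    return (mode === 'html')
      ? {mode: mode, content: render(source)}   // show rendered preview
      : {mode: mode, content: source};          // show raw Markdown
  };
}
```

Each 'view-toggle' event from the menu would call the returned function for the active tab and swap the visible pane accordingly.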

22. Closure Now we need to close a tab. Listen for the menu event within the UI; after that it's just a matter of having the editor clean itself up and remove the tab and pane elements. Finally, switch to another tab if one exists. For the full code in this tutorial, make sure you go to FileSilo.



Build desktop apps with NW.js and JavaScript Use NW.js to create JavaScript-powered desktop apps that can be run on OS X, Linux and Windows


// Build desktop apps with NW.js and JavaScript


Over the last 25 years, the browser has opened new windows (see what we did there) on content and code distribution. Whole ecosystems have sprung up to support the creation and deployment of apps that can be accessed with a simple URL. Despite how great the browser has become at integrating itself into operating systems, we still sometimes find that we want an app that we can install on our system and run at any time, with just that extra little bit of power. What we need is something with the low-level access of Node, the ease of development of browsers and a way to package it all up as an app that can be run on our traditional OS like any regular, native app. That's what NW.js (formerly known as node-webkit) is. NW.js bundles io.js and the WebKit browser engine together in a way that lets us access the full power of Node through the DOM. This then enables us to write web apps and also to run them in self-contained windows that look almost exactly like OS-native apps. In this tutorial, we're going to set up NW.js and create a simple to-do app to show how simple it is to write native apps with JavaScript and NW.js. We will be doing this for Mac OS X.

1. Get NW.js Installing NW.js is pretty simple: all we need to do is head on over to GitHub ( and grab the build for your OS. If we were so inclined we could build NW.js from scratch, and that may be advisable if you want to be certain of stability, but for learning, the prebuilt binaries will do. For this tutorial the 32-bit release of NW.js 0.12.1 was used and tested on OS X 10.10.3.

2. Relocating the NW.js executable Unzip the package we just downloaded. It contains the prebuilt NW.js executable that we can use to run our apps. When we get to testing and deploying our apps, we’ll use the nwjs app bundle included in this folder, so move the entire folder to a safe, memorable place.

<!DOCTYPE html>
<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<title>To Do</title>
<link rel="shortcut icon" href="favicon.ico">
<link rel="stylesheet" href="styles.css" type="text/css" />
<meta name="viewport" content="initial-scale=1.0, user-scalable=no" />
</head>
<body>

3. Create a NW.js project NW.js apps are super-simple with few prerequisite dependencies, so there's no need for a project creation tool like those found in, say, Cordova. Create a new folder somewhere on your system, then create an HTML file called 'index.html' and insert the code below. This will make up the UI of our app and load the NW.js code to handle the creation of our window.

Compiling code There is no compiling per se in NW.js, which means that you may want to keep your source code from prying eyes. There are ways to make it quite hard to get at your source, documented on NW.js' GitHub wiki (


We don't want to have to bundle things every time we want to test our app. Instead, we can point the NW.js binary to our project and run it in terminal
Top left: When we first run our app, it looks a lot like a browser. We will need to tweak the manifest file
Top right: Once we've adjusted our manifest properly, our app starts to look a lot more like it belongs on our system


<h1>Hello World</h1>
</body>
</html>

4. Create the manifest Just like an io.js app, NW.js uses a JSON manifest to get the wheels rolling with our app. Create a file called ‘package.json’ in the same folder as your index.html. This is the root of your NW.js, all URIs are relative to the package.json file.

5. A simple manifest Open up your new manifest file. The simplest manifest requires two properties, a main and a title. The main is the file that our app will launch when it’s first run, so in

Node-webkit NW.js is, in essence, Node WebKit. The name change accompanied the migration of the project from Node.js to io.js. Rumour is, all the cool kids call it n-dubz for short – enn-double-you-dot-jayess is such a mouthful.

Top left: When we position our window with code, the top-left corner is positioned relative to the top and left
Top right: We need to navigate the hierarchy to package our NW.js app properly
Right: Making ICNS files is a bit of a pain; fortunately Image2icon has us covered, but only if you distribute your app on OS X


our case it's index.html. There are other properties that can affect how our app runs, but for now add this:

{ "title" : "To Do Today", "main" : "index.html" }

6. Point the app When we’re testing out our app, we shouldn’t have to package and repackage it every time we correct code. We can use the NW.js executable to run our app by pointing it to the root of our project. Open up a terminal and navigate to the folder downloaded from GitHub.

7. Launch our app The NW.js executable is buried in the NW.js app in our folder. The executable is the program where the io.js and NW.js code lives. With NW.js, we don’t change the app, we create projects that run with it. To run the app, enter:

$ /path/to/your/project/root

8. First impressions A new window will have opened and a NW.js icon will have appeared in your dock. We’re running our first NW.js app, and we can see our HTML file – but it still looks like a browser window. To make our app look a little more like it belongs, we can remove certain elements with our package.json manifest.

9. Enhance our manifest We’ve got some system chrome we need to get rid of now! Just after the ‘main’ property of your manifest, add a ‘window’ object and add the following properties to it:

{ "title" : "To Do Today", "main" : "index.html"


Node modules NW.js is, in part, io.js. That means that you can use modules! Well, at least in principle you can anyway. NW.js can run on any of the major platforms, but just like a native app, your packaging process will of course vary depending on the operating system that you’re actually targeting. You can’t run an EXE on a Mac, just like you can’t mount a DMG on Windows for example. If you want to make use of io.js modules that use a language other than common JavaScript, then it’s very likely that you will have to compile your package on the system that you’re targeting. That being said though, if you’re only using Node modules written in common JavaScript (like the fantastic moment.js module, for example), they should work straight out of the box.

"window" : {
  "toolbar" : false,
  "frame" : true,
  "width" : 480,
  "height" : 320,
  "show" : true,
  "resizable" : true
}
}

The window object affects how our window behaves. We can remove the frame, define height, make it persistent in all workspaces, but for now these properties set us up.

10. Create our GUI Now if you repeat Step 7, when our app launches you will see that we no longer have the URL bar and that our window is the width and height we defined. All that's left of the system chrome are the close, minimise and maximise buttons and the title. We could remove these too, but they're handy for UX purposes, so we'll leave them in.

11. Add scripts It’s time to start putting the component parts of our GUI together. Open up index.html and amend it like this:

<!DOCTYPE html>
<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<title>To Do</title>
<link rel="shortcut icon" href="favicon.ico">
<link rel="stylesheet" href="styles.css" type="text/css" />
<meta name="viewport" content="initial-scale=1.0, user-scalable=no" />
</head>
<body>
<ol id="list"> </ol>
<div id="UI">
<div id="delete" class="button"> </div>
<div id="add" class="button"> </div>
</div>
<script src="scripts/core.js"></script>
<script src="scripts/window.js"></script>
</body>
</html>


This is the markup for our list. The app logic isn't what we're focusing on, so we won't write core.js or styles.css from scratch – you can download them from FileSilo.

12. Access NW.js globals We'll now write window.js to ease into NW.js coding. Create a folder called 'scripts' in your root folder and a file called 'window.js' in it. Here, we're going to affect the app window with events and listeners for that native feel.

13. NW.js GUI and windows Just like a Node app, we can use require() to access node modules and NW.js globals. In order to manipulate the properties of the window and listen to events, we need to require the GUI module from NW.js and abstract the window away from it. Insert the following into window.js:

var gui = require('nw.gui'), win = gui.Window.get();

14. The menu bar NW.js is platform agnostic, but there are small OS tweaks we can make for each platform that make the app look more at home. OS X, for example, has the menu bar with the About and Edit dialogs that are currently missing. Adding the following to window.js will adjust our to-do app's menu bar with more relevant information.

var nativeMenuBar = new gui.Menu({ type: "menubar" });
nativeMenuBar.createMacBuiltin("ToDo", {
  // hideEdit: true,
  hideWindow: true
});

15. Hide our window When our app loads, we see a quick flash of white before the content of our page is loaded and initialised. In package.json change the value of 'show' to false instead of true. In window.js add win.show(); to the end of the file.

Now when our app loads, the scripts will be initialised and the window will only show once they have finished loading.

16. Position the window When the app is ready we can position the window. When we open a native app on OS X, it's generally at the same x and y coordinates as when you closed it. We can reproduce this behaviour – just insert the following:

var initialPosition = { top : 50, left : 50 };
var savedPosition = JSON.parse(localStorage.getItem('windowposition'));
win.on('move', function(){

Node.js and io.js, What’s going on? Node is backed by a company called Joyent – this has been great for many reasons, but a lot of people have felt that the speed at which new JavaScript features and optimisations are being integrated into Node is too slow. These feelings led to io.js – a fork of the Node source code, but maintained entirely by the community. Io.js started to implement new features (like ES6 syntax) straight away, and with the benefits of a rapid-release cycle being well known to the developer community, many projects, like NW.js, migrated to io.js. Now, Joyent and the guys in charge of io.js have agreed to merge all of the changes in io.js back into Node.js and release and develop Node under the newly created Node Foundation.



localStorage.setItem('windowposition', JSON.stringify({left : win.x, top: win.y}));
});
if(savedPosition !== null){
  win.moveTo(savedPosition.left, savedPosition.top);
} else {
  win.moveTo(initialPosition.left, initialPosition.top);
}

On our win object, which is the reference to the NW.js window, we've added an event listener. Every time our user moves their window, the 'move' event is dispatched to win. When this happens, our callback is called and we can save the coordinates that the window has been moved to in localStorage. Now, whenever we move our window, its coordinates will be saved in localStorage and the window will be restored to its previous position on the next launch.

17. Negative numbers If our user happens to have multiple screens and our app is positioned on one of them, when the screen is unplugged our app could end up stuck where we can't see it! If our stored coordinates are less than 0, we're going to override them and place our window back on the main screen. Just after if(savedPosition !== null){, add the following:

if(savedPosition !== null){
  if(savedPosition.left < 0){ savedPosition.left = 50; }
  if(savedPosition.top < 0){ savedPosition.top = 50; }

18. Resize the window Just as our window is always in the same place that we left it, it should also be the same size right? Well let’s do this now. We’re going to repeat what we’ve done with the move event and localStorage, but this time we’re going to listen for the resize event instead. When our window is resized, we’ll save the dimensions to localStorage. When we restart our app, we’ll set the dimensions of our app to the dimensions stored.

var savedPosition = JSON.parse(localStorage.getItem('windowposition')),
    savedSize = JSON.parse(localStorage.getItem('windowsize'));
win.on('move', function(){
  localStorage.setItem('windowposition', JSON.stringify({left : win.x, top: win.y}));
});
win.on('resize', function(width, height){
  localStorage.setItem('windowsize', JSON.stringify({width : width, height : height}));
});

if(savedPosition !== null){
  if(savedPosition.left < 0){ savedPosition.left = 50; }
  if(savedPosition.top < 0){ savedPosition.top = 50; }
  win.moveTo(savedPosition.left, savedPosition.top);
} else {
  win.moveTo(initialPosition.left, initialPosition.top);
}
if(savedSize !== null){
  win.resizeTo(savedSize.width, savedSize.height);
}
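Steps 16–18 boil down to a small pure rule: fall back to defaults when nothing is saved, and pull negative (off-screen) coordinates back to 50. Factored out as a sketch (the function name and shape are ours, not from the tutorial's window.js), that rule is easy to test in isolation:

```javascript
// Return safe window coordinates from a possibly-null saved position.
function restorePosition(saved, defaults) {
  if (saved === null) return { left: defaults.left, top: defaults.top };
  return {
    left: saved.left < 0 ? 50 : saved.left,
    top: saved.top < 0 ? 50 : saved.top
  };
}
```

The result would then be handed to win.moveTo(pos.left, pos.top).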

19. Click to lock At the start of this tutorial, you should have downloaded core.js from FileSilo and included it in our scripts folder. Now we're going to add a little bit of code to it. In our app, you should see an unlocked padlock in the bottom-right corner of the window. It doesn't do anything at all right now, but when we're finished it will be a toggle for locking our ToDo app on top of all other windows, so that it is visible across all of our workspaces.

20. Add the lock listener Open up core.js for editing and scroll down until you find the addEvents function. Just before the end of that function, add the following:

document.getElementById('lock').addEventListener('click', function(){
  var isLocked = this.getAttribute('data-window-is-locked');
  if(isLocked === "false"){
    win.setAlwaysOnTop(true);
    win.setVisibleOnAllWorkspaces(true);
    this.setAttribute('data-window-is-locked', "true");
  } else {
    win.setAlwaysOnTop(false);
    win.setVisibleOnAllWorkspaces(false);
    this.setAttribute('data-window-is-locked', "false");
  }
}, false);

Now, if we restart our app and click the padlock icon, it will lock, and if you click another window or move to another workspace, ToDo will still be on top for us to see. If we click the padlock again, our window will stay on the workspace it's currently on and will disappear behind any other window that has focus.

21. What did we just do? Our package.json file sets up how our window looks and behaves when we first load it, but as we've seen with the moving and resizing code we wrote a little earlier, we can override these properties with JavaScript if we want to, even after the window has been shown. Our win variable is in the global scope, so it is still possible for us to access it from our app and use DOM events to change its properties – even though it's technically a Node.js property. When we click on our padlock, we check the window-is-locked data attribute on the DOM element to get the window's current state. If our window isn't locked, we can use win.setAlwaysOnTop(true) and win.setVisibleOnAllWorkspaces(true) to lock it, and vice versa.

22. Package our app If you run the app now, you'll have a fully functioning, persistent to-do list. You can minimise it, close it and resize it. Now we need to package it. Go to the folder you downloaded from GitHub and relocated earlier. Copy the nwjs app bundle into the root of your project folder and right-click it. Select 'Show Package Contents' and navigate to Contents>Resources.

23. Copy files Open another Finder window and copy all of the files we've created (index.html, styles.css, the scripts folder, package.json and all the others) into the newly created app.nw folder. Now if you go back to your project root and double-click NWJS, your app will be run by default.

24. Edit PLIST Our app still looks like a NW.js app. To affect some changes like the name and the icon, we need to change the app's PLIST file. Right-click NWJS again and select 'Show Package Contents', then navigate back to Contents. Open the PLIST file. If you have Xcode, it will open its editor; otherwise you can modify it with any text editor. Change the 'Bundle name' and 'Bundle display name' to 'ToDo' and save.
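The padlock logic above is a two-state toggle around two NW.js window calls. Pulled out of the DOM handler as a sketch (the helper name is ours; win is stubbed in the example, but setAlwaysOnTop and setVisibleOnAllWorkspaces are the real NW.js method names used in the tutorial), it can be exercised directly:

```javascript
// Flip the lock state, applying it via the NW.js window methods.
function toggleLock(win, isLocked) {
  var next = !isLocked;
  win.setAlwaysOnTop(next);
  win.setVisibleOnAllWorkspaces(next);
  return next;
}
```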

25. Create ICNS The last touch to make any app look at home is the icon it uses. You can either create your own using something like Image2icon or you can use the ICNS file included in our files – these are available for download from FileSilo. Simply copy it into Contents>Resources.

26. Affect the changes The icon and name changes may not take effect straight away. You can relaunch Finder, or you can change the name of the .app in your project root to anything and then back to 'ToDo'. Then the icon and bundle name should update.

27. Distribute our app We now have a fully functioning, JavaScript-powered OS X app. You can now move the .app file anywhere on your or anyone else's system and fire it up, just like a regular app. When running your app on other people's systems, they may need to tweak their security settings to enable your app to run. To do so, open System Preferences>Security & Privacy and allow apps downloaded from outside the Mac App Store to be run. Now, move ToDo into your Applications folder and give it a whirl.



Get robust JavaScript code with the TypeScript library Structure your JS code more rigidly than ever before with the help of Microsoft’s TypeScript


// Robust JavaScript code with TypeScript library


JavaScript's liberal syntax declaration lets developers do all kinds of weird and wonderful things. Programmers used to restrictive class-based systems will need to acquaint themselves with the new working environment. Google engineer Addy Osmani has collected design patterns that help this process – when, and only when, they are applied correctly. Duck typing is even worse, though: it permits the creation of bugs which occur only when the code path in question is executed. Murphy's law then ensures that this will happen on the computer of an important client. TypeScript solves these problems by creating a statically typed dialect of JavaScript which is verified via a more stringent execution environment. Transgressions are punished mercilessly; defective code never makes it to the browser. Microsoft does its magic via a process called transpilation. A special compiler transforms the TypeScript code to vanilla JavaScript, which can then be run in a JavaScript VM of choice. State-of-the-art JavaScript runtimes are so fast that they can be used as execution environments for arbitrary code – some developers even go as far as to transpile C++ to a JavaScript-based VM.

1. Install Visual Studio Even though TypeScript can also be run from the command line, we will use Microsoft's free version of Visual Studio 2015. Download it by visiting visualstudio.com/en-us/products/visual-studio-community-vs.aspx and install it like any other application.

2. Create a project Click New Project in order to open the new project wizard. Then proceed to the TypeScript tab, where a new solution based on the HTML application with TypeScript template is to be created. It will contain an HTML file and an accompanying TS file containing the actual TypeScript code.

<button onclick="btnClicked()">Button clicked</button> </body> //snip function btnClicked() { alert("Hello TS"); }

4. A question of typing TypeScript takes its name from its incredible typing capabilities. The snippet accompanying this step in the tutorial shows a few variable declarations. Be aware though, that variables don’t necessarily need to have a

3. Wire a button The following steps will demonstrate various features of the runtime via an ever-expanding method. It must be triggered from a button – open the HTML file and append the markup shown in the source code accompanying this step. Then, replace the window.onload block in the TS file with the function specified.

Vetoing Microsoft Developers harbouring a distaste for Visual Studio can use a Node-based transpiler to transform their TypeScript code into JavaScript. Sadly, doing so means missing out on various comfort features.

<body> <h1>TypeScript HTML App</h1>


Visual Studio Community combines the formerly separate web, Windows RT and desktop versions of Microsoft's popular IDE
Top left: TypeScript code resides in TS files. Visual Studio transforms them into temporary JavaScript during the deployment of the web app
Top right: The transpilation process mercilessly weeds out all kinds of errors which traditional JavaScript developers can find only at runtime


type – if any is specified, the variable in question will not be subject to input validation.

function btnClicked() { var myString: String; var myNumber: Number; var myBool: Boolean; var myAny: any; }

5. Validation TypeScript’s capabilities can be validated by assigning invalid elements to the variables created in the previous step. Visual Studio will then flag them down the very moment the file has successfully been saved. It will,

The playground Individuals adverse to both Visual Studio and Node.js can visit the TypeScript Playground ( It provides a hosted version of the TypeScript transpiler.

furthermore, prevent deployment until any and all objectionable passages have been remedied.

6. Create an enum If variables are to contain but a few predefined values, using an enum is the way to go. The snippet accompanying this step demonstrates the definition of an enum handling aircraft types. After that, a new instance of the enum is created and a value is assigned in order to demonstrate its handling.

enum Aircraft { MIG21, MIG25, MIG29, MIG31 } function btnClicked() { var plane: Aircraft; plane = Aircraft.MIG25; }

7. Create a class For this next step, we will start working on real classes. Remove the declaration of the greeter class, and then subsequently replace it with the following bit of code. Our AircraftManager class contains a member variable, a member function as well as a constructor which is used to set its value during the initialisation of the object instance.

Top left: Thanks to inheritance, the BetterManager can easily change the message that is displayed in its instance's sayHello() function
Top right: Invoking a generic function with an invalid parameter is punished with yet another transpiler warning
Right: Using the private keyword permits developers to hide implementation details from pesky callers that want to wreak havoc in their code

class AircraftManager { myWhatAmI: Aircraft; constructor(aWhat:Aircraft) { this.myWhatAmI=aWhat; } sayHello() { alert("Hello"); } }

8. Spawn an instance The process of spawning an instance is as easy as invoking the new operator on the class name, which can furthermore be used as a variable type. By and large, any objects that are spawned from TypeScript classes behave just like normal JavaScript objects – accessing their members and member functions via the . operator is a no-brainer.


Type – Contains what
Any – Accepts any JavaScript variable, thereby disabling TypeScript type checking
Array – An array of elements
Boolean – True or false
Enum – An enumeration of numeric values, addressable via their assigned names
Number – A numeric value
String – Text
Void – Designates that a method returns nothing, and thus cannot stand on the right side of an =

To type where no one typed before The table accompanying this step provides a rough overview of the types found in TypeScript. Developers experienced with Java or C will note the lack of specific numeric types: this is due to the reliance on the common JavaScript interpretation core. It cannot keep integers and floats apart, and this weakness propagates to TypeScript at runtime. Strings share a similar fate – they are but a bit of syntactic sugar attached to a normal JavaScript var. Void is interesting from a psychological standpoint: it informs the compiler that the element at hand will return nothing. This bit of metadata is helpful as it ensures that such methods never find themselves on the right-hand side of an equals operator.
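The boxout's point about numeric types is easy to verify in plain JavaScript, which is what TypeScript's Number erases to at runtime:

```javascript
// JavaScript has a single number type, so integers and floats are
// indistinguishable at runtime - TypeScript inherits this.
var int = 3;
var float = 3.0;
console.log(typeof int, typeof float, int === float); // number number true
```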

function btnClicked() { var planeManager: AircraftManager; planeManager = new AircraftManager(Aircraft.MIG21); planeManager.sayHello(); }

9. Hide a member TypeScript treats all members of a class as public by default. This behaviour can be modified by making use of the private keyword, which can be applied to both member functions and variables – in both cases, external access is no longer permitted by the transpiler.

class AircraftManager {
  private myWhatAmI: Aircraft;
  constructor(aWhat:Aircraft) {
    this.myWhatAmI=aWhat;
  }
  private sayHello() {
    alert("Hello");
  }
}

10. Yield results TypeScript's zealous quality-control algorithms can, of course, also catch all kinds of class-related oddness. We can try this out in two ways: by attempting to invoke the constructor with a wrongly typed parameter, or via an access attempt addressed at a private variable.

11. Check values Accessors are among the oldest structuring aides known to programmers. The snippet below demonstrates adding a property which checks the values passed into it before committing them to the data store found inside the class. The getter could, in theory, also be expanded to modify the values returned.

class AircraftManager { private _WhatAmI: Aircraft; get myWhatAmI(): string { return "An Aircraft"; } set myWhatAmI(what: string) { if (what == "MIG21") { this._WhatAmI = Aircraft.MIG21;


} }

} . . .

12. Function cut short Typing time is one of the most important 'time wasters' that can be encountered during software development. TypeScript permits you to set sensible defaults for parameters, which can then be invoked with a shorter parameter list. Please be aware, though, that all non-optional parameters will need to be in front of the optional ones.

class AircraftManager {
  private _WhatAmI: Aircraft;
  . . .
  constructor(aWhat: Aircraft = Aircraft.MIG21) {
    this._WhatAmI=aWhat;
  }
}

13. Creation of parameters Functions with a variable number of parameters can be extraordinarily helpful at solving some rarely encountered problems. TypeScript provides a facility which makes the creation of variable-parameter functions really easy – just take a look at the code accompanying this step below.

function buildName(first: string, ...rest: string[]) {
  return first + " " + rest.join(" ");
}

14. Shared static elements Static elements are shared between all instances of a class. This commonly maligned pattern can potentially lead to some brittle code if it is overused, but it can also be really helpful when it is applied correctly. TypeScript delights its users with the presence of a static keyword, and this behaves as expected:

class AircraftManager { . . . static sayHello() { alert("Hello"); } } function btnClicked() { AircraftManager.sayHello(); }

Learn more about TypeScript Even though TypeScript is not particularly complex, it cannot be covered in five pages worth of tutorial. Developers interested in finding out more about their working environment should visit the TypeScript tutorial found at Microsoft’s endorsement is helpful in that it ensures the presence of a lively and active community: many if not most questions can be answered via Google. Newbies and professionals facing doubts should visit StackOverflow in order to find help and consultation from their peers.



15. Inheritance Making classes inherit from one another simplifies the modelling of real-world relationships. Derived classes can, furthermore, overwrite the behaviour of their hosts – for example, the BetterManager replaces the message that is shown by the normal sayHello() function when invoked:

class AircraftManager { . . . sayHello() { alert("Hello"); } } class BetterManager extends AircraftManager { sayHello() { super.sayHello(); alert("Greetings from the better Manager!"); } } function btnClicked() { var aManager: BetterManager; aManager = new BetterManager(); aManager.sayHello(); }

16. Access mother object TypeScript does not hide the mother object instance from view. It can, instead, be accessed via the super keyword. Our snippet below demonstrates the usage of super in a constructor and in a member function – it could not be easier.

class BetterManager extends AircraftManager { sayHello() { super.sayHello(); alert("Greetings from the better Manager!"); } }

17. Enforce presence via interfaces JavaScript's duck typing is a never-ending source of pain because everything is valid for everything – until a NullReferenceException occurs, that is. Using an interface enables developers to specify the presence of member variables and/or functions – any elements that do not conform are not allowed to pass.

interface AnObject { myName: String; myNumber: Number; }
function worker(a: AnObject) { }
function btnClicked() {
  var anObject = { myName: "AnElement", myNumber: 22 };
  worker(anObject);
}

18. Make parts optional An old adage states that exceptions can prove the validity of rules. Interfaces can be configured to contain optional members, and of course these don't necessarily need to be implemented. This feature can bite, unfortunately: crashes can occur if the callee forgets to check the presence of the implementation.

interface AnObject { myName: String; myNumber?: Number; }
function worker(a: AnObject) { }
function btnClicked() {
  var anObject = { myName: "AnElement"};
  worker(anObject);
}

19. Implement interfaces Classes can be designated as implementations of particular interfaces. This is accomplished with the implements keyword: if 'implements' is present and the class is missing declarations required by the interface, a compiler error will be raised in order to notify the developer about the omission.

interface AnObject { myName: String; myNumber?: Number; }
class SomeClass implements AnObject {
  myName: String;
  myNumber: Number;
}

20. Modularise The TypeScript transpiler does not merge the individual parts of a library into one file – instead, each TS file gets transformed into an individual JS file. Do be aware, though, that each JS file must be included in the website separately in order to be able to use the module's content.

21. Go generic Developers can side-step the type-checking process entirely by making use of the any keyword. Generics provide a safer way of creating type-agnostic classes and/or functions: the type parameter informs the compiler about the type that is going to be used by a particular instance, and this can then be enforced zealously.

function genericF<T>(myVal: T) { return myVal;

} function btnClicked() { genericF<Number>(22); genericF<String>(22); }

22. Mixin on the loose Keep in mind that creating complex inheritance structures is not always the best solution. Mixins are ‘building block classes’ which provide a small set of functionality, and these mixins can then be integrated into larger classes. As an example, let us take a bit of code from the documentation which creates activity management logic.

class Activatable { isActive: boolean; activate() { this.isActive = true; } deactivate() { this.isActive = false; } }

23. Use applyMixins Mixins are instantiated via the implements keyword. The host class must contain a stub implementation, and this is then overwritten at runtime by making use of the applyMixins function.

class SmartObject implements Disposable, Activatable {
  . . .
  // Activatable
  isActive: boolean = false;
  activate: () => void;
  deactivate: () => void;
}
applyMixins(SmartObject, [Disposable, Activatable]);
var smartObj = new SmartObject();
setTimeout(() => smartObj.interact(), 1000);

24. Mix it up
The actual deployment of the mixin must be done at runtime via the function shown in the snippet below. Due to the complexity of mixins, it's important to consult the documentation before proceeding further.

function applyMixins(derivedCtor: any, baseCtors: any[]) {
  baseCtors.forEach(baseCtor => {
    Object.getOwnPropertyNames(baseCtor.prototype).forEach(name => {
      derivedCtor.prototype[name] = baseCtor.prototype[name];
    });
  });
}



Make playlists with your friends using the Last.fm API
Master Fetch and Last.fm's API by making a playlist from your and your friends' mutual tastes


// Make playlists using the Last.fm API


Music is both universal and incredibly subjective. What one person considers the epitome of human creative achievement another may dismiss as bad taste. In this tutorial we're going to attempt to bridge the gap by using data from Last.fm to create playlists of songs you and a friend have in common. Last.fm was founded in 2002 by two university students and has since been bought out by CBS Interactive. It offers a place to store what you've listened to and has a number of API services that enable developers to dig into this rich mine of data. To help us achieve this we're going to use a new method for making requests called Fetch (more information at fetch.spec.whatwg.org). Fetch is currently supported by Chrome 42, Firefox 39 and Opera 29. Fetch aims to replace the old and somewhat hack-y XMLHttpRequest introduced by Microsoft in 1999. The web has come a long way since then and Fetch is designed around streams and promises. The front-end portion of this tutorial is relatively straightforward but previous experience with Angular will aid understanding. The server-side portion is covered briefly but the full source code is on FileSilo.

1. Install Express
The server for our simple app will be Node.js using Express. This will scaffold an Express app for you and start to serve it up on port 3000. We'll be editing the app.js file at the root of the project later on. We'll also use a wrapper for the Last.fm API on the server to help with authentication.
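The scaffolding commands themselves aren't shown on this page; with the Express application generator they would look something like this (the app name is illustrative, Express 4-era tooling assumed):

```shell
# Install the Express application generator globally, scaffold an app
# and start serving it on port 3000
npm install -g express-generator
express playlist-app
cd playlist-app && npm install
npm start
```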

<input class="username-input" type="text" placeholder="Your username" data-ng-model="user" data-ng-model-options="{ updateOn: 'blur' }">
<!-- script tags -->
</body>

2. Install client-side packages
Code that is served to the user is stored under the 'public' folder. Navigate to this folder and run the following Bower command. Alternatively you can install Angular 1.3, angular-query-string and Underscore manually.

3. Capture username
Now we'll start writing our Angular app. We want to capture a username so that we can start the process of finding friends on Last.fm. Create index.html at the root of the project and a text input which updates the 'user' model when focus moves from it.

4. Define module
Back in the public folder make a JavaScript file called 'playlist-app.js'. The module 'app' is what the ng-app attribute in the previous step looks for; likewise with the controller and its definition. We're also injecting a LastFmService which we'll create later on. Note the dependency on angular-query-string as well, which provides UrlQueryString.
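The Bower command step 2 refers to could look like this (package names are inferred from the text; exact versions may differ):

```shell
# From the 'public' folder: fetch Angular, angular-query-string and
# Underscore, recording them in bower.json
bower install angular angular-query-string underscore --save
```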

Network red herring If you’re using the Network tab in your Web Inspector to debug requests made by Fetch, they won’t appear under the XHR tab as they’re not XMLHttpRequests!

<body data-ng-app="app" data-ng-controller="FriendController" data-ng-cloak>


When a username is entered we can use it to get a list of their friends and display it

Top left: By using Bower to manage our packages we can include multiple projects at once and increase the project's maintainability as well

Top right: This input captures the username and is tied to the 'user' model. Updating on blur keeps requests low


angular.module('app', ['angular-querystring'])
  .controller('FriendController', ['$scope', '$window', 'LastFmService', 'UrlQueryString',
    function ($scope, $window, lastfm, UrlQueryString) {
  }]);

5. Get friends
As a user enters a username we want to make a request to the Last.fm API to get a list of their friends. There are many ways to do this; one alternative is to add a function to the input which updates on blur, the same as the model, but we'll watch the user property for changes.

$scope.$watch('user', function (username) {
  if (username) {

An extensive API
Last.fm has 133 documented services for you to use, ranging from user information to geographical events. You can read about them in more detail at last.fm/api

Top left: A request made by the Fetch API contains all of the same information as the trusty XHR request

Top right: We will need to hand the user a link to see their newly created playlist and cement their friendship

Right: Taking the user to the Last.fm site provides a handy breakdown of what they're enabling the app to do


The Promise API means that we can chain multiple asynchronous functions with ‘then’

    lastfm.getFriends(username).then(function (data) {
      $scope.friends = data.friends.user;
      $scope.$apply();
    });
  }
});

6. Create Angular service
You may have noticed that we called lastfm.getFriends but currently 'lastfm' doesn't exist in our app! We'll rectify that by adding a service to the 'app' module. You'll need to register for a Last.fm API key by going over to last.fm/api/account/create.

.service('LastFmService', function () {
  var url = 'http://ws.audioscrobbler.com/2.0/?api_key=YOUR_API_KEY&format=json';
  return {
    /* next step */

}; })

7. API call
We're going to be using the experimental Fetch API to make a request to user.getFriends. Fetch works with streams and promises, which are often a little bit exotic, which is nice. The Promise API means that we can chain multiple asynchronous functions with 'then'. response.json() reads the response body – a stream, so large responses can be read progressively – and resolves with the parsed result.

getFriends: function getFriends (username) {
  return fetch(url + '&method=user.getfriends&recenttracks=1&user=' + username)
    .then(function (response) {
      return response.json();
    });
},

Last.fm authentication
We've used a library to deal with the authentication process for us. You make a request with your API key and in return (after the user has granted permission) Last.fm gives you a token that can be saved to make subsequent requests without having to reauthenticate. The trickiest part of this process is signing your calls to API methods that require authentication: the parameters must be supplied alphabetically and UTF-8 encoded, and the result is then hashed with the MD5 algorithm. Any language can be used and Last.fm has a list of recommended wrappers including Python and .NET. It could even be done client-side but this would mean exposing your app's secret key, which is not recommended.

8. Display friends
Fetch is making the API call and we've already written a handler for it by populating $scope.friends with the result. As a last step to finish showing our user's friends we just need to write a simple repeater which will show the list. We're outputting a small image and we prefer the 'real name' over the username.

<section class="friend-container" data-ng-show="friends.length">
  <h4>Choose a friend to create a playlist with:</h4>
  <ul>
    <li class="user" data-ng-repeat="friend in friends" data-ng-click="chooseFriend(friend)">
      <img src="{{friend.image[0]['#text']}}">{{friend.realname || friend.name}}
    </li>
  </ul>
</section>

9. Choose friend You should now have a list of friends appearing and when the user clicks one of these friends we want to start the comparison process. Create a playlist array which will contain the track objects to display and call ‘compare’ on the ‘lastfm’ service. Then pass it the username of who we are and the friend’s name.

$scope.chooseFriend = function (friend) {
  $scope.playlist = [];
  $scope.chosenFriend = friend;
  lastfm.compare($scope.user, friend.name)
    .then(function (data) {
      var artists = data.comparison.result.artists.artist;
      return artists;
    }).then(function (artists) {
      /* next step */
    });
};

10. Get tracks
Anticipating the response from that, we'll loop over each mutual artist that's returned from Last.fm. For each artist we call another service which looks for tracks by a given artist that our friend has listened to, and select a random two from the list returned. For those two tracks we call


another method we haven't written yet called 'addTrack'.

artists.forEach(function (artist) {
  return lastfm.getTracksOfArtist($scope.chosenFriend.name, artist.name)
    .then(function (tracks) {
      _.sample(tracks.artisttracks.track, 2).forEach(addTrack);
    });
});

11. API calls
The 'compare' and getTracksOfArtist methods follow a similar format to the previous API call with Fetch. In both cases we parse the response as JSON and then pass it onto the next function so that we can access the response in the controller. The comparison is done via Last.fm's tasteometer, which also returns a 'compatibility' rating between 0 and 1.

Fetch can also be used in service workers – this means your asynchronous processes can spawn more!

compare: function (user1, user2) {
  return fetch(url + '&method=tasteometer.compare&limit=30&type1=user&type2=user&value1=' + user1 + '&value2=' + user2)
    .then(function (response) {
      return response.json();
    });
},
getTracksOfArtist: function (friend, artist) {
  return fetch(url + '&method=user.getartisttracks&user=' + friend + '&artist=' + artist)
    .then(function (response) {
      return response.json();
    });
},

12. Add track
The logic for adding a track to the playlist is reused, so to keep things in line with DRY (the don't repeat yourself principle) we've chosen to make it a separate function.

This ensures that the track has a name and loops through the playlist to make sure the ID isn't already in there. If it isn't, it adds the track and then shuffles the playlist to mitigate the same artist being played sequentially.

var addTrack = function (track) {
  if (track && track.name) {
    var inPlaylist = $scope.playlist.some(function (t) {
      return t.mbid.length && t.mbid === track.mbid;
    });
    if (!inPlaylist) {
      $scope.playlist.push(track);
      $scope.playlist = _.shuffle($scope.playlist);
      $scope.$apply();
    }

Fetch: gotchas and how to use it today
A few 'gotchas' to look out for: you can only read a Fetch response's stream once. If you write 'response.json(); response.text();' the second call will throw an error because the stream has already been read. If you want to do this then write 'response.clone().text()'. Also, there is no way to cancel a request, but this may be fixed in the future; it is due to the Promise-based nature of Fetch. Fetch doesn't have comprehensive support and to tackle this, GitHub developers wrote a polyfill which wraps XHR with the Fetch API (see github.com/github/fetch). Fetch can also be used in service workers, unlike XHR requests. This means that your asynchronous processes can spawn more asynchronous processes!
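The clone() workaround can be sketched with the global Response constructor available in modern runtimes (this helper is illustrative, not part of the tutorial):

```javascript
// Reading a Fetch body twice: streams are single-read, so clone()
// before the first read if you need both representations.
async function readTwice(response) {
  var copy = response.clone();          // must clone before reading
  var asJson = await response.json();   // first (and only) read of the original
  var asText = await copy.text();       // reading the clone is fine
  return { asJson: asJson, asText: asText };
}
```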



} };

13. Get loved tracks You could use this project to play some songs when you’re both in the same physical space. Being a good host, you’ll probably add some songs for your guest so we’ll add a couple of their loved tracks to the playlist.

.then(function () {
  lastfm.getLovedTracks($scope.chosenFriend.name).then(function (data) {
    _.sample(data.lovedtracks.track, 2).forEach(addTrack);
  });
});

14. Loved tracks request
The actual API call is similar to the last few. Most of the services return a subset of the total results, and Last.fm has a pagination system, so you could loop through, incrementing the &page value. You can also limit the results if you know you want fewer than 50.

getLovedTracks: function getLovedTracks (user) {
  return fetch(url + '&method=user.getlovedtracks&user=' + user)
    .then(function (response) {
      return response.json();
    });
},
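Step 14's note about paging can be sketched like this – walking the &page parameter until a page comes back empty (fetchPage stands in for a real Fetch call and is not part of the tutorial):

```javascript
// Collect results across all pages of a paginated API by incrementing
// the page number until an empty page is returned. fetchPage(page)
// is a stand-in that resolves to an array of items for that page.
function fetchAllPages(fetchPage, page, acc) {
  page = page || 1;
  acc = acc || [];
  return fetchPage(page).then(function (items) {
    if (!items.length) { return acc; }
    return fetchAllPages(fetchPage, page + 1, acc.concat(items));
  });
}
```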

15. Display playlist
We've got some tracks coming back from the API so let's display them! Create an ordered list and display the track name, artist and album with a hyphen between them. Different services will either return .name or .#text.

<section class="playlist-container" data-ng-show="playlist.length">
  <h1>Your playlist with {{chosenFriend.realname || chosenFriend.name}}</h1>
  <ol>
    <li class="track" data-ng-repeat="track in playlist">
      {{track.name}}
      <span data-ng-show="track.artist.name || track.artist['#text']">-</span>
      {{track.artist.name || track.artist['#text']}}
      <span data-ng-show="track.album.name || track.album['#text']">-</span>
      {{track.album.name || track.album['#text']}}
    </li>
  </ol>
  <button data-ng-click="savePlaylist()">Save playlist</button>
</section>

16. Save playlist listener
We added a button to save the playlist. To create one, the user is authenticated on the server. When the button is clicked we send over the name of our friend and the tracks. As we receive a response it sends us to authenticate on the Last.fm site.

$scope.savePlaylist = function () {
  var data = JSON.stringify({
    name: $scope.chosenFriend.name,
    tracks: $scope.playlist
  });
  lastfm.savePlaylist(data).then(function (url) {
    if (~url.indexOf('last.fm')) {
      $window.location = url;
    }
  });
};

17. Fetch POST request POSTing data with the Fetch API is slightly more involved than the previous GET requests. Instead of encoding the data in the URL, this time the data is sent with the POST request body. It’s essential that we set the Content-type header to application/json for the server to identify it.

savePlaylist: function savePlaylist (data) {
  return fetch('http://localhost:3000/save-playlist', {
    method: 'post',
    headers: {
      'Content-type': 'application/json; charset=UTF-8'
    },
    body: data
  }).then(function (response) {
    return response.text();
  });
}

18. Authenticate on server
Now we're going to jump to app.js at the root of the project. We're going to work around the code that's already there, so you might want to append the following to the end of the file or copy the contents of the included resources from this tutorial on FileSilo. const is a kind of variable declaration introduced in ES6: a const binding cannot be reassigned.

var LastfmAPI = require('lastfmapi');
const LASTFM_API_KEY = 'YOUR_API_KEY';
const LASTFM_API_SECRET = 'YOUR_API_SECRET';
var lfm = new LastfmAPI({
  api_key: LASTFM_API_KEY,
  secret: LASTFM_API_SECRET
});

19. Express routes To ensure everything’s running in the same place, serve the index.html file at the root level. The /save-playlist route is what we POSTed the playlist data to; this is stored in a variable that we can access elsewhere for now.

app.get('/', function (req, res) {
  res.sendFile(path.join(__dirname + '/index.html'));
});

app.post('/save-playlist', function (req, res) {
  data = req.body;

  var authUrl = lfm.getAuthenticationUrl({
    'cb': 'http://localhost:3000/auth'
  });
  res.send(authUrl);
});

20. Last.fm authentication
In the last step we told Last.fm to send the user back to http://localhost:3000/auth once they've granted the app access to their account. This process returns us a unique token which we can then use to create a session.

app.get('/auth', function (req, res) {
  var token = url.parse(req.url, true).query.token;
  lfm.authenticate(token, function (err, session) {
    lfm.setSessionCredentials(session.username, session.key);
    // next step
  });
});

21. Add tracks
To create the playlist we need a title and description. Then we use a callback function to add each track to the playlist. We've set a timeout of 500 milliseconds between each request, or else only some tracks will be added. Then we redirect with parameters indicating the playlist's URL.

lfm.playlist.create({
  title: data.name,
  description: 'Generated by Playlist-omatic.'
}, function (err, playlists) {
  data.tracks.forEach(function (track, index) {
    setTimeout(function () {
      lfm.playlist.addTrack(playlists.playlist.id, track.artist['#text'] || track.artist.name, track.name);
    }, 500 * index);
  });
  res.redirect('/?playlist=' + playlists.playlist.url + '&friend=' + data.name);
});

22. Share playlist URL
When the user is redirected we will display a message to confirm that the playlist was created. That completes the process! Enjoy your playlists.

// client/playlist-app.js
if (UrlQueryString.friend) {
  $scope.created = {
    friend: UrlQueryString.friend,
    url: UrlQueryString.playlist
  };
}

// index.html
<p data-ng-show="created">Your playlist with {{created.friend}} <a href="{{created.url}}" target="_blank">has been created!</a> <a href="/">Create another?</a></p>



The complete guide to Git


How to take control of projects and maintain multiple versions of one system

// The complete guide to Git

Why you need Git
THE VERSION CONTROL SYSTEM HAS A HOST OF BENEFITS
Linus Torvalds faced an insurmountable problem as his Linux kernel became bigger and bigger: at some point, ordinary VCS systems could not keep up with his work. He developed Git to address this pain point, so that large codebases could be kept track of. Furthermore, it's based on a decentralised concept, and this means that every developer has a fully fledged repository on their workstation: network access is required only when data is to be synchronised.

What’s it good for? If you think your company – like many other small companies – does not need a version control system, think again. The availability of a VCS improves the productivity regardless of size. This is accomplished by multiple factors: first of all, code hosted on a VCS is more likely to survive ofice fires, ransomware strikes and similar occurrences of bad luck. Secondarily, code hosted in Git is much easier to modify. You can freely try out something new: if it turns out to be a disaster, undoing it is a matter of one click. Finally, using a VCS makes synchronising multiple devices so much easier.

Working with repositories NOW WE KNOW THE BENEFITS LET’S CREATE A CODE STORAGE SPACE The first step involves selecting a folder of choice. Enter the ‘git init’ command in order to create the hidden metadata directory:

tamhan@TAMHAN14:~/GitHouse$ git init
Initialized empty Git repository in /home/tamhan/GitHouse/.git/

After copying files to the repository, enter 'git status' to see the current state. Git will inform you that the new files need to be added via 'add' – if your file goes by the name somecode.cpp, add it as per the following:

tamhan@TAMHAN14:~/GitHouse$ git add somecode.cpp

Git is now aware of the existence of somecode.cpp and this knowledge enables us to perform a commit: it

How Git works with GitHub
THE WEB-BASED REPOSITORY AND INTERFACE IS ESSENTIAL
Going decentralised is beneficial in that it makes your codebase more resilient: an office fire is much less critical if the data is safely tucked away in the cloud. This knife, of course, cuts both ways – if GitHub goes down for maintenance (which it will do from time to time), then any work will certainly grind to a halt. Benefit number two involves value-added features. GitHub is best known for a well-designed web interface that provides its clients with a large variety of additional functions which make wrangling with code easier. Code hosted on GitHub can also be accessed from a browser of your choice, with commonly used folders being provided as a ZIP archive. These utilities are tied into a complex social network. Figuring out which projects are being worked on by which developers is a matter of one

describes an operation which ‘shoves’ the current state of the project into the repository:

tamhan@TAMHAN14:~/GitHouse$ git commit -m "Initial commit"
[master (root-commit) 3c7c91f] Initial commit
1 file changed, 6 insertions(+)
create mode 100644 somecode.cpp

'git diff' can then be used to perform an analysis comparing your local data with the one that was found in the repository. Pushing new code to Git is accomplished via a combination of the 'git add' and 'git commit' functions. Getting back to an older version can then be accomplished via a combination of 'git reset' and 'git checkout -f'. The relevant sequence is shown in more detail in the screenshot on page 76. Even though Git is usually used with server-based repositories, the product can also generate patch files sent via email. Patch files spawn via 'git format-patch': the command expects the Secure Hash Algorithm (SHA) sum of the first commit to be considered and creates one .patch file for each commit.

‘Git diff’ can then be used to perform an analysis comparing your local data with the one that was found in the repository

or two clicks; issues, improvement requests and similar metadata can also be centralised in a GitHub repository. Projects working on 3D or Photoshop files profit from embedded viewers: a WebGL-based utility ensures that everyone can take a look at a preview of the files' contents. Documentation can furthermore be hosted in the form of a wiki, which comes with every repository and is maintained by GitHub automatically. Finally, GitHub has tremendous reach. Open source code tends to be hosted on GitHub as it tends to get the most views there: if a project manages to get enough likes then this placement is enough to ensure extra attention. Developers tend to be experienced with the value-added features of GitHub: subjecting them to a different user interface might lead to friction.

Installing Git
GETTING STARTED WITH GIT IS REALLY EASY
When working with a Unix-like operating system, getting Git is as easy as invoking the correct package manager command. On an Ubuntu workstation, use the following:

sudo apt-get install git

Developers using Windows or Mac OS X can obtain a working version of the toolkit by downloading the prepared installers from git-scm.com. Using them is really easy – just treat them like any other typical set-up routine. Nothing prevents you from compiling your own version of the product should push ever come to shove: the source code is also available from git-scm.com. Git requires a username and an email address for the current user, and these can be set via the following commands:

tamhan@TAMHAN14:~/GitHouse$ git config --global user.name "TAMHAN"
tamhan@TAMHAN14:~/GitHouse$ git config --global user.email ""



1. Partial upgrade
Sometimes, an outdated local codebase needs just one or two specific changes. In that case, 'git cherry-pick <id>' is your friend – it applies just the changes specified under <id>.

8. Harness the shortcut
Entering frequently used Git commands over and over again is boring and tiresome. Use 'git config --global alias.<shortcut> <forwhat>' to create a time-saving reduced version of the command passed to <forwhat>.

2. Add a note
Commits can be documented with a note containing further information – unlike commit messages, notes can be changed later. Note management is handled via the git notes command family.
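A minimal, self-contained sketch of the notes workflow (run in a throwaway repository; the note text is made up):

```shell
# Create a scratch repository so the commands work anywhere
cd "$(mktemp -d)"
git init -q .
git config user.email "you@example.com"
git config user.name "You"
echo hello > file.txt
git add file.txt
git commit -q -m "Initial commit"
# Attach a note to the latest commit; unlike the commit message,
# it can be changed later via 'git notes edit'
git notes add -m "Reviewed by QA"
# Read the note back
git notes show HEAD
```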

9. Speed me up!
Git's internal management structures can become messy as time goes by. Cleaning up can be accomplished by invoking the integrated garbage collector – just enter 'git gc'.

3. Track down regressions
If old bugs suddenly show up again, 'git bisect' will track them down using a divide and conquer algorithm. Simply set the good and bad commit IDs, and test each of the codebases provided.

10. Local copies
Use 'git clone <url>' to copy a remote repository. The command downloads the current state of the code and the metadata commonly found in the .git subfolder, thereby giving you a complete local copy.

Good coding
When a mistake is found in a branch it should be fixed. To stop bouncing around from branch to branch, try out the git checkout command. Go to bit.ly/1OZHHmR for more information.

4. Emailing patches
If your team coordinates itself via a mailing list, patches can be sent automagically by using the git 'send-email -to <recipient> <files>' command sequence. Files will accept wildcards such as 00**.

11. List ignored files
Advanced users use .gitignore to block any unnecessary files from the version control system. Entering 'git ls-files --other --ignored --exclude-standard' lists all files which the version control system will not accept.

12. Remove a file When a file is no longer needed you can make use of ‘git rm’ to inform the version control system that the element in question should not be contained in future checkouts.

6. The blame game Ever wanted to find out whose drunken coding spree caused the system to fail and mess up all your hard work? Use ‘git blame’ – it annotates your source file with information about each line’s change history along with the SHA ID of the individual commits.

13. The Git menu Like most Unix command line utilities, Git is shipped with a compact help system. Enter ‘git --help’ in order to receive a list of commonly used commands along with a short description of the role of each.

7. Sign a tag

14. Tell me more!

Developers working on highly sensitive code will be delighted to hear that Git provides a way to create tags by making use of a digital signature. If GnuPG is correctly wired into Git, then the process of signing a tag can be accomplished by simply passing -s to git tag.

Invoking a subcommand with the --help parameter tends to yield loads of extra information on the parameters taken in. If you’ve done this but this still does not solve your problem, then rest assured that a quick Google session will provide you with all the help you need.


What are branches?
WORK ON COMPLEX PRODUCTS IS RARELY COMPLETELY LINEAR, AND THIS IS WHY WE USE BRANCHES
Imagine yourself to be working on an application made up of two parts: a custom user interface and a third-party computational engine, which is to be accessed via an intermediary layer. Sadly, issues at the side of the supplier force a change of calculation provider. In this case, the developer responsible for changing the engines transforms himself into a roadblock: work cannot proceed until the interface has been updated. Branches can solve this problem in an elegant fashion. The codebase is broken down into two pieces as per the flow chart on the right: the user interface developers keep hacking along on the main trunk, while the person responsible for the adaptation toils away on the newly created branch. Git takes care of keeping the two workflows separate and each team then works independently from the other. Once the adaptation is complete,

GitLab vs GitHub
HERE COMES THE COMPETITOR: GITLAB IS READY TO CHALLENGE GITHUB ON EVERY LEVEL
The company added a variety of interesting features to both stock Git and the add-ons provided by GitHub. First of all, collaborative tools received a significant boost: issues can receive attachments containing further information about the task at hand. In addition to that, a fine-grained rights management system permits you to set access levels on a need-to-have basis. Branch protection prevents 'lowly' users from pushing code into important branches, thereby accelerating the deployment of mission-critical features. Enterprise customers benefit from proprietary extensions to the underlying VCS: it can handle very large binary files effortlessly, and it is possible to add these pesky PR assets to version control. Finally, GitHub has been in the news recently due to the upload of some shady scripts and other political strife. People who believe that vendors should steer far away from politics can use GitLab as an alternative.

// The complete guide to Git

The life of a project



Fixing bad local changes is best accomplished by checking out the affected files once again: Git will replace your borked-up copy with a brand new one. Sadly, Murphy's law (anything that can go wrong will go wrong) ensures that problems are likely to occur only after the code in question has already been uploaded to version control. In this case, 'git revert <id>' is the remedy of choice. It 'isolates' the bad commit, and creates a new one containing the project's state before the mishap. This measure is important in that Git aims to present a complete and accurate record of the development process: under its philosophy, mistakes should be conserved for further study. When working with services such as GitHub, the situation is more complex. Uploaded files cannot be removed reliably, as users could have forked or downloaded them – further information on that topic is available in GitHub's remove-sensitive-data help article.

the two branches can be merged into one another again. Git assists this process by making use of a set of advanced comparison tools – you can easily find out which changes and/or conflicts might occur. Git identifies the individual branches via the SHA sum of the individual files. Sadly, handling these long and unwieldy strings is not particularly comfortable: addressing them by name would be easier. This can be accomplished by adding a tag to identify a specific

state of the project – in principle, a tag is little more than a symbolic link pointing at a version. Tags are commonly used to designate releases – users can easily find the code used to create a specific artefact by tracking down its tag. Git's unique structure makes the creation of branches and tags very cheap: unlike with SVN and CVS, codebases can regularly be found which contain thousands of branches and tags.
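The branch-and-merge flow described above, condensed into a throwaway repository (file names, messages and the tag are made up):

```shell
# Scratch repository standing in for the project
cd "$(mktemp -d)"
git init -q .
git config user.email "you@example.com"
git config user.name "You"
trunk=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on Git version
echo ui > ui.txt
git add ui.txt && git commit -q -m "UI work on the trunk"
# The engine developer branches off to swap the calculation provider
git checkout -q -b engine-swap
echo engine > engine.txt
git add engine.txt && git commit -q -m "Swap the calculation engine"
# Meanwhile, trunk development continues undisturbed
git checkout -q "$trunk"
echo more >> ui.txt
git commit -q -am "More UI work"
# Once the adaptation is complete, merge the branch back and tag a release
git merge -q -m "Merge the engine swap" engine-swap
git tag v1.0
```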

Git takes care of keeping the two workflows separate and each team then works independently from the other

Get connected HOW TO EXCHANGE CODE So a local version of Git turned out to be quite helpful. Git’s true power can be harnessed only when combined with a server: it lets multiple developers and their various devices exchange code independently from one another.

Getting on a server
The following steps will use a free GitLab account which will host an open source project. Sign up at gitlab.com, and click the New Project button to create a new project. GitLab will ask you for a name and a set of accessibility criteria which define who is able to access the files contained within. After completion, clicking the HTTPS button will return the repository's clone URL.

tamhan@TAMHAN14:~/GitLab$ git clone https://
Cloning into 'ImagineRepo'...
Username for '': TAMHAN
Password for '':
remote: Counting objects: 3, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
Checking connectivity... done.

Finally, code is to be returned to the repository with a process known as a push:

tamhan@TAMHAN14:~/GitLab/ImagineRepo$ git commit
[master bfbd81b] Uploading now
1 file changed, 1 insertion(+), 1 deletion(-)

After entering git push, your local code is automatically transferred into storage. You can inspect it via the Files tab of the web interface.

Host it yourself?
Git's server-side components integrate themselves into the user management system of the OS. This leads to increased administration efforts – a problem solved by GitLab.


Contribute and maintain a project

The common workflow approach

With that, our repository is ready and waiting. The GitLab UI provides you with a sequence of commands which copy it to your workstation. Since we already have a repository from the steps on page 73, we will, instead, deploy it to the cloud by entering the following commands:

Now it’s time to introduce a second developer into the equation and their first act involves the downloading of the code that needs to be worked on. Git works a bit diferently from other version control systems out there in that it forces the developer to obtain a complete copy of all code that can be found online – a job which is easily accomplished via the following

tamhan@TAMHAN14:~/GitHouse$ git push -u origin

However, you should always keep in mind that all repositories are not created equal: some maintainers permit pushes from anybody and everybody, while others may limit themselves to uploads from known team members. A popular example for the second approach is that of Microsoft’s .net framework: even though the framework was open sourced recently, changes can only be accepted from the Microsoft employees that are assigned to the relevant teams.



git log emits a list of commits leading to the current state. The string shown after the commit header is a hash ID which identifies the individual transactions.


Commands like ‘git reset’ provide a summary listing changes. In our simple repository, the file somecode.cpp is shown to receive changes.

// The complete guide to Git

Command-line benefits
WORKING IN THE COMMAND LINE DOESN'T HAVE TO BE TIME-CONSUMING
Git's development was inspired by classic Linux and open source books such as The Cathedral and the Bazaar. Due to this, the product's native client takes the form of a command-line tool: both GitLab and the graphical frontends mentioned on page 74 are mere shells which invoke the Git client for you. Hitting the command line yourself offers multiple benefits. First of all, the lack of an intermediary layer means that you can access Git's features directly. Secondly, the relatively complex command structure makes for a great training opportunity. In addition, should you use bash as your shell of choice, make sure to visit Git-Basics-Tips-and-Tricks in order to obtain the auto-completion script. This script makes work much easier and quicker, as you can simply press the Tab key to receive IntelliSense-like suggestions that automatically complete commands.

Data recovery
GIT PROVIDES DEVELOPERS WITH AMPLE OPPORTUNITY TO SHOOT THEMSELVES IN THE FOOT
By far the most common error involves the loss of commits due to the deletion of a downstream element. In this case, the data still remains accessible – if you happen to know its SHA1 ID, that is. Figuring out the SHA1 ID of a recently performed commit is best accomplished via the reflog command. Simply feed its output into a checkout or branch-creation command, and feast your eyes on recovered files. Another gotcha involves the uploading of large files: each and every git clone will download the content in its entirety, even if it has been removed from recent commits. Even though this problem can – in theory – be solved with Git itself, using the BFG Repo-Cleaner is significantly faster (minutes instead of days) and easier to use. Simply download the tool from and follow the instructions to clean up your repository once and for all.

Customising Git
ADJUSTING BEHAVIOUR LEADS TO BETTER UX
Git's configuration is collected from multiple places. Systemwide settings are stored in /etc/gitconfig, while each user's home directory contains a .gitconfig file with further settings. Finally, each repository can have its own attributes set via custom config files in its .git subfolder. Be aware that low-level files overwrite upper-level ones: a project can overwrite systemwide default settings. By far the most important piece of configuration involves the setting of user data, which is best accomplished via the ‘git config’ commands that were mentioned in the installation instructions:

tamhan@TAMHAN14:~/GitHouse$ git config --global user.name "TAMHAN"
tamhan@TAMHAN14:~/GitHouse$ git config --global user.email "tamhan@tamoggemon.com"

Another interesting bit of configuration involves the colouring of command-line output. By default, Git will colour messages intended for display – this can be disabled via the following property:

git config --global color.ui false

Attributes can be used to provide Git with further information about the file types handled. For example, some binary file formats might be diffable with special commands, which can be registered by setting an attribute. Finally, hooks permit you to run scripts in response to specific changes taking place in a repository. Further information on the various properties can be found in the Book of Git, which is accessible via and bit.ly/1tOhgtG. Be aware that config files are plain text – you can always modify them with gedit if you distrust the work performed by the config command.

Scott Chacon
@chacon
A GitHubber, open source developer and Git evangelist, Scott is also a writer and worked on Atom's Flight Manual.

4 Essential Git tools
ADD IMPRESSIVE FEATURES TO YOUR GIT WORKFLOW

EGit
Command line git is so Seventies. Integrating the version control system with an IDE makes work easier. Eclipse does not support Git out of the box, but EGit solves this problem effectively.

GitHooks VCS systems should be integrated into the development workflow so that unit tests can be run as new code is checked in. GitHooks lets you run arbitrary scripts in response to repository events.

Git-Extras This frequently maintained package contains a group of scripts adding all kinds of interesting features to Git. Take a look at its documentation to find out more – you will save time in the long run.

GitK and friends If IDEs like Eclipse or Visual Studio do not interest you, a dedicated Git browsing utility might suit. Unix heads use gitk, while Windows developers are best served with TortoiseGit.




GitHub acts as a harbour for impressive code and this selection of extraordinarily cool projects will certainly enrich your coding


// 20 best Github projects

GET TO GRIPS WITH FULLPAGE.JS
fullPage.js
Use for: designing slideshow-like webpages

Generations of badly designed PowerPoint slides have accustomed users to full-screen presentations. This JavaScript framework provides a quick and convenient tool which transforms websites into full-screen, PowerPoint-like presentations. Since its initial release, both large and small companies have enthusiastically embraced the framework: it was even used on apple.com for some time. As with most other bits of JavaScript code, using the fullPage.js framework requires the inclusion of the jQuery library along with some helper files. Our example here uses the bare minimum of files – advanced scrolling effects will require the inclusion of additional helper libraries:

<link rel="stylesheet" type="text/css" href="jquery.fullPage.css" />
<script src=" ajax/libs/jquery/1.11.1/jquery.min.js"></script>

<script type="text/javascript" src="jquery.fullPage.js"></script>

In the next step, the actual content needs to be defined. This is accomplished by adding the following <div> structure to your website:

<div id="fullpage">
  <div class="section">Some section</div>
  <div class="section">Some section</div>
  <div class="section">Some section</div>
  <div class="section">Some section</div>
  <div class="section active">Some section</div>
</div>

By default, fullPage.js displays the section at the top of the DOM tree. You can modify this behaviour via the 'active' class shown in the snippet – ours would start out by displaying the last item from the list. The individual sections can contain multiple slides, which are displayed in a horizontal fashion:

<div class="section">
  <div class="slide">Slide 1</div>
  <div class="slide">Slide 2</div>
</div>

Even though the individual slides can be formatted via CSS, the mandatory initialisation of the framework can also be used to provide additional context such as background colours:

$(document).ready(function() {
  $('#fullpage').fullpage({
    sectionsColor: ['#1bbc9b', '#4BBFC3', '#7BAABE', 'whitesmoke', '#ccddff'],
    anchors: ['firstPage', 'secondPage', '3rdPage', '4thpage', 'lastPage']
  });
});

In this snippet, both the background colours and a set of anchors are provided (note that the option name takes the American spelling, sectionsColor). The latter simplifies the creation of internal links, which permit you to reach individual pages easily. Describing all parameters of the JSON object passed to the initialisation function would require an entire extra volume of this annual, so please consult the readme file at in order to learn more.
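Each anchor becomes part of the page URL hash, so plain links are enough to jump between sections; here's a minimal sketch using the anchor names defined above (the slide-index form is our assumption – check the readme for the exact scheme):

```html
<!-- Jump straight to the section registered as 'secondPage' -->
<a href="#secondPage">Go to the second page</a>
<!-- Anchors can also be combined with a slide index -->
<a href="#3rdPage/1">Go to a slide inside the third page</a>
```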


The Web Design Annual

BUILD SIMPLE MOBILE APPS
Ratchet
Use for: making apps with simple components

Creating good-looking user interfaces for mobile applications constitutes an art of its own. Ratchet is managed by the team behind the well-known Bootstrap framework, and it aims to simplify this task. After downloading Ratchet, prepare yourself for a complete rearrangement of your application. The framework requires you to adhere to a strict sequence of controls – firstly, all ‘bar’ items must sit right below the <body> tag of the individual pages:

<header class="bar bar-nav">
  <button class="btn pull-left">Left</button>
  <button class="btn pull-right">Right</button>
  <h1 class="title">Title</h1>
</header>

As for the actual application, a large selection of controls is provided. For example, tables can be spruced up with various useful gadgets conveying extra information. Apps are made up of forms, each of which is to be contained in an HTML file of its own. They are connected to one another via the Push.js framework – by default, all links are processed by it. Fortunately, designating external links is as easy as adding the data-ignore property to their declaration. This is necessary for all references which point outside of your app – the following Google link would be a classic example of such a task:

<a href="" data-ignore="push">Google</a>

Ratchet differs from classic GUI stacks such as jQuery UI/Mobile due to the availability of two stylesheets which seek to mimic the design of the host operating system's controls. Sadly, Ratchet's platform support is limited to recent versions of Android and iOS. The developers have not yet decided whether they want to embrace Windows Phone – as for BlackBerry 10, you don't even need to ask.


Buttons
Use for: implementing fully customisable buttons

mfglabs iconset
Use for: embedding icons using a web font or CSS

Buttons are the epitome of touch-screen user interface design: where there is a touchscreen, expect large and small, red and green, round and square knobs en masse. The ubiquity of this common control has motivated a group of developers to start working on a GUI framework dedicated solely to the display of buttons. This insular approach is beneficial in that external dependencies are minimised. Getting started with Buttons is as easy as adding the following files to your web project – additional resources are needed only for drop-down menus and symbols:

According to a well-known proverb, a figure can be worth a thousand words, and these icons will surely say what you want without any text. Most desktop products provide their users with a symbol bar providing quick access to commonly used functions. Mfglabs' iconset differs from normal icon sets in that it is implemented via a custom font mapped into the Unicode codespace. This is beneficial because fonts contain vector information: your icons will look great at all display resolutions, and the pixelation commonly found in bitmap icon sets will not occur. Deploy iconset by copying all relevant resources to a folder of your web application. Then proceed to add the following snippet in order to load the stylesheet and deploy an icon:

<!-- Buttons core css -->
<link rel="stylesheet" href="css/buttons.css">

Buttons' developer team took great care to cover all possible approaches to development. You can create buttons via both <a> and <button> tags:

<a href="" class="button button-pill button-tiny">Go</a>
<button class="button button-square button-tiny"><i class="fa fa-plus"></i></button>

Buttons is able to implement a few hundred knob types by default. Since most apps are unlikely to require the entire palette, a customisation utility is provided. It enables you to strip out unneeded button types, resulting in a custom set of resources which takes up less server space and/or bandwidth.


<link rel="stylesheet" href="css/mfglabs_iconset.css">
<i class="icon-paperplane"></i>

User interface designers will be quick to point out that the misuse of symbols is among the most common mistakes that severely impact usability. A symbol should only be used when its meaning is 100 per cent clear to the target audience, leaving absolutely no room for confusion or second-guessing on their part. Ensuring this becomes especially difficult once products get internationalised.


ANGULAR ALERTS
ng-notify
Use for: notifying the user with error messages

A small German startup introduced slide-in notifications in a long-since-forgotten iPad competitor, but ever since Windows Phone 7 brought the concept into wider usage, slide-in alerts have become ubiquitous all over the mobile market. Ng-notify is a truly tiny add-on for AngularJS apps. It provides developers with a selection of different notification styles which can be deployed with a single function call. Ng-notify is deployed like any other AngularJS module – the code shown in the snippet should be familiar to every AngularJS head:

var app = angular.module('demo', ['ngNotify']);

Dispatching actual messages can then be accomplished by invoking the set method found in the ngNotify object:

ngNotify.set('Your error message goes here!', 'error');

In addition to that, ng-notify permits you to create custom notification styles: customise colours, slide directions and other properties. Please note that dialogs and alerts spawned by ng-notify are not modal. This means that they are displayed only while your app is on the screen: if the product is in the background, the user will not see the information.

DISPLAY DATA SETS
Clusterize.js
Use for: improving the frontend

Displaying large amounts of information in a list is challenging, and having to handle thousands of DOM elements can overwhelm even the fastest of browsers. Clusterize.js solves this problem by recycling the display widgets in a creative way: the framework holds a small number of templates, which get populated with data as soon as the user starts to scroll. Scrollbars are then fooled into accurate positioning via the use of dummy elements. Websites working with the Clusterize plugin tend to work significantly faster. It's simply a must-have if you are using big data sets on your site.






Json-server
Use for: optimising development

BassCSS
Use for: lightweight CSS styles

The Fuck
Use for: correcting commands

The initial stages of a client-server project tend to resemble a classic chicken-and-egg problem. Front-end development work cannot commence as long as the back-end functions have not been set in stone. Json-server addresses this problem by providing a surrogate. You specify the desired responses, and treat your json-server instance as if it were the actual production server. Json-server automagically takes care of returning data as required. Using the product is sensible not only from a front-end perspective.

Getting colours just right is a balancing act between readability and design: great looks and great readability are two different kettles of fish. BassCSS is a collection of interesting core CSS elements which can be integrated into your application with minimal effort. In addition, a selection of 96 readable and good-looking colour schemes, layout and typographic utilities, and reusable layout modules is provided for you. It claims to be responsive by default, as it's lightweight and flexible enough to work on any device. You can use one or more of them in your app by simply copying the style declarations into your CSS file.

Get one character wrong on the Unix command line and the whole command is refused. The Fuck is a workaround, written in Python, that analyses wrongly entered commands and suggests fixes, as in this example:

➜ apt-get install vim
E: Could not open lock file /var/lib/dpkg/lock – open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
➜ fuck
sudo apt-get install vim



IMAGE ZOOM JQUERY PLUGIN
Zoom.js
Use for: creating a CSS image dialog

Displaying images inevitably becomes a balancing act between visible detail and screen real-estate consumption. Photographers and infographic designers obviously want their creations placed in the limelight – UI designers tend to focus their attention on reading flow. Zoom.js solves this problem by transforming small images into large clickable galleries. Clicking an image opens a pop-up with the image in its full glory: this is ideally suited to device reviewers wanting to provide their readers with an optional, larger view of interesting pictorial material. Embedding the Zoom.js plugin can be accomplished by adding the following three files to your web application. Transition.js is not a part of the main framework, but is instead made available as part of the Bootstrap framework:

<link href="css/zoom.css" rel="stylesheet">
<script src="js/zoom.js"></script>

<script src="js/transition.js"></script>

Any individual <img> tags must also be enhanced with a bit of markup in order to display the images in a larger fashion:

<img src="img/blog_post_featured.png" data-action="zoom">

Keep in mind that Zoom.js is no solution for the bandwidth demands of large images. Loading a large image takes some time even if it is displayed in the scaled-down version: as of writing, Zoom.js is not able to load a different resource as images get clicked.
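Zoom.js itself offers no hook for this, but as a purely illustrative workaround sketch (the filename convention and helper below are our own invention, not part of Zoom.js), you could keep full-size files alongside thumbnails and swap the src attribute yourself before the pop-up is opened:

```javascript
// Map a thumbnail path to its full-size counterpart, assuming a
// hypothetical '-small' filename convention:
// img/photo-small.png -> img/photo.png
function highResUrl(src) {
  return src.replace('-small.', '.');
}

// Hypothetical jQuery usage, swapping sources up front:
// $('img[data-action="zoom"]').each(function() {
//   this.src = highResUrl(this.src);
// });
```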


FIVE TO FOLLOW
GitHub @github
Do you use a GitHub service? Then you had better follow GitHub's official Twitter account, which provides a never-ending source of all kinds of information that is interesting for the Giterati.

Tom Preston-Werner @mojombo
Though he found himself run out of his own company by a (now disproven) scandal, Tom deserves credit for being the coder with the idea to give GitHub to the world.

Chris Wanstrath @defunkt The current co-CEO of GitHub takes to Twitter from time to time. Following him might be interesting if you are into adding a bunch of prominent people to your Follow list.


Justice.js
Use for: analysing web performance
Figuring out more about website performance can be difficult. Justice displays a helpful footer with key metrics.

DeckOfCards
Use for: simulating a card effect with an API
This library re-creates a realistic deck of cards. It is ideally suited to all kinds of card or poker games.

GitHub Status @githubstatus

Linus Torvalds @linus__torvalds

Like most other computer systems, GitHub will go down from time to time. Following githubstatus is an easy way to get notified of planned maintenance and unplanned server outages.

A feature on GitHub wouldn't be complete without a mention of Linus Torvalds. The world-famous programmer created Git in order to maintain the source code of the Linux kernel.



Go Pro!

Glyphs Use for: treating icons as fonts

The GitHub docs
Learning about version control systems used to be a daunting task. Fortunately, the team at GitHub strives to make its product as accessible as possible. Visiting the help pages provides a torrent of information about both simple and advanced features of their various free and premium offerings.

GitHub's free service is not for everyone, but you can pay a bit extra and demand the best by going Pro! It's definitely worth a try.

Use for: drawing graphs
Displaying graph data can be difficult. This framework takes care of rendering and even manages interactivity.

WebHostingHub provides this custom font containing a load of nicely designed symbols.

The GitHub blog

Bourbon Use for: enhancing CSS

Gossip, updates and the latest news from the house of GitHub can be found in the official blog.

Sass simplifies long CSS declarations. Bourbon adds mix-in support in order to make your CSS even shorter.

Search syntax
searching-github
Find code with the advanced search syntax in the Search box.

Git Manual
GitHub is a hosted instance of Git. More information on Git looms in the documentation on the website.

Use for: showing off video in a widget
This is a svelte-looking widget dedicated to HTML5 video.

slick
Use for: making HTML5 carousels
Slick delivers what its name promises: a highly efficient carousel for highlighting website content and products.

Awesomplete
Use for: autocompleting input
Users hate typing. This slick autocomplete text box does its magic without needing bulky external frameworks.

RandomColour
Use for: generating colours
Generating sensible random colours is an art. This helper library is dedicated to doing one task well.

App Launch Guide Use for: learning how to launch an app This repository is a guide containing a list of hints that makes sure that your app gets attention!



Build a friendly bot to enhance your Slack group Create a custom bot using Node.js to interact with your Slack group and help automate processes


// Build a friendly bot to enhance your Slack group


Team collaboration is a big part of the development world now, especially as more of us work remotely. If you work on your own remotely, it's very important to still be able to chat, share code and ask for help from a community. Couple these requirements with the benefit of being able to control, in some way, your development processes and you have Slack, the real-time messaging system that you can build upon. In this tutorial we will see how we can extend Slack's features and functions by creating a custom automated bot, written in Node.js with open source plugins and packages. We will interact directly with the bot to create custom commands. We'll also intercept and manage incoming data from the Slack servers and services based upon transaction type. We will create an HTTP server in Node.js that will work with the bot and Slack's custom Slash Command features to run specific tasks and receive input from an incoming Webhook feature. With the highly configurable tools and services available to integrate by default, as well as those available to enable and develop upon like our bot, Slack is an emerging, dynamic way for developers, teams and communities to share, chat and collaborate.

1. Configure Slack integration
Sign up to Slack to get your free channel. Head to the Integration page ( new) from the menu and scroll to the bottom to see the Bot option. Enter your chosen bot name and details, and copy the API token value you are given.

2. Create Node configuration
To start the project off, you will need to create your Node package.json file. You can use the command-line wizard to do this. Now navigate to the desired directory location

for your project and run the npm init command. You are then able to select the default values for most of the options here.

npm init

3. Install dependencies Use your command line to install the required Node packages and dependencies, adding the --save flag to automatically save them to the package.json file. The slackbotapi will form the basis for our project as the wrapper to the Slack API.

npm install slackbotapi async lodash request walk is-up express body-parser node-slackr --save

4. Autorun function Create index.js in the root of your project. This will contain, primarily, the Immediately Invoked Function Expression (IIFE) to autoexecute on startup. Provide your bot name and bot token values and then define a reference to slackBot, a return object from a new function, initSlackBot.

(function bootstrap() {
  'use strict';

  var botName = 'jeeves';
  var botToken = 'YOUR_BOT_TOKEN';
  var slackBot = initSlackBot(botToken);
  // code to go here
})();

function initSlackBot(botToken) {
  var slackAPI = require('slackbotapi');
  var slack = new slackAPI({
    'token': botToken,
    'logging': true
  });
  return slack;
}

6. Detect changes
The bot works by detecting particular events and reacting accordingly. Here we will create a function that outputs a response when a user (and not the bot itself) joins or leaves the Slack group. We can make use of an internal function, getUser, to retrieve the user details from a provided ID value.

slackBot.on('presence_change', function(data) {
  console.log(data);
  var user = slackBot.getUser(data.user);
  console.log('presence_change detected');
  if (user.name !== botName) {
    if (data.presence === 'active') {
      console.log(user.name + ' has joined.');
    } else {
      console.log(user.name + ' has left.');
    }
  }
});
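The branching above can be pulled out into a tiny pure function, which keeps the event handler thin and makes the message format easy to unit-test without a Slack connection (the helper name is our own, not part of the slackbotapi API):

```javascript
// Build the console message for a presence change.
// Returns null for the bot itself so the caller can skip logging.
function presenceMessage(userName, botName, presence) {
  if (userName === botName) {
    return null;
  }
  return userName + (presence === 'active' ? ' has joined.' : ' has left.');
}

// Hypothetical usage inside the handler:
// var msg = presenceMessage(user.name, botName, data.presence);
// if (msg) { console.log(msg); }
```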

5. Initiate the bot

Customise settings

Create the initSlackBot function inside the IIFE. This accepts the bot token as the parameter and then makes use of it to instantiate a new Slack API connection. It is this connection value that we send back from the function. We will then reference it as slackBot throughout the application.

You can customise your bot and have it send a random response from a list defined by you for any message it receives. Check the settings page for more options on how to do this without coding.


Configure your bot and give it some personality with a name and avatar, should you wish, from the options page when setting it up

Top left
The application will log to the console when a user leaves or joins the Slack application, pulled from detecting a presence change event

Top right
The bot detects the character command in the channel to run the isup request and then responds with information for the user


7. Plugin management
To pre-empt our bot's functions growing and making our single file harder to manage, we'll create a function that will load in all JavaScript files within a plugins directory. This will enable us to group all of the plugins into specific files for ease of maintenance and separation.

slack.loadBotPlugins = function loadBotPlugins() {
  var plugins = [];
  var walk = require('walk');
  var walker = walk.walk('./plugins', { followLinks: false });
  // more here
};

slack.use = function use(plugin) {
  plugin(slackBot);
};

8. Load plugin files Inside the plugin loader we can call the walker object to check each file and load every one with a JS extension. It will require that file directly for use and also add it to an array that we’ll use in the end event to display all of our loaded plugins.

walker.on('file', function(root, stat, next) {
  if (/\.js$/.test(stat.name)) {
    console.log('loading plugin %s/%s', root,

Running an IRC bot?
If you're running an IRC bot built in Node then you won't be alone. Transferring that code to work with the Slack API is incredibly simple, and very few changes would need to be made.

Top left
Configure a new Slash Command for your group and specific channels with custom command options to call reliable services or remote APIs

Top right
For another example of setting up the slackbot package check out the official GitHub repository here: xBytez/slackbotapi

Right
Generating an incoming Webhook URL for your application will enable you to send data via HTTP directly into Slack for custom commands

stat.name);
    try {
      slackBot.use(require(root + '/' + stat.name));
      plugins.push(root + '/' + stat.name);
    } catch (err) {
      console.error(err);
    }
  }
  next();
});

walker.on('end', function() {
  console.log('plugins loaded: %s', plugins);
});

9. The first plugin
Now we will create a new file in the plugins directory called ‘isup.js’. This will call the API for any given URL that we pass to the bot. We will require the events module, as we'll use the emitter to detect requests. Inside the module we need a blank bot variable, as well as loading in the is-up package.

'use strict';

var events = require('events');
var emit = new events.EventEmitter();

module.exports = (function() {
  var bot;
  var isup = require('is-up');
  // more here
})();

10. Emit on action
Our module will detect a character in a posted message and then run a function. In order to manage the event transmission, we will need to request an event emit to run that function when an event called isUpRequest has been fired. What we will do next is define how that event is fired.

emit.on('isUpRequest', function(data) {
  isUpRequest(data);
});

11. Make the request When the event is emitted it will run the following function and send in the data from the message. Here we strip back the URL from the message text and if it’s a reasonable size, send it to the isup method. The bot will post back to the channel with a response for the user using the sendMsg function.

function isUpRequest(data) {
  var url = data.text.split('');
  url.shift();
  url = url.join('').toLowerCase();
  url = url.replace('https://', '');
  url = url.replace('http://', '');
  url = url.split('|');
  if (url.length > 1) {
    url = url[1].substr(0, url[1].length - 1);
    isup(url, function(err, up) {
      bot.sendMsg(data.channel, url + ' is ' + (up ? 'up' : 'down') + ' for me' + (up ? '...' : ' too.'));
    });
  } else {
    bot.sendMsg(data.channel, 'Did you send me a real URL? I couldn\'t match it.');
  }
}
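The string handling above can also be isolated into a pure helper, which makes Slack's `<http://domain|domain>` link markup easy to test without a running bot (the helper name is our own):

```javascript
// Extract the bare domain from a Slack-formatted command such as
// '^<http://example.com|example.com>'. Returns null when no Slack
// link markup ('|' separator) is present.
function extractUrl(text) {
  var url = text.slice(1).toLowerCase();  // drop the leading '^'
  url = url.replace('https://', '').replace('http://', '');
  var parts = url.split('|');
  if (parts.length > 1) {
    // strip the trailing '>' of the Slack link markup
    return parts[1].substr(0, parts[1].length - 1);
  }
  return null;
}
```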

12. Module init and detection The return from the plugin module initialises the entire thing. Passing in the Slack reference we can detect an


Free hosting for your bot
One important factor to consider when building your bot is where to host it. You may already have suitable servers (whether physical or virtual) at your disposal that can serve your Node application accordingly, or it may be enough to run it on your own local machine during the day and close it down when you finish. Should you wish to use the Slash Commands to GET or POST to an HTTP server you will definitely need something remotely accessible, and there are many free options available to you for hosting. OpenShift, by Red Hat, offers a free tier of Node.js hosting (as well as other ‘cartridges’ or languages) and has the benefit of deploying your code automatically with every commit to a private Git repository it creates.

incoming message. Using some string functions we can then subsequently check for the existence of a specific character to lead the request. If found we are then able to fire the isUpRequest event. We can easily do so by using the following:

return function init(slack) {
  bot = slack;
  slack.on('message', function(data) {
    if (data.hasOwnProperty('text')) {
      if (data.text.indexOf('^') === 0 && data.text.length >= 4 && data.text.split(' ').length === 1) {
        emit.emit('isUpRequest', data);
      }
    }
  });
};
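The guard conditions can be expressed as a small standalone predicate, handy for testing the command detection in isolation (the function name is ours, not part of the API):

```javascript
// True when a message looks like a one-word '^<url>' command:
// starts with '^', has a sensible minimum length and contains no spaces.
function isUpCommand(text) {
  return text.indexOf('^') === 0 &&
         text.length >= 4 &&
         text.split(' ').length === 1;
}
```

With this in place, the handler body reduces to firing the isUpRequest event whenever the predicate returns true.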

13. Incoming Webhook integration In a browser visit your Slack group channel integration menu and select ‘enable an Incoming Webhook’. Select a channel and then copy the generated Webhook URL as we’ll need that to associate incoming JSON data from our application to post back into the channel.

14. Express server plugin Create a new plugin called ‘express.js’. This will run a Node express server that the bot can interact with. Once again, define a module for the plugin and require the necessary files at the top for event management as well as for making HTTP requests.

'use strict';

var events = require('events');
var emit = new events.EventEmitter();
var request = require('request');

module.exports = (function() {

// code to go here })();

15. Slackr Webhook
Within the module, require the packages that enable our Express server. The node-slackr package manages our Webhook integration as we pass it our generated URL. Configure your Express server details with IP/host name and port number.

var express = require('express');
var bodyParser = require('body-parser');
var app = express();
var slackr = require('node-slackr');
var webhook = new slackr('your_incoming_webhook');
var config = {
  "web": {


    "ip": 'your_server_ip',
    "port": 9999
  }
};

16. Default route
The return statement of the module will initialise the Express server and set up the routes. Here we'll create a default base route for our web app. This can be useful for sending uptime requests to the domain to make sure your bot is always up and running, as well as looking out for any issues.

return function init(slack) {
  app.use(bodyParser.urlencoded({ extended: false }));
  app.get('/', function(req, res) {
    res.send('nothing to see here... move along');
  });
  // custom paths go here
  var server = app.listen(config.web.port, config.web.ip);
  return server;
};

17. Notify the Webhook

Top left
Test the endpoints are accessible from your running Express server in your browser. Also note the default route and custom command route with expected outputs

Top right
The Slash Command Webhook notification prompts the bot to send a channel-wide message as well as notify you of your action

Right
Custom integrations are available for selection, such as the bot and Slash Commands, to help you create something specific for your group


17. Notify the Webhook
Add any additional routes below the default base route. Here we will create a /facepalm route that uses a query parameter sent in a GET request as the name of the recipient. The JSON data is sent to the Incoming Webhook for processing and a response is sent via Express for standard output.

app.get('/facepalm', function (req, res) {
  // Webhook notification
  webhook.notify({
    "text": req.query.user_name + ' facepalmed ' + req.query.text
  });
  // Express out
  res.send('You just facepalmed ' + req.query.text);
});
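The payload logic of the route can be pulled out into a plain function so it can be exercised without Express running. This refactor is our own sketch, assuming the same user_name and text query parameters used above:

```javascript
// Sketch: build the Incoming Webhook payload from a parsed query object,
// mirroring the /facepalm route.
function buildFacepalmPayload(query) {
  return { text: query.user_name + ' facepalmed ' + query.text };
}

console.log(buildFacepalmPayload({ user_name: 'sanne', text: 'greg' }));
// { text: 'sanne facepalmed greg' }
```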

18. Create Slash command
Back in the Slack group integration menu, we will now enable Slash Commands. Create a new command, /facepalm. Provide the URL for your Express server route and select the HTTP GET method. Customise your command with hints and descriptions to assist users.

19. Express in action
Run the application using the node index.js or supervisor index.js command. Visit your Express server in your browser and test that the endpoints are accessible. You should be able to send the query parameter to the endpoint to generate the expected response for the bot to send.

20. Get inspiration
Due to its incredible ability to allow custom integrations and extensions, Slack is a popular tool for many developers who have built their own open source bots and plugins. Make sure that you check out the official repository listing for inspiration from members of the development community:

// Build a friendly bot to enhance your Slack group

Integrations available by default
One of the many benefits of using the Slack service is the number of services that are already available to be added and integrated into your group. Whilst free tiers are limited to a certain number, you will still have the freedom to choose those that benefit your workflow or the requirements of your team or community. The services include built-in Bitbucket and other version control commit hooks, notifying you of who changed what; issue tracker services and monitoring tools to let you know what has broken and help you fix it; and Giphy API tools for when nothing but an animated GIF will do to brighten the day and share some fun. Couple these with your own custom commands and you can build something really powerful.



Create API schemas with Swagger Improve your API skills and generate detailed API schemas with minimal coding and well-crafted documentation


// Create API schemas with Swagger


APIs are great – they expose content to clients and developers, and can help manage the state of data across a number of platforms and systems. However, both the creation and consumption of APIs can be tricky when there is untested code or poor accompanying documentation for reference. Ideally you want your code to pass tests, and you want your clients or consumers to know what to send and what to expect back from each response. Providing these crucial details without spending too much time creating this information is key, and Swagger is here to help you do just that. In this tutorial we will take a look at Swagger and how we can build an API and its underlying definitions and documentation using common Node.js frameworks. We will start off by installing the required Node modules and libraries to get up and running, look at some of the available features of Swagger to generate skeleton applications and sample code to use as a reference, and finally we will learn how to run local iterations of your API with and without writing any underlying server-side code. Using the highly readable YAML markup structure and the flexibility provided by the Swagger library and Node implementation, you will be writing and releasing well-documented, well-structured APIs in no time at all.

1. Install Swagger To begin with, you will need to install the Swagger module. Open up your terminal or command prompt

window and enter the following command. The additional flag will install Swagger as a globally available module so that it is accessible from any directory on your machine.

npm install -g swagger

2. Project creation
Within the CLI navigate to the desired location of your project. Run the Swagger ‘create’ command, providing the name of your new project. The module will display options for you to choose your desired Node REST framework to work with. We’re selecting Hapi.js, but feel free to choose your own.

swagger project create my_music_app

3. Run the API
With the project created, let’s see what the skeleton application has built for you. Run the command to start your Swagger project, which will run the underlying Node server on a local port instance. Visit the URL provided in the command-line interface to see the sample REST endpoint in action.

swagger project start my_music_app -d

4. The Swagger editor
In this step we will run the following command to open the editor and load up the current project. This will run a local server and automatically open up in your default browser to show you the associated Swagger.yaml file, which will form the core of your API definition and documentation.

swagger project edit my_music_app

5. Interactive file updates
You will notice now that the swagger.yaml document is open on the left, and that the parsed readable content is on the right, generated by Swagger. Any changes that are made to the open file will be instantly visible on the right, as well as automatically saved to the file in your local code editor. Try changing the title value to test this out.

Unit testing
If you want unit tests, then you can always put Swagger to work and make it generate tests using the information that you have already given it about your schema definitions. Take a look at Step 20 to find out how.


Install the required Swagger Node module as a global dependency to make it available in whatever directory you choose to work in

Top left
The project creation wizard will guide you and offer a selection of industry-standard proven API frameworks for you to work with

Top right
Here we are running the built-in server in debug mode, using the optional flag and calling the default API method directly in the browser


6. Live testing
The generated sample application includes an API endpoint, /hello, which can be accessed directly in the browser if the Swagger API is running. It can also be tested live using the Swagger editor. Click the ‘Try this operation’ button to use the editor to interact with your API and view the returned header and body information.

7. Create your own path
Now we will work on the paths. Under the paths declaration, create a new path for /artists. The custom Swagger route controller will map this path to a new artists.js file that we will be creating shortly. We want to manage a GET HTTP request. The operationId value here dictates the name of the method in the controller to call for this route.

paths:
  /artists:
    x-swagger-router-controller: artists
    get:
      description: Returns an array of artists
      operationId: getArtists

Automate tasks You could further optimise your Swagger tasks and unit tests by adding them to a Grunt or Gulp task file. You could then run your tests with every file save to make sure that your tests continuously pass.

Top left
Open up the Swagger editor on a local private port server to edit your YAML file and interact with the API through the available testing interface

Top right
The editor uses a common module to manage the actual code editing, and the interface acts and behaves like a standard editor tool, complete with code folding

Right
Testing the API default method directly from the Swagger editor gives you information on the headers and body response for quick visual debugging


8. Send parameters Our route definition will be used to filter a remote API to query for information. As such, we will want to send values through to the endpoint as well as ensure we document these too. Set the parameters block and then nest a resource called artistName, a string which will be expected as a URL query parameter.

parameters:
  - name: artistName
    in: query
    description: the name of the artist to search for
    required: true
    type: string

9. Error response definition
The API will respond with some form of info, and we need Swagger to be able to manage these responses, whether they are good or bad. Create a responses element and set the default schema to reference an error response definition, using the $ref syntax, for anything other than a 200 status code, which we get to in the next step.

responses:
  default:
    description: Error
    schema:
      $ref: "#/definitions/ErrorResponse"

10. Successful responses Add the successful response definition to the block, setting the status code as the success requirement. Any 200 status will be classed as a success and will therefore use the applied schema reference which we’ll set here as getArtistsResponse.

"200": description: Success schema: $ref: "#/definitions/getArtistsResponse"


Swagger specifications
The Swagger.yaml file forms the core of your API definition process and is the key ingredient in Swagger’s interpretation of your API requirements. Written in standard YAML markup, it is very easy to read and manage. There are a lot of options, sections and properties for you to use, should your API require them. They may seem a little overwhelming at first glance, but this wealth of options can help you to build your perfect API definition. You can find out all you need to know about the available nodes, properties, types and options in the incredibly detailed Swagger specification document, available on GitHub here:


11. Response data modelling
In our example we now have the intended structure of the response JSON data from the API – a benefit of calling a remote third-party solution. We can use this structure to build up our response schema for Swagger validation, documentation and testing purposes.

{ "artists" : { "href" : " query=butch+walker&offset=0&limit=20&type=ar tist", "items" : [ { "external_urls" : { "spotify" : " artist/7qKoy46vPnmIxKCN6ewBG4"

"followers" : { "href" : null, "total" : 23552 }, …

type: "object" properties: artists: type: “object" description: "The core artist response” properties: total: type: "number"

12. Core response model
Create the success response schema within the definitions section, as we referenced previously in Step 10. We will expect an object back with the root property of artists, which we will expect to be an object. By building up the nested properties we can set the definitions for our API response.

13. Nested arrays
The official API response will return an array of information within the artists block called items. Add this to the schema by setting the type accordingly. If you declare an array, Swagger will require you to then set the children for each item, using a second nested items block.




type: "array"
description: "The array of artists matching the criteria"
items:
  type: "object"

description: "The artists popularity" uri: type: "string" description: "The artist uri"

15. Run in mock mode 14. Continue as required Now continue to build up the expected response schema for as many properties as you need too. The benefit of writing this in the YAML structure is the ease of readability and maintenance, and that’s due to the nested properties. You can add description blocks here to help enhance the generated documentation too for API consumers.

properties: id: type: "string" description: "The artist id" name: type: "string" description: "The artist name" popularity: type: "number"
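Assembled, the definitions block from Steps 12 to 14 looks like the sketch below. Note that the ErrorResponse definition referenced in Step 9 is not printed in these steps; the minimal version shown here follows the shape generated by the swagger-node skeleton, so treat it as an assumption:

```yaml
definitions:
  getArtistsResponse:
    type: "object"
    properties:
      artists:
        type: "object"
        description: "The core artist response"
        properties:
          total:
            type: "number"
          items:
            type: "array"
            description: "The array of artists matching the criteria"
            items:
              type: "object"
              properties:
                id:
                  type: "string"
                  description: "The artist id"
                name:
                  type: "string"
                  description: "The artist name"
                popularity:
                  type: "number"
                  description: "The artists popularity"
                uri:
                  type: "string"
                  description: "The artist uri"
  ErrorResponse:
    required:
      - message
    properties:
      message:
        type: "string"
```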

Top left
The desired response that we get from the third-party API request will now enable us to fine-tune our internal API response definitions

Top right
Add as much information to the response schema as you can. Document every property where possible to improve user adoption

Right
Running the server in mock mode helps your definitions improve as you build them, without writing any server-side controller code


15. Run in mock mode
Creating such a detailed response definition and setting each property type lets you mock a response as you write your Swagger doc. By running the internal server in mock mode, you can obtain a sample response using the provided types without writing any controller logic.

swagger project start -m
or
swagger project start --mock

16. Compare mock response
With the internal mock server running, hit the artists endpoint in your browser. The mock data will be basic low-level responses, but they will match the type specified for each property you created earlier. Compare it to the official API response and confirm your structure is as intended.

17. Import the module
Here we’ll be using a Node module packaged and ready for use to make our interactions with the third-party Spotify API easier. Install the module and save it as a dependency to your project so it is registered with your package.json file.

npm install spotify-web-api-node --save

18. Create your controller
Create a new file called ‘artists.js’ in the api/controllers directory of the project. This filename matches the one given for the x-swagger-router-controller value earlier. The operationId value matches the method exposed in the controller, in our case getArtists.

'use strict';
var util = require('util');
var SpotifyWebApi = require('spotify-web-api-node');
var spotifyApi = new SpotifyWebApi();

module.exports = {
  getArtists: getArtists
};

19. Method definition
The getArtists method will be called by the API route, and it will need to check for the artistName value, sent through in the request context as a parameter. We are then able to send that value to the third-party API and forward the JSON response on to the end user as might be required.

function getArtists(req, res) {
  var artistName = req.swagger.params.artistName.value;
  spotifyApi.searchArtists(artistName)
    .then(function(data) {
      console.log('Search artists by "' + artistName + '"', data.body);
      res.json(data.body);
    }, function(err) {
      console.error(err);
    });
}

20. Generate tests
Use Swagger to assist your code quality and delivery. Running the following command will ask the library to create a test suite for you using the route options and response definitions that you have declared in your Swagger.yaml file.

sudo swagger project generate-test
swagger project run-test

Using the CLI tool, contextual help is always at hand with the inclusion of the optional -h flag with each command

The Swagger editor will parse and verify your config file as you work on it and will let you know if you make a mistake
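The two-callback then(success, failure) shape used in getArtists can be seen in isolation with a stubbed search function. Here fakeSearchArtists is our own invention for illustration; the real call goes through spotify-web-api-node:

```javascript
// Sketch: resolve with a Spotify-like body on success, reject on bad
// input, and handle both outcomes with then(success, failure).
function fakeSearchArtists(name) {
  if (!name) {
    return Promise.reject(new Error('artistName is required'));
  }
  return Promise.resolve({ body: { artists: { items: [{ name: name }] } } });
}

fakeSearchArtists('butch walker').then(function (data) {
  console.log(data.body.artists.items[0].name); // 'butch walker'
}, function (err) {
  console.error(err.message);
});
```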

21. Separate modules
Swagger is versatile and can be used with pretty much any programming language or framework that you might prefer. Each toolset, including the editor, the live testing user interface and the core module itself, is available to download separately should you wish to use it. A great resource for tools and community language additions can be found here: open-source-integrations.



FUTURE HTML
Web Components, the new standard in modular HTML development


Tom Dudfield
Senior developer

Web Components are a huge step towards a truly modular way of developing front-end interfaces. This mirrors design practices that have been so successful and stable within C# development for some time.

Tim Stone
Lead front-end developer

Web Components will soon become the de facto way to code reusable widgets. Support isn’t universal because it relies on implementing four standards, but you can use it today with polyfills.


// Future HTML









Luke Fribbens Development director at IA Digital


Web Components are a developer’s dream. We have seen componentisation and its advantages in other parts of development for years – this is the next big step. The best thing since HTML5!

Matthew Bowden Creative director at Vitality Health


Tuck away the code we love to tinker with and you’re left with components exhibiting consistent behaviour, rendering correctly and most importantly, enabling a coherent brand experience across applications.

Simon Hutchings Owner of Visualise Graphics


Web Components provide so much more control of our markup – this is a fundamental shift in how we develop for the web.


Web Components are a totally new way of building parts of a website that replaces HTML, CSS and JavaScript, and are only really used by the mysterious masters of the web, who tell us all how such things should be done, right? Wrong. Web Components have been around for a long time, hiding just beneath the surface of our beloved HTML. They are simply a combination of HTML, CSS and JavaScript packaged together as an HTML element, and they actually account for many of the huge strides that have been made with HTML5. One of the best examples is the <video> element: by simply adding a src value to this, your video is displayed in the browser with button controls and time indicators. You have a player from just this one HTML element. These are all actually Web Components; we just never thought to stop and look at them. When you saw this as part of the HTML5 specification you may have thought, ‘Wow, that’s great’, but have you ever wondered how it was all built? How are the components we use and take for granted every day actually created? And surely we can’t just define HTML spec for ourselves, can we? Well, that is exactly what we can do. Browsers have been working really hard over the past year or so to give developers access to this hidden part of HTML. We are not quite there yet, but the direction taken by all browsers is a good sign of things to come. There is also a host of JavaScript polyfills available to give us the methods needed to take full control of our own HTML. Web Components are constructed from four core elements of support that browsers have been working on. Custom Elements enable the creation and manipulation of custom HTML elements. HTML Imports can import packaged HTML, CSS and JavaScript into an HTML page, whereas HTML Templates hold content or information that is not rendered on page load, but is available to render at runtime.
Finally, Shadow DOM enables the structure of a web component to be separated from the DOM of the page, providing encapsulation. These four elements will be explained in more detail on the next page. The importance of this change in HTML development cannot be overstated. The evolution of HTML as a language, although excellent, has been pretty slow going. This is mainly due to it being in the hands of the browser vendors, with possible developments and ideas submitted by the general development community. These browsers then prioritise areas of development and gradually define new specifications and build support for them into their applications. But the ability to build our own Web Components gives the entire community control to directly contribute to the evolution of HTML.



Future HTML

The four core elements

Web Components vs Standard HTML

Custom Elements
Probably the most obvious aspect of a web component is the ability to create, modify and control HTML elements. The introduction of Custom Elements into browser standards enables us to break away from the limited vocabulary that HTML offers. This means we can create true markup semantics that are really meaningful in the context of the application, whatever that may be. Gone are the days of endlessly nested <div> tags with a littering of classes trying to suggest a structure, as Custom Elements provide us with the ability to write truly beautiful HTML. There is a range of new JavaScript methods that enable you to create, register, style and add JavaScript properties and methods, all within one new HTML element, and these are outlined really well at html5rocks.com/en/tutorials/webcomponents/customelements. One thing that is worth noting is that standards have been defined for the naming conventions of Custom Elements. Any element name must contain a hyphen; this ensures that Custom Elements are easily identified as such amongst core HTML elements, as well as mitigating the risk of any conflicts with future core HTML elements.
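The hyphen rule can be expressed as a simple check. The regular expression below is a simplified sketch of our own of the convention described above; the full Custom Elements specification allows a wider range of characters:

```javascript
// Sketch: a valid custom element name is lowercase and must contain
// at least one hyphen, e.g. 'my-signup' or 'x-tag-demo'.
function isValidCustomElementName(name) {
  return /^[a-z][a-z0-9]*-[a-z0-9-]*$/.test(name);
}

console.log(isValidCustomElementName('my-signup')); // true
console.log(isValidCustomElementName('signup'));    // false: no hyphen
```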

Web Components Pros
A wealth of UI components are already being built and reused by the development community.
Complex UI functionality can be achieved using very little HTML markup within the page DOM.
All of a component’s CSS and JavaScript is encapsulated, providing separation from the page DOM and stability as standard.

Web Components Cons
Standards could be ignored, causing a confusing array of HTML elements, many of which do the same job.
Working with code created by another developer could be confusing and difficult to use.
The amount of documentation is currently limited. However, as the standard develops, support and documentation will only increase.

HTML Templates

Templating HTML markup is not a new concept; there are many frameworks that provide this type of approach to building a user interface. They all provide a more streamlined way of managing blocks of HTML, but HTML Templates as a defined part of the HTML spec are much more elegant. The new <template> tag can be used to bundle HTML, CSS and JavaScript together and keep all of its contents encapsulated. Think of these blocks of code as reusable snippets that are easily shared with other developers and projects. The time spent writing the same HTML structures will be dramatically reduced, without the need for a framework to do so. Content that resides outside a template, within a DOM element, can also be used inside it. This can be achieved by using the <content> tag within your template and then attaching that template to the desired element. Anything within a template is not rendered in the browser on load but simply made available to be integrated with the DOM using JavaScript. Any content within a template can be used and reused as you wish, and if bound to new Custom Elements, the Shadow DOM can be kept out of the DOM as we know it.

HTML Imports

So if we are creating a load of new Custom Elements and templates and bringing these components together, our HTML markup is going to get cluttered and confusing really quickly. This could rapidly lead to unmanageable markup and even more verbose code than before we used these new techniques. One of the key benefits of using Web Components is the elegance of the code that can be produced; it’s all good for your code in the browser to be beautiful, but if the code you are working with is a mess then something obviously isn’t quite right. This is where HTML Imports come in: the <link> tag can be used with a ‘rel’ attribute value of ‘import’, and this tells the browser to import the contents of the source file into your page. It seems just like an include, but when we combine this with HTML Templates we can keep our template code blocks, including any CSS and JavaScript, in separate files. Our working code becomes just as well written as that seen by the browser, and templates are even easier to share and reuse.

Shadow DOM

So last but most definitely not least, we have the Shadow DOM; this is arguably the most powerful part of Web Components. Each HTML element in the DOM, whether it is core or custom, can have its own DOM hidden within it. The JavaScript method createShadowRoot() can be used to create this for a new Custom Element. A template that has been imported can then in turn define the structure of this Shadow DOM. It works by bringing all four elements together, giving us a fully functional web component to work with.

Hands on with Web Components

This tutorial will take you through using and creating Web Components within your projects. As browsers are still undergoing development to support Web Components, Polymer will be used. Polymer is a web component polyfill and library created by Google. This adds support to all modern browsers, as well as a host of preset components ready to use.

1. Install Node.js and Bower
The recommended way to use Polymer is to install it using Bower via Node.js. Make sure Node.js is installed, then install Bower. The -g flag installs Bower globally.

npm install -g bower

2. Add Polymer to your project
Now use Bower to install Polymer within your project. Make sure you are within your project directory and then install it in your CLI as shown. The ‘bower init’ command will ensure a bower.json file is created. The ‘--save’ flag will add Polymer as a dependency within this file.

D:\Web Designer Mag\Web Components Feature\my-component
bower init
bower install --save Polymer/polymer#^0.5

3. Create your first element
Use an HTML Import with Polymer to create your Custom Element using <polymer-element>. Add this as a new HTML file. The name of this element is added as an attribute and the contents held within a <template> tag.

<link rel="import" href="../bower_components/polymer/polymer.html">

Standard HTML Pros
Browsers know how the current HTML standard works and offer wide support.
Taking control of new elements is restricted and standardised by the current set of browsers.
The best thing about the current HTML standard is that it is already a universally recognised language. This means that millions of designers and developers already know how to use it.

Standard HTML Cons
The development of new elements is currently a slow process.
Restricted by the decisions and prioritisation of browser development, which goes against the open web as a concept.
Large, complex HTML structures are required to produce common UI elements, and UI elements are not encapsulated from the rest of the DOM, causing CSS conflicts.

Check out Polymer’s Elements collection for ready-made web components

There is much more going on here than it may seem: the Shadow DOM is not just a way of hiding code, it actually deals with the encapsulation issues inherent in HTML, CSS and JavaScript. For example, if you styled a selector that just happened to match a selector within your Shadow DOM, you would expect it to be affected. But the Shadow DOM protects its contents from this, keeping a defined scope for the component. This is vital in enabling the production of reusable components. The door is not completely closed though: you can style the contents of a Shadow DOM by using the ::shadow selector in CSS, or alternatively style an element from its own Shadow DOM using the :host selector. The Shadow DOM seems like black magic at first, but when you use it you’ll notice that it’s used everywhere.

Custom Elements provide us with the ability to write truly beautiful HTML

<polymer-element name="my-signup" noscript>
  <template>
    <p>Hi this is <strong>my-signup</strong>. You are looking at the Shadow DOM</p>
  </template>
</polymer-element>

4. Create your app
Add an index.html file to your project root. Load the webcomponents.min.js polyfill from bower_components. Import your new Custom Element and then reference this. When viewed in a browser you will see your Shadow DOM content and be able to inspect this using Chrome.

<!DOCTYPE html>
<html>
<head>
  <script src="bower_components/webcomponentsjs/webcomponents.min.js"></script>
  <link rel="import" href="elements/my-signup.html">
</head>
<body>
  <my-signup></my-signup>
</body>
</html>

Browser support

The specifications for Web Components from the W3C are still in progress, but browsers are working to them. Chrome and Opera are leading the way with stable support for all four aspects of Web Components. Firefox has full support for Templates, and both Custom Elements and Shadow DOM support are close. Safari supports Templates, and Internet Explorer is collating and prioritising its development roadmap for Web Components. All these browsers are working towards the same goal, which is the future of HTML as we know it. It can be difficult to stay up to date with support across so many development paths; the site provides links to information on each of them.

5. Style your element Within your element file, add a <style> tag to the template and adjust how the <p> tag is displayed. Add another <p> tag to your index.html.

//elements/my-signup.html
<polymer-element name="my-signup" noscript>
  <template>
    <style>
      p {
        font-family: Verdana, sans-serif;
        font-size: 1.2em;
        color: #912A07;





Polymer POLYMER-PROJECT.ORG The Polymer project was started by Google’s development team and makes the Web Components landscape much easier to move into. Any polyfills required to support Web Components across browsers are immediately available. On top of that, Polymer ships with two collections of prebuilt elements ready for production use – Core and Paper. The Core elements encompass many useful, single-purpose components for core functionality. These work with layouts, inputs, data handling and general application structure. The Paper collection covers much more UI-focused elements that are highly visual and interactive, and incorporates controls, interactions and UI transitions. There is a wide range of demos within the Polymer project that can help you get up to speed with using and building Web Components. There are tutorials demonstrating key aspects of web component builds, and the actual API documentation is excellent. The Polymer project team have even produced a Designer tool for quickly prototyping apps using Polymer, with the ability to save experiments within GitHub. Polymer is built based on the (still-in-progress) W3C specifications for Web Components, with the view that when browsers have adopted full support, the transition from the polyfills to native support will be smooth. All Polymer code is available on GitHub, and benefits from ongoing open source development. Any bugs can be reported directly and visibility of upcoming releases is readily available.

      }
    </style>
    <p>Hi this is <strong>my-signup</strong>. You are looking at the Shadow DOM</p>
  </template>
</polymer-element>

//index.html
<body>
  <p>This text is within the page DOM and is unaffected by component styles.</p>
  <my-signup></my-signup>
</body>


Understand the standards Before you dive in, make sure that you get your head around the standards and best practices. We will all benefit from this in the long run; all developers have been bitten by maverick coding at some point. If you haven’t already, then you will be soon enough.

Bosonic BOSONIC.GITHUB.IO Bosonic is a good alternative to Polymer: it provides similar polyfills and the tools required to build Web Components across even older browsers (including IE9). The syntax for element creation matches the current W3C specifications exactly, and working with it is very similar to working with Polymer. There is a range of elements available for production. This is all available on GitHub and open to community contributions. The getting started guide and documentation are structured step by step, making them great for learning about Web Components. If you want to get involved in developing and contributing to a Web Components framework, Bosonic is a good choice. It is well structured and open, but without the scale of contributors that Polymer has (at present).

6. Style the element host

7. Style the element from your app

If you add :host as a selector to your element styles, you can style the parent element of your Shadow DOM, i.e. style the Custom Element tag in your index.html from within the element itself.

Any new Custom Element can be styled from its parent page (in this case this is index.html). If you add styles to this page for your new element they will overrule those set using the :host selector. But the elements within your Shadow DOM will still be protected.

<style>
p {
  font-family: Verdana, sans-serif;
  font-size: 1.2em;
  color: #912A07;
}
</style>

<style>
my-signup {
  background-color: red;
}
my-signup p {
  color: white; /* this will not work */




X-Tag X-Tag is the offering from the Mozilla team, a small library providing the ability to build, extend and manage your Web Components. It is built from the same W3C polyfills that are provided by the Polymer project, with extensions to offer IE9 support. This library uses JavaScript to register new elements, and this script references any HTML Template elements that are to be used, as opposed to the Polymer approach of the <script> being integrated into a <polymer-element> alongside the HTML Template. One of the most comprehensive examples of use is Mozilla’s Brick, a collection of ten UI components that can be reused in production for building interactive UIs. It is available as a Bower installation and is a really quick way to start using these components in your project. This library could rapidly evolve, but at present it doesn’t have the uptake that Polymer has.
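For flavour, registering an element with X-Tag looks roughly like this. This is a minimal sketch: the element name, greeting text and script filename are invented for illustration, and only the basic `xtag.register` lifecycle pattern from the X-Tag documentation is assumed.

```html
<!-- Hypothetical page using X-Tag; paths and names are illustrative -->
<script src="x-tag-core.min.js"></script>
<script>
  // xtag.register() defines a Custom Element and its lifecycle hooks
  xtag.register('my-greeting', {
    lifecycle: {
      created: function () {
        // runs when an instance of the element is created
        this.textContent = 'Hello from X-Tag';
      }
    }
  });
</script>

<my-greeting></my-greeting>
```

Note the contrast with Polymer: the registration lives in plain script rather than inside a `<polymer-element>` declaration.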


As with any new browser technology, it takes time to establish a stable specification, consistent rollout across all browser vendors, and the best practices to be defined by the community. Web Components are no different, but it does seem that

}
</style>

This resource site brings together the best practices, updates and developments of everything Web Components related. It provides a wealth of useful content, from presentations and podcasts given by web component framework developers to the current state of native browser support. The explanations around the base polyfills (used by all libraries) can give good insight into the work that has already gone into enabling us to work with Web Components today. There are also key links to galleries, libraries and the web component development community, many of which are instrumental in the development of the tools we have already mentioned. One of the best things to take a look at on this site is the presentations: they are often an opportunity to see how other developers are using Web Components or what they are trying to achieve. This is a great place to keep coming back to and will help you to keep up to date in the Web Components landscape. The core vision of this site is to provide a reference point for everyone in the open development community, ensuring that Web Components best practices are defined and followed, making life easier for all of us.

lessons have been learnt from previous technology developments. There is a base set of Web Components polyfills available as webcomponents.js that are used consistently by web component libraries. Some of these aforementioned libraries extend them, but this core set remains.

}
</style>

8. Overruling shadow styles

9. Build the signup form

If you use ::shadow you will find that any styles defined within a component can be altered.

Add the signup form markup and default styles to the template. The styles don’t affect the rest of our page, so base CSS selectors can be used without conflicts.

<style>
my-signup {
  background-color: red;
}
my-signup::shadow p {
  color: white; /* this will work */

<polymer-element name="my-signup" noscript>
  <template>
    <style>
      :host {
        display: inline-block;

Internet Explorer 9 Remember Polymer does not support Internet Explorer 9, so if you do actually need to support this older browser for your project then X-Tag or Bosonic would be a better choice of library.

This concerted effort to work to a best practice approach means that as native support increases, the removal of such polyfills will be easier. Currently, these fixes are a necessity, but based on the rapid progression of support and the wide range of reusable components already available, it won’t be long before they are no longer needed.

10. Add values to your inputs Extend the input elements to have values that are set to namespaces within handlebars templating syntax. This will make any value entered available within the context of our component. You should see the ‘name’ value within the <p> tag as you type.

<p>{{name}}</p>
<div class="form-row">
  <label>Name:</label>
  <input type="text" id="signup_name" value="{{name}}" />
</div>



Future HTML

The community One key difference between the development community for Web Components and that of other new technologies is the sheer number of people that Web Components does and will affect. It will be impossible for anyone working in web development to completely avoid using Web Components, and it is easy to underestimate the impact this may have on the industry. With all this in mind, the collaboration of everyone involved is vital to ensure that the road to full web component support and take-up is as smooth as it possibly can be.

Keep up to date Keep an eye on the progress of W3C specifications, browser support for all of the key aspects of Web Components and the elements that are added to libraries in any updates. All of these areas will continue to drive and define the best practices that will end up being followed by all, keeping us on the same dev path.

11. Introduce some script Add a <script> tag to your element. Within this, call the core ‘Polymer’ class and set a default value for the {{name}} property we’ve defined in the markup. Remove the noscript attribute from the <polymer-element> tag.

<polymer-element name="my-signup">
  <template>…</template>
  <script>
    Polymer({
      name: "Luke" // you can set a default value here
    })
  </script>
</polymer-element>

goals of many application frameworks of today. The main difference is that this approach will be standardised across browsers in the future.

Share what you find

Share your code

Don’t assume that just because the blog post you have found is a few months old, you are the last to find it. Share it with your network and talk to your colleagues about what you find out. Only by doing this and encouraging others to do the same will the community as a whole be able to drive forward maintaining the rate of progression we have seen over the past year.

Most libraries of Web Components provide the means for users to submit their own elements for inclusion; if you have written something useful, it will probably be useful to someone else too. Additionally, most Web Components projects are available on GitHub; if you find or fix issues then do so here so that everyone can benefit. There is an established community of developers who are already contributing to the Web Components landscape. If you aren’t sure about something then they can help and answer your questions. We will probably see a lot more conference talks and presentations on the subject over the next two years, so keep an eye out for those as well.

Using Web Components Even if you start on a small scale, Web Components are ready for use in your projects. They provide excellent code separation and reusability, which is one of the key


component and doing something with them. Add the nameChanged method to your script; whenever the name is changed, it will be logged out in the console.

<script>
Polymer({
  nameChanged: function() {
    if( {
      console.log('Hi ' + + ' I am listening!')
    }
  }
})
</script>

15. Match the email fields

This can now be used to validate that both email fields match. Adjust the ‘validate’ function to check the fields against each other and create a new email_match property within the component for tracking the fields.

validate: function(oldVal, newVal){
  if( === this.email_confirm){
    this.email_match = true;
  } else {
    this.email_match = false;
  }
  console.log(this.email_match);
}

12. Pass data to your component If we know the user’s name we can prepopulate it. Add an element attribute and include it as a listed attribute on our Polymer element (the attributes value can be a comma-separated list to define multiple attributes).

//elements/my-signup.html
<polymer-element name="my-signup" attributes="name">
…
</polymer-element>

//index.html
<my-signup name="Luke"></my-signup>

13. Watching properties Polymer has a range of changed watchers that are excellent for reacting to property changes within your


14. Watching groups Another change watcher Polymer provides is the ‘observe’ method. This can watch multiple properties and fire when any of them change. Add an observe list for both email properties and bind these to a validate method. The values will be logged out when changed.

<script>
Polymer({
  observe: {
    email: 'validate',
    email_confirm: 'validate'
  },
  validate: function(oldVal, newVal){
    console.log(newVal)
  }
})
</script>

16. Display an error Add an error message within the element to be displayed to the user. Use the ‘hidden’ attribute bound to our email_match property so that it only appears when the emails don’t match. We now have field-match validation.

<div class="form-row">
  <label>Confirm Email:</label>
  <input type="email" id="signup_email_confirm" value="{{email_confirm}}" />
  <span hidden?="{{email_match}}">Your email addresses do not match</span>
</div>

17. Set default validation You may notice that the error message is appearing before the user has even started to enter an email


Inspecting the Shadow DOM



Custom Elements

Accessing the Shadow DOM feels a bit like spying, but it is actually quite simple. In Chrome developer tools, access the settings using the cog on the right of the inspection panel. Under Elements, check ‘Show user agent Shadow DOM’, then restart the browser. Now when you inspect the HTML of the page you will see a new node named #shadow-root wherever a Shadow DOM is present; you can expand this node to see what is actually in the Shadow DOM. Without this it would be very difficult to debug any problems within our Web Components. Additionally, this gives us the ability to interrogate Web Components that we haven’t created ourselves. By doing so we can ascertain how to adjust the styles of a core HTML component, such as changing the colours of the HTML5 video player, or understand how a Custom Element has been built. It’s worth taking some time to dig around looking for the Shadow DOM and seeing what you find; you may be surprised how prevalent its use already is.

address. This is because the email_match property does not exist until these properties are changed. Add a default value of true for this within your Polymer script.

<script>
Polymer({
  email_match: true
  …
})
</script>

18. Style the validation Add some further styles within your element to display the error message to go alongside the ‘Confirm your email’ field. Again there is no need for a specific error class here as this is the only instance of a <span> tag within the component.

span {
  font-family: Verdana, sans-serif;
  font-size: 0.8em;
  display: block;
  width: 294px;
  margin-left: 148px;

CUSTOMELEMENTS.IO With over 900 elements, Custom Elements is a huge gallery of Web Components created by the development community. If you need an element fast, this is the place. It is open to submissions and each element is available to fork on GitHub for any adaptations.


20. Use Polymer core elements Polymer has a range of core elements that can be used. We will utilise the core-ajax element, which will enable us to POST the form data to a web service. All you have to do is import core-ajax.html into your element and add the element to your HTML with the form data set as the body attribute.

<link rel="import" href="../bower_components/core-ajax/core-ajax.html">

21. Make the POST URL dynamic To use this signup element in multiple instances you will need to be able to change the POST URL. Make sure that you add this as an attribute on the element in index.html, extend the listed attributes on the polymer-element itself and finally reference this property in the core-ajax element.

//index.html <my-signup name="Luke" url="http://"></my-signup>

19. Add a submit method

22. POST your data to a service

Now add a submit function and bind it to an on-click event on the button. Within this function, bring the name and email properties together so that they are ready to be posted to an HTTP service.

Finally, add a handleResponse function to your script and call the ‘go’ method on the <core-ajax> element within your submitForm function. This will POST the formData and the response will then be returned to the given method (you will need a POST service available to fully run this last step, but you can stub one using Node.js).

submitForm: function(){
  this.formData = {

The Component Kitchen combines some good tutorials alongside a comprehensive library of Custom Elements. Each element has its own page with usage information and a fully working demo. Within the Developers section of the site you can find out how to register your own elements with Component Kitchen, including setting up your demo page.

Polymer Elements BIT.LY/1OICUL0 Although not as large as the previous two component libraries, Polymer Elements is excellent and consistently built. The documentation for each of these is very easy to follow and the range available provides most base application functionality and UI interactions to help you build engaging Web Components-based solutions right now.


Your best CSS ever
Create a colour swatch tool with Vibrant.js
Alter page element colour
Build custom layers with CSS
Expert guide to web 3D
Interactive 3D game with WebGL
Image-based pop-up menus
On-click popup tooltips
On-click transitions
Latest CSS4 selectors
Circular on-hover animation
Flick background image
Slide down on scroll menu
The importance of typography
Animate type & text
UX design
Enhance UX with Hover CSS






// The best CSS ever

Analyse your CSS

5 top tools




CSS Beautifier

CSS linting is just like any other type of linting, in that it is “a tool that flags suspicious usage in software written in any computer language” (from the Wikipedia page on programmatic linting). Front-end developers are likely aware of JSONLint and JSLint. Well, now there’s CSSLint, which analyses CSS and helps developers write CSS that conforms to a set of performance and syntax best practices. ‘Helps’ is understating it really: CSSLint throws errors and, if used as part of a build server – say, TeamCity for example – it will cause builds to fail. This is both a good and a bad thing. It means the rules are enforced with harsh penalties, but bad rules will create an upsetting development environment. That’s the rub really – some of the out-of-the-box defaults of CSSLint are controversial, so it pays the project team to take time customising the rule set. Here are some of the ones that draw controversy for many.

more specificity later in the stylesheet, heading styles should be declared once and not need to change. Again, this is a bit extreme for the everyday project team. It’s likely that the complexity and number of heading styles that can be seen on any website is defined by the design team.

Lots of floated elements on an individual page cause longer paint times because of the extra layout calculations that browsers have to go through. However, unless you organise your CSS on a per-page basis, this rule is completely arbitrary.

Don’t use adjoining classes This is blacklisted because of browser support. IE6 doesn’t support selectors that look like this:


Turn this rule off if the project offers legacy support for IE6 users.

Style headings once, globally What CSSLint is getting at here is when developers write styles for heading elements and then redefine those with

A quick and simple option for creating easy-to-read code.

Don’t use too many floats

.class-one.class-two { /* … */ }
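As a toy illustration of the pattern this rule flags, here is a hypothetical check (not CSSLint’s actual rule code) that spots two class names chained with no combinator between them:

```javascript
// Hypothetical sketch – not CSSLint's implementation.
// Flags selectors that chain two class tokens directly,
// the ".class-one.class-two" pattern IE6 cannot match correctly.
function hasAdjoiningClasses(selector) {
  return /\.[\w-]+\.[\w-]+/.test(selector);
}

console.log(hasAdjoiningClasses('.class-one.class-two')); // true
console.log(hasAdjoiningClasses('.class-one .class-two')); // false (descendant, not adjoining)
```

The real linter parses the stylesheet rather than pattern-matching strings, but the idea is the same: detect a selector shape that a supported browser cannot handle.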

Don’t use IDs It’s not that there is anything inherently wrong with IDs, just that the Object Oriented ideology CSSLint was born of rejects them. OO-CSS says all CSS should be reusable, so the concept of writing an ID into a CSS selector is very foreign. OO-CSS takes the stance that there should be no unique CSS. For the everyday project team this is likely an extreme view, and it isn’t relevant if the OO-CSS methodology isn’t being employed.

ProCSSor Helps clean and organise CSS the way you want it.

Better code hygiene All of these bad defaults make CSSLint look like a bad tool. It really isn’t – there are real advantages to rolling this tool into a project team’s build processes. CSSLint enforces syntax and code standardisation across the project team on a project-by-project basis. This code hygiene should make it easier for developers working on the project and make the codebase easier to work with as a whole. Accessibility can be enforced with styles like outline:none being outlawed. CSSLint also enforces the bulletproof font-face syntax – the best way to implement @font-face.
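For reference, the widely used ‘bulletproof’ @font-face pattern looks like this (the font name and file paths are placeholders, not from the article):

```css
/* 'Bulletproof' @font-face – file names are placeholders */
@font-face {
  font-family: 'MyWebFont';
  src: url('webfont.eot');                                    /* IE9 compat modes */
  src: url('webfont.eot?#iefix') format('embedded-opentype'), /* IE6–IE8 */
       url('webfont.woff') format('woff'),                    /* Modern browsers */
       url('webfont.ttf') format('truetype'),                 /* Safari, Android, iOS */
       url('webfont.svg#MyWebFont') format('svg');            /* Legacy iOS */
}
```

The duplicated `src` and the `?#iefix` query are what keep old versions of IE from choking on the multi-format declaration.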

W3C CSS Validation service Built by the community with three options for validating code.

Style secret CSSLint has a wiki over at GitHub that details all of the default rule options. Each rule has its own page which goes into depth on what the rule covers and the ideology behind it. Check it out under Rules on the CSSLint GitHub wiki.

To follow CSS-Tricks @Real_CSS_Tricks The official account for CSS-Tricks. com, a web design community curated by CSS legend @chriscoyier.


Format CSS code Pick your presets, add code and format to get the code you want and need.

CSScomb Build your own configuration to get the exact code you want.



Bring design and build together CSS PREPROCESSORS DON’T JUST BEEF UP YOUR CSS WORKFLOW, THEY CAN BE USED TO UNITE DESIGN AND BUILD TEAMS!

At this point it’s fair to say that CSS preprocessors have captured the imagination of the industry and have become prevalent in production – the most well known being Sass and SCSS, Less and Stylus. At this point in time there isn’t much to tell these preprocessors apart. They all have fairly similar capabilities, with each going about it a different way and with a slightly differing syntax.

The basic premise of CSS preprocessing is to provide more powerful tooling to developers and also enable more expressive code. This has its pros and cons – the more advanced the tool is, the more skill it will take to use it effectively. In line with this, some of the most basic and easiest to use features of CSS preprocessing are also the most powerful. These basic features can be used to bring design and build teams closer together, creating codebases that are more flexible, easier to understand and, on the whole, much better to work with.

Name colours
Design teams are usually in possession of brand guidelines from the client or have created their own themselves. In it, the colours are probably named, so simply name variables after these colours and maintain them in a file that is specific to the purpose. We do this because it will make it easy to ensure colours are consistent, so that even if the codebase is very large, designers will be able to look at the preprocessor CSS and immediately understand the colour scheme of a component. They’ll easily be able to tweak colours of concepts at later stages in the project, which will no longer create headaches for the build team.

Name font stacks
Most websites have more than one font stack and they’re often used in combination with each other, for example headings and body text. Monitoring the number of different stacks in regular CSS can become a bit of a chore. Find and replace can also be a bit haphazard should two similar, but different, font stacks contain the same font family. Naming font stacks has similar benefits to naming colours. It enforces consistency and makes recognising incorrect font assignment easier for both designers and developers.

Organise components
Split your components into folders. Widgets belong in a widgets folder – same for pods, components (think header, footer and so on) and global modules (grid systems and the like).
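In Sass, the two naming ideas above might look like this (the brand names, fonts and hex values here are invented for illustration, not taken from any real guidelines):

```scss
// _colours.scss – variables named after the brand guidelines
$brand-rust: #912A07;
$brand-cloud: #F4F4F4;

// _fonts.scss – named stacks, used in combination
$stack-heading: Georgia, "Times New Roman", serif;
$stack-body: Verdana, Geneva, sans-serif;

// Components then reference the names, never raw values
h1 {
  font-family: $stack-heading;
  color: $brand-rust;
}

p {
  font-family: $stack-body;
}
```

A designer scanning `_colours.scss` can now adjust the whole scheme in one place without hunting through component styles.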


6 tools


Take advantage of a preprocessor and split the styles up in a logical manner that helps introduce and describe the codebase. No longer does the ‘biography pod’ (contains a person’s biography) have to sit in the middle of a massive stylesheet. At a glance, the build team and designers alike should be able to see all the components in the website. This will give everyone a feel for the complexity of the site and should make code reuse and time estimation easier.

Style secret With Sass, treat @extend like classes in other languages. It should be immediately clear that a style inherits other rules from somewhere else, so list your @extends first. This also helps with overriding styles later.

Flexibility CSS preprocessors allow you to craft a more flexible codebase. Within a project, changes introduced late shouldn’t cause as many headaches for the build team as before. Within a team, resources should be easier to scale with the demands of a sprint or iteration because the separation of concerns is more apparent. Within a company, the front-end discipline should become one that can deliver realistic prototypes with the design team without having a negative impact to the project timeline.

Points to consider
• Maintaining components in separate files so they are ready to be combined in different ways.

• Sensible use of variables, enabling changes to be made later on in projects without costing time; for example, accessibility issues caused by poor colour contrast can be easily remedied.




CSSLint is a tool to help point out problems with your CSS code.

Analyze-css generates reports on selector complexity and performance of stylesheets.

Sass is the most stable and powerful professionalgrade CSS extension language in the world.



Beware of bad code WHAT TO AVOID AND WHY Excessive nesting CSS preprocessors make it very easy to write long selectors by enabling nested code. Long selectors come at a performance cost, so consider that when nesting code.
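A quick sketch of the nesting trap (Sass syntax; the selector names are invented for illustration):

```scss
// Innocuous-looking nested source…
.sidebar {
  .widget {
    ul {
      li {
        a { color: red; }
      }
    }
  }
}

// …compiles to one long, slow-to-match selector:
// .sidebar .widget ul li a { color: red; }
```

Each level of nesting adds another step the browser must walk when matching, so flatter selectors are generally cheaper.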

Vendor prefixing Most preprocessors have options or third-party middleware that will add vendor prefixes automatically. Double-check the output of this behaviour – old prefixes for tools like Flexbox can cause weird behaviours.

Length of output

• The ability to retire code quickly and easily when crafting components. If the client doesn’t need a specific component anymore, finding and deleting it is now easy.
• Architecting preprocessor files to allow for variations of the website to be easily generated. This can be easily achieved by importing selected components into new parent preprocessor files.
• Using a preprocessor to quickly make colour scheme/typography changes site-wide, in other words prototyping. No longer does the design team have to slave away producing variations in Photoshop.

It must be said that the most valuable of these aren’t the ‘run once then throw away the results’ type, but instead tools that run on ‘build’. A build team that has a build server, which creates release builds automatically, could benefit from having a performance report generated for example.

To follow CSS Author @cssauthor

The advance of CSS tooling Perhaps the single biggest benefit of using one of the big CSS preprocessors is the tools that feed into them. There is a vast range of workflow tooling that can be used to analyse performance, automate tedious tasks and hint at code improvement – among other things.

A web design and development blog that tweets the best resources and tools for designers and developers

When dealing with many small files it’s easy for a build team to create a rapidly growing ‘main’ stylesheet. Look at the output regularly and minify as part of a build process.

Build errors Developers not previously exposed to preprocessors may be horrified to find that writing erroneous syntax now results in build errors. This is still better than weird browser behaviour, however.

Lack of comments It’s rare in vanilla stylesheets to need comments, so preprocessors present a bit of a mindset change when it comes to functions and mixins. Good comments can explain what the developer is thinking.




Stylus is a revolutionary new language, providing an efficient and expressive way to generate CSS.

Less runs inside Node, in the browser and inside Rhino as well.

Continuous website quality and performance telemetry. Think Google Analytics for the frontend.



Style guides: why you need them HOW TO BUILD WORKING DOCUMENTS FOR CONSISTENT DESIGNS Style guides are a brilliant idea, however they are usually woefully executed in practice. The idea is that at any point in time there is a working document visible to the project team that captures the look and feel of core components – and this includes typography, buttons and form fields. What actually happens though is that the working document becomes stale and discarded by most of the project team, and that’s because it captures the look and feel of the project from two months ago.


An out-of-date working document is pretty useless because the project team can’t use it for anything they’re working on at that moment – it simply gathers dust, becoming a relic. There are probably many reasons for stale style guides, but the biggest standout is the associated maintenance cost. The cost to design and build a style guide once is mostly absorbed, as it should mirror the initial stages of the project. It’s only when it takes extra time to keep this document in sync with the working

project that it gets bumped in favour of any higher priority tasks. The style guide idea is brilliant so long as the style guide maintains itself after the initial incarnation. Thankfully, modern tooling can help improve this situation somewhat.

Build tasks Create a build task to distribute postprocessed files to all relevant locations instead of just to the website. This


should ensure that the style guide is up to date with any CSS or JavaScript as it’s committed to the project.

Team visible Make the style guide available internally by hosting it on an internally available URL. This gives everyone within the project an easily accessible and now up-to-date aesthetic reference. It’s also worth considering authoring a ‘making contributions’ document, which provides guidelines on how to contribute to the style guide project, all without actually being a developer.

The colours example is very easy, but this could be applied to typography, buttons, form inputs and many of the other simpler aspects of build. The limit here is going to be the design team’s knowledge of CSS, but fortunately though, these are the easiest aspects of CSS that a person can pick up.


Seek design team contributions Once a style guide is up to date and team visible, it’s time to turn the style guide into the working document it was born to be. It’s time to ask the design team to learn some code… sort of. Following on from the CSS preprocessors section, the advice was to create a ‘colours.scss’ or similar. This file should contain all of the colour definitions used within a project and be trivial to update. It’s worth giving training on this process and basics on the preprocessor to the project team. This will enable the design team to have input on the look and feel of the site through the actual code, rather than being one step removed with concepts.

The design team picks up basic aesthetic changes, giving more control over the result of the build. They’re also free to adjust margin and padding on elements. This should then free up the build team to focus on more fundamental and specialist problems, such as box model layout, code architecture and also flexboxes.

Style secret Consider choosing a style guide generator that has a GUI which allows non-technical users to make adjustments without having to write code. SC5 Styleguide has a demo for just that.

To follow CSS3 @CSS3 An account that is dedicated to CSS. It offers news, previews, tutorials and more for the discerning designer.

Get better style guides A Maintainable Style Guide

Style Guide Generator Roundup


Craft a maintainable style guide, using it as a tool.

A look at style guides that run on platforms.

A gulp extension to help create a style guide.



Build living style guides with Handlebars.

Generate documentation from your stylesheets.



5 Photoshop extensions HOW TO ADD MORE POWER FOR FRONT-END DEVELOPERS CSS Hat Including any plugin that “writes code for the developer” is always going to be controversial, but CSS Hat does an okay job. When it comes to trying to scratch something together in minutes or unpicking several font styles at once – it can be a handy pallette to have, especially for those who don’t have a mental CSS encyclopedia. If something needs exporting, is the right tool for the job. The basic premise is that each top-level group in a PSD is a ‘screen’. and child groups with names beginning with “&” get exported as separate states of that screen. also generates webfonts for icons in vector format. No more manual image exporting from PSDs.

InVision InVision takes designs and provides an interface for adding interactions – enabling designers to craft interactive prototypes complete with gestures, transitions and animations. InVision also provides real-time presentation, collaboration and project management features. It’s a cheaper route to designing effective UIs.

Renamy Renamy is a layer renaming power tool. It allows for renaming of multiple layers the right way. It can find and replace and even handles regular expressions. Those who grew up with CS2 would be jealous of this tool. An alternative use is to rename a colleague’s PSD layers from ‘New Layer 102’ to ‘Old Layer 102’ just to upset them.

Ink Ink is a free Photoshop plugin that generates documentation with detailed information on Photoshop documents and their layers/ styles. For typefaces this is information on font family, weight, leading, size and colour. For images and other layer assets, dimensions are output. It spots minor inconsistencies at a glance.


Built-in function There are lots of Photoshop extensions that do amazing things, but Photoshop also comes with a built-in batch file processing function (File>Automate>Batch). Now, tasks like image resizing become kind of trivial.


Improve performance TAKE THE PAIN OUT OF CSS SPRITES. AUTOMATE WITH… CSS-SPRITE! CSS sprites can really help cut down on the number of HTTP requests made on each page load. When done effectively, sprites can have a big impact on performance, especially on mobile, where the number of simultaneous connections is limited or perhaps the data connection is just plain slow. Unfortunately, they’re also quite costly to create and maintain when done manually – especially if icons need to change sizes late in the build. This is where css-sprite (an npm package) comes in: give it a bunch of images and it returns a stylesheet (in your preprocessor of choice) and a single image.

What is particularly good about css-sprite is that it is capable of laying the sprites out in multiple ways. Vertical and horizontal are options, but the best is binary-tree. This will find the most optimal layout for all the sprites, creating a smaller final image – though this is slower because more calculation is required. This is unlikely to affect most everyday projects, however, and for those it does, the file size is unlikely to need optimising. The best thing about automating this process is that the design team doesn’t have to commit to final versions of icons until later in the project. They’re free to change dimensions in response to client/user feedback.
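The stylesheet such a tool emits typically looks something like this (the class names, offsets, sizes and file name here are hypothetical, shown only to illustrate the shape of generated sprite CSS):

```css
/* Illustrative generated output – one shared image, per-icon offsets */
.icon {
  background-image: url('sprite.png');
  background-repeat: no-repeat;
}

.icon-search { background-position: 0 0;     width: 16px; height: 16px; }
.icon-close  { background-position: -16px 0; width: 16px; height: 16px; }
```

Because every icon pulls from one `sprite.png`, the page makes a single image request instead of one per icon.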

What is particularly good about css-sprite is that it is capable of laying the sprites out in multiple ways

Sprites that have been generated from that gulp task

Style secret File size still matters for CSS sprites even when the process for creating them is automated. It’s worth adding a compression task to your workflow for the final output. Also consider grouping images into different sprites where it makes sense.



Critical Path CSS generator



DeSVG A Sass-based CSS framework that provides the launchpad for builds. Boasts a boilerplate layer.



CSS MenuMaker

CSS Sans




Create a colour swatch tool with Vibrant.js Extract colour swatches from your images to use in your designs by dragging images onto the interface


// Create a colour swatch tool with Vibrant.js


Grabbing colours from an image can be useful for a whole variety of purposes. Android developers have access to the ‘Palette’ class in the Android support library, but unfortunately this isn’t available to general web developers. Jari Zwarts has taken the Android ‘Palette’ class and converted the code into JavaScript, meaning that anyone can use this in their own web projects. This provides a great opportunity for us to create our own design tool that will extract colours from an image and display them on the webpage, so that they can be used in our own designs by taking the hex colour. If you took this a little further, you could easily write some PHP to mail them to yourself; then you would have a saved record of great swatches for future use. Another way this could be used is if the user is creating a profile on a website and they drag their own image in. Using Vibrant.js it would be possible to set the user’s background to a personal colour from the image. What this offers is a way to influence a personalised design for the user based on their image.

1. Link up Vibrant library Open the start folder from the project files on FileSilo in a code editor such as Brackets. Open the index.html page and add the code below inside the head section tags, which will link up the Vibrant JavaScript library. You can download the library from GitHub, but it’s included in the project files as well.

<script src="js/Vibrant.min.js"></script>

2. Add the header Scroll down to the body tags in the HTML document and add the header tag so that there is an image and heading on the page. This will be styled up later with CSS to give it the right look. Do feel free to customise the name and logo for your own purposes.

3. Add the image holder When a user drags an image onto the page the image needs to be displayed somewhere. An empty div tag with the ID of ‘image’ is created so that the image will be displayed in here later on when a user adds one.

<div id="details"> <div id="image"></div> <div class="col"><span class="swatch colorVibrant"></span><span class="txt textVibrant"></span></div> <div class="col"><span class="swatch colorMuted"></span><span class="txt textMuted"></span></div> <div class="col"><span class="swatch colorDarkVibrant"></span><span class="txt textDarkVibrant"></span></div>

4. Hold the swatches When the image has been added it will be analysed using the Vibrant.js library; this will return the coloured swatches that will be displayed here on the page. There is also a ‘drop’ zone created so that images can be dropped on the page and then analysed.

<div class="col"><span class="swatch colorDarkMuted"></span><span class="txt textDarkMuted"></span></div> <div class="col"><span class="swatch colorLightVibrant"></span><span class="txt textLightVibrant"></span></div> <div id="drop">Drop image here <small>(Uploaded image data works in Chrome only)</small></div> </div>

5. Start the CSS Now move back to the head section of the document and add the stylesheet tag, then link to the Open Sans typeface from Google Fonts. This is added to the body tag so that all text will be in this typeface. The background of light grey is added, as is the font size for the document.

<style> @import url(https://fonts.googleapis.com/css?family=Open+Sans); body { font-family: "Open Sans"; background: #dbdbdb; font-size: 24px; }

6. Add the header Next in the CSS is to add the style for the header tag; we will give this round corners and white text to go over the background image. The background is a striped grey bar image that repeats to fill the size of the div tag.

7. Design is in the details The details section will display the image, the swatches and the drop zone for the image. The col class is the column that holds each swatch colour and the text with the hex number.

#details { text-align: center; }

Three basics of CSS There are three ways to target your content with CSS. A full stop in front of a name targets a class, a hash symbol targets an ID and a bare word targets the HTML tag itself.


The tutorial uses the Vibrant.js code library, which is found on GitHub. The files are already included on FileSilo for your convenience

Top left
At this stage all of the HTML that is needed is in place, but there is no styling yet on the design of the page

Top right
The heading section of the page is designed with the logo visible and the right typeface in place. The background in here is a repeated PNG image denoting swatches


.col { display: inline-block; padding: 20px 0 5px; width: 80px; }

8. Add the swatch When the swatch of colour is ready for display it is placed in a circle with a three-pixel grey border around it. This then fades in in one second with new colours inside.

.swatch { display: block; height: 60px; width: 60px; margin: 0 auto; border: solid 3px #aaa; border-radius: 50%; -webkit-transition: 1s; -moz-transition: 1s; }

9. Display the hex number The .txt CSS rule displays the hex colour under the swatch. The drop ID is the area that the image can be dropped onto in order to extract the colours.

.txt { padding-top: 5px; text-align: center;

Writing to the page Using the ‘innerHTML’ command in JavaScript will automatically replace the HTML inside the tag you are referencing with what you are currently adding to update the page.
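The tip above is easier to see with the string-building half separated out: the tutorial assembles a markup string and assigns it via innerHTML. A minimal sketch of the string building (the helper itself is hypothetical, though the class names mirror the tutorial's):

```javascript
// Hypothetical helper: builds the kind of swatch markup the tutorial
// assigns with element.innerHTML = ...
function swatchMarkup(hex) {
  return "<span class='swatch' style='background:" + hex + "'></span>" +
         "<span class='txt'>" + hex + "</span>";
}

var html = swatchMarkup("#e67e22");
// In the browser you would then write: document.getElementById("details").innerHTML = html;
```

Remember that assigning to innerHTML replaces everything inside the target tag, so build the full string first and assign it once.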

Top left
Now the drop zone for the images is clearly defined so the user knows that they have to drop the image on this section of the page

Top right
As the user hovers the image over the drop zone the user’s cursor changes so that it shows that the image can be added to the page

Right
Once the image is dropped onto the drop zone, the JavaScript analyses the image and brings back the colour scheme extracted from the image as swatches


text-transform: uppercase; } #drop { padding: 40px 0; margin: 20px; text-align: center; border-radius: 5px; background: #ccc; color: rgba(0, 0, 0, 0.5); }

10. Add the image The small text is placed into the design with a slightly smaller typeface and the image is given appropriate CSS to display it on the page. As you can see the image is never displayed greater than 50% of the browser width.

small { display: block; font-size: 18px; padding-top: 10px; } #image img{ width: 50%; } </style>

11. Add the functionality You can display the page in the web browser but it won’t do anything yet. Move to just before the closing body tag and add the script tag as shown, then declare the variables we are using. The dropZone is the area of the screen that the image will be dropped on.

<script> var dropZone, handleDragOver, handleFileSelect; dropZone = document.getElementById("drop");

12. Drop an image The following code will be called when an image is dropped onto the drop zone part of the page. There are some variables declared here and the event is stopped from propagating; any default action is prevented from running as we are defining our own.

handleFileSelect = function(event) { var data, f, files, parseFile, progress, reader; event.stopPropagation(); event.preventDefault();

13. Read the file As the image is dropped it is possible that multiple images might have been dropped, so the code reads only the first image. The reader variable holds a JavaScript FileReader object, which is used later. The progress function reads the image and extracts the colours.

files = event.dataTransfer.files; f = files[0]; reader = new FileReader; progress = function(event) { var el, el2, image, results, swatch, swatches, vibrant; image = new Image(200, 200);

14. Read the image The image variable takes the target of the drop event, which is the image, and stores it. The image is displayed in the ‘image’ div tag on the screen. Vibrant is used to read the image and the resulting swatches are stored in the swatches variable.

image.src = event.target.result; document.getElementById("image").innerHTML = "<img src='" + event.target.result + "'

// Create a colour swatch tool with Vibrant.js

An app for that You may look at the Vibrant.js library and be thinking that this would be great to use with a mobile app. However, you may also be thinking that unfortunately the user can’t drag and drop on mobile, so how can this library work? Well, by using PhoneGap the designer or developer will have access to either the user’s camera or photo library. Once the image is on the page, Vibrant will then be able to read the data of the image and extract the colours, providing a very handy mobile app. An example of reading an image that is already on the page is available on the demo site for Vibrant. As you can see, this makes the Vibrant library a great tool for creating your own resources and creative tools.

/>”; vibrant = new Vibrant(image); swatches = vibrant.swatches(); results = [];

15. Loop through the swatches The ‘for’ loop code iterates through each of the swatches that has been brought back from the image. The swatch is read and then the results array gets ready to have the coloured swatch results pushed into it.

for (swatch in swatches) { if (swatches.hasOwnProperty(swatch) && swatches[swatch]) { results.push((function() { var i, len, ref, txt, results1, results2;

16. Store the results At this point the code checks the HTML and finds the appropriate colour class and the appropriate text class to add the colour to it so the user will be able to see it. Arrays are created to hold the results of the colour and the text, the square brackets show an empty array.

ref = document.querySelectorAll(“.color” + swatch); txt = document.querySelectorAll(“.text” + swatch); results1 = []; results2 = [];

17. Loop through the display For each of the swatches available on the HTML screen, the for loop moves through each and takes the colour of the swatch, which is then stored as a background CSS property. The actual swatch hex is also displayed as text in the HTML of the page.

for (i = 0, len = ref.length; i < len; i++) { el = ref[i]; results1.push(el.style.background = swatches[swatch].getHex()); el2 = txt[i]; results2.push(el2.innerHTML = swatches[swatch].getHex()); }

18. Return the results This part of the code simply closes down all the brackets of the if statements as well as the for loops. At the end of each section the results are returned, including no result if the image cannot be read or the file dropped is not an image.

return; })()); } else { results.push(void 0); } } return results; };

19. Connect the functions As the image is dropped on the drop zone the code here connects up the functions already created. As the image is loaded the parseFile function is called, which in turn calls the progress function. The image file is read as a data URL from the drop zone.

parseFile = function(theFile) { return progress; }; reader.onload = parseFile(f); return data = reader.readAsDataURL(f); };
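readAsDataURL hands back a base64 data URL, which an img src accepts directly; that is why the tutorial can drop the result straight into the page. A sketch of what that string looks like (Node's Buffer stands in for the file bytes, and the bytes themselves are just the GIF signature, for illustration):

```javascript
// Illustration only: readAsDataURL produces a string of this shape.
// These are the six signature bytes of a GIF file, "GIF89a".
var bytes = Buffer.from([0x47, 0x49, 0x46, 0x38, 0x39, 0x61]);
var dataUrl = "data:image/gif;base64," + bytes.toString("base64");
// dataUrl begins "data:image/gif;base64,R0lGODlh"
```

Because the whole image is encoded into the URL itself, no second request is needed to display it; the trade-off is that the string is roughly a third larger than the raw bytes.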

20. Drag over When the image is dragged over the drop zone the following function is called. This will stop any default actions that are part of the way the interface behaves. The drop effect is set to ‘copy’ so the cursor shows that the image will be copied onto the page.

handleDragOver = function(event) { event.stopPropagation(); event.preventDefault(); return event.dataTransfer.dropEffect = “copy”; };

21. Event handlers Now the final step is to attach the event handlers for dragging a file over and dropping it on the drop zone in the display. These call the functions that have been created earlier. Now save the page and test it on a web server or by clicking the ‘live preview’ button in Brackets.

dropZone.addEventListener(“dragover”, handleDragOver, false); dropZone.addEventListener(“drop”, handleFileSelect, false); </script>
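One detail from the swatch loop in step 15 is worth seeing in isolation: Vibrant can return null for a swatch when the image has no matching colour, so both the hasOwnProperty check and the truthiness check are needed before calling .getHex(). A standalone illustration with made-up swatch data:

```javascript
// Made-up stand-in for what Vibrant returns: one real swatch, one missing.
var swatches = {
  Vibrant: { getHex: function () { return "#e67e22"; } },
  Muted: null // no muted colour was found in this image
};

var found = [];
for (var name in swatches) {
  // Skip inherited keys AND null swatches before touching .getHex()
  if (swatches.hasOwnProperty(name) && swatches[name]) {
    found.push(name + ":" + swatches[name].getHex());
  }
}
// found is ["Vibrant:#e67e22"]
```

Without the truthiness check the loop would throw on the null entry; this is the most common pitfall when adapting the tutorial to your own images.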



Alter page element colour on click As seen on Yenue’s portfolio site. Hidden treasures A small glimpse of each project can be found hiding away behind the plus icon. Click it to call up a full-page image.

On-click colour The page elements including site title, page names and bottom strip are colour coded to the last project page visited.

Flexible navigation The site is easily navigable and the projects can also be viewed one after another, as a slideshow.

Unique elements

Intriguing efect

The minimalist interface is not constrained by the structures, colours or styles that are typically used to build a website.

Just like in The Wizard of Oz, the site starts in greyscale and changes to colour, drawing the viewer in to find out more.

// Alter page element colour on click


This workshop explores just a small detail of London-based artist Yenue’s portfolio site that may even have been overlooked by the casual visitor. It’s a subtle technique that changes the hover colour of the elements on the homepage to the background colour of the previously visited page. It neatly speaks to an important artistic concept: the memory and experience of viewing a piece of art. Yenue’s website of surreal and sometimes Dali-esque work inspires contemplation on the meaning and presentation of art, and, through this small detail of colour change, we have an opportunity to learn a range

of design techniques that actually have much broader applications. Yenue says: “I tailored my website to enhance the connections between the index and the individual projects. Each project is unique, so each background is customised to suit each project. I continued to find ways to connect different sections with small design features, such as the index border overlapping with the border of the project as you switch between them; the overlapping border is still visible. The colours of the index page are impacted by the design you choose to view. I felt that all of these features enabled me to connect my projects and overall website.”

Mi casa es su casa <comment> What our experts think of the site

“My portfolio website is another design project for me, where I show all the projects in which I have invested a large portion of my time. It is like my home on the internet and so I wanted to create something very special and visual for anyone visiting.” Yenue

Technique 1. Head first SVGs are popular for their flexibility, but to take full advantage of this, the SVG needs to be inline. You could add the graphics to the HTML by hand, but Drew Baker’s SVG inliner is a great solution. Dem Pilafian’s ‘Two Line Style Switcher’ provides a neat and effective method for changing a colour scheme as required.

<html> <head> <title>Change colour on hover</title> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"></script> <script type="text/javascript" src="scripts/svginliner.js"></script> <script> var styleFile = "style" + document.cookie.charAt(6) + ".css"; document.writeln('<link rel="stylesheet" type="text/css" href="styles/' + styleFile + '">'); </script>
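The charAt(6) trick in the snippet works because the cookie is exactly "style=N". The same idea can be written a little more defensively; this helper is ours, not part of Pilafian's switcher:

```javascript
// Hypothetical helper mirroring the snippet's logic: cookie "style=2" -> "style2.css".
function styleFileFromCookie(cookie) {
  var match = /style=(\d+)/.exec(cookie); // find the saved theme number
  var n = match ? match[1] : "";          // no cookie yet -> default "style.css"
  return "style" + n + ".css";
}
// styleFileFromCookie("style=2") is "style2.css"
```

In the page you would pass document.cookie in; the regex also keeps working if other cookies are present, which charAt(6) does not.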

2. CSS in the HTML

EXPERT ADVICE Vision and technique Make sure you also check out Yenue’s site, where he showcases other ongoing work. The typographical plasticine creations are especially interesting and his projects demonstrate original thinking and first-class execution – something that designers from any field will appreciate.

To keep things simple, the page-specific styling that should not be switched has just been added directly to the HTML. Centring elements in CSS can be tricky, but if you follow through each div and experiment with changing the settings, that should help make things clearer. If you haven’t come across the ‘helper’ class before then that is worth exploring.

<style> .helper { display: inline-block; height: 100%; vertical-align: middle; }

3. Set the stylesheet The helper span immediately before the link enables vertical centring. A JavaScript call sets a cookie that the script at the top of the page uses to select the stylesheet. Note the attributes of the image that are used by the SVG inliner.

<span class="helper"></span><a href="javascript: document.cookie='style=2';'stpauls.html','_self');"><img id="stpauls-icon" class="svg stpauls" src="images/stpauls.svg"></a>

4. Style switch In this solution the style is switched on when exiting the homepage, and it could be argued that it’s only necessary to include the switcher code in one file here. The portfolio pages are similar to the homepage but the background colour is changed to the new theme colour.

<style> #royalalberthall-col { float:left; width:100%; position:relative; left:0%; overflow:hidden; text-align:center; background-color:#baa5a0; }

5. Theme styles It’s necessary to follow the naming system of style.css, style2.css and so on. In this solution each element is coloured the same but the technique provides plenty of flexibility to switch the styling of any page element.

#stpauls-icon:hover path{fill:#9d7554} #housesofparliament-icon:hover path{fill:#9d7554} #royalalberthall-icon:hover path{fill:#9d7554} #toweroflondon-icon:hover path{fill:#9d7554}



Create a rotating product viewer As seen on the Nua Bikes website

Signposted URL Getting sales for the bike is key so a large pre-order label in the top-right of the screen will always provide access to it.

Fixed menu The menu is fixed into position on the left side of the screen and remains in place as the rest of the page is scrolled through.

Fullscreen photos The photography is key to this site and the large, fullscreen images provide a way to view the bike from different angles.

Fluid sections

Pausable scroll

The page is a one-page site that enables the user to scroll down each section or use the menu on the left-hand side.

The right-hand navigation controls the images and the user can click on the image, pause it or watch it from different sides.

// Create a rotating product viewer


Nua Bikes produce a very elegant and clean urban bike that has a minimal impact on the environment due to the simplicity of its design. The bikes are handcrafted and reflect the conscious commuter who needs to get around urban environments with ease. The website needs to reflect the styling of the bike and this is always easy to do with photography when there is a strong product on display. The homepage features a large, fullscreen image of the bike that has been photographed from different angles so that the user can see the bike rotating on screen. The images

change automatically and give the user a chance to see the design of the bike in detail. When the product you are designing for has such clean, elegant lines and is made from quality materials such as titanium, carbon fibre and aluminium, then it is important that the styling of the site reflects the construction aesthetic. It is so easy for designers to get carried away by adding more and more complex content to sites, but sometimes all that is needed is good typography, navigation, simple colour and the product itself. Organising the content can be tricky, but the Nua Bikes site has managed to do this without overstating the simplicity of the design.

Innovative but intuitive <comment> What our experts think of the site

“The website uses the minimum number of elements to do its function. Innovative but intuitive at the same time, the site shows the product and enhances its qualities. A single guided scroll shows all components while the user can contemplate how beautiful the bike is – a website to be enjoyed on different devices.” Alicia Gomez Garcia, freelance digital art director and graphic designer


<script src=" jquery-2.1.3.min.js"></script>

1. Add the images To create a rotating image of a bike, we need to add five images to the body section of the page. Using jQuery a simple image rotator can be created which moves the image on every few seconds.

<div id="fader"> <img src="img/bike1.png"/> <img src="img/bike2.png"/> <img src="img/bike3.png"/> <img src="img/bike4.png"/> <img src="img/bike5.png"/> </div>

2. Style the images Now move to the head section of your page and add the style tags for the div tag that holds the images. This simply positions them relatively so that the JavaScript can change the images later on.

EXPERT ADVICE Keep navigation simple On the homepage the navigation is hidden away in an offscreen menu, giving space on the page to create more of a showcase for the site. As the user moves through onto content pages, the menu is on hand to provide easier browsing and there are even previous and next buttons to move through the content.

<style> html, body{ height: 100%; background-color: #f0f0f0; padding: 20px; } #fader { position: relative; width: 100%; height: auto; } </style>

3. Library link The images have to be hidden and positioned on top of each other, so a link to the jQuery library is needed. This will aid the adding and removing of CSS to make each of the images appear at the appropriate time.

4. Move through the images After the link to the jQuery library the following code can be added. This hides all the images except the first one. They are positioned within the div tag so that they will be in the right position when they are called to fade in with the code in the next step.

<script> $(function() { $('#fader img:not(:first)').hide(); $('#fader img').css('position', 'absolute'); $('#fader img').css('top', '0px'); $('#fader img').css('left', '50%'); $('#fader img').each(function() { var img = $(this); $('<img>').attr('src', $(this).attr('src')).load(function() { img.css('margin-left', -this.width / 2 + 'px'); }); });

5. Switch images The ‘fadeNext’ function does the hard work of fading out the old image while fading in the next. This is called every three seconds by the set interval, which in turn calls the fadeNext function. Save this now and test it in your browser to see the images appear on the screen.

function fadeNext() { $('#fader img').first().fadeOut().appendTo($('#fader')); $('#fader img').first().fadeIn(); } var rotate = setInterval(fadeNext, 3000); }); </script>
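The timing logic is easy to reason about in isolation: with a 3000ms interval and five images, the image on screen at any moment is just a division and a modulo. A small sketch, separate from the site's own jQuery code:

```javascript
// Which of `count` images is showing `tMs` milliseconds after the rotator starts?
// Each image holds the screen for `intervalMs` before fadeNext advances.
function visibleIndex(tMs, count, intervalMs) {
  return Math.floor(tMs / intervalMs) % count;
}
// visibleIndex(9500, 5, 3000) is 3: the fourth photo of the bike is on screen
```

Thinking of the rotator this way also makes it clear why the cycle repeats cleanly: after count times intervalMs milliseconds the modulo wraps back to the first image.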



Construct custom web layouts with CSS Shapes Build layouts using shapes, and wrap content without affecting content flow. Tools | tech | trends. HTML, CSS expert: Neil Pearce

1. Get started As with all new projects, let’s begin with a new HTML document and start adding in our HTML markup. We’re going to create just the one style sheet and then below that we will use HTML5 Shiv to compensate for IE users when using HTML5 elements.

<head> <meta charset="utf-8" /> <title>CSS shapes</title> <link rel="stylesheet" type="text/css" href="css/style.css" /> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <!--[if IE]> <script src="http://html5shiv.googlecode.com/svn/trunk/html5.js"></script> <![endif]--> <!--[if gte IE 9]> <style type="text/css"> .gradient { filter: none; } </style> <![endif]--> </head>

2. The page wrap Now that we have the head meta information done, let’s start adding some structure to our HTML file. First, we will add in a few divs that will give us some control when positioning elements on the page with CSS. The container div may be the only one we’ll be using, but we’ll wrap that within a page wrapper in case we need it.

<div id="wrapper"> <div class="container"> </div><!-- END container --> </div><!-- END wrapper -->

3. Shape up You can apply a shape to an element using one of the shape properties and its function. In this case, we’ll be adding the polygon function to the background image of Stonehenge so we can create an interesting shape. So just underneath the container div, place a div with a class name of ‘shaped’.

<div class="shaped"></div>

4. Content and title Our page is going to have a heading and we are going to make the first word “The” slightly smaller than the other words. So to accomplish this, we will need to wrap it within a <span> tag. Our page title, along with everything else for the page, will be placed within a main content element.

<div class="content"> <h1><span>The</span> Visit To <br/> Stonehenge</h1> </div>

5. Add in paragraphs Our page is going to have a few paragraphs, so within the content div put about four or five <p> tags and fill them with either dummy text or go ahead and add in some proper information relating to our topic.



There are so many remarkable and exciting things that we designers can now achieve using just CSS. Not only can we animate elements to jazz up our pages, but we can now create some really interesting shapes using CSS transforms. These shapes will not affect the flow of the content inside or around them. That is, if you create a triangle with CSS, for example, the shape created does not define or affect the way the text inside it flows, or the way inline text flows around it. With the introduction of CSS Shapes into the web design workflow, wrapping content in custom non-rectangular shapes and re-creating print designs and layouts on the web becomes a breeze! CSS Shapes enable us to wrap content around custom paths, which lets us break free from the constraints of the rectangle we have been accustomed to. So in this tutorial we’ll take a look at how we can use CSS Shapes to create a custom layout. We won’t be looking at all the new properties, but rather just the ones that enable us to create an interesting layout based on a visit to a world-famous landmark.

We’ve left a gap after the second <p> tag, because in the next step we will be adding in a few extra elements, such as an arrow and a profile picture.

<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>

6. Arrow, circle and profile image As mentioned in the last step, we are going to add in some additional elements within our paragraphs. What we want to achieve here is a green circle with an arrow pointing to a profile image. And within the green circle we will have some text saying “this is me!”. Then we can use this profile image to demonstrate our second CSS shape in a later step.

<div id="arrow"></div> <div class="circle"><p>This is me!</p></ div> <img src="imgs/man.jpg" alt="profile image" class="profile_img" />

7. Put in a blockquote and footer We are going to finish up our HTML by adding in a blockquote and footer to the bottom of our page. The blockquote could be styled, so we encourage you to play around with that idea, but we will leave the footer alone as it’s there for semantic purposes only.

<div class="blockquote">

// Construct custom web layouts with CSS Shapes

Top left
With most of the markup added, we are starting to see our page come together

Top right
Now we have the profile image added as well as the content blockquote and footer

Bottom left
The page has some structure to it now and our first shape has been created on the main image

Bottom right
We’ve now styled the main content and things are looking a lot cleaner

Custom paths CSS Shapes enable web designers to wrap content around custom paths like circles, ellipses and polygons.

<blockquote></blockquote> </div> <footer> <p>&copy; Copyright 2015 Stonehenge</p> </footer>

8. Box sizing Now create a new CSS file and call it ‘style.css’, and place it in its own folder called ‘css’. At the top of your CSS file, we are going to add in the box-sizing rule. This applies a natural box layout model to all elements while still enabling components to change it, and it’s now considered best practice to add this to your CSS.

* { -moz-box-sizing: border-box; -webkit-box-sizing: border-box; box-sizing: border-box; }

9. Set the width and height Modern websites nowadays have fullscreen sections on their homepage and this is what we want. To achieve this, we set the viewport’s height and width to 100 per cent. Then we are going to position the container relative, which will enable us to absolutely position any element anywhere on the page.

.container{ overflow:hidden; height: 100vh; width: 100vw; position: relative; }

10. Background image This page is about Stonehenge, so it’s obvious we want a main image of Stonehenge. We are going to attach this to the ‘shaped’ class, use viewport units and make sure that the image is centred from the top and doesn’t repeat. We then need to make sure this is floated to the right.

.shaped{ height:100vh; width:40vw; float:right; background: black url(../imgs/stonehenge.jpg) center top no-repeat; background-size:cover; }

11. Create our first shape Having added our main image, let’s now create our first shape. Each shape is defined by a set of points. Some functions take points as parameters, but they all eventually draw the shapes as a set of points on the element. We are going to use the polygon function here and create points for our desired shape. Using the clip-path property will clip all the parts of the image that are outside the defined shape.

.shaped{ -webkit-clip-path: polygon(0 0, 100% 0, 100% 100%, 30% 100%); -webkit-shape-margin: 20px; }
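Each polygon point is a percentage pair resolved against the element's own box, which is why the same shape scales with the element. A quick plain-JS sketch just to make those numbers concrete (the 400 by 300 box is an arbitrary example):

```javascript
// Resolve a polygon's percentage points against a concrete element size.
function resolvePoints(points, width, height) {
  return points.map(function (p) {
    return [p[0] / 100 * width, p[1] / 100 * height]; // percent -> pixels
  });
}

// The tutorial's polygon, resolved against a hypothetical 400x300 element:
var px = resolvePoints([[0, 0], [100, 0], [100, 100], [30, 100]], 400, 300);
// px is [[0,0], [400,0], [400,300], [120,300]]
```

So the `30% 100%` point lands at 120px along the bottom edge of a 400px-wide element, which is the diagonal cut you see on the Stonehenge image.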

12. Style the content Now we need to add some styles to our main content. This is going to be pretty straightforward and by looking at the CSS rule, you can see clearly what we are doing here. You might want to play around with the padding and font size to get a slightly different look, but that’s up to you.

.content { padding: 30px; color: black; font-size:15px; text-align: justify; line-height: 23px; font-family: Verdana, Arial, sans-serif; }

13. Page title The main page title is going to be nice and big, so it grabs attention straight away. We’ve used a Google Font called Sintony. Now, this is where we use our second shape. By using ‘shape-outside: polygon()’ we can create another shape that targets the main text. To make things interesting, we are going to shape it the opposite way to our main image.

.content h1{ font-family: 'Sintony', sans-serif; font-size:50px; line-height:1; float:left; width:350px; height:100vh; margin-top:0; padding-top:20px; color: #3f3f3f; -webkit-shape-outside: polygon(0 0, 100% 0, 40% 100%, 0 100%); }

Top left
The page title is now styled and our second shape is now created for the text

Top right
The page title is finished off by making the word “The” small

Bottom left
We can now start to style our profile image and make it into a circular shape

Bottom right
Now with the green circle and arrow added, we have finally finished our page

Polygon shape The minimum number of pairs needed to specify a polygon is three, which is of course a triangle.

14. Finish up the page title Now that we have used the page title (h1) to shape our content, the only thing left to do is simply to make the word “The” nice and small by adding some styles to the span tag. Then we can pull in the word “Visit” by adding some negative right margin.

.content h1 span{ font-size:.5em; display:inline-block; margin-right:-10px; }

15. Style our profile image Because this page is about someone visiting Stonehenge, it would make sense to add a small profile image to the page. And doing so will give us another chance to play with CSS Shapes. But first we are going to add some styles to our image and float it to the left.

.profile_img { float: left; width: 180px; height: 180px; border-radius: 50%; margin-right: 15px; padding: 0px; }

16. Shape our profile image At the moment our content is pushed over to the right-hand side of our image and there’s no shape to it other than a square cut out of the text. But we want the text to flow around our image in a circular shape. We can achieve this by adding ‘shape-outside: circle();’ and some margin and border.

.profile_img { -webkit-shape-outside: circle(); -webkit-shape-margin: 10px; border: 10px solid #f1f1f1; }

17. Circle styles To practise creating other shapes using CSS, we are going to add a green circle that floats next to our profile image. This circle is created using border-radius. Then we will give it a nice green colour that represents the feel of our main Stonehenge image, and position it accordingly.

.circle { width: 100px; height: 100px; background: #9fb876; -webkit-border-radius: 130px; border-radius: 130px; position: absolute; top: 350px; left: 180px; border: 10px solid #f1f1f1; }

18. Circle content The green circle is going to have a little bit of text within it. By targeting its <p> we can make sure the text is styled how we want it and fits within the circle nice and tidy. We’re going to set the colour to the same colour as our page title.

.circle p { margin: 20px 0; font-size: 1em; text-align: center; color: #3f3f3f; }

19. The arrow

Creating other shapes without using the new CSS Shapes functions is good fun. In this next step we will create a small arrowhead purely using CSS and position it between our main profile image and the green circle. You can play around with the rotate value using transform: rotate(), but setting it to 5deg should work quite well too.

// Construct custom web layouts with CSS Shapes

Understanding the clip-path property The clip-path property takes the same shape functions and values as the shape properties. If we pass the same polygonal shape we used in the shape-outside property to the clip-path property, it will clip all the parts of the image that are outside the defined shape. The clip-path property is supported with prefixes and will work in Chrome with the -webkit- prefix added. It is an excellent companion to the shape properties, as it helps visualise the created shapes and clip out any parts of the element that are outside the defined shapes. You will probably find yourself using it a lot in conjunction with the shape properties.
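A minimal sketch of the clip-path pairing described in the boxout, reusing the tutorial's polygon so the clipped image matches its wrap shape (the selector name is illustrative):

```css
/* Same polygon for both properties: the visible element
   then matches the float area the text wraps around. */
.shaped-img {
  float: left;
  -webkit-shape-outside: polygon(0 0, 100% 0, 40% 100%, 0 100%);
  -webkit-clip-path: polygon(0 0, 100% 0, 40% 100%, 0 100%);
}
```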

#arrow { position: absolute; width: 0; height: 0; top: 380px; left: 300px; border-top: 18px solid transparent; border-right: 18px solid #9fb876; -webkit-transform: rotate(5deg); }

20. Finish up the arrow Now that we have created a small arrowhead, we can create its tail. So again using border and border-radius we can create an arrow tail and position it absolutely, using ‘transform: rotate(45deg)’ to place it exactly where we want it to be. So now we have a nice little arrow pointing to our profile image.

#arrow:after { content: ""; position: absolute; border: 0 solid transparent; border-top: 6px solid #9fb876; border-radius: 20px 0 0 0; top: -24px; left: -18px; width: 24px; height: 24px; -webkit-transform: rotate(45deg); }

21. Style the blockquote To finish this page off, we are going to add some simple styling to our blockquote. All we are going to do here is centre our text so it looks tidy. You can add more styles to it to see if you can achieve something slightly different.

.blockquote { text-align: center; }

22. Final thoughts The current CSS Shapes specification is merely the first step. Soon, new options will give us even more control over creating shapes and wrapping content in and around them, making it a lot easier for us to turn our mockups into live designs with just a few easy lines of code. So this tutorial is just the beginning, giving you the first steps to bigger and better page layouts using the power of CSS Shapes.




Expert guide to web 3D

Add wow to your site with WebGL, HTML5, CSS3, three.js

// Expert guide to web 3D

How web 3D is influencing web design 3D MODELS ARE LEADING THE WAY FOR THE WEB TODAY


It is an exciting time in web development in regards to 3D. In the Nineties we had 256-colour animated GIFs of spinning text and email icons. In the 2000s we grew up a little by interacting with and spinning prerendered image sequences, but we had to use the Flash plugin to do it. Later Flash added features that enabled actual 3D rendering, but development became painfully difficult for many developers. Then the Flash bomb dropped and the rise of mobile browsers began. Our designs and content had to simplify to pure HTML solutions in order to be supported by all devices. Otherwise we would be wasting development time and money building multiple versions of content for the desktop that also work on a large percentage of mobile devices as well. With Flash slowly dying, modern browsers thankfully began upping their game to support new features. Developers have worked hard to create libraries that help us create interactive content to capture that old-school Flash magic again.

As a result of these changes, over the past few years we can finally render, animate and interact with content in the 3D space without the need for plugins. All of those ideas, tricks and lessons that were learned in the days of Flash are now being revived. And fortunately we have several ways we can execute all of those tricks and effects depending upon our individual skills or the needs, requirements and assets of our projects.

At its most complex level, immersive design, content and interaction can now be created in the browser with actual 3D models, textures, particles and assets using libraries like three.js, which tap into the power of the hardware-accelerated WebGL drawing API. We can finally design, develop and dream of rich interactive 3D content, and the best part is that most libraries fall back to the Canvas API, which has even larger support (although it does not support true 3D drawing like WebGL). Canvas is a simpler drawing API that can create believable plexus animations, particle systems, emitters and mouse trails to look like 3D.

At a simpler level, CSS 3D has been a welcome addition to interactive design and development of pure HTML elements. It has helped to add life to otherwise boring flat responsive design and basic UX grids. CSS 3D is easy to develop with, and anyone with a basic understanding of CSS can easily upgrade their designs. It’s also robust enough that, when combined with libraries like three.js or animation libraries like GreenSock, it can create jaw-dropping results.

3D and iOS Want to produce 3D for Apple devices? Well, now that the latest iOS versions support WebGL, we can create dreamy websites.

The big three WebGL This JavaScript API is great for rendering interactive 3D and 2D graphics within any compatible browser without using plugins. A great library for animating 2D (and soon basic 2D objects in 3D space) with WebGL is the Pixi.js rendering engine, which also has Canvas fallback.

HTML5 Canvas The HTML5 Canvas element is part of HTML5 and allows for dynamic, scriptable rendering of 2D shapes and bitmap images. It can be used to draw graphs, make photo compositions, create animations, or even do real-time video processing or rendering.

Three.js As a JavaScript 3D library, three.js makes WebGL simpler. While a simple cube in raw WebGL would take hundreds of lines of JavaScript and shader code, a three.js equivalent is only a fraction of that and much easier to create environments with.

We can finally render, animate and interact with content in the 3D space without the need for plugins

The current state of browser support

Shane Mielke @shanemielke WebGL designer, developer, animator, photographer, author and Cyberdyne Systems Model 101. Previously worked at 2Advanced.

Note: three.js runs in all browsers that support WebGL




















6 more 3D resources

As we’ve already mentioned, in the world of web 3D the big technologies and tools are WebGL, HTML5 Canvas and three.js. These are at the forefront of bringing a new dimension to the web. But to build beautiful creations, as every designer and developer knows, a collection of your own favoured resources is always needed. Web Designer spoke to real-time graphics craftsman Simo Santavirta. He gave us an insight into the tools and resources he uses to help create his dynamic online experiences. Check out his work at

3D web graphics

For me the number one selection for 3D web graphics is Fuse ( And yes, you can export WebGL with it. As a side product you’ll be able to produce native applications for mobiles as well, which makes a lot of sense for heavy duties like 3D. Fuse gives us a new programming language, called Uno, and it makes creating complex 3D renderers so much easier.

3D modelling

I do a lot of my 3D modelling in a procedural fashion, with maths. But once in a while you need to work with actual models. Here I would select Blender, simply because it’s free, easy to use and has a big community behind it. It’s easy to find freelancers to help you in case of an emergency.

Textures and graphics

When it comes to textures and other graphics, it’s still Photoshop that wins ( photoshop.html).

3D and mathematics

I don’t think you need any more tools than that. But what you do need is more knowledge and skills (along with years as well). Creating 3D is a lot about mathematics and rendering techniques, so you’d better know your trigonometry. I would recommend refreshing your memory of it by watching Khan Academy’s videos about trigonometry by going to (really wish I had resources like this when I was a kid!). Also, no matter what language or tools you’re using, the same rendering techniques will apply. When you reach a certain level, Nvidia’s GPU Gems can come in handy:

Inspiration

If you’re in need of inspiration try This has pretty much every real-time graphics demo ever made. A must-see for those looking to get into web 3D.
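Santavirta's point about trigonometry and rendering maths can be made concrete with the classic perspective projection, where distant points shrink towards the screen centre. This is a hypothetical helper for illustration, not code from Fuse or any library:

```javascript
// Project a 3D point onto a 2D screen plane using similar triangles:
// the further away a point is (larger z), the closer it sits to the centre.
function project(point, focalLength, cx, cy) {
  var scale = focalLength / (focalLength + point.z);
  return {
    x: cx + point.x * scale,
    y: cy + point.y * scale
  };
}

// A point at z = 0 is unscaled; pushing it back shrinks it towards centre.
var near = project({ x: 100, y: 50, z: 0 }, 300, 0, 0);
var far  = project({ x: 100, y: 50, z: 300 }, 300, 0, 0);
```

This one division-per-point rule is the core of every software 3D renderer; everything else (rotation, lighting) is layered on top of it.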

// Expert guide to web 3D

Making fireworks with three.js BUILD A SIMPLE SCENE, ADD TEXTURES TO MATERIALS, CREATE LIGHTS AND BLOW THINGS UP WebGL is one of the best things about the modern web. We, as developers, looked at 3D modelling, shading, rendering and all the other things that come with the realm of the third dimension. The use of 3D graphics in the browser enables us to make the best of real-time graphics generation with just pure JavaScript. Rendering anything in 3D is a far more complex process than drawing a square on a <canvas> element, but with complexity comes difficulty in implementation. Fortunately, three.js is here to help us. It’s a JavaScript library that helps us write WebGL content without having to worry about a great deal of maths or 3D rendering computer science (well, not too much). Think of three.js as a kind of jQuery for WebGL: everything is simpler and follows patterns. There’s no more fiddling about with polyfills or trying to get things to light consistently. Get the full code for this tutorial at

1. Grab the resources Three.js has a ton of helper libraries and other code bits to help it get on with its work. We’re going to use the core three.js library and the OrbitControls.js library to handle our camera movements. Download the project files from FileSilo and have a look around the scripts folder.

2. Set up the renderer The meat of our program is in scripts/fireworks.js. In the init function on lines 181 – 199 we create a renderer, which is where our 3D scene will be drawn to. We then create and position a camera to see with and point it at 0,0,0 of our scene. Whatever happens in front of our ‘camera’ is what will be shown on our rendering element.

renderer.setSize( window.innerWidth, window. innerHeight ); document.body.appendChild( renderer.domElement ); camera.position.x = 2; camera.position.y = 43; camera.position.z = 35; camera.lookAt( new THREE.Vector3(0,0,0));

3. Set the scene Next we call createScene(). Here, we create the ground for our scene and some light (which we call moonlight) to light up our scene. Without light in our scene we won’t be able to see anything. The ground and the moonlight are added to our scene with ‘scene.add(OBJECT);’.

var groundTexture = new THREE.ImageUtils. loadTexture( '/assets/images/ground.jpg' ), groundGeometry = new THREE.PlaneBufferGeometry( 150, 150, 32 ), groundMaterial = new THREE.MeshPhongMaterial( {side: THREE.DoubleSide, map: groundTexture} ); ground = new THREE.Mesh( groundGeometry, groundMaterial );

moonlight = new THREE.PointLight( 0xffffff, 1, 100 ); moonlight.position.set(0, 10, 5); scene.add( ground ); scene.add( moonlight );

4. Rendering Now that we have some things in our scene that we want to see, we can tell three.js to render them. We use requestAnimationFrame() to call our render function which will draw our scene as close to 60FPS as your computer can handle with ‘renderer.render( scene, camera );’. Right now, you’ll only see illuminated turf.

renderer.render( scene, camera ); requestAnimationFrame( render );

5. Set off a firework If you press Space, a colourful firework will set off into the sky and then detonate. In addEvents(), we push a new firework() to our fireworks array and once there, our render function will work through and draw every firework we’ve set off.

window.addEventListener('keydown', function(e){ if(e.keyCode === 32){ // (Radius, Width, Height, Color) fireworks.push(new firework( 0.2,32,32, new THREE.Color( colors[Math.floor(Math.random() * colors.length)] ) )); ground.material.needsUpdate = true; } }, false);

6. Detonation What is a firework anyway? Well, in this context, it’s an object that will keep track of the position, velocity, sparks and light sources of our firework as it travels to its beautiful, but ultimately tragic demise. Once our firework reaches a certain height above our ground, it will explode and animate our explosion with the detonate() function on lines 60 – 82.

7. Light ‘em up Our fireworks need to make light that shines on other things around it. The material that makes up our fireworks in flight can’t (easily) do this on its own, so instead, we create a new light the same colour as the firework in the same place. Do this with ‘this.light = new THREE.PointLight( color, 10, 4 );’, which you may notice is the exact same way we created moonlight.

// Line 29 this.light = new THREE.PointLight( color, 10, 4 ); // Lines 96 – 98 f.light.position.x = f.object.position.x; f.light.position.y = f.object.position.y; f.light.position.z = f.object.position.z;

8. Particles and CPU cycles When our firework explodes, we want to see colourful sparks, but do we really want to render dozens of new objects to do so? No, we don’t. Instead, we can create a point cloud, which is a fancy way of saying a particle system. This is basically one big object, but made up of loads of little bits with space in-between; it’s much friendlier for our graphics card.

9. The sparks On lines 38 – 44, we create a point for each spark that we want to have and give each a random velocity. On lines 47 – 53 we tell three.js what size and colour we want our particles to be and then we add them to the point cloud on line 55. As a cheat, we add a JPG to each of our sparks to fine-tune how each point should look. On lines 100 – 110, we check how high our firework is, whether or not it should explode and whether or not we should be animating it if the explosion has started.

var sparks = new THREE.Geometry(); for (var i = 0; i < Math.random() * 1000 | 0; i ++ ) { var spark = new THREE.Vector3(0,0,0); spark.velocity = [ Math.random() - Math.random(), Math.random() - Math.random(), Math.random() - Math.random() ]; sparks.vertices.push( spark ); } this.sparkMaterial = new THREE.PointCloudMaterial( { size: 1.5, map: THREE.ImageUtils.loadTexture("assets/images/spark.jpg"), blending: THREE.AdditiveBlending, transparent: true, color : color }); this.particles = new THREE.PointCloud( sparks, this.sparkMaterial );

10. Remove particles A firework doesn’t last all night. So once it’s petered out, it’s probably best to forget about it, otherwise it’s just going to clog up our computer’s memory. If we pass our firework and its lights through to the removeObjectFromScene() function on lines 19 – 22, three.js will remove our asploded fireworks from the scene and GC will take care of the rest.

// Lines 106 – 110 } else if(f.hasDetonated && f.explosionLight.distance <= 1){ removeObjectFromScene(f.explosionLight); removeObjectFromScene(f.particles); fireworks.splice(aa, 1); }
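The rise-then-detonate life cycle described here can be sketched as a tiny standalone update function. The names and values below are illustrative, not the actual FileSilo fireworks.js code:

```javascript
// Move a firework up by its velocity each frame; once it passes its
// detonation height, flag it as exploded (the real code calls detonate()).
function updateFirework(fw, dt) {
  if (!fw.hasDetonated) {
    fw.y += fw.velocity * dt;
    if (fw.y >= fw.detonationHeight) {
      fw.hasDetonated = true;
    }
  }
  return fw;
}

var fw = { y: 0, velocity: 10, detonationHeight: 25, hasDetonated: false };
updateFirework(fw, 1); // y = 10, still rising
updateFirework(fw, 1); // y = 20, still rising
updateFirework(fw, 1); // y = 30, past the threshold: detonates
```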





Name of site Penny Skateboard Customiser
URL 
Designer HelloEnjoy
URL 
Time to complete Six months

The customiser WebGL app is fully dynamic: it lets you change board size and select wheels, trucks and bolts. Each part can be configured individually for unique results. It also uses real-time inventory data from a Magento eCommerce backend to present the user with options currently available. Loading time is optimised by preloading 3D models and textures in different batches using CreateJS (
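The batching idea, splitting a manifest of assets into groups so they can be preloaded one batch at a time, can be sketched without CreateJS. The filenames here are made up for illustration:

```javascript
// Split a list of asset URLs into fixed-size batches so they can be
// requested one group at a time. Plain JavaScript; no loader library.
function toBatches(items, size) {
  var batches = [];
  for (var i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

var manifest = ['board.obj', 'wheels.obj', 'trucks.obj', 'deck.jpg', 'grip.jpg'];
var batches = toBatches(manifest, 2);
// → [['board.obj','wheels.obj'], ['trucks.obj','deck.jpg'], ['grip.jpg']]
```

Loading the first batch before the rest lets something appear on screen early while the remaining assets stream in.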

Share Once the board is complete, the user is given the option to ride it, similar to a skateboard game. Using the share functionality users can invite their friends to ride their creations. The related image used on social networks is also created dynamically, rendering and uploading to the server a snapshot of each board.

The limitless potential of three.js Carlos Ulloa Interactive designer

“Building a product customiser using interactive 3D gives us complete control of how the product appears. It allows the user to virtually grab an object and look at it from any angle, like we do in real life. It also makes it possible to change colours, materials and entire parts in real-time.”


USE THREE.JS TO MAKE ANYTHING, FROM GAMES TO ROTATED PRODUCTS Three.js takes away a lot of the headaches for you and gets you started faster on making some seriously cool content. While a simple cube in raw WebGL would take hundreds of lines of JavaScript and shader code, a three.js equivalent is only a fraction of that and much easier to code and create environments with. It’s an amazing tool that lets you either bring in existing 3D assets or create your own using primitive 3D shapes. The potential opportunities are limitless with the right projects, assets or, more importantly, ideas. Although you should beware that it’s definitely not something you can

or should use for everything. While WebGL is supported on all modern browsers, and has recently picked up a lot of mobile support by being enabled on iOS devices, it won’t work everywhere. Depending on what you’re executing or where you’re trying to display your cutting-edge 3D, content may also be pretty processor-intensive and affect mobile usage. But it has its place in our toolbox and should be used any time you need to create, animate or load 3D scenes in the browser. We would also warn about the expectation or desire to create photorealistic environments with these new

// Expert guide to web 3D

More from Hello Monday

Mobile The experience is optimised for mobile, using the entire browser window. When running on desktop, higher resolution 3D models and textures are used. Both mouse and touch input methods are fully supported, and special care was put into making the interaction tactile and intuitive. The UI is fully responsive and supports Retina resolutions using SVG graphics.


Infiniti Q50 Eau Rouge

Graphics Developed using the three.js engine, we are rendering a very detailed 3D model. We use custom shaders to dynamically change colours as specified by the client in the backend. The product thumbnails are also created this way. Lightmaps created in Maya with ambient occlusion enhance the object details and depth.

Samsung Racer S

Web 3D and realism

tools and browser features, or trying to match the lighting or render style that can be found in popular 3D applications. It’s an expectation that should never be set, especially to clients. It’s important to remember that we’re working with 3D in a web browser. Nothing will ever compare to the realism of real photo/video content or the 3D quality that comes from hours of rendering passes in a 3D app. Movie studios spend years of development time and millions of dollars creating custom special effects and scenes for non-interactive blockbuster feature films that we are supposed to believe are real. Only the best movies with the craziest budgets and the best teams actually make the CG content feel natural.

Fortunately the web has its own version of cinematic, which is completely different from what you experience in real life or in a movie. We are allowed to bend the rules and create stylised fictional worlds and experiences that don’t have to be 100 per cent realistic. The web is a magical playground of opportunity which enables us to combine all sorts of different styles, creative assets and tools to create unique interactive 3D experiences. They can look however we want as long as it is authentic and compelling. That’s what differentiates the web from a movie or a photo. And that is where having a tool like three.js, to make the process simpler, is vital.

The key is creatively executing the content in a way that is stylised so that it doesn’t have to be real looking – it just needs to be immersive, emotional, memorable as well as interactive.

Lights by Ellie Goulding




What can be done with WebGL and HTML5 Canvas? So where do we start with the two? Pretty much anything you can dream of can now be done in the browser without a plugin. In fact there has been a revival of old concepts, ideas and executions that were previously done with Flash and have risen from the ashes. This has prompted old-school Flash developers to often say ‘We did that in Flash a long time ago. This is nothing new or cool’. In this new era of web-based interactive 3D development, individuals with experience in those old ideas, skills and motion sensibilities are highly sought-after commodities by studios around the world looking to create cutting-edge content that works on both desktop and mobile. So whether it is 3D or 2D content, both the WebGL API and the Canvas API give us a set of drawing tools to create and manipulate assets with better performance, control and effects than we can currently achieve with pure HTML elements. Keep in mind that both have different strengths, weaknesses, limitations and JavaScript libraries which can help you develop content. When choosing which to


use, one must consider the usual things we consider on all projects. This includes the end creative goal, time to develop, what assets we have to use or create and the eternal nemesis of all web developers – browser support.

CANVAS The Canvas API has been around longer and has deeper support on all browsers including Internet Explorer (IE9 and up). It also has full support on older Android and iOS browsers, but it is definitely a simpler drawing API. There’s no true 3D support, so you cannot load models with textures or do complex 3D scenes with crazy lighting, rendering or shaders. Though you can use the drawing tools and good old maths to create basic 3D shapes, or you can fake it by creating and manipulating assets in ways that give the illusion of 3D. Just like the early days of Flash, things like plexus animations, particle systems, emitters and mouse trails are all

fair game in Canvas and can really give the feeling of 3D space. You’ll see a lot of these types of elements in the backgrounds or footers of websites to add that extra immersive punch. It’s also great for making things like charts or graphs. Canvas is also an amazing asset for boosting performance when handling prerendered 3D content like 360-degree image sequences or videos. Performance when interacting with a Canvas-based 360 created from JPGs is exponentially faster across all browsers, and with fewer bugs, than manipulating the same assets in HTML.
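The mouse-trail effect mentioned above boils down to keeping a short buffer of recent pointer positions and fading the older ones. A library-free sketch of that bookkeeping, with the actual Canvas drawing left out:

```javascript
// Keep the last N pointer positions; older points get lower alpha
// so a Canvas render loop can draw them as a fading trail.
function makeTrail(maxPoints) {
  var points = [];
  return {
    add: function (x, y) {
      points.push({ x: x, y: y });
      if (points.length > maxPoints) { points.shift(); }
    },
    // Newest point is fully opaque, oldest is most transparent.
    alphas: function () {
      return points.map(function (p, i) {
        return (i + 1) / points.length;
      });
    },
    length: function () { return points.length; }
  };
}

var trail = makeTrail(3);
trail.add(0, 0); trail.add(1, 1); trail.add(2, 2); trail.add(3, 3);
```

In a real page, `add` would be called from a mousemove handler and `alphas` consumed when stroking each segment on the canvas.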

WEBGL On the other hand, WebGL has the more advanced 2D and 3D drawing API. It’s actually been around for years but does not have the depth of browser support like Canvas. It works on all modern browsers but unfortunately only works on the most recent versions of

Both the WebGL API and Canvas API give us a set of drawing tools to create and manipulate assets with better performance

// Expert guide to web 3D Internet Explorer (IE11 and up). It is supported but not Another challenge was compressing a 3D model that enabled on all Android devices and was recently enabled held 500MB of detail down to a smaller, more on browsers for devices that are iOS 8 and up. So the manageable 3MB final file size. The overall experience is WebGL handcufs have been removed on mobile (sort an immersive click and drag with the ability to pan, and of) and we can start having fun creating some singular zoom around the vehicle as well as the ability to toggle solutions that work on all systems and devices. between the Battle and Pursuit modes of the Using a library like three.js to help we can Batmobile to explore all of the key features in Canvas easily draw or load true 3D assets in the each mode. Drawing browser to create some really cool Three.js also great for those fun For a great example of the interactive 3D content in a short time. immersive conceptual projects we all Canvas Drawing API in action, With three.js you’re limited only by dream of being a part of that might check out The Hunger Games: your ideas. involve particle clouds, plexus lines, Catching Fire site (, It can be used on any project that data visualisation, globes or assets which uses the API to create the has existing 3D assets available. A that can be made with primitive appearance of 3D particles great example of this is the Batman: shapes, lines and colours, without Arkham Knight Batmobile Experience and interactions. needing 3D applications. An example ( of this is the Adobe Patent Innovation batmobile) for Warner Bros. by Five & Done project ( ( On this Batmobile project a fully adobe-patent). The project is an interactive touch detailed 3D model was provided by the client and then installation which enables users to explore all of Adobe’s three.js was used to create, load, animate and interact patents and inventors in 3D space. It is is currently with the scene. 
There were some initial concerns about being installed in the lobby of every Adobe ofice WebGL and a browser being able to exactly match the worldwide and will soon exist as a website. The entire lighting and style of the in-game graphics. So a more scene was created using the three.js drawing tools and holographic style was explored using a Fresnel shader as lighting to create a central abstract 3D shape a way to make the experience feel unique without the surrounded by a swarm of points and plexus lines. The pressure of replicating the game or what a final rendered points and lines represent all of the patents and artwork would look like. inventors in a connected relationship in 3D space. Users

can explore and interact with all of the points in the scene as well as view the featured patents and inventors which exist on the main abstract 3D shape.

WEBGL/CANVAS hybrid Somewhere in the middle of three.js and pure Canvas development exist tools like Pixi.js. Pixi.js is primarily geared towards creating interactive 2D content. It is, however, worth noting that it will soon support 3D manipulation of 2D elements. It is a devoted rendering engine and drawing API that is blazingly fast, with amazing cross-platform support for all devices. It has full WebGL support with seamless Canvas fallback, so that you can author once and deploy everywhere in the browser. It was patterned after ActionScript, which makes it intuitive and easy to pick up (especially if you have a history developing with Flash). When utilising WebGL, Pixi can enable you to use a huge set of existing, familiar filters such as blurs, pixelations and tints, but also enables users to create their own unique filters. With a little maths and an animation package like that of GreenSock to manipulate properties, some amazingly powerful particle engines have been created. Pixi.js is very popular as a game development and rendering tool, and it’s also great for creating crazy plexus animations, particle systems, emitters and mouse trails, as well as for handling image sequences with 3D content.



Make an interactive 3D game with WebGL Learn how to make a Simon Says game with sounds and animation using WebGL and three.js


// Make an interactive 3D game with WebGL


Retro games hold nostalgic appeal for people who remember playing them. Games themselves are often used as a way to engage an audience; just think about how many games are used in marketing to try and sell the user something. These are not highly original games, but they tap into the audience and spin the game with graphics that make it appropriate for the content. Learning to code games in the modern browser is therefore an important discipline to learn. In this tutorial the game being created is the old Simon Says game, which was a nostalgic Seventies plastic toy. The perfect choice for this is the three.js library, because it can load models, has a tween engine and a raycasting system that enables interaction. The game consists of storing the computer’s choices and then checking what the user presses to ensure they are following the same pattern as the computer. Along the way we’ll highlight the buttons pressed and play appropriate sounds. The game could be made better by spinning the model faster the longer the game goes on, or by speeding up the selections; there’s plenty to explore beyond the tutorial.

1. Get into the code From the project files open the Start folder and then open the game.html in a code editor such as Brackets. The code contains the basic CSS and HTML layout and loads the 3D model. Find line 163 in the code and uncomment the line shown here. This calls the animate loop, which hasn’t been created yet.

2. Uncomment the event handlers Just below the previous code you will see comments that are similar to the lines of code shown here. Remove those comments. These listen for events such as the browser resizing or the mouse being pressed, and call the appropriate functions, which will be created as we continue writing the code.

container.addEventListener( 'mousedown', onDocumentMouseDown, false ); window.addEventListener( 'resize', onWindowResized, false ); camera.lookAt( scene.position ); onWindowResized( null );

Event listeners JavaScript is an event-driven language, which means that it listens for specific events and then fires the appropriate function when this happens, such as mouse, keyboard or browser events.

3. Write your own All code that will be added from now on goes just before line 184 for the rest of the tutorial. Here the function is created and fired when the Play button is pressed. It hides the title screen, starts to play some speech audio and then calls ‘begin’ after a delay of almost a second.

function ready(){ info.classList.add("hide"); tmr=setTimeout(function () {begin()}, 950); }

4. Game end The next function clears the timer that has triggered this function for the end of the game, and all the next line does is remove the CSS class of hide from the div tag with the id of ‘info’. This essentially shows the title screen again and hides the game in the background.

function end(){ clearTimeout(tmr); info.classList.remove("hide"); }

5. Begin the game Now the begin function (called from Step 3) is created and this clears the values of any variables ready for the start of the game. The compSelect array is the computer selection of sequence colours; userSelect is the same for the player. Other variables are fairly self-explanatory.

var menu = document.querySelector("#menu");

function begin(){ clearTimeout(tmr); compSelect = []; userSelect = []; turn = 0; goes = 0; running = false; incr = 0; gameOver = false; }

6. Resize the screen Now the browser screen resizing is handled so that the camera perspective is updated if the screen changes. The pressing of the mouse button is handled, the mouse position on the x and y axis is stored and a ray is fired into the scene to see the models below the mouse.

function onWindowResized( event ) { renderer.setSize( window.innerWidth, window.innerHeight ); camera.projectionMatrix.makePerspective( fov, window.innerWidth / window.innerHeight, 1, 1100 ); }


The model that has been loaded is in COLLADA DAE (Digital Asset Exchange) format and was created in Cinema 4D before being exported in the XML-based file format

Top left: The WebGL scene is rendered in the browser using the three.js library, which makes creating 3D scenes relatively straightforward

Top right: The game starts with a minimal title screen, and the Play button only shows up after first loading all the sounds and then the 3D model


function onDocumentMouseDown( event ) {
  event.preventDefault();
  mouse.x = ( event.clientX / window.innerWidth ) * 2 - 1;
  mouse.y = -( event.clientY / window.innerHeight ) * 2 + 1;
  raycaster.setFromCamera( mouse, camera );
  var intersects = raycaster.intersectObjects( scene.children, true );
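The two mouse lines in the listing above convert the pixel position of a click into normalised device coordinates, where both axes run from -1 to 1 and the y axis is flipped relative to screen pixels. Pulled out as a standalone sketch (the helper name is ours, not from the tutorial):

```javascript
// Convert a pixel position to WebGL normalised device coordinates (NDC).
// NDC x runs from -1 (left) to 1 (right); y runs from -1 (bottom) to
// 1 (top), which is why y is negated relative to screen pixels.
function toNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1
  };
}

// A click in the exact centre of an 800x600 viewport maps to the origin.
var centre = toNDC(400, 300, 800, 600);
```

In the game code this result is what gets passed to raycaster.setFromCamera along with the camera.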

7. When the user clicks Once the ray has detected what models in the scene lie below the mouse when it’s clicked, an array of models is brought back. If the first object happens to be the red button then the button is made to glow, the right sound is played and the user’s selection is stored in their array.

if ( intersects.length > 0 && turn == 1) {
  INTERSECTED = intersects[ 0 ].object;
  if ( INTERSECTED.name == "red" ){
    userSelect.push("red");
    r.opacity = 1;
    redAnim.start();
  }
  if ( INTERSECTED.name == "blue" ){
    userSelect.push("blue");

Callbacks requestAnimationFrame() asks the browser to call your animation function before it performs the next repaint. Callbacks usually fire around 60 times per second.

Top left: When the user clicks on a button on the interface it responds by highlighting, then fading out the light while playing a sound for that button

Top right: On the computer's turn a random number is created to select the new colour in the sequence

Right: The computer replays its previous selections so that the player can try to remember the sequence before a new selection is added to it


8. Other colours In much the same way as in the previous step, the blue and green buttons on the Simon Says interface are detected. The opacity increase is the highlight and when the animation is called to start, this fades it back out again using the Tween library.

9. Final button As before, the yellow button is detected to see if the user has pushed this. The code below checks the user’s selection against the computer’s selection and if they don’t match up then a mistake has been made. The ‘game over’ variable is set to true if the mistake is made.

if ( INTERSECTED.name == "yellow" ){
  userSelect.push("yellow");
  y.opacity = 1;
  yellowAnim.start();
}
var c = userSelect.length-1;
if (userSelect[c]!=compSelect[c]){
  gameOver = true;
}
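The check at the end of this listing only compares the newest user entry with the same position in the computer's sequence, because every earlier entry was already verified on a previous click. As a standalone sketch (the function name is ours):

```javascript
// Returns true if the player's most recent pick breaks the sequence.
// Only the latest entry needs checking: earlier entries were already
// verified when they were clicked.
function isMistake(userSelect, compSelect) {
  var c = userSelect.length - 1;
  return userSelect[c] != compSelect[c];
}

var computer = ["red", "blue", "yellow"];
isMistake(["red", "blue"], computer);  // matches so far: not a mistake
isMistake(["red", "green"], computer); // second pick is wrong: mistake
```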

10. End of the user's turn The code here detects how many times the user has clicked a selection. If the user has clicked more times than the current number of guesses requires, their turn is over and it is handed back to the computer. If the game is over, the appropriate functions and sounds are called.

if (userSelect.length > goes && gameOver==false){
  goes++;
  incr=0;
  tmr=setTimeout(function () {turnOver(0)}, 1200);
} else if (gameOver==true){
  incr=2;
  tmr=setTimeout(function () {end()}, 700);

11. Finish the mouse event The final brackets close off the mouse event, and if the mouse hasn't detected an object under it then the array is set to null. After this the animate function is created. This calls itself in a continuous loop using the browser's built-in requestAnimationFrame. The render function is called to display the screen.
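The animate function itself isn't printed in this step. Assuming it follows the standard requestAnimationFrame pattern described above, it would look something like the sketch below (the raf fallback and the frame counter are ours, added only so the snippet also runs outside a browser):

```javascript
// Use the browser's requestAnimationFrame when available; otherwise fall
// back to a synchronous stand-in so this sketch can run anywhere.
var raf = (typeof requestAnimationFrame !== "undefined")
  ? requestAnimationFrame
  : function (cb) { cb(); };

var framesRendered = 0;

function render() {
  framesRendered++; // in the game this updates the models and draws the scene
}

function animate() {
  render();
  if (framesRendered < 3) { // a real game loop reschedules unconditionally
    raf(animate);
  }
}

animate();
```

In the browser each callback arrives just before the next repaint, so the loop naturally runs at the display's refresh rate.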

12. Each frame of the game On every frame of the game the render function updates the models on the screen and checks the game logic. The first thing to do is rotate the Simon Says toy model. It then detects if it’s the computer’s first turn and if it is, it calls a function to select a new colour in the game.

function render() {
  model.rotation.y += 0.001;
  if (turn == 0 && compSelect.length < 1){
    running = true;
    selectNum();
  }

13. Playback of existing selections The code now works out if there are previous selections made by the computer and increments through each one with the showSelect function. If it’s run through all of the selections then it needs to add a new one at the end, so the selectNum function is called to add a new colour.

if (turn == 0 && incr < goes && running == false){
  running = true;
  showSelect();
} else if(turn==0 && incr == goes && running == false){

// Make an interactive 3D game with WebGL

What is raycasting? An important concept when creating interaction in 3D spaces is that of raycasting. When the user clicks the screen it is a 2D flat space, but it is important to find out what is in the scene below the mouse. An invisible ray of light from the camera is fired into the scene – think of it as a line extending from the mouse into the scene. A list or an array is returned with all the models underneath the mouse’s position. The first object in an array is always at position 0. All we need to do then is check that the first object in the array is actually one of the clickable buttons on the Simon Says interface. If it is one of them, then we just have to find which one and then we will play the appropriate sound and animation that will give the user the correct feedback in the end.

  running = true;
  selectNum();
}

14. Select a new colour The final lines of the render function update the display of the screen and run the Tween engine. In the selectNum function a random number is generated, and if that number is zero then the red colour is selected. This is updated on the screen with the animation and sound, while the selection is stored in the computer's array.

TWEEN.update();
renderer.render( scene, camera );
}
function selectNum(){
  var rnd = Math.floor(Math.random()*4);
  if (rnd == 0){
    compSelect.push("red");
    r.opacity = 1;
    redAnim.start();

} else if (rnd == 2){
  compSelect.push("green");
  g.opacity = 1;
  greenAnim.start();

15. Blue selection In the same way as in the previous step, if the computer randomly chooses the number one, then the blue colour is selected. The animation and sound play and the selection is recorded in the array 'compSelect'.

16. Green selection Again the if statement checks the random number generated, and this time if it is 'two' the computer plays the green sound and animation. These steps are slightly repetitive, but the nature of choosing different elements means they each have to be told to play.

17. Yellow selection The final part of this code (available on FileSilo) doesn't need an if/else statement as it must be the final number, with yellow selected. This is always the final step before handing over to the user – the user's selection array is cleared and the change function is called after a pause.

18. Set audio on change Once the change function is called it becomes the player's turn and they will have to copy the computer's selections. Here an audio file is set to play, informing the player that it is their turn. After a short pause while the audio plays, the user can make selections.

19. The computer's sequence It is necessary for the computer to play back its sequence as well as select a new colour to add to the sequence. The code here is part of the loop that detects which colour the computer selected on previous occasions and plays that back to the player.

} else if (compSelect[incr] == "blue"){
  b.opacity = 1;
  blueAnim.start();
} else if (compSelect[incr] == "green"){
  g.opacity = 1;
  greenAnim.start();

20. Handing over The yellow colour is played in this code if that selection has been made previously, as it is the last colour in the sequence. The incr (short for increment) variable is increased by one and then a pause is given so that the sound can play before the release function is called.

} else {
  y.opacity = 1;
  yellowAnim.start();
}
incr++;
tmr=setTimeout(function () {release()}, 600);
}

21. Test the full game The final two functions are added, which release the computer from running the section. The turnOver function passes in either a 1 or a 0, for the user or the computer, to determine whose go it currently is and change between them. Save this now and test the game.

function release(){
  clearTimeout(tmr);
  running=false;
}
function turnOver(who){
  clearTimeout(tmr);
  turn = who;
  running=false;
}
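Steps 14 to 17 walk through one branch of selectNum per colour. The underlying mapping from the random number to a colour name can be sketched in a single helper (the function name is ours):

```javascript
// Map the result of Math.floor(Math.random() * 4) to a colour name.
// Indices 0 to 3 correspond to the four Simon Says buttons, in the
// order the tutorial's if/else branches check them.
function pickColour(rnd) {
  return ["red", "blue", "green", "yellow"][rnd];
}

pickColour(0); // "red"
pickColour(3); // "yellow"
```

Each branch in the real code additionally sets the button's opacity, starts its tween and plays its sound, which is why the tutorial spells the branches out individually.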



Make an image-based pop-up animation menu
As seen on

Above the slider
Just above our elements you can find a neat parallax slider showcasing some ESPN facts, by individual sport.

Within a slider
You may also notice that the hoverable elements are contained within a slider, enabling a larger selection of elements.

On hover
Hovering on the elements leads to simultaneous transitions including opacity, margin-top and hidden element changes.

Video backgrounds
The focus effect is not the only treat on this page, or within the site. Note the great use of video backgrounds.

The pop-up effect
Scroll down the page until you find the elements which utilise the pop-up effect we are re-creating in this workshop.

// Make an image-based pop-up animation menu


Hover effects – they never go out of fashion. No matter how much web design progresses away from the desktop and further into the touchscreen world, where hover effects become moot, designers still love to come up with new and entertaining rewards for placing the cursor on a relevant area of the page. Not only does it deliver the doorway to the next step in the user experience, but it also makes it more entertaining to knock on that door in the first place. Take sports channel ESPN's website: a veritable feast of background video, parallax animations, responsive

layouts and, of course, impressive hover effects. In this Web Workshop, we're going to create our own version of the pop-out menu items that appear about halfway down the ESPN page. Hover on any of the images in the strip, each corresponding to a sport in the ESPN line-up, and that image will pop out of the strip in full colour, presenting a link to the relevant page. It's a relatively simple effect, and fairly easy to re-create, but there's a small thrill that comes from hovering over each item to see the same effect. Combining Bootstrap CSS with some CSS3 animations, we will build our own pictorial menu and re-create the hover effects.

Don't be a purist all the time
What our experts think of the site

“It’s a common criticism that too many effects and animations are superfluous and do nothing to enhance the flow of information. While a lean path through your site is a must, it’s also important to remember how many of your users are not web designers and appreciate a bit of animated magic for its own sake.” Richard Lamb, freelance web designer at Inspired Lamb Design



1. Initial HTML Our starting point is a container and three menu-item divs within. These divs will contain all the elements, visible or hidden in static and hover states. Inclusion of Bootstrap’s CSS will style the container and responsive classes. Also add a header and title.

2. Initial CSS When styling the h1 tag, make sure you are giving generous margins above and below, to accommodate raising elements. Add transition to the menu-item class, covering all possible animations. Don’t forget the vendor prefixes. To align our elements snugly, counter the Bootstrap padding either side.
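Step 2 describes the rules without printing them. A sketch along those lines (the class names are from the tutorial, but the specific values are our guesses) might look like:

```css
h1 {
  margin: 60px 0; /* generous space above and below for the raising items */
}
.menu-item {
  background: #222;
  padding: 0; /* counter Bootstrap's column padding so items sit snugly */
  -webkit-transition: all 0.2s;
  -moz-transition: all 0.2s;
  -o-transition: all 0.2s;
  transition: all 0.2s; /* covers every animatable property used on hover */
}
```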

3. Insert images Insert your images into each of the three menu-items. You may want to include a max-width of 100 per cent in the CSS. Then give each image an opacity of 0.3. Combined with the dark background of the div, this gives each image a darkened appearance.

<div class="menu-item col-sm-4"><img src="images/bread.jpg" alt=""/></div>

.menu-item img {
  opacity: 0.3;
}

EXPERT ADVICE Watch the browsers Browser capability is still a constant headache for many web designers and developers. Including vendor prefixes for CSS3 transitions and animations is as vital as it is easy to overlook. While Chrome remains top of the pile our job is easier, but IE9 and Opera Mini will still be the ones that miss out.

4. Link box HTML Underneath the image, but still within the menu-item, insert a linkbox div which contains an a href link. Repeat this for all three menu items. This will be hidden initially, so move on to the CSS for these elements.

<div class="menu-item col-sm-4">
  <img src="images/bread.jpg" alt=""/>
  <div class="linkbox">
    <a href="#">SERVE ME</a>
  </div>
</div>

5. Style boxes As mentioned, set an initial display of none on the linkboxes. Style as you see fit, but a generous padding around both the box and the link, plus a subtle border, looks good and corresponds with the look of the ESPN site that we are trying to emulate.

.linkbox {
  display: none;
  padding: 25px;
  background: #03A678;
  text-align: center;
}
.linkbox a {
  border: 2px solid #4DAF7C;
  color: #ffffff;
  text-decoration: none;
  padding: 25px;
}

6. Hover effects Three distinct hover changes bring the effect to life. First, set a negative margin-top on the menu item. Then set the opacity of the image to 1. Finally, change the display value of the linkbox, bringing it into view.

.menu-item:hover {
  margin-top: -50px;
  transition: all 0.2s;
}
.menu-item:hover img {
  opacity: 1;
}
.menu-item:hover .linkbox {
  display: block;
}



Make on-click pop-up tooltips
As seen on

Off-screen tooltips
The container element is also responsible for enabling additional horizontal elements to appear off screen until they are scrolled to.

Background image
The background is created as a responsive image of double the container width to ensure that the effect works across all resolutions.

Container to scroll
The container element has been used here to enable the map and all of the markers to scroll horizontally when the mouse hovers.

Interactive markers
Markers can be clicked or tapped to reveal new content, and this enables the main design to remain clear when the content is not required.

Moving background
The background image moves as the mouse pointer moves – an illusion created with container overflow scrolling.

// Make on-click pop-up tooltips


The use of hotspots and tooltips enables you to layer your content and avoid cluttering your visuals, for a clean appearance. These two features involve the ability to allow additional content to scroll into view as the user moves their mouse pointer, and the ability to present information when the user clicks on designated hotspots. Presenting information in this way benefits a design, especially where space is limited and/or where the presentation of information detracts from the website's usability – something that can be easy to forget when focusing on making a design look appealing.

There are different ways that these tooltips can be used. Anyone who remembers the game Myst will see how that game's concept is similar to the tooltip map created in this tutorial – where the game uses the tooltip concept to enable exploration and interaction with the game scenery instead of focusing specifically on using it as a method for accessing additional information. This tutorial creates tooltips and hotspots used to present information about buildings in a city. A panorama image is used for the background, with clickable link elements placed over image locations that can trigger a pop-up box for access to a new page or website.

Handy for preserving space
What our experts think of the site

“Tooltips and hotspots are useful for allowing content to be presented in a way that is interactive and avoids clutter. This is good for usability experience as well as for presenting an appealing design – it’s not often that a design feature ticks the boxes for both categories.” Leon Brown, freelance web developer

Technique
1. Establish the HTML Create a text file called 'page.html' and enter our HTML from FileSilo – this will load the required CSS and JavaScript code as well as build the main page elements. The map is made from a block referenced data-map, with link elements inside for the tappable hotspots.

2. JavaScript Listeners Create another file called ‘map.js’. Then we will wait until the page has loaded and listen for mouse or touch movements on the <main> tag. JavaScript will set the horizontal scrolling position to that of the mouse/touch pointer when events are detected.
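The core of that listener is a proportional mapping from the pointer's horizontal position to a scroll offset. As a standalone sketch (the function name is ours; the real map.js wires this value to the container's scrollLeft inside a mousemove/touchmove handler on <main>):

```javascript
// Map a pointer position inside the viewport to a horizontal scroll
// offset, so that moving the pointer fully right scrolls to the far
// edge of content that is wider than its container.
function scrollOffset(pointerX, viewWidth, contentWidth) {
  var ratio = pointerX / viewWidth;          // 0 at the left edge, 1 at the right
  return ratio * (contentWidth - viewWidth); // value to assign to scrollLeft
}

// With a 200% wide map (see the [data-map] rule), a centred pointer
// scrolls exactly halfway through the hidden content.
scrollOffset(250, 500, 1000); // 250
```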

3. Define page body The default style rules should be defined, preparing the <html> and <body> tags to cover the full screen width and height. This enables child page elements to be sized in relation to the page; percentage heights don't work by default otherwise.

4. Establish main styling Create a file called ‘styles.css’ for your style rules to be placed in. The first styles to be created will set the size of the HTML, body and main elements – the latter being used as the container for our tooltip map. The <main> element has its overflow set to hidden to give the scrolling illusion.

5. Relative positioning The element inside <main> has a [data-map] attribute that has the background image, relative positioning and a width of 200% of the <main> container width, ensuring that the content scrolls regardless of the size of <main>.

[data-map] {
  position: relative;
  display: block;
  width: 200%;
  height: 100%;
  background: url('img/background.jpg') #000 no-repeat 0 0;
  background-size: 100%;
}

6. Marker elements Marker elements placed inside the [data-map] use absolute positioning that is relative to the [data-map]. Markers transition their colour when highlighted and have a default + sign as their visible content, drawn with a ::before pseudo-element.

7. Marker content Visible content that appears when a marker is selected is placed inside a child element of the marker that has a [data-content] attribute. This element is set as invisible by default, but becomes visible when the parent marker has been selected, using the :target selector on the parent.

[data-marker] [data-content] {
  position: absolute;
  display: block;
  z-index: 1;
  padding: 1em;
  background: #ccc;
  color: #000;
  margin: 1em 0 0 -0.5em;
  border-radius: 0.4em;
  width: 15em;
  box-shadow: 5px 5px 5px 0 rgba(0,0,0,0.75);
  opacity: 0;
  transition: opacity 1s;
}
[data-marker]:target [data-content] {
  display: block !important;
  opacity: 1;
}

EXPERT ADVICE Under the influence (of games): Taking a look at the design of games like Myst, which use tooltips and hotspots, can help to identify ways that this concept can be used to increase engagement.

Exploration People like to explore. Use this as a feature that enables visitors to access new parts of the website in a way that is highly targeted to what they have shown an interest in being presented with.

Challenge Is there a way that your design can provide some type of entertaining challenge? Challenges can be a useful asset of social media marketing that gives your visitors a reason to share your content with others.

Tracking Designing the experience in a way that lets you reveal interests through interactions can be useful to help you understand more about your customers. It makes your marketing more targeted or can help to close sales.

8. Speech bubble For this speech bubble shape, insert a rotated square before the [data-content] element.

[data-marker] [data-content]::before {
  position: absolute;
  background-color: #ccc;
  content: "\00a0";
  display: block;
  height: 16px;
  z-index: 0;
  top: -8px;
  -moz-transform: rotate(29deg) skew(-35deg);
  -ms-transform: rotate(29deg) skew(-35deg);
  -o-transform: rotate(29deg) skew(-35deg);
  -webkit-transform: rotate(29deg) skew(-35deg);
  transform: rotate(29deg) skew(-35deg);
  width: 20px;
}

9. Style individual markers Individual markers need their positions to be defined and set with percentages so that they retain their intended position on the [data-map] background image.

#itemA{ top: 46%; left: 24%; } #itemB{ top: 60%; left: 37%; } #itemC{ top: 36%; left: 68%; }



Create on-click fading transitions
As seen on

Fading transition
When the site loads, the user is required to click the screen to make the preloading panel fade out and the content fade in.

Animated menu
Clicking the burger menu icon to access the menu causes the whole page to slide to the right, bringing the menu in from the left.


Rollover effect
As the user rolls their mouse over the burger menu the lines crinkle up, giving a great effect to highlight the interactivity.

There is an icon in the bottom left of the screen showing that keyboard cursor keys can be used to navigate as well.

// Create on-click fading transitions

Navigation system
The area selected is shown with a cross icon. When the user clicks on a new section, the diamond animates into a cross.

As an art director, illustrator and animator, Airton Groba has worked in a variety of visual arts, from designing to digital advertising and interactive content. Through working with a number of international clients, Airton knows that it's important to pay attention to visual extras that enhance the way content works. Bringing this experience to his own portfolio site groba.tv, there is a wealth of extras in here that really enhance the way the content is displayed, with animated icons, transitions between icons and sliding transitions between content that work both horizontally and vertically. The

right-hand side menu switches to the bottom when content is sliding left and right as this enables the user to locate themselves in the overall site without getting lost and shows a good understanding of considering the user experience. The illustrated content appears as the background on each section of the site and this is great because the content fills the background but has the downside of having the text over the top. To solve this, each page has a minus icon inside a diamond and when the user clicks on this, the content over the top fades out to reveal the illustration in all its digital glory.

Informative illustration
What our experts think of the site

“The idea was to create a website to promote my skills with an easy and practical navigation. I [highlighted] my last jobs, mixing 2D and 3D illustrations with graphic and digital design so that the user can see my work with all the details. The website is responsive, fullscreen and works with keyboard too.” Airton Groba, freelancer

Technique
1. Fade out/in click The site features a preloading effect that fades out when the user clicks, to reveal the site fading in. To start this effect, add the HTML content and image available on FileSilo.

2. Set up the CSS In the head section of your page add some style tags and then add the CSS to style up the page. This includes setting the background to black, removing the padding and margin – this enables the preloader overlay to work.

3. Overlay a preloader panel While a site loads, the preloader needs to sit over the top of the rest of the page to hide it. This is positioned absolutely over the other content with a higher z-index. The background is set to black to hide content below.

#preloader {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  z-index: 100;
  background-color: #000;
}

EXPERT ADVICE Keyboard control The website also features keyboard control, as well as the standard mouse control and clicking found on other websites. This works well because the site transitions between content both horizontally and vertically; using the keyboard cursor keys gives a quick way to get around the site.

4. Style the page The rest of the content will sit in the ‘content’ div tag. Here the background image is made to cover the entire webpage and this div tag is given the width and height of the browser viewport in order to support that.

#content {
  background: url(img/bg.gif) no-repeat center center fixed;
  -webkit-background-size: cover;
  -moz-background-size: cover;
  -o-background-size: cover;
  background-size: cover;
  width: 100%;
  height: 100%;
}

5. Finish CSS touches The last CSS rules position content in the middle of the page by centring the content vertically within the browser. The last rule hides the content of the page so that it can be faded in using jQuery to fade out the preloader and then fade in the page content.

.middle {
  position: relative;
  top: 50%;
  transform: translateY(-50%);
}
.hide {
  display: none;
}

6. Bring it all together The site requires a click to move past the preloader, so once everything has loaded, add the click function to the preloader, which fades it out; when that's finished the page content fades in. Load the page into your web browser to see the effect.

<script src="jquery-2.1.3.min.js"></script>
<script>
$(function() {
  $( "#preloader" ).click(function() {
    $( this ).fadeOut( "slow", function() {
      $( "#content" ).fadeIn( "slow" );
    });
  });
});
</script>



Inside the latest CSS4 selectors Take a look at some of the new CSS4 selectors and how we can use them right now


// Inside the latest CSS4 selectors


Progress does not stand still, especially with CSS. Not that long ago we were introduced to the awesomeness of CSS3, which quite literally reanimated the way we build our webpages. But today we see the emergence of the new CSS Level 4 specifications. This is very new of course, and browser support for many of these CSS4 selectors is very poor, so we don't recommend using most of them in production. But in this tutorial we will take a look at the ones that are supported, and also take a quick peek at some of the ones that are not yet available but will be very soon. The specification explains selectors as patterns that match against elements in a tree. Most of the selectors from the Level 4 specification are pseudo-classes. No new pseudo-elements were added, but it does seem as though they might be added in other modules at some point in the future. As the development of pseudo-classes was paid a lot of attention, they are now at the fourth level and have gained a lot of cool new additions. So in this tutorial we will take a closer look at some of these and how we can work with them!

1. The range pseudo-classes The :out-of-range and :in-range pseudo-classes are used to style elements that have range limitations when the value the element is bound to is outside of the specified range limits. This is handy when you would like to add a date picker to your webpages or web app. In addition to this, you could just have a simple input field with a number range.

2. The HTML for the date picker Let's suppose you wanted a simple date picker on your webpage – perhaps you've got a booking page, for example. In any case we need to first add in the HTML, and what we'll do here is set the range from 1974 to 1990 and set the default date outside of the range. This will then enable us to style it accordingly by using the 'range' label.

<input type="date" min="1974-01-01" max="1990-02-01" value="1973-01-01">
<label for="range"></label>

3. Style the date picker For us to see our date picker in all its glory, we need to add some CSS. First of all we will add the :in-range rule, which indicates that the value is in range with a 1px solid light-blue outline. Secondly we will add the :out-of-range rule, which is the default state here and is indicated by a 1px solid red outline.

input[type=date]:in-range {
  outline: lightblue solid 1px;
  padding: 1em;
}
input[type=date]:out-of-range {
  outline: red solid 1px;
  padding: 1em;
}

4. Out-of-range notification We have styled our input borders with the colour red to show that the date is out of range. But we can also add a more dynamic notification by adding some text to our page. By targeting the 'label' element after the input field, we can add some content in green to give the user a clearer indication of the date or number being out of range.

input:out-of-range + label::after {
  content: "This value is out of range! Please pick another one.";
  color: green;
}

5. Number range The other simple thing we can do with these pseudo-classes is to create a simple input field that has a number range. This would also be useful in a booking form where you would only want a maximum of 20 items, products or people. Here's the HTML and CSS for doing that:

<h5>Numbers:</h5>
<input type="number" min="1" max="20" value="0">
<label for="range"></label>

input[type=number]:in-range {
  outline: lightblue solid 1px;
  padding: 1em;
}
input[type=number]:out-of-range {
  outline: red solid 1px;
  padding: 1em;
}

6. Pseudo-class ':has()' The relational pseudo-class, :has(), is a functional pseudo-class taking a relative selector list as an argument, and this is very similar to jQuery's has() selector.

Level 4 specification The Selectors Level 4 specification is currently in Working Draft status and an Editor's Draft can be found here:


The date picker is a handy element to have, and viewing this in Google Chrome will give you more options

Top left: We've now added in the number range and you can see that this is almost as simple as the date picker, but also very handy

Top right: Now that the out-of-range notification has been added to the CSS, it helps to make things more intuitive and useful


The :has() selector isn't just another word for 'contains', which is how the jQuery method works; it can also mean 'has a specified element following it' or 'has an immediate child'.

7. Target containing elements As mentioned in the previous step, the :has() pseudo-class can target an element by what it contains. This gives us a great deal of control over which elements we choose to target and style. So in this CSS rule, we are simply targeting any section that has a heading element, and we will then change its colour to red.

section:has(h1,h2,h3,h4,h5) { color: red; }

8. Target other elements As mentioned, there's more that we can do with this nifty little :has() pseudo-class. The first rule in this code shows a way of targeting an image that is immediately followed by a paragraph. Then in the next rule we target a list item with a ul as a direct child. So as you can see, we can get a lot of power using the :has() pseudo-class.

img:has(+p) { color: red; }
li:has(> ul) { color: red; }

Browser support Just remember that some of these selectors are in draft and are not yet supported by browsers, so make sure you test which ones are supported before using them in production.

Top left: For us to see the :read-only and :read-write pseudo-classes in action, we first need to add some HTML

Top right: This is what the :read-only and :read-write input fields look like after some CSS has been added

Right: The :invalid and :valid pseudo-classes can be attached to an input field with an 'email' data type

9. Logical combinators The next few selectors we will look at are considered logical combinators, or logical pseudo-selectors. The first one we'll look at is the :matches() pseudo-class, which enables us to group and match items in our CSS document. Then we will take a look at the :not() logical combinator and see how simple and effective that is.

10. The :matches() pseudo-class The :matches() pseudo-class can save time and typing. Normally if we wanted to target a selection of elements such as anchor states, then we would specify them one after the other (shown in the code below as 'Old way'). But using :matches() we can pass in a list of parameters (shown below as 'New way') and get the same result.

/* Old way */
li a:link, li a:hover, li a:visited, li a:focus {
  color: red;
}
/* New way */
li a:matches(:link, :hover, :visited, :focus) {
  color: red;
}

11. More complicated situation The code we looked at in the previous step is pretty straightforward and hasn't really tested us much, so why don't we try using it in a more complicated situation? The CSS rule is pretty self-explanatory and you can see the power that we get from using this simple selector called :matches().

section:matches(.active, .visible, #veryimportant) {
  background: blue;
}

12. The :not() pseudo-class The last logical combinator we are going to look at is the :not() negation pseudo-class. This was introduced in the CSS3 specification, but it became even more powerful in Level 4 with the ability to accept multiple arguments. The code in this step will apply a red colour to all of the paragraphs that are not assigned the active or visible class in the markup.

p:not(.active, .visible) { color: red; }

13. Take it further With the addition of the :nth-last-child pseudo-class we can combine the :not() pseudo-class with this to make a more complex selector. In this rule we are selecting all of the divs, apart from the ones that are direct descendants of .container elements and are the last two siblings. As you can see, these rules can get very powerful and complex.

div:not(.container > div:nth-last-child(-n+2)) { }

14. :read-only and :read-write These pseudo-classes are what we call mutability pseudo-classes and they represent elements that either

// Inside the latest CSS4 selectors

Location pseudo-classes The location pseudoclasses refer to the visitor’s location on your site and should not be confused with geocoding. A couple of useful changes may be coming to them in CSS4. One that we looked at earlier in Step 20 is a hyperlink pseudo-class called :any-link in which stands for any element that is the source anchor of a hyperlink. The second is the :local-link pseudo-class which styles hyperlinks, depending on the website visitor’s location on the site. This pseudo-class also can diferentiate between external and internal links, something we didn’t really go into in great detail. The :local-link refers to an element that has a source anchor hyperlink whose target is the same as the element’s document URL in non-functional use.

have or have not got user-alterable content. The :read-only pseudo-class represents an element that is not user-alterable and :read-write represents an element that is. So this is all pretty straightforward and over the next few steps we’ll take a look at how we can put these pseudo-classes into practice.

outline: solid 1px red; } input:read-only{ padding: 5px; margin: 10px; outline: solid 1px blue; }

15. Add some HTML For us to see the :read-only and :read-write pseudo-classes in action, we will first need to add some HTML. So using some input fields we can specify in our CSS which input fields are disabled to the read-only state and which input fields are alterable.

<p>This input is “read only”:</p> <input type=”text” readonly> <br> This input is “disabled”: <input type=”text” disabled> <br> This input is normal: <input type=”text”> <div contenteditable></div>

16. The CSS The CSS is pretty straightforward for this step. The first two elements that we have here will have a blue outline because they are set to ‘readonly’ and ‘disabled’ in the HTML respectively. Then you’ll notice that the third element will have a red outline because it’s naturally editable (‘read-write’), and that’s the same for all of the inputs by default.

input:read-write { padding: 5px; margin: 10px;

19. Location pseudo-classes In the next few steps we’ll take a look at what is called location pseudo-classes. The first one we will look at is called :any-link and then we will take a look at :local-link. These location pseudo-classes will give us more control over the styling of links.

20. The :any-link pseudo-class 17. Validity pseudo-classes The validity psuedo-classes are very useful in HTML forms as they can give visual clues as to the validity of the data that the user has entered. This is something that would normally be done using JavaScript. The two validity psuedo-classes we’re going to be looking at are :valid and :invalid.

18. Valid or invalid A good way we can test out these validity psuedo-classes is to use an input type. So what if we wanted to check whether or not the input type that we have is an email? Well, let’s first add in the HTML for an input type specified as ‘email’ and then, by making use of CSS we can check whether or not it’s actually valid. If it’s not valid then what we’ll do is we will make the border red, but if it is valid then we’ll make it green.

Email: <input type=”email” required> input[type=email]:invalid { outline: red solid 1px; } input[type=email]:valid { outline: lightgreen solid 1px; }

The :any-link pseudo-class gathers definitions of a:link and a:visited and then puts them into one, so that you don’t have to write them both as you may normally need to. So now it no longer matters whether a link that has been visited or not as they will be styled the same either way regardless of either scenario.

a:any-link { color: red; }

21. The :local-link pseudo-class Our second pseudo-class :local-link is a lot more interesting and very handy. If you wanted you could give a diferent style to the links that target your homepage and you could then leave all others untouched. You could also combine the :not() pseudo-class and specify that any links that are pointing to the current page will not have text decoration.

nav :local-link { text-decoration: none; } :not(:local-link(0)) { color: red; }



Build circular on-hover navigation As seen on

Inside content The navigation items are made as regular HTML containers, so any content type, such as images, can be inserted.

Animation trigger The animations on the page are triggered when the user hovers the mouse cursor over one of the navigation items.

Circular shape The circle shape of the items is achieved by using a border radius of 100 per cent, enabling any sized circle to be made.

Navigation items

Container control

The items are regular navigation links refined to show as table cell style elements for content flow and sizing.

Each navigation item displays within a navigation container, allowing control over the size and location of the navigation items.

// Build circular on-hover navigation


Some types of navigation are meant to be highly noticeable. In the case of the inspiration for this tutorial, it is clear that the navigation's purpose is to present itself in a way that directs the user to a specific part of the website. This type of navigation is ideal for areas of a website that different types of visitor are accessing, as it makes it clear where the information of interest can be found. This approach to website design can be highly useful for projects that rely on making enquiries or sales conversions from users who have never previously visited the website. In these scenarios, users who don't

immediately see what they want are more likely to click on the back button, meaning that the website has failed to achieve its purpose. The consequences of these 'bounces' can be costly in terms of lost opportunities to produce conversions and actual cash expenditure. The ability to present clear options doesn't have to be restricted to multipage websites – the same concept can also be used to navigate to sections on the same page. Just use ID names for page content elements and refer to them in your navigation elements with a # followed by the ID name in the href attribute. Make sure you download the full tutorial code from FileSilo.
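The same-page technique described here can be sketched like this – the section names and IDs are our own examples, not from the tutorial:

```html
<!-- Same-page navigation: each href targets the ID of a content element.
     The IDs used here are our own illustrative examples. -->
<nav>
  <a href="#intro">Introduction</a>
  <a href="#pricing">Pricing</a>
</nav>

<section id="intro">Introduction content</section>
<section id="pricing">Pricing content</section>
```

Clicking a link scrolls the page to the element whose ID matches the fragment after the #.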

Keeping it simple
What our experts think of the site

“The use of highly visible navigation can help to make content of all types much easier to navigate by giving users the option to avoid irrelevant content. As the areas of websites and apps start to merge, this type of navigation will become more relevant to web design – especially in the area of web apps.” Leon Brown, full stack developer

Technique 1. Define HTML First create the main HTML content that contains the effect's elements. We'll use a <nav> element to contain the <a> links that become the menu items. The HTML links to CSS and some JavaScript for the visual effects.

2. Adapt navigation HTML Create a file called ‘menu.js’. Our technique requires the HTML navigation items to have two span items – the first is used as the background circle and the second will contain visible content. Adapting navigation items with JavaScript means that the default HTML is good for SEO.
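The adaptation that 'menu.js' performs can be sketched as a small string transformation. The markup shape (two spans: background first, content second) follows the step above, but the helper name and approach are our own assumptions.

```javascript
// A hedged sketch: wrap a nav link's label in the two <span> elements the
// technique requires – the first for the background circle, the second for
// the visible content. The function is our own, not the tutorial's code.
function wrapNavItem(label) {
  return '<span></span><span>' + label + '</span>';
}

// In the browser this could be applied to every nav link, e.g. with jQuery:
// $('nav a').html(function (i, old) { return wrapNavItem(old); });
console.log(wrapNavItem('Home')); // → <span></span><span>Home</span>
```

Because the spans are injected by script, the HTML served to search engines remains plain links, which is the SEO point the step makes.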

3. Define page basics Create a new file called ‘styles.css’. Insert the initial element formatting CSS to define the page body and navigation. This example will have the navigation set to have child content placed in the middle and display as a block element to display at full screen width.
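A minimal sketch of what that 'styles.css' starting point might look like – the exact values are our own assumptions, guided only by the description above:

```css
/* Hedged sketch of the Step 3 basics; values are illustrative assumptions. */
body {
  margin: 0;
  font-family: sans-serif;
}
nav {
  display: block;       /* displays at full screen width */
  width: 100%;
  text-align: center;   /* child content placed in the middle */
}
```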

4. Navigation Items

EXPERT ADVICE Pay-per-click clarity This type of navigation can be highly useful for pay-per-click campaigns that are designed to direct people to a targeted landing page. People often want to see an overview of the information and then select for themselves to see the parts that interest them, which this type of navigation can be made to provide.

Page navigation items are the <a> links inside the navigation container. These display with a red border and have a bigger font size. Navigation items will animate when hovered, so the transition property is used to define a one-second opacity transition for the later CSS states.

5. Navigation interaction The navigation items should fade to be semitransparent when the user hovers over an item that isn't being selected. This is achieved in two stages – the first defines all navigation items to have a quarter opacity, then the second stage selects the hovered item to have full opacity.

nav:hover a { opacity: 0.25; }
nav:hover a:hover { opacity: 1; }

6. Background animation The appearing circle animation used in the background is made from the first <span> item used as a square block refined into a circle using clip-path. Only the first <span> element has the properties applied to it to show as the animated circle when the user hovers over the nav item.

nav a span {
  position: absolute;
  display: table-cell;
  vertical-align: middle;
  text-align: center;
  z-index: 0;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}
nav a span:first-child {
  background: #c00;
  opacity: 0.5;
  transition: -webkit-clip-path 1s, clip-path 1s, border-color 1s, opacity 1s;
  -webkit-clip-path: circle(0% at center);
  clip-path: circle(0% at center);
}
nav a:hover span:first-child {
  -webkit-clip-path: circle(50% at center);
  clip-path: circle(50% at center);
  border-color: rgba(0,0,0,0);
}



Create a flickering background image As seen on

Menu rollover icon As the user rolls their mouse over the burger menu, the circle becomes a baseball in keeping with the site theme.

Vintage film effect The background image is given a greater vintage impact with a flickering, film-scratched movie over the top with a low opacity.

Reactive logo The main logo on the screen moves around as the user moves their mouse around the screen, reacting to user input.

Outer border

Historical timeline

The border around the page is orange, but as the user scrolls down, the border adjusts to the new coloured background.

As the user scrolls down, a timeline appears on the left of the screen showing where the user is in the history of the team.

// Create a flickering background image


With the web being such a new medium, existing only on shiny screens, it is important that websites are able to capture the mood of what is being designed, going beyond the gloss that computers bring. The St. Louis Browns baseball team have a great microsite dedicated to the historical society of the team. As the site deals with history, it chronicles the team's story through a timeline approach using a single-page scrolling design. To give maximum impact to the page, the first image takes up exactly the size of the browser window and there is a

lovingly crafted vintage film grain and scratches effect over the top of the image. This is created by stretching an MP4 movie of dust and scratches over the top of the image, with a very low opacity that allows the original image to show through. The effect is subtle due to the low opacity – it is not distracting to the viewer, but at the same time it instantly communicates a sense of age and the historical legacy of the team. To continue this theme, all of the images used have a slightly worn look to them, as if the ink hasn't quite adhered to the block when printed. This takes the design beyond shiny computer graphics and reinforces the theme to the user.

A time-travelling story
What our experts think of the site

“The indelible history of the St. Louis Browns inspired a vintage design on a modern storytelling platform. Historical photography brings to life anecdotes and stats from their 52 seasons. And thoughtfully considered interactions and animations create an immersive, delightful full-screen experience.” Justin Striebel, art director, HLK

Technique 1. Create a composition In After Effects create a new composition, 1000 x 562 pixels, 24 frames per second, with a duration of 2 seconds and 1 frame. Click OK to create this, then go to Layer>New>Solid and choose black as the layer colour.

2. Add noise Add a Tint effect to this layer. Turn on the stopwatch to add keyframes, and at different points along the timeline change the tint colour to different greys or blacks. Now use the Add Grain effect, then make the noise monochromatic and turn the intensity up.

3. Random lines Deselect all layers and grab the Pen tool. Click at the top and then the bottom to draw a line. Make sure it has a white stroke. Now add keyframes for it at different places along the timeline, moving the line on the screen for each keyframe. Repeat this step with another layer.

5. Over to CSS In the CSS for your web project add the following rules. The body is set to have an image covering the background, while the video is set to a very low opacity, stretched over the top of the image, to give the flicker effect.

body, html {
  margin: 0;
  padding: 0;
  background: url(img/bg.jpg) no-repeat center center fixed;
  background-size: cover;
}
video {
  position: absolute;
  z-index: 100;
  top: 0;
  left: 0;
  min-width: 100%;
  min-height: 100%;
  opacity: .0875;
  filter: alpha(opacity=8);
}

6. Add the video tag In the body section of your code add the video tag as shown below. The CSS will automatically place this over other content and stretch it to fill the screen. Save the file and view it in a web browser to see the flicker effect over the top of the image.

<video src="img/flicker.mp4" loop autoplay></video>

EXPERT ADVICE Reduced colour palette The St. Louis Browns’ site features duotone images in the background. This means that there is one colour and a greyscale image – so the image is made out of two colours. This is a great way to keep file size of images down because there is less information to store and it doesn’t interfere with text over the top.

4. Add to render queue Next we will go to Composition>Add to Render Queue. Here we will leave all of the options at their defaults and just hit the Render button. After that we will put the video through Miro Video Converter (mirovideoconverter.com) to get an MP4 file that is more suitable for playing on the web.



Create a slide-down on-scroll menu As seen on

The regular menu When the site first loads, the burger menu is at the top right-hand corner of the screen and works as a normal menu.

Scrolling change As the user scrolls down, the regular menu scrolls off the top of the screen and triggers a black header to slide down.

Loading animation A black background transitions to white with a wipe animation from left to right to show the loading progress.

Parallax scrolling

The new header

The elements on the page, like the text and images, scroll at different speeds but always form the main content in the middle.

The black header (when active) is fixed in position with a semitransparent background so that content can still be seen.

// Create a slide-down on-scroll menu


Menus are one of the most important aspects of any site: they provide the way that your visitors are going to actually navigate and interact with your content. This provides a tremendous opportunity to do something unique; at the very least, the designer needs to take something that's quite familiar and do something a little unexpected. The creators of have done just that with their transformation of the familiar burger menu. As the page loads, the burger icon is present at the top of the page in the right-hand corner of the screen, so no big

change there. But as the user scrolls down the page, this icon scrolls off the top as any normal content scrolls with the page. Just as the icon is scrolling off the page, another header bar slides down with a different coloured background, containing another burger menu. It's a simple but effective change and certainly grabs the user's attention. When the icon is clicked, the off-screen menu slides up from the bottom of the screen and overlays with the new header, covering the entire screen. The way the menu slides up until it touches the header is actually very pleasing and shows how a simple animation can be effective.

Make the wait interesting
What our experts think of the site

“ is the branding site of Lella Baldi company, an Italian company famous for women’s shoes completely made in Italy. The site style is like that of the brand: minimal chic, elegant, fresh and fluid, the site is fully responsive and mobile friendly. It’s a refined style and unique, as are their creations.” Lattanzi Eros, project manager

Technique 1. Creating the menu reveal The Lella Baldi site has a unique menu system. To get the same effect, start by adding the jQuery library and the CSS styling to the head section of your document. The body is set to have no padding and the header is made into a fixed element. Make sure that you download all the code for this Web Workshop from our FileSilo.
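A hedged sketch of what that head section might contain. The jQuery version and the exact styling values are our own assumptions; the header's starting position of -100px matches the animate() values used later in Step 5.

```html
<!-- Illustrative sketch only: jQuery version and values are assumptions. -->
<script src="https://code.jquery.com/jquery-1.11.3.min.js"></script>
<style>
body { margin: 0; padding: 0; }
#header {
  width: 100%;
  height: 100px;
  background: rgba(34,34,34,0.9);
  color: #fff;
  position: fixed;
  top: -100px; /* off-screen until the user scrolls */
  z-index: 200;
}
/* the #content and #menu rules from Step 2 follow, then the closing </style> */
```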

2. Finish the styling The content of the page is given an arbitrary 1,600-pixel height so that it is larger than the browser window and there is some scrolling on the page to reveal the header. The menu is placed as a fixed element at the bottom of the page.

EXPERT ADVICE Pushing the menu In your quest to make your menu more amazing than other sites, always make sure your menu is as obvious as possible regardless of what new twist you are trying to give to your interaction. Every user has to understand how to navigate your site, otherwise what you’ve created is a complete failure.

#content {
  width: 100%;
  height: 1600px;
}
#menu {
  width: 100%;
  height: 100%;
  background: #222;
  color: #fff;
  padding-left: 20px;
  z-index: 201;
  position: fixed;
  bottom: -100%;
}
</style>

3. Add the HTML tags Now move to the body section of the page and add div tags. These correspond to the CSS added in the previous two steps. If this was a real page then you would put your own content in the div with the id of ‘content’.

<div id="header">Header</div> <div id="content">Page Content goes here</div> <div id="menu">Menu</div>

4. Add the functionality Below the HTML tags, JavaScript content can be added. The document is checked to make sure it has loaded, then two variables are set to track whether the header has slid down onto the page or if the menu is on.

5. Scroll detection When the scrolling has gone more than 100 pixels, the header is animated onto the page to sit at the top. If the user scrolls back up, it slides back off again.

$(window).on('scroll', function() {
  var scrollTop = $(this).scrollTop();
  if (scrollTop > 100 && down == false) {
    $('#header').animate({"top": 0}, 300);
    down = true;
  }
  if (scrollTop < 100 && down == true) {
    $('#header').animate({"top": -100}, 300);
    down = false;
  }
});

6. Click the header Rather than just clicking a burger icon, the whole header is set to be a button that moves the menu onto the page. The menu slides up from the bottom and stops below the header. The menu slides off if the header is clicked a second time.

$("#header").click(function() {
  if (menuOn == false) {
    $('#menu').animate({"bottom": -100}, 500);
    menuOn = true;
  } else {
    $('#menu').animate({"bottom": "-100%"}, 500);
    menuOn = false;
  }
});
});
</script>



// The Art of Web Fonts



The primary duty of typography is to convey information in writing. To paraphrase Emil Ruder, the great Swiss typographer, a website which cannot be read or understood becomes a product without purpose. However, typography does not just convey information. It also imparts feeling, emotion and sentiment, and arouses preconceived ideas of content, tone, trust and suitability.

Think about the kinds of message the text should be conveying. Do you want the site to be more child-like or more professional and business-looking? Your typeface will impart an emotion to the reader. You might want them to feel excited, comforted, reassured or even tense. Consider typefaces with these emotions in mind. All of us associate certain letterforms with topics, places and times, so identifying typefaces that have a relationship to your content or client can give a sense of place, evoke a mood or even reference a specific period in history. Above all, experiment. Once you've found a good direction, make sure to try variations alongside your body copy to fine-tune your choice.

As a designer, the way that you set your type – for example when you are tweaking the size, spacing, colour and context – will go a long way to forming a negative or positive experience. The quality of your typesetting can have a big impact on how your reader feels when they see your creation, but the most significant contributor of all will be your choice of typeface.

When considering a typeface for your website body copy – that is to say, the bulk of the text to be read – your decision will largely be driven by practical considerations. The type you choose for body text should remove as much friction as possible between your reader and the text. The last thing a reader wants is to spend time with a typeface that keeps trying to grab their attention – that would just get tiresome and irritating after a while. After your initial choice between a sans and a serif, you will be looking for similar traits in a typeface; maybe you are looking for something that has sturdy and simple shapes, with a low contrast between the thick and thin strokes as well as a generous x-height. The upshot of making all of these practical decisions is that the different choices of body text that you make will not provide big differences in your reader's mood, although they will still register in a subtle way.

If you do want to have a big effect on your reader, the opportunity is still there: headings and display text are your attention grabbers. They set the scene and draw people in. Visitors to the webpage will 'see' this type before they 'read' it, and that's your chance to choose a typeface that immediately expresses what the text, and indeed the entire website, stands for.

Superfamilies If you have chosen a body text font that is part of a superfamily, you could save time and look straight away to using the display styles within that family, knowing that the two fonts will sit together well.

Mikey Allan, senior designer, Clearleft
“Choosing the right typeface is vital in creating an appropriate mood and emotion for your design. Setting type at display sizes lets you set up instant context before your site visitor has even begun to read.”

Typography does not just convey information. It also imparts feeling, emotion and sentiment

5 Questions you need to ask before choosing a font

What kind of message should your text convey? The font choice can set the tone of the text. Consider what is appropriate for what you want to do. For example, do you want a child-like enthusiasm? A formal, business-like tone? Cutting-edge tech style? A hipster and craft feel? Futuristic design? Retro look? A sensible and secure study? A fun and informal composition? Or an authoritarian and trustworthy concept?

How do you want your reader to feel? Your typeface will impart all kinds of different emotions to the reader. You might want them to feel excited, comforted, reassured, surprised, respectful, intrigued or even tense. Think carefully about the feeling that you want to convey.

What time or place do you want to invoke? All typefaces come with the baggage of history. If the text is pertinent to a particular point in time such as the Twenties or something a bit more futuristic, or hints at a distant location such as Mexico or Scotland, your font could hint at this.

What features do you need the font to have? Many fonts come packed with OpenType features such as swashes, ligatures and alternative characters. These features can enhance your design so whittle down your shortlist accordingly.

How much does it cost and is it available as a web font for your project? Some fonts are free and some are expensive. Do you have a budget? Not all fonts are available as web fonts yet, and some only through a service, which can be a limiting factor.




3sixteen Classic combination of expressive display font and hard-working geometric sans for the navigation and body copy, lending a high-quality feel.

Pelican Books

How your body responds when you quit smoking Online and paper text designed as one. Brilliant single-page site with subtle animation and perfect typesetting.

Set yourself apart
YOUR TYPEFACE CHOICE AND HOW YOU USE IT HELPS YOU TO BE UNIQUE

Just as we judge someone based on the clothes they are wearing, we make judgements about text based on the typeface in which it is set. Choosing the same typeface as everyone else, especially if you're trying to make an impact, is like turning up to a party in the same dress or to a meeting in the same suit, shirt and tie. We've talked about the psychological effect a typeface can have. When choosing a font for your display text, you also need to think about the picture it will paint. You should consider large type in the same way you might a photograph: there's a visual impact that comes before the actual words, which can anchor your layout as well as set the mood.


The type designer Christian Schwartz says there are two kinds of display typefaces. The first kind are the workhorse typefaces that will do whatever you want them to do. Helvetica, Proxima Nova and Futura are good examples. These fonts can be shaped in many different ways, but this also means they are found everywhere, and they take great skill and practice to work with in a unique and striking manner. In order to make an individual statement with this kind of font, you'll need to pay extra attention to the details. Make judicious use of colour, choose an extreme weight such as very light or very heavy, or an extreme width such as expanded or condensed. Carefully adjust letter spacing and line height to create an eye-catching and immaculately typeset 'picture' with the words.

The second kind of typeface is one that does most of the work for you. Like finely tailored clothing, it's the detail in the design that adds interest. Good examples are Clerkenwell, Marr Sans and Bree. These typefaces carry much more inherent character, but are also less malleable – it's harder to adapt them to different contexts. Both kinds of typeface have their place, but it's good to be aware of what you're using.

Look for typefaces designed specifically for display use. Sometimes these will be obvious, like Strangelove, a very narrow handwriting font which can only be used in big sizes. Others will come as part of larger font families, so-called 'superfamilies'. It's worth noting that very condensed styles in superfamilies have the advantage of allowing more text to be squeezed in, and so can be set particularly big for added impact.

Students' Union, Manchester Metropolitan University A rich, interactive user experience that is not afraid to make the typography the star.


Makeshift Magazine Lovely balance of impactful display type and beautifully set body copy. Matches the printed magazine perfectly, not by copying, but by adapting the design to the web.

More sophisticated type families will have specific 'display' styles. These are often variations on the body text, especially adjusted for setting at large sizes. For example, closely comparing Benton Modern with Benton Modern Display reveals that the contrast (the difference between thick and thin strokes) is increased, as is the size difference between lower and uppercase letters – both typical attributes of display styles. The most expressive display styles exaggerate design features of the body text: Questa Grande beautifully enhances the fine curves, flicks and curls hinted at in its text styles.

The process of choosing a display font can be fun. Make yourself a typographic prototype by first deciding on a font for your body text and making a simple webpage to which you can add your display text and start playing with different typefaces. Fontdeck and most other webfont services let you try out fonts for free, and some font foundries such as Dalton Maag have trial licences.

The best place to get your inspiration is where fonts are actually being used. Screenshot nicely designed websites, tear out pages of magazines, visit design review sites and read typography blogs. Identify graphic and web designers you like and try adopting their palettes. Also look at their font choices and how they are used. Spend time on type foundry websites. They are often great sources of beautiful and expressive typography, as they are trying their best to show off their fonts in the ways that they were designed for. If there's a typeface you particularly like, find other typefaces by that designer. You may find that their other typefaces work for what you want to do too.

Above all, be expressive with your type. All type can have an effect on the reader, so take advantage of that and allow your type to have its own vernacular and impact. Don't be too reverential, dogmatic or ordinary. Be brave and push a few boundaries.

Explore type foundry websites If you find a typeface you love, see what else that foundry or designer has created. Foundry websites usually have wonderful examples of their typefaces in action which you can use for inspiration.

Does size matter?
WHAT ABOUT RATIO TOO?

When considering display text, we've been talking about text that makes an impact – big text. When deciding precisely how big that text should be, you should consider a scale. A scale provides consistency by way of a mathematical relationship between different design aspects – between body text and title text, for example. One scale you could use is the golden ratio, where each type size is 1.618 times bigger than the previous one, giving a sequence of type sizes like this: 16, 26, 42, 68, 110. One scale does not serve all screen sizes. The golden ratio has significant jumps between sizes and so works well for large screens. For smaller screens, where your biggest text cannot be anywhere near as big as for a large screen, you need a smaller scale, as found in the classic typographer's scale: 16, 18, 21, 24, 36.
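A ratio-based scale like the golden-ratio sequence above can be generated programmatically. This is a minimal sketch with our own function name; it multiplies a base size by the ratio for each step and rounds to whole pixels.

```javascript
// Generate a modular type scale: base size multiplied by ratio^step,
// rounded to whole pixel values. Function name is our own.
function typeScale(base, ratio, steps) {
  const sizes = [];
  for (let i = 0; i < steps; i++) {
    sizes.push(Math.round(base * Math.pow(ratio, i)));
  }
  return sizes;
}

// Golden-ratio scale from a 16px base:
console.log(typeScale(16, 1.618, 5)); // → [ 16, 26, 42, 68, 110 ]
```

Note that the classic typographer's scale (16, 18, 21, 24, 36) is not generated by a single ratio, so it would be defined by hand rather than computed this way.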

Tim Brown @nicewebtype Tim keeps his ear to the ground for all the latest developments in web typography and is one of the most knowledgeable people around.


How designers and non-designers see fonts It's easy to forget that fonts mean different things to different people. Sarah Hyndman's online Font Census asked participants whether they had professional experience or not. The difference between the answers from the pros and the consumers is fascinating. It's also well worth remembering when choosing a font for a project. Here are a few quick examples:



Helvetica DESIGNER “Intellectual”, “intelligent”, “stylish”.

NON-DESIGNER “Everyman”, “meh”, “dull”.

DESIGNER “Simple and elegant”, “classic design”, “clean, crisp, classic”.

NON-DESIGNER “Boring”, “ordinary”, “honest”.


Bauhaus DESIGNER “Architecture”, “art movement”, “technical”.

NON-DESIGNER “Silly”, “friendly”, “doughnuts”.

EXPERT Sarah is a graphic designer with over 20 years' experience and the author of The Type Taster: How fonts influence you. Source: The Type Taster: How fonts influence you by Sarah Hyndman

Getting technical
CSS OFFERS FINE CONTROL OVER TYPOGRAPHY WITH WIDESPREAD BROWSER SUPPORT. USE THIS TO YOUR ADVANTAGE TO FURTHER SET YOUR TYPE APART

Typography is about sweating the small details. These tend to be subtle, invisible and seemingly trivial in isolation, but together make a typographic picture greater than the sum of the parts. Firstly, make sure kerning is turned on. This uses the instructions in the font to automatically close up the gaps between letters such as 'W' and 'e' in the word 'we'.

text-rendering: optimizeLegibility;
font-kerning: auto;

Next check the spacing between your letters. If you are using sentence case you can normally tighten the spacing a bit. It's generally best to avoid loosening the spacing with lowercase as it affects legibility.

letter-spacing: -0.02em;

Conversely, if you are setting display text in capitals, then increase the letter-spacing by five to ten per cent.

letter-spacing: 0.05em;


True responsive design allows for fluid layouts, where line lengths vary. However, computers don't have a sense of what's good or bad – they can't make informed typographic decisions. Normally the rules you provide in CSS will get body text set comfortably, but short passages of text provided for impact will ideally require a human hand. Consider line breaks and the shape of your text. A simple guideline is to alternate shorter and longer line lengths. If you know in advance what the words will be then you can stick in some judicious line breaks, but check that the layout doesn't break in your responsive design. If it does, you may need to add a breakpoint specifically for that heading. A useful catch-all (particularly if the words are coming unknown from a database) is to automatically insert a nonbreaking space between the final two words of a

heading. This will prevent an unsightly word sitting alone at the bottom of the heading. Pay attention to your line spacing. Display text can be set much closer than body text. Start by setting ‘solid’:

line-height: 1; If you’re certain precisely which words will appear on which line, you can try setting even tighter. There is a rule saying descenders and ascenders must never touch. There is an exception to this rule: touching is allowed if it looks better.

Letterspacing tightened and line-height reduced to improve appearance of text set large

// The Art of Web Fonts


Comic Sans


PERSONALITY Comedian, everyman, storyteller VALUES Friendly, welcoming, loud STYLE Novelty, quick'n'easy, comfortable

PERSONALITY Everyman, idealist, leader VALUES Conventional, confident, modern STYLE Neutral, credible, calm


Baskerville PERSONALITY Intellectual, academic, wise VALUES Traditional, conventional, trustworthy STYLE Neutral, credible, knowledgeable

PERSONALITY Leader, idealist, thinker VALUES Modern, confident, capable STYLE Neutral, practical, comfortable

How to influence people To really connect with an audience, consider who they are. You need to speak to them in the right tone. Don’t pick a kid-like font (Comic Sans) for a business site.

Source: The Type Taster: How fonts influence you by Sarah Hyndman



Questa Grande


FS Clerkenwell


Abril Fatface


Marr Sans



Ostrich Sans





Strangelove Next


Playfair Display


Benton Modern Display

Calypso E


5 tools

Thinking in an all-device world


TO MAKE THE MOST OF THE POWER OF THE WEB, YOUR TYPOGRAPHY SHOULD EMBRACE TRUE UNIVERSAL DESIGN

More and more web-enabled gadgets are entering the hands of readers on a daily basis. You should aspire to make your designs flexible and pliable for screens and devices of all shapes and sizes. Your design must adapt to the reading context, and the best way is through the methodology we know as responsive design. There are two core principles to the technical side of responsive design. The first is liquid layouts. This means no fixed widths, allowing your text to reflow as needed. This is fine for paragraphs in body text but a trickier proposition for headings in display text. The second principle is the use of media queries to add an extra layer of control and adjustment at the point the liquid layout breaks. This breakpoint usually occurs as the layout encounters bigger or smaller screens, but don't make the mistake of using device screen sizes as a way of deciding upon breakpoints. Your media queries are there to support the typography, so you should use a typographic assessment of when the design might break. This means setting your breakpoints in ems rather than pixels.

The code below increases the size of your h1 heading when more space is available to you, as determined by how much text can fit in. This will mean that the breakpoints take into account your reader's default text size – this may vary from device to device and can also be changed by users. In that example, we set a very large text size (typically equivalent to 96px) for large screens. It's important to remember the enhanced experience we can give to readers with large screens, without degrading things for those with equipment at the other end of the spectrum. It's also vital to acknowledge that devices and browser windows come in all shapes and sizes. A device such as the Hudl with a 16:9 screen has a very shallow height when held in landscape mode. This will mean that the use of very large headings delivered by an assessment of width may look out of proportion, so you will need to add breakpoints in the vertical plane as well, for example like this:

h1 { font-size: 2em; }
@media all and (min-width: 56em) {
  h1 { font-size: 4rem; }
}

@media all and (min-width: 74em) and (min-height: 46em) {
  h1 { font-size: 6rem; }
}

@media all and (min-width: 74em) {
  h1 { font-size: 6rem; }
}

Fonts in Use A public archive of typography searchable by typeface, format and industry. Also includes commentary and reviews from founders Sam Berlow, Stephen Coles and Nick Sherman.

Typewolf A comprehensive resource curated by Jeremiah Shoaf. Includes a site of the day (always good) and is searchable by typeface.

Remember how people read

People read different kinds of devices in different modes – at a desk, on the sofa, in bed or standing on a train. Take this into account when considering sizes.

WhatFont
whatfont.html
This is a bookmarklet and browser extension to identify any web font on a live website. Very handy indeed.

On Web Typography Jason Santa Maria’s excellent book, On Web Typography, is all about how to apply classical typography principles to the web.

Ampersand Conference A fun, practical one-day conference in Brighton dedicated entirely to web typography. Great at covering a diverse range of topics, from type design to web font performance.

uses media queries and vw units to adapt the heading size



Tuxedo No2 has some lovely use of Bookmania swashes over on

OpenType features
OPENTYPE WAS CONCEIVED THE SAME YEAR AS CSS AND NOW PROVIDES A WORLD OF TYPOGRAPHIC POSSIBILITIES

Many typefaces are designed with hidden gems enabling you to be even more creative in your typesetting. These hidden gems are powered by a technology called OpenType, which bundles optional letterforms such as ligatures, swashes and alternates within the font files. Swashes in particular can add sophistication to your text. Swashes add a typographic flourish by way of a flamboyant addition to a character, such as an exaggerated serif, tail or entry stroke. You can turn on swashes in CSS by using:

Maintaining word wrap
TRULY RESPONSIVE TYPOGRAPHY

Viewport units let you specify length and size in terms of the size of the viewport. The units are vw, vh, vmin and vmax, where a value of one is equal to one per cent of the viewport width or height. This means that you can set your font size in terms of viewport size, so on wider windows your text is proportionally bigger. So if you have a heading you want to wrap precisely over two lines, you can set the font size with vw units and it should wrap at the same point.

-webkit-font-feature-settings: "swsh" 1;
font-feature-settings: "swsh" 1;

Other OpenType features to play with include discretionary ligatures, for example an 's' connected to a 't' with a loop to give a high-class or historic feel.

font-size: 5vw;

You'll need to experiment with the font size until you get it right, and ensure that the text doesn't look too small or big.
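Viewport sizing can also be combined with the em-based media queries discussed earlier so that the heading never becomes unreadably small or absurdly large at the extremes. A possible sketch (the breakpoint and size values here are assumptions):

```css
/* Clamp a vw-sized heading at both ends of the viewport range */
h1 { font-size: 5vw; }

@media all and (max-width: 20em) {
  h1 { font-size: 1em; } /* floor for very narrow screens */
}
@media all and (min-width: 80em) {
  h1 { font-size: 4em; } /* ceiling for very wide screens */
}
```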

-webkit-font-feature-settings: "dlig" 1;
font-feature-settings: "dlig" 1;

You can play with more OpenType features at clagnut.com/sandbox/css3.

Hazlitt uses an alternate ‘O’ in the main heading



Animate typography and text effects
Give your typography the attention it deserves with these must-see animated effects with CSS3


// Animate typography and text effects


One of the biggest evolutions with CSS3 has been the ability to write behaviours for transitions and animations. These animated effects are a must-know for any designer or front-end developer, as they unlock all kinds of interactive possibilities and visual feedback options. In this tutorial the focus is on text, with three different effects that offer some great possibilities. The first will not actually use standard text on the page, but will instead create the text inside an SVG element. The reason for this is that SVG enables strokes on text, which is not possible with regular HTML text, and sometimes you may just need strokes with text. Using the SVG right in your HTML will keep your text accessible and will stop you having to rely on GIFs. Once the stroke is in place, it will be given five different colours and set to march around the text with animation. The next text effect shows how to make a text rotator so that different words can be cycled through on the screen. The final effect will use text clip to clip an image to the text so that the image only shows inside the text. This will be turned into a call-to-action button with a sliding image effect.

1. Set up the document Open the project folder in Brackets or a similar code editor and then open start.html. Create style tags in the head section and add the CSS shown. This will import the right typeface that will be used from Google and sets up the basic HTML settings for the pages.

@import url( css?family=Oswald:400,700);

html, body {
  height: 100%;
  font-weight: 800;
}
body {
  background: #35483e;
  background-image: url(img/bg.jpg);
  background-size: cover;
  font-family: Arial;
}

4. Add the CSS for the SVG Move back to the CSS section of the page and add the rule for the SVG. This will display the object as a block element so that it can be centred on the page with the margin set to auto. The font for this element is set to Oswald and a large text size.
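The rule itself isn't reproduced on this page, but based on the description a sketch might look like this (the exact width and size values are assumptions):

```css
/* Step 4 sketch: display the SVG as a centred block with large Oswald type */
svg {
  display: block;     /* block element... */
  margin: 0 auto;     /* ...so auto margins centre it on the page */
  width: 960px;
  max-width: 100%;
  font-family: 'Oswald', sans-serif;
  font-size: 5.5em;   /* a large text size for display use */
}
```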

2. Write an SVG graphic As SVG graphics are written with tags they can be easily authored without any graphics application. Move to the body tag and add the start of this SVG graphic, which creates text to display in the browser. Later this will get styling from the CSS that will animate this.

<svg viewBox="0 0 960 300">
  <symbol id="s-text">
    <text text-anchor="middle" x="50%" y="80%">Kinetic Design</text>
  </symbol>

3. Create graphic lines The next code that is added finishes off the SVG; more importantly, it creates five graphics nodes that will be styled using CSS to create five differently coloured strokes. These target the text that was created in Step 2.

<g class="g-ants">
  <use xlink:href="#s-text" class="text-copy"></use>
  <use xlink:href="#s-text" class="text-copy"></use>
  <use xlink:href="#s-text" class="text-copy"></use>
  <use xlink:href="#s-text" class="text-copy"></use>
  <use xlink:href="#s-text" class="text-copy"></use>
</g>
</svg>

5. Style specific text Now the CSS is targeting the specifics of the text: the fill is turned off while a white stroke is added to the text. The stroke isn't applied all the way around the text, by using the dash array. The stroke is widened and told to take five seconds to apply the animation.
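The listing for this rule is missing from the page; a sketch following the description (fill turned off, a white stroke, a partial dash array and a five-second animation – the dash and width values are guesses) could be:

```css
/* Step 5 sketch: unfilled, stroked text with a dashed outline to animate */
.text-copy {
  fill: none;
  stroke: white;
  stroke-width: 3px;
  stroke-dasharray: 7% 28%; /* stroke is not applied all the way around */
  animation: stroke-offset 5s infinite linear;
}
```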

6. Start the animation By adding keyframes the stroke will immediately start animating around the edge of the text. Now each graphic element is given colour and a slight delay in its movement to create the basis for the rotating stroke around the outside. At present there are orange and dark red strokes.

@keyframes stroke-offset {
  100% { stroke-dashoffset: -35%; }
}
.text-copy:nth-child(1) {
  stroke: #5c0404;
  animation-delay: -1s;


CSS keyframes


The CSS keyframes rule enables the designer to specify either ‘from’ or ‘to’ values, or alternatively it enables them to use a percentage that states what should happen.



The next text elements are added to the HTML and given some basic styling for us to place the text under the animated heading Top left

The SVG element is added to the page, and basic CSS styling places this in the correct position on the page Top right

The fill colour is removed and a stroke is added that is not shown all around the edge of the text elements, which is still an interesting look


}
.text-copy:nth-child(2) {
  stroke: #d6801c;
  animation-delay: -2s;
}

7. Finish the stroke As in the previous step, the CSS here is targeting the different children of the graphics object and they are given different colours and offset in their own animation. This now gives the effect of five different colours marching around the edge of the text.

.text-copy:nth-child(4) {
  stroke: #ffff9e;
  animation-delay: -4s;
}
.text-copy:nth-child(5) {
  stroke: #55981b;
  animation-delay: -5s;
}
@keyframes stroke-offset {
  100% { stroke-dashoffset: -35%; }
}

8. Second effect That completes the first effect that is being added to text, so now move down to the body tag and add our code from FileSilo below the SVG added earlier. This readies the

Naming keyframes Notice that the keyframes in Steps 6 and 14 have been given unique names so that they can be called by the right piece of animation.

Top left

Keyframes are added to the animation and each list element is cycled through by sliding down to the next and back up again. The text is now about to slide up Top right

A ‘more’ button is being styled up and the background image is only visible through the text. The image will animate when the user rolls over Right

At this stage this image rollover is working. The colour is different from the previous step, but it needs to stand out and look more like a call-to-action button


setup of a text rotator that will move through the diferent list elements with animation, great for showing of a range of skills.

9. Style up the text With the next content in place, move back to the CSS style tags and our code on FileSilo places the text in the centre of the page under the animated heading created earlier. At the moment this still looks like a list, but that will all change as more CSS is added to complete the effect.

10. Static section The text is made to float left, where one half of the text is static, ie not moving, hence the name of the class that is controlling it here. Once floated to the left, the overflow is hidden. The height is set up so that only one line of the moving section can be seen.

.static {
  float: left;
  overflow: hidden;
  height: 40px;
}

12. Set the unordered list Now with this rule targeting the unordered list, you will notice that the text for the first element is sitting alongside the static text. The other list elements are below this but because the overflow is cut, it isn’t being displayed until the animation is added.

ul {
  margin-top: 0;
  padding-left: 130px;
  text-align: left;
  list-style: none;
  animation: 6s linear 0s normal none infinite change;
}

13. Size up the height Styling up the list elements applies the right colour for these elements and more importantly sets the line height so that the text moves to the right section and can only see that section at any one given time. If the line height was smaller it might be possible to see parts of the other text on the screen.
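The code for this step doesn't appear on the page; judging from the description and the 40px heights used in Steps 10 and 14, it might look like this (the colour is an assumption, reusing one of the earlier stroke colours):

```css
/* Step 13 sketch: line height matches the 40px window so one item shows */
li {
  color: #ffff9e;
  line-height: 40px;
}
```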

11. Paragraph style The paragraph tag is targeted and given a light yellow colour that has also been used in one of the strokes around the edge of the previous text, just to keep consistency with the colours. Again this text is floated to the left so that the list can sit alongside it.

p {
  display: inline;
  float: left;
  margin: 0;
  color: #ffff9e;
}

14. Make it move The final setup for the CSS part is to make it move by defining the keyframes called 'change'. Step 12 calls for these keyframes and adding them will immediately start the text rotator sliding up and down to show the text. Because the animation was set to infinite, this will just keep looping.

@keyframes change {
  0%   { margin-top: 0; }
  15%  { margin-top: 0; }
  25%  { margin-top: -40px; }
  40%  { margin-top: -40px; }
  50%  { margin-top: -80px; }


CSS animation To take advantage of CSS animation, instead of relying on JavaScript, it is important to understand exactly what is going on. Transitions provide a change from one state to another, while animations can set multiple keyframes of transition. Transitions must have a change in state, and you can trigger this with the :hover, :focus, :active and :target pseudo-classes. The most popular is hover as this provides rollover changes. There are four transition-related properties: transition-property, transition-duration, transition-timing-function and transition-delay. Animations set multiple keyframes that tell an element what change it should undergo; these are defined with @keyframes and called by name from the animation property of the element using them. Only individual properties may be animated.
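As a minimal illustration of the four transition-related properties mentioned (the class name and colours are invented for this example):

```css
/* A hover transition: the background colour fades over 0.3 seconds */
.cta {
  background-color: #d6801c;
  transition-property: background-color;
  transition-duration: 0.3s;
  transition-timing-function: ease;
  transition-delay: 0s;
}
.cta:hover {
  background-color: #5c0404;
}
```

The shorthand `transition: background-color 0.3s ease 0s;` is equivalent.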

  65%  { margin-top: -80px; }
  75%  { margin-top: -40px; }
  85%  { margin-top: -40px; }
  100% { margin-top: 0; }
}

font-size: 3em;
font-family: 'Oswald';
text-align: center;
position: relative;
display: block;

15. Call to action That completes the second animation that we are exploring. Now it's time to add the final animation, which will be for an animated rollover button, using a newer CSS feature: clipping the background to text. In the body, add the HTML from FileSilo below the other content that is on the page.

18. Apply the image Now for the real nuts and bolts of the process. Here the clip-text rule is continued and the background image is set to display, however instead of it being in the background like normal, the text is set to clip it. This means that the background only appears inside the text.

16. Centre the box

background-image: url(img/text-bg.jpg);
background-position: bottom;
background-size: cover;
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
transition: 2s ease all;

The first part of this is simply to centre the box and for us to clear the floated elements in the content above. This box is going to be quite small on the screen so it is being made to be 400 pixels wide. The auto margin centres the div box on the screen.

.wrapper {
  clear: both;
  width: 400px;
  margin: 0.5em auto;
}

17. Start the clipping The next rule will set up the clipping of the text. Firstly the right typeface is applied to this and the type is given quite a prominent size of 3ems. The text is centred with the block and some margin and padding sets this up nicely on the screen.

.clip-text {
  margin-top: 4em;
  padding: 0.3em;


19. Rollover effect Adding the code here will set the rollover for the text and the background image. The image will be set to be positioned at the top in the background. Previously it was set to the bottom so the text will have the image slide through the text as the user rolls over it with the mouse.

.clip-text:hover,
.clip-text:hover::before {
  background-position: top;
}
.clip-text:before,
.clip-text:after {
  position: absolute;
  content: '';
}

20. Create a border This code is the first step in creating a border around the text. This is achieved by placing the image behind the text. The downside at this point is that the text is invisible as the image is there. The final step will correct this.

.clip-text:before {
  z-index: -2;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
  background-image: inherit;
  background-position: bottom;
  background-size: cover;
  transition: 2s ease all;
}

21. Clip the image The final part is to place a semitransparent background colour behind the text and to give this a slightly smaller size than the box. The result is that the text and border now both clip the image so that it is only visible where they are. Save this now and test it in the browser.

.clip-text:after {
  position: absolute;
  z-index: -1;
  top: .125em;
  right: .125em;
  bottom: .125em;
  left: .125em;
  background-color: rgba(214, 128, 28, 0.9);
}




// UX design



In your job as a designer, do you frequently meet with users in the form of interviews, user testing, card sorts and desirability testing? Do you know who your target users are? Do you test your early concepts and designs with users? Finally, do you periodically test your designs once they are deployed? If you don't do any of these tasks – that is, you are designing without user input – you are not doing user experience. You are what is referred to as a 'wireframe monkey': a designer who churns out wireframes based on nothing more than their own or their client's assumptions. A good UX designer will look at a client's requirements and then ask what research those requirements are based on. If the client responds that they do not have any research, a good UX designer will try to persuade the client of the benefits of primary research and user-centred design (UCD) methods. A good UX designer should ask lots of questions before they start to sketch or design. They will also understand that they need to observe user behaviour (user testing) rather than listen to what users say (focus groups). If you look at some definitions of user experience, they all incorporate multiple uses of the word "user" (as you would expect), eg:

• User experience (UX) focuses on having a deep understanding of users, what they need, what they value, their abilities, and also their limitations.
• Every aspect of the user's interaction with a product, service, or company that make up the user's

perceptions of the whole. User experience design as a discipline is concerned with all the elements that together make up that interface, including layout, visual design, text, brand, sound and interaction. UX works to coordinate these elements to allow for the best possible interaction by users.
• As Jakob Nielsen explains: "User experience encompasses all aspects of the end-user's interaction with the company, its services, and its products".

As well as focusing on acquiring a deep understanding of user motivation and behaviour, UX involves taking into account and supporting the business objectives, ie the key performance indicators – increasing the likelihood of increased user performance, satisfaction and return on investment. UX also needs to be distinguished from usability, which is a subset of UX. Usability focuses on how well a user can complete their task, rather than the entire end-to-end user interaction with a product. To design a successful human-centred product, the following disciplines can be involved: user research, information architecture, interaction design, visual design, content strategy and accessibility. Lots of well-known organisations make use of user experience: Facebook, Apple, Google, the UK Government and Philips all understand that to make great products with the best user experience possible, they need first to have a deep understanding of people's needs and desires.

A good UX designer will look at a client’s requirements and then ask what research their requirements are based on

The theory behind UX
HOW THE TERM CAME TO BE, ALL THANKS TO AN APPLE JOB ROLE

In 1993 Don Norman (who was vice president of the Advanced Technology Group at Apple) decided to change a job title from 'user interface architect' to 'user experience architect'. He believed this term better reflected the user's interaction with a system. Anyone who has studied product design will have heard of Don Norman. He is a cognitive psychologist who is considered one of the founding fathers of user-centred design. In the field of human-computer interaction (HCI), Norman was responsible for various HCI concepts such as the action model (incorporating the gulfs of execution and evaluation), perceptible affordances, rationality in human behaviour, the designer's conceptual model and the Activation-Trigger-Schema model. Other HCI concepts include:

• Banner blindness, a type of web behaviour where users consciously or subconsciously ignore any content that looks like an advert/banner.
• The theory of 'flow', which refers to the positive set of experiences that people experience – often when gaming – when they are so involved in an activity that nothing else seems to matter.
• The concept of copy and paste, developed by Larry Tesler and Tim Mott while they were working for Xerox PARC.

Tesler (who spent 17 years at Apple) advocates that usability testing should always be done before a designer finalises unproven or controversial interface elements. We can see that Tesler and Mott were using a user-centred design approach when they worked on their text-editing system back in the Seventies.

If you are planning on designing a product with an exemplary user experience, you [will first] need to understand the user. Too often, technology or ‘cool’ features come first, with user needs second.

Stephanie Ellis User experience consultant



The UCD lifecycle
USER-CENTRED DESIGN (UCD) IS AN APPROACH THAT PUTS THE USER AT THE MIDDLE OF THE DESIGN PROCESS

UCD is an iterative design process, involving multiple methods, that ensures your product will be easy to use and delivers a positive experience for the user as well. This design process generally consists of four stages:

1. Planning Defining a project plan will usually be undertaken by a project manager and involves defining the project scope, assembling a project team and allocating project resources ie team members, tasks and timings. Project scope is important as this will outline what activities the project will include and in particular, what UCD activities are planned (depending upon time, budget and personnel resources).

2. Analyse This is the discovery stage where you learn about your users, their tasks and environments. Tasks in this phase can include user testing the existing site (to find out what works and what doesn't), contextual interviews (usually conducted if designing an enterprise or business-to-business product), surveys (a series of questions designed to elicit feedback from site users) and card sorting (a method where users group and categorise a site's information). One aspect to consider is the recruitment of participants. Unless you can be sure your client can

User-centered design means working with your users all throughout the project.

Don Norman Norman is the co-founder and principal of the User Experience/Usability consulting firm, the Nielsen Norman Group


recruit representative users to serve as research participants, outsourcing the recruitment to a specialist recruitment company will save you time and money in the long run. This is because the time spent by staff on recruiting participants ranges from one and a half to two hours of staff time for each participant recruited. Also worth mentioning is doing a content audit (on the existing site). This is useful in identifying duplicate and obsolete content, and ascertaining whether content needs to be rewritten or moved to a different section. Outputs from this stage will include a list of usability issues of the current site, existing pain points and future requirements, qualitative persona user data, and label categories and navigation information for structuring the information architecture of the site. Using the data collected in this stage will enable you to develop personas, determine requirements (both user and technical), develop user journeys and define the information architecture.

MOBILE Tinder has useful and novel content for users who all have the same goal. Viewing faces activates the fusiform face area, a part of the brain that makes humans pay attention to human faces. The swipe method is easy to learn and is highly usable ie users can swipe and filter content quickly.

3. Design Using the data collected in the analysis phase, you can now start designing your site. Typically design outputs include user journeys (the user flow through the site for the main tasks), sketching the site’s templates and converting your sketches into wireframes, paper prototypes or an interactive prototype. Do you need to do wireframes, paper prototypes and an interactive prototype? No, do as much as your budget and/or client needs. Wireframes are a good tool for communicating the actual design without any visual design. Paper prototypes (sketches of the design) can be good for collecting feedback on an initial concept while an interactive prototype will be able to show just how the design works digitally (though this will take you some more time to prepare). This stage typically depends upon the client and what they are comfortable with. Some clients are fine with static wireframes whereas others can only really understand and make sense of the design if it is delivered as an interactive, clickable prototype.

‘You only swipe once (YOSO)’ is a simple and engaging concept. The whole concept taps into users’ emotions (in a totally superficial way but reflects human behaviour). By linking to a user’s Facebook account, users are not disclosing any personal content that is not already within the public domain.

4. Test and refine In addition to involving the user at all stages, UCD emphasises an iterative approach to design. In theory, this means frequent evaluation of design solutions with typical users (we'll discuss what invariably happens in practice later on). Types of evaluation can include chalkmark/first-click testing (to test the site's navigation), evaluating paper prototypes to ascertain the degree to which the design solution meets user needs, and – if an interactive prototype has been developed – lab-based usability testing to measure task effectiveness, efficiency and user satisfaction with the design.

Jesse James Garrett @JJG

JJG is one of the world’s most widely recognised technology product designers. He is also the author of The Elements of User Experience


The ROI of UX


DESKTOP The concept of the total user experience encompasses more than just the user interface. It involves having a digital strategy that covers the user experience across multiple channels, above and below the line. Lush has successfully managed to create an ecosystem where print, digital and social media connect and scale internationally. The entire product page is exemplary, but what stands out is the innovative use of video (the video plays as soon as the user lands on the page), showing the user how a product can be used. The full ingredients list also helps users in deciding to make a purchase. The site is fully responsive and works just as well on mobile as it does on the desktop. The mobile site has the same content and feature parity as the desktop site. Desktop usability hasn't suffered as Lush has retained global navigation on their desktop site, rather than using the hamburger menu.

Doing is everything

One of eCommerce's biggest challenges is to create trust and credibility. Even in 2015, it is surprising how many eCommerce sites make it difficult for users to find delivery and refund information. Lush prominently displays this information in their footer, creating trust with the user and aiding their decision-making.

Watch what users do in the testing stage, not what they say in focus groups. Jakob Nielsen said focus groups can give “inaccurate data because users may think they want one thing when they need another.”

Is there any evidence to show that iterative design works? Clearly, using a ‘design-test-change-test’ process is going to be more expensive than a design method where the requirements are determined by the client and a designer churns out some wireframes. But an early and continual focus on users results in a usable product and delivers user satisfaction, and using a ‘design-test-change-test’ iterative cycle during a project improves task success, performance time and overall user satisfaction. UCD can additionally reduce the risk of an unusable product, ie one that is based on badly defined system requirements. As technology becomes increasingly ubiquitous, understanding user needs and contexts is essential when designing for multiple contexts of use and avoiding badly defined system requirements. When designing we now need to think about constraints such as designing

for small screens, interruptions when using devices, gestures rather than a mouse, and variable connectivity. Conversely, designing for smaller screens gives us access to features – GPS, camera and microphone – that can be used to reduce the user’s workload. Design decisions should, of course, be based on evidence, not opinions. In theory, this decision process should prevent stakeholders proposing unrealistic project goals (both user and technical) and helps focus decision-making on solving user problems and meeting their expectations. Avoiding costly features (more often than not proposed by senior stakeholders) that users do not require and/or cannot use can save considerable development time. With user research methods such as contextual interviews and user testing, a designer can identify valuable insights and uncover opportunities. That is, user research can drive new product innovation by

uncovering problems and creating novel design solutions, or by simply adding value for the user. Often, when conducting user testing sessions, a facilitator will ask a participant how they would improve a product, to which the response is invariably ‘make it more user-friendly’. On its own this statement is pretty meaningless, but having done some user testing we are able to recognise latent user issues and articulate what ‘user-friendly’ actually means. Amending a prototype design is also cheaper than modifying a fully coded design, and if we let our design teams regularly observe users interacting with their design or a competitor’s design, we can gain empathy and learn the culture of use. Empathy is important for designers: it enables us to understand a user’s frustrations, and by appreciating how a user approaches a task – their user journey – we can understand how a product may fit into a user’s life.



‘Sell’ UX If you work for an organisation that has not embraced UCD, or your clients say they have already done research (usually this is a survey) or that you don’t need to do any research as they know their customers, then how do you persuade stakeholders of the benefit of the ‘design-test-change-test’ method? First, include UCD tasks in your original quote so that you don’t have to try and ‘sell’ UX to the client when the project has already kicked off. Then run an informal user testing session, inviting the stakeholders to observe – you can use paper prototypes. Observing users struggling with a product is an extremely powerful way of teaching the benefits of UCD methods. You can also use proprietary screen-recording software and test three to five users from within your office to start gathering data, using the thinking-aloud method. Using screenshots and quotes from a session can be a good way to convince stakeholders that user testing need not be a big production. Josh Seiden (co-author of Lean UX) advises you to talk to customers. If you have a sales team, ask them if you can accompany them on sales visits/calls, and/or listen in to some customer calls if you have access to a call centre.

The 1st cycle:
Day 1 (Learn) – AM: user/stakeholder interviews; PM: personas/scenarios
Day 2 (Build) – AM: flow design; PM: refine
Day 3 (Measure) – AM: get stakeholder feedback; PM: refactor

Jeff Gothelf @JBOOGIE

Jeff Gothelf is the author of Lean UX: Applying Lean Principles to Improve User Experience, published by O’Reilly in 2013


UCD-less designs WHAT ARE THE CONSEQUENCES OF NOT USING A USER-CENTRED DESIGN APPROACH? So what happens when a product is designed without UCD? Let’s take a look at Apple’s iOS 6 Maps app – created so that Apple would not have to pay Google’s fees or stay under its control. It turned out to be a big UX failure because of poor underlying content quality, which directed users to the wrong location and placed important landmarks in the middle of lakes. While the Maps app’s functionality was great (the turn-by-turn directions were excellent and it was fully accessible for visually impaired users), the content quality was missing, and users are fully aware of how a maps app should work and what a great user experience Google Maps provides. Other UX failures include the US government’s rollout of its site in October 2013. Multiple usability guidelines were breached, such as forcing users to register before they could view their options, positioning key calls-to-action below the fold, using nonstandard dots as a progress meter in place of the standard numbered steps or completion meter, and requiring users to create a complex username rather than allowing them to use their email address. What have these failures taught the design community? That user expectations have changed from the days when people spent hours learning how to program their video recorder. We now expect products to be easy to use (without having to read through a 20-page instruction manual), and we now know that a badly designed product is the designer’s fault, not ours. These substantial failures could have been avoided if UCD activities had been included to evaluate the designs before they went to the build stage. Or maybe these organisations did do user research and stakeholders ignored the evidence – this does happen. Adopting a Lean UX method (discussed below) or discount usability techniques could have avoided some of the problems these organisations experienced with their designs.
Lean UX emphasises a focus on short, iterative design cycles rather than the waterfall-style production of lots of documentation (requirements, site maps, wireframes, technical specifications and so on). Lean UX also encourages the designer to focus on the design phase and keep deliverables light and editable, for example by using whiteboards for your initial concepts and inviting feedback from the project team. This method prevents you from working in a silo, and has the benefit that other team members have a stake in the design and a sense of ownership. The result is that everyone in the team is working towards a common purpose. Using a Lean UX method also prevents the designer from wasting time on a design that is not technically feasible, as the developers are able to review the designs earlier in the design stage.

Work with Agile Agile software development is characterised by releasing fast iterations of products, in sprints of usually one or two weeks. Integrating UCD activities into each sprint keeps the team’s focus on user needs and makes sure that users find the design intuitive and desirable. As the Agile methodology does not explicitly mention how to integrate UCD activities so as to develop usable software, using a Lean UX approach is probably the best way of introducing UCD techniques into an Agile project. To be a successful designer working with Agile you must understand your user and be flexible in adapting UCD techniques to fit in with reduced timescales.

// UX design


Responsive web design Responsive web design (RWD) has been around since 2010, and more and more sites are moving towards this mobile-first approach to deliver an optimal user experience.

Mobile use Mobile is now a big player in the UX market: for the first time ever (April 2015), time spent online via mobile devices surpassed time spent via desktops and laptops.

The hamburger menu 2015 will see the continuation of the hamburger menu debate – a mobile design pattern where global navigational categories are hidden from view in a drop-down menu.

Superfast development

Digital ecosystem

The Agile software development process promotes transparency and ensures development issues are identified earlier (in each sprint), not at the end of the development stage.

The user experience community will need to consider future UX as housed within an ecosystem of products rather than as its own virtual product.


As one of the top sites for attracting worldwide traffic, BBC News only implemented a fully responsive site this year

Future-proof the user experience Responsive web design (RWD) has been around since 2010 and has improved considerably, providing users with a positive mobile user experience. Sites that are not mobile optimised are penalised by Google and achieve a lower rank than sites that provide a good mobile experience. Clearly, Google is hoping this move will force more companies to adopt mobile sites and responsive design. Google’s move to penalise non-responsive sites certainly makes sense given that, for the first time ever (April 2015), the time spent on mobile devices surpassed that spent online via desktop and laptop (2 hours, 26 minutes each day, to be precise). Some users will use mobile as their only form of internet access.

As RWD has become the norm, web design has become somewhat generic and lacking in creativity, ie most homepages consist of a large hero image or carousel at the top of the page and boxes and grids elsewhere. A predicted trend is for the UX community to go beyond design patterns and think of UX as virtual product design, housed within an ecosystem of products or omnichannels. UX is now at the stage where we don’t need to be re-creating design patterns for a checkout process or logging in. We should be thinking about how our designs fit in with the bigger picture, or a user’s digital ecosystem comprising multiple devices. Which brings us to the next trend: the Internet of Things (IoT). As discussed above, the way we interact with computers has changed considerably within the last 15 years. Fewer of us are using desktop computers and more of us interact with a plethora of digital devices. IoT refers to the range of connected physical devices, embedded with sensing abilities, that are able to gather

The Internet of Things (IoT) refers to the range of connected physical devices, embedded with sensing abilities, that are able to gather and analyse data.

and analyse data. IoT devices can include wearables (fitness trackers and smart watches), home technology (heat and light regulation), medical devices (weighing scales, blood pressure monitors) and much more. What does this mean for UX? There will be a wide variety of environments where interactions will occur. Many devices will be concealed and without a screen. Some devices may have no input facility at all, with interaction happening on one device and output delivered on another, or with input managed by an app. Debate over the hamburger menu will continue. Although it seems to be ubiquitous, this menu minimises and hides the global navigation, making it more difficult for users to find what they’re looking for. Using a hamburger menu makes sense for sites that have a small number of navigation categories that can easily fit in a horizontal bar, but for sites that have multiple categories, eg eCommerce, using the hamburger menu can increase the user’s workload in navigating the site.



Enhance your UX with Hover.css and Font Awesome Use Hover.css, a CSS framework that enhances HTML elements with CSS animations, to improve your UX


// Enhance your UX with Hover.css and Font Awesome


Forms are everywhere: search forms, log-in forms, order forms, complaint forms – all strewn across the web – their ubiquity is astounding! With all of these forms vying and screaming for your attention, it can be a little overwhelming when it comes to filling them in, especially when there are many fields or a multipart process. On the other hand, you may not notice them at all; perhaps you’ve become oblivious to their presence. Either way, each new form and each new interaction comes with its own learning curve – some are natural to use, some are a nightmare. You might expect that there would be a standard way of presenting forms so as to reduce the effort, but there isn’t one. Instead, there’s the field of UX. When creating anything, considering the end user – their experience and how we expect our users to respond to our web designs – is almost exclusively the domain of UX specialists these days. In this tutorial, we’re going to look at how we can enhance the UX of our user forms with Hover.css, with the aim of encouraging completion of the form-filling process.

2. Grab the Font Awesome assets

6. Load the CSS

Font Awesome is not a packaged part of Hover.css, but it does play beautifully with it so we’re gonna grab that and use it in this tutorial too. We can grab Font Awesome from

Open index.html and add the following lines of code into the <head> tag of the document. Add them just before where we import our styles.css file. Now we have Hover.css and FA ready to go. We could access them from a CDN, but it’s always best to have a local version in case the CDN fails or gets hacked.

3. Grab the base code We’re not going to be writing a great deal of code in this tutorial, instead we’re going to enhance some prewritten code. You can still grab the files that you’ll need from FileSilo though.

<html>
<head>
  <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
  <title>Hover.css Demo</title>
  <link rel="shortcut icon" href="favicon.ico">
  <link rel="stylesheet" href="css/hover.css" type="text/css" />
  <link rel="stylesheet" href="css/font-awesome.min.css" type="text/css" />
  <link rel="stylesheet" href="styles.css" type="text/css" />
  <meta name="viewport" content="initial-scale=1.0, user-scalable=no" />
</head>

1. Grab the Hover.css assets We need to make a few stops around the internet before we can kick of on this project. First of all, let’s grab the poster child for this project. Head over to Hover.css’s GitHub page and download the entire repo.

4. Put the Hover assets together

7. Change directory

Now that we have all of the pieces, we need to put them together. Open the project assets from FileSilo, then unzip and open the Hover.css repo we downloaded. Open the CSS folder of the Hover folder and copy the file hover.css to the project CSS folder.

We need a quick server to deliver our Hover.css page and resources. If you’re on a Mac or Linux system, this is

The power of UX If someone wants to achieve something with a website or service but it’s hard to do, they’ll look for an alternative. By considering UX in your web design, you can almost always be certain people will leave happier.

5. Put the assets together Unzip and open the Font Awesome repo, copy the ‘Fonts’ folder to the root of your project folder, then open the FA (Font Awesome) CSS folder. Copy the font-awesome.min. css file into your project CSS folder.


Hover.css clocks in at 127KB. It’s always helpful to reduce bytes where we can and fortunately, we can just take the bits of Hover.css we need rather than using the whole lot Top left

The forms we’re using are very simple, they’re only here to illustrate a point – in fact, Hover.css can be used in almost any context, but we’re going to use it to help us sell books Top right

Font Awesome has a lot of assets, but don’t worry about importing them. If we include the FA resources at a path relative to the CSS file, our browser will sort all of that out


super simple. Open up a terminal window, cd inside your project directory and run the following command:

going to add a small arrow that moves to the right when the user hovers over our button.

$ python -m SimpleHTTPServer 8080
Now, if you open up a browser, you can see your webpage if you go to localhost:8080. (SimpleHTTPServer is a Python 2 module; on Python 3, the equivalent command is python -m http.server 8080.)

8. Our forms Our page is pretty simple: fill in the form and click Continue to proceed to the next form. Nothing too difficult – that is, unless something goes wrong. Using Hover.css we can enhance certain actions and draw attention to interactions as required. Using visual feedback, we can give our users a sense of progress without having to output any dialog, which may distract.

9. The Continue button Right now, our Continue button is entirely static. Using Hover.css we can create a small visual cue that suggests our form will proceed to another form rather than submitting and redirecting our user. In this case, we’re

10. CSS class is in session Consisting entirely of CSS, the only way Hover.css has to interact with our DOM is through classes and attributes. To add an effect to an element, add the relevant class to the tag. Open index.html and add the class hvr-icon-forward to the button elements found in our first and second forms (around line 30 and line 40).

<input type="text" name="postcode" required placeholder="Postcode">
<input type="text" name="country" required placeholder="Country">
<button type="submit" class="hvr-icon-forward">Continue</button>
</form>

11. Invalids Like most things involving people and computers, things can go wrong. A typical error in form submissions is a required field being omitted when the form is submitted. Each browser has a different way of displaying this error, but we can use Hover.css to signify that something isn’t right before we let the user continue.

Why Font Awesome? The beauty of using a font for icons is we only make one request to get all of the icons, instead of making dozens for images across the site giving us much faster loading times. We also get the added benefit of scalable icons! No more pixels on your lovely Retina screen.

Top left

Scalable vector-based graphics are instructions that can draw images at any scale, we can have all of the pixels we need! Top right

::before and ::after are immensely useful pseudo-elements that let us create and style content, but not all elements are blessed with the gift of pseudo-elements Right

Built-in UI can vary across browsers; by catching the validation error and creating our own error graphics and dialog, we can create a consistent experience


12. Catch the error On lines nine to 37 of core.js, which you can find in your scripts folder, there’s some code that handles our forms when a required field is omitted. With a little bit of JS we can change the appearance of the Continue button when something has gone wrong. Let’s make it shake back and forth when something is wrong. Enter the following code at line 34 of core.js

(function(btn){
  // Apply the Hover.css wobble animation to signal the error
  btn.setAttribute('class', 'hvr-wobble-horizontal');
  // Remove the class once the animation has run (after one second)
  setTimeout(function(){
    btn.setAttribute('class', '');
  }, 1000);
})(form.getElementsByTagName('button')[0]);
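One caveat worth knowing: setAttribute('class', …) replaces the button’s entire class attribute, wiping any other classes it carries. A safer variant uses classList instead. The sketch below is DOM-free so it runs outside a browser – the stub element is purely illustrative, standing in for a real DOM button:

```javascript
// Toggle a one-off Hover.css animation without disturbing the
// element's other classes. classList is the standard DOM token
// list API; only the stub below is our own invention.
function wobble(btn, duration) {
  btn.classList.add('hvr-wobble-horizontal');
  setTimeout(function () {
    btn.classList.remove('hvr-wobble-horizontal');
  }, duration);
}

// Minimal stand-in for a DOM element (illustration only)
const stubButton = {
  classes: new Set(['btn-primary']),
  classList: {
    add: function (c) { stubButton.classes.add(c); },
    remove: function (c) { stubButton.classes.delete(c); },
  },
};

wobble(stubButton, 0);
console.log(stubButton.classes.has('btn-primary')); // true – original class survives
```

In the browser you would simply call wobble(form.getElementsByTagName('button')[0], 1000) in place of the setAttribute version above.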

13. Taking what we need Hover.css isn’t just about adding classes to things to make them do stuff for you; it can also be used as a modular-ish animations library. We can head into the CSS, pick out the animations that we need, and simply tweak them to our heart’s content. Open css/hover.css and search for .hvr-skew-forward. Now copy that to your clipboard and paste it at the end of styles.css

14. Adjust the CSS Most animations from Hover.css will have several classes and pseudo-classes with different kinds of properties assigned. This is great when a user is actively interacting with an element, but it can also be really unhelpful when all you want to do is programmatically trigger an animation. Let’s now take the content of .hvr-skew-forward, paste it within the CSS selector [data-is-invalid="true"]{} and then finally add the skew transform to the end of the rule.

for(var k = 0; k < allInputs.length; k += 1){
  // Clear the flag on fields that pass validation; invalid fields
  // keep data-is-invalid="true", which triggers the skew animation
  if(allInputs[k].checkValidity() === true){
    allInputs[k].setAttribute('data-is-invalid', 'false');
  }
}

TranslateZ If you’ve had a browse through the classes and rules of Hover.css, you may have noticed that almost everything has the property transform: translateZ(0), and you may have thought this peculiar. “What’s the point of it?” would be a very astute question. Animating elements of the DOM has never been an efficient process; on mobile devices, for example, dropped frames, stuttering and outright jumping about the place are a familiar sight. transform: translateZ(0); is a little performance hack that kicks off hardware-accelerated rendering. Adding a 3D transform to any DOM element triggers hardware rendering on that element. This means that that particular element will be rendered separately from the rest of your webpage by your computer’s optimised graphics hardware rather than the CPU.

15. The validation trepidation Now we have an animation in place that will draw our user’s attention to the fields that have either been left unfilled or had invalid values entered. Whenever we try to submit a form, every field that fails validation will be skewed five degrees until the user has remedied the situation.

*[data-is-invalid="true"] {
  vertical-align: middle;
  box-shadow: 0 0 1px rgba(0, 0, 0, 0);
  -webkit-backface-visibility: hidden;
  backface-visibility: hidden;
  -moz-osx-font-smoothing: grayscale;
  -webkit-transition-duration: 0.3s;
  transition-duration: 0.3s;
  -webkit-transition-property: transform;
  transition-property: transform;
  -webkit-transform-origin: 0 100%;
  transform-origin: 0 100%;
  -webkit-transform: skew(-5deg);
  transform: skew(-5deg);
}
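To see how the pieces of steps 11 to 15 fit together, here is a DOM-free sketch of the marking logic. In the browser, field.checkValidity() and setAttribute() play these roles; the plain objects and the markInvalidFields helper below are our own illustrative stand-ins, not part of the tutorial’s core.js:

```javascript
// Any field that fails its own validity check gets
// data-is-invalid="true", which the skew rule in styles.css targets.
// A field here is a plain object: { name, required, value }.
function markInvalidFields(fields) {
  return fields.map(function (field) {
    // Simplified validity check: a required field must be non-empty
    // (in the browser, field.checkValidity() does the real work)
    const valid = !field.required || field.value.trim() !== '';
    return Object.assign({}, field, {
      'data-is-invalid': valid ? 'false' : 'true',
    });
  });
}

const result = markInvalidFields([
  { name: 'postcode', required: true, value: '' },
  { name: 'country', required: true, value: 'UK' },
]);
console.log(result[0]['data-is-invalid']); // 'true' – empty required field
console.log(result[1]['data-is-invalid']); // 'false'
```

The point of the pattern is that JavaScript only flips a data attribute; all the visual feedback stays declarative, in the CSS rule above.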

16. Progress process At the bottom of our order form, we have three dots which light up and show how close we are to completing the process, as we progress through the

process. A great number of form drop-offs occur because users don’t know how far they are from completion. This little visual cue encourages our users to stay the course.

17. Border animations To achieve this ripple effect, we simply add the hvr-ripple-out class to our progress markup. As we work through the order process, we can make use of the combination of data attributes and class names to highlight the process with colours, all without having to use too much JavaScript.

CORE.JS (lines 28 - 29)
progressMarkers[progress].setAttribute('class', 'hvr-ripple-out');
progressMarkers[progress].setAttribute('data-this-far', 'true');

INDEX.HTML (lines 73 - 75)
<span data-this-far="true" class="hvr-ripple-out">1</span>
<span data-this-far="false">2</span>
<span data-this-far="false">3</span>
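The attribute states these two snippets produce can be sketched as a small pure function. markerStates is our own illustrative helper (not part of the tutorial’s core.js): given the current step, every marker up to it is flagged data-this-far="true", and the one just reached carries the ripple class:

```javascript
// Compute the attributes each progress dot should carry for a
// zero-indexed currentStep out of `total` markers.
function markerStates(currentStep, total) {
  const states = [];
  for (let i = 0; i < total; i += 1) {
    states.push({
      'data-this-far': i <= currentStep ? 'true' : 'false',
      class: i === currentStep ? 'hvr-ripple-out' : '',
    });
  }
  return states;
}

// Step 1 of 3 (index 0) matches the INDEX.HTML markup above:
// first dot is 'true' + 'hvr-ripple-out', the rest are 'false'
console.log(markerStates(0, 3));
```

In core.js the same result is achieved imperatively, by calling setAttribute on the marker that has just been reached.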

18. Font Awesome icons Hover.css has its own built-in icon set for rendering arrows and special characters, which is great, because we can add scalable icons that look good on any kind of screen at any scale. To use FA icons, simply add an <i></i> tag with the class of the icon you want to use

<i class="fa fa-battery-quarter"></i>

19. Back to the start At the end of our process we would normally take our user back to our store, but short of a lot more paper and time, we can’t really re-create that process here. Instead, what we’re going to do is reset the process so that it can be repeated. But what’s the best way to show this? Well, we’re going to add a slightly subtler animation from Hover.css to the #reset button in our HTML: .hvr-underline-from-right. This simply animates the border property of our button, going from the end of our process right back to the start.

20. Inputs beware! One final thing to note here is the input element. The HTML5 spec doesn’t allow <input> to make use of the ::before or ::after pseudo-elements. Hover.css relies very heavily on these pseudo-elements though, so if you do decide that you want to use Hover.css to enhance your forms, make sure that you are careful in your selection of effects.

21. Rounding up That’s it: we’ve learned all about Hover.css and the power of animations in enhancing the user experience. It’s certainly a subtle and sometimes subjective art, but with careful, thoughtful consideration and research into our resources, it’s entirely possible to make good use of UX to benefit both our users and ourselves as designers.


Special trial offer

Enjoyed this book?

Exclusive offer for new subscribers




* This offer entitles new UK direct debit subscribers to receive their first three issues for £5. After these issues, subscribers will then pay £25.15 every six issues. Subscribers can cancel this subscription at any time. New subscriptions will start from the next available issue. Offer code ZGGZINE must be quoted to receive this special subscriptions price. Direct debit guarantee available on request. This offer will expire 30 November 2016. ** This is a US subscription offer. The USA issue rate is based on an annual subscription price of £65 for 13 issues, which is equivalent to $102 at the time of writing, compared with the newsstand price of $14.99 per issue ($194.87 for 13 issues). Your subscription will start from the next available issue. This offer expires 30 November 2016.

About the mag

Uncover the secrets of web design Practical projects Every issue is packed with step-by-step tutorials for HTML5, CSS3, Photoshop and more

In-depth features Discover the latest hot topics in the industry

Join the community

Get involved. Visit the website, submit a portfolio and follow Web Designer on Twitter


Try 3 issues for £5 in the UK* or just $7.85 per issue in the USA** (saving 48% off the newsstand price)

For amazing offers please visit Quote code ZGGZINE Or telephone UK 0844 848 8413+ overseas 01795 592 878 +Calls will cost 7p per minute plus your telephone company’s access charge



To access FileSilo, please visit


Follow the on-screen instructions to create an account with our secure FileSilo system, log in and unlock the bookazine by answering a simple question about it. You can now access the content for free at any time.


Once you have logged in, you are free to explore the wealth of content available on FileSilo, from great video tutorials and online guides to superb downloadable resources. And the more bookazines you purchase, the more your instantly accessible collection of digital content will grow.


You can access FileSilo on any desktop, tablet or smartphone device using any popular browser (such as Safari, Firefox or Google Chrome). However, we recommend that you use a desktop to download content, as you may not be able to download files to your phone or tablet.


If you have any problems with accessing content on FileSilo, or with the registration process, take a look at the FAQs online or email filesilohelp@

NEED HELP WITH THE TUTORIALS? Having trouble with any of the techniques in this bookazine’s tutorials? Don’t know how to make the best use of your free resources? Want to have your work critiqued by those in the know? Then why not visit the Web Designer and Imagine Bookazines Facebook pages for all your questions, concerns and qualms. There is a friendly community of fellow web design enthusiasts waiting to help you out, as well as regular posts and updates from the team behind Web Designer magazine. Like us today and start chatting!









The Web Design Annual