





Systemd strengthens the ultra-private distro

Take advantage of GitHub’s cross-platform tool

Expert server configurations • Sharing, backup and remote access • Network administration




Make a Minesweeper game with lava traps

Distribute your data processing tasks





ARCADE HACK Upgrade your gamepad to a full RetroPie console


Intel adds its Curie chip to the new Arduino board




Imagine Publishing Ltd Richmond House, 33 Richmond Hill Bournemouth, Dorset, BH2 6EZ ☎ +44 (0) 1202 586200 Web:

Welcome to issue 163 of Linux User & Developer

Magazine team
Editor Gavin Thomas ☎ 01202 586257
Production Editor Rebecca Richards
Features Editor Oliver Hill
Designer Sam Ribbits
Photographer James Sheppard
Senior Art Editor Andy Downes
Editor in Chief Dan Hutchinson
Publishing Director Aaron Asadi
Head of Design Ross Andrews

This issue

Contributors Dan Aldred, Keila Banks, Joey Bernard, Christian Cawley, Kunal Deo, Terence Eden, Alex Ellis, Gareth Halfacree, Tam Hanna, Richard Hillesley, Jon Masters, Ashish Sinha, Richard Smedley, Alexander Tolstoy, Mihalis Tsoukalos


Digital or printed media packs are available on request. Head of Sales Hang Deretz ☎ 01202 586442 Sales Executive Luke Biddiscombe ☎ 01202 586431

Assets and resource files for this magazine can now be found on this website.


Linux User & Developer is available for licensing. Head of International Licensing Cathy Blackman ☎ +44 (0) 1202 586401


For all subscription enquiries ☎ 0844 249 0282 (UK) ☎ +44 (0) 1795 418661 (Overseas)
6 issue subscription (UK) – £25.15
13 issue subscription (Europe) – £70, (ROW) – £80


Look for issue 164 on 7 April

Head of Circulation Darren Pearce ☎ 01202 586200


Production Director Jane Hawkins ☎ 01202 586200


Want it sooner? Subscribe today!

Finance Director Marco Peroni


Group Managing Director Damian Butt


Printing & Distribution

Printed by William Gibbons, 26 Planetary Road, Willenhall, West Midlands, WV13 3XT

Distributed in the UK, Eire & the Rest of the World by: Marketforce, 5 Churchill Place, Canary Wharf, London, E14 5HU ☎ 0203 148 3300

Distributed in Australia by: Gordon & Gotch Australia Pty Ltd, 26 Rodborough Road, Frenchs Forest, New South Wales 2086, Australia ☎ +61 2 9972 8800


The publisher cannot accept responsibility for any unsolicited material lost or damaged in the post. All text and layout is the copyright of Imagine Publishing Ltd. Nothing in this magazine may be reproduced in whole or part without the written permission of the publisher. All copyrights are recognised and used specifically for the purpose of criticism and review. Although the magazine has endeavoured to ensure all information is correct at time of print, prices and availability may change. This magazine is fully independent and not affiliated in any way with the companies mentioned herein. If you submit material to Imagine Publishing via post, email, social network or any other means, you automatically grant Imagine Publishing an irrevocable, perpetual, royalty-free license to use the material across its entire portfolio, in print, online and digital, and to deliver the material to existing and future clients, including but not limited to international licensees for reproduction in international, licensed editions of Imagine products. Any material you submit is sent at your risk and, although every care is taken, neither Imagine Publishing nor its employees, agents or subcontractors shall be liable for the loss or damage.

© Imagine Publishing Ltd 2016

» Set up a rock-solid network from scratch
» Hack your old Xbox controller with a Pi Zero
» Secure your website with Let’s Encrypt
» Run a Raspberry Pi 2 Docker Swarm

Welcome to the latest issue of Linux User & Developer, the UK and America’s favourite Linux and open source magazine.

Setting up a complex network, whether for your home or the office, is no mean task, but it is far easier than you might realise. And the payoff? A wealth of functionality, directly configured to suit your needs: perfect port forwarding and proxy servers, automatic backups, remote access, file and media servers, file sharing, monitoring tools – all of it put together by you and easily managed from a single control node. Turn to page 18 and we’ll walk you through the whole thing, from the basic DNS and DHCP servers right up to the fun stuff.

If you’re in the mood for tinkering with hardware then look no further than our lead Pi project this month – the Xbox Zero Arcade (see page 60). Find out how to tear down your old Xbox controller, or a generic USB pad, and fit the Pi Zero right inside the casing, then wire it up for some plug-and-play MAME arcade action.

There are some great tech tutorials this month, too – get started with big data in our Hadoop tutorial, switch to the new website certification authority Let’s Encrypt, and find out how to distribute workloads across Raspberry Pi 2 boards by turning them into a Docker Swarm. Be sure to check out our review of the Genuino 101 on page 86 – Intel’s latest venture into the maker market. Enjoy!

Gavin Thomas, Editor

Get in touch with the team:

Linux User & Developer


Buy online


Visit us online for more news, opinion, tutorials and reviews:

ISSN 2041-3270


Contents

Subscribe & save over 28%!

Reviews

81 Video calling software

Which platform offers the best service for quality VoIP calls in Linux systems?

Check out our great new offer! US customers can subscribe via page 80

Firefox Hello

Google Hangouts

KDE Telepathy

18 Sysadmin: Total Guide to Networks

Server config, backup, monitoring, remote access and more

OpenSource

06 News

The biggest stories from the open source world

08 Free software column Expert insight into open source and free software

11 Coding column

Learn problem solving and systems programming in C

12 Interview

Keila Banks on growing up with Linux and open source

Tutorials

30 Systems programming: Developing server processes

Tame daemons and bend them to your will

36 Hack the Atom editor

Find out why you need to use GitHub’s tool

40 Add textures in MonoGame

Boost your image quality in MonoGame projects by wrapping realistic textures

16 Kernel column

The latest on the Linux kernel with Jon Masters

44 Secure your web server with Let’s Encrypt

Discover the hot new SSL certification service

48 Big data: Set up a Hadoop cluster

Crunch your data in a distributed filesystem

52 Computer science: Creating and using binary trees

Learn to work in hierarchical data structures

Resources

59 Practical Raspberry Pi

Set up a rock-solid system; hack an Xbox pad with your Pi Zero to make a MAME machine

60 Xbox Zero Arcade

86 Genuino 101

Intel partners with Arduino for its third foray into the maker market

88 Tails 2.0

The ultra-private distro has rebased on Debian 8 and added a secure installer

90 Free software

Richard Smedley recommends some excellent FOSS packages for you to try

94 Letters

Your views shared and your questions answered

96 Free downloads

Find out what we’ve uploaded to our digital content hub FileSilo for you this month: mod an Xbox controller with the Pi Zero, master motion-tracking in Python, set up a Docker cluster with Raspberry Pi 2 nodes, and re-create Minesweeper in Minecraft

Join us online for more Linux news, opinion and reviews






06 News & Opinion | 12 Interview | 94 Your questions answered

MOBILE

Canonical launches first ‘converged’ Ubuntu tablet

Partners with BQ for its first flagship mobile computing device

Canonical has announced the launch of its first ‘converged’ Ubuntu tablet device, in partnership with Spanish mobile maker BQ: the Aquaris M10 Ubuntu Edition.

Canonical’s vision for a future where the distinction between mobile and desktop computing becomes blurred began with the Ubuntu Edge, a powerful smartphone which, due to a failed attempt to crowd-fund development capital, never left the drawing board. The company’s vision didn’t die, however, though the convergence feature which would allow the device to perform the role of both mobile and desktop computer was nowhere to be seen in the first Ubuntu smartphones to hit the market.

The Aquaris M10 Ubuntu Edition, then, is the first device to arrive off-the-shelf featuring Canonical’s convergence system. Used as a tablet device, the Aquaris M10 operates entirely standalone and with a touch-centric interface; connected to an external display, keyboard and mouse, the tablet seamlessly switches to a traditional desktop user interface.

“We’re bringing you everything you’ve come to expect from your Ubuntu PC,” claimed Canonical chief executive Jane Silber at the launch. “This isn’t a phone interface stretched to desktop size – it’s the right user experience and interaction model for the

given situation. Also, in terms of applications, we have something no other OS can provide: a single, visual framework and set of tools for applications to run on any type of Ubuntu smart device.”

Internally, the M10 is based on a MediaTek quad-core MT8163A processor running at up to 1.5GHz and with 2GB of RAM – significantly less than was promised on the Ubuntu Edge – with 16GB of internal storage, expandable via microSD. A micro-HDMI port provides connectivity to an external display, while the 7,280mAh battery allows the device to be used on the go. The tablet’s

display is a 10.1-inch full HD model with capacitive multi-touch, nestled behind Asahi’s protective Dragontrail impact-resistant glass.

“The Aquaris M10 Ubuntu Edition is our third mobile device to ship with Ubuntu,” BQ deputy chief Rodrigo del Prado explained at the launch. “Our customers were delighted with the Aquaris E4.5 Ubuntu Edition and Aquaris E5 HD Ubuntu Edition phones, and we’re excited to be the first OEM to ship the converged Ubuntu experience. It’s this kind of innovation that makes BQ and Ubuntu such a great fit.”

Used as a tablet device, the Aquaris M10 Ubuntu Edition operates entirely standalone and with a touch-centric interface

Above Canonical and BQ’s first ‘converged’ tablet can be used in standalone or desktop modes



SourceForge and Slashdot acquired


New features in pacman 5.0.0

An end to the contentious ‘DevShare’ program?

Above Pacman is the default package manager for Arch Linux and appears in many more distros

1 Transaction hooks

Above SourceForge’s bloatware effectively killed its open source street cred

Popular software hosting service SourceForge and content aggregation site Slashdot have been acquired by BIZX – and the first order of business is the elimination of the DevShare advertising programme.

Following SourceForge’s acquisition by Dice Holdings in 2013, the DevShare programme was launched as a means of providing revenue to the owners and the developers of software hosted on the service. Its wrapping of software in bundle packages which included potentially unwanted programs (PUPs) and advertising software, however, led to the programme being declared malware. “To misquote Marge Simpson,” said Red Hat developer Justin Cliff at the time, “‘They not only crossed the line, they threw up on it.’”

Although Dice Holdings would defend the programme, projects left the site in droves – though the company’s decision to prevent the deletion of previously-hosted projects made the exodus less obvious. As a result, its new owners are cleaning house.

“As of last week, the DevShare program was eliminated,” explained Logan Abbott, president of SourceForge Media. “We want to restore our reputation as a trusted home for open

source software, and this was a clear first step towards that.

“We’re more interested in doing the right thing than making extra short-term profit. As we move forward, we will be focusing on the needs of our developers and visitors by building out site features and establishing community trust,” continued Abbott. “Eliminating the DevShare program was just the first step of many more to come. Plans for the near future include full HTTPS support for both SourceForge and Slashdot, and a lot more changes we think developers and end-users will embrace.”

Previous users of the site have welcomed Abbott’s announcement, though some have declared it too late to save the site from the impact of its now-tarnished reputation among both developers and end-users. “Your reputation is already shot, thanks to Dice. No open source developer I know will ever use SourceForge again,” spat developer Joey Kelly in response to DevShare’s demise. “Trojan us once, shame on you. Trojan us twice? We’re not going to give you the satisfaction; we’re going to route around you, and that’s that.”

For the first time in version 5.0.0, the pacman package manager can now run hooks either before or after transactions. Hooks can be specified using the alpm hook file format, with the user describing a single action that is to be run based on one or more triggers.
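A hook of that shape is a short INI-style file dropped into pacman’s hooks directory. The example below is an illustrative sketch only – the keys follow the alpm-hooks format, but check the alpm-hooks man page before relying on it. It would run depmod after any install or upgrade of a package named linux:

```ini
# /etc/pacman.d/hooks/depmod.hook (hypothetical example)
[Trigger]
Operation = Install
Operation = Upgrade
Type = Package
Target = linux

[Action]
Description = Updating kernel module dependencies...
When = PostTransaction
Exec = /usr/bin/depmod
```

Multiple Operation and Target lines act as OR-ed triggers, and When selects whether the Exec command runs before (PreTransaction) or after (PostTransaction) the transaction.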

2 .files support

With this latest release, pacman gains the ability to sync and read .files databases using the -Fy switch, as well as the ability to perform basic searches for files contained in sync repositories using -Fs and -Fo.

3 Corruption handling

If pacman 5.0.0 encounters a corrupt database, it will automatically update from the remote copy even if the remote is the same version. The new version also gains improved signal handling and lock file removal.

4 Dependency info

When encountering a dependency error, pacman 5.0.0 will print out much more information – and more useful information – than its predecessors, making diagnosis of the problem much easier and guiding you to a simple resolution.

5 Enhanced security

A number of potential security issues brought to light in a Coverity analysis of pacman’s source code, largely centred around failing to free memory upon error conditions, have been resolved in this latest release.



Your source of Linux news & views


Mad, bad & dangerous

Companies may enjoy the benefits free software brings them, but few enforce free software licensing policies

The GPL has never been the flavour of the month in the corporate world, where copyrights and patents are seen both as revenue streams and as instruments to inhibit competition, and where the GPL and copyleft are often dismissed as mad, bad and dangerous to know. Companies take advantage of what free software can bring; few make an active contribution to the enforcement of free software licences. GPL enforcement is left to community efforts like Software Freedom Conservancy (SFC), which depends on voluntary contributions.

Funding of GPL enforcement has re-emerged as an issue because of the suggestion by Karen Sandler, SFC’s executive director, that “over the past year, and in particular since we launched the VMware suit, some of our corporate funding has been pulled,” and because of the withdrawal of the individual right of community members to vote for members of the board of the Linux Foundation shortly after Sandler announced her intention to stand for election.

Matthew Garrett, a senior Linux kernel developer, asserted that “the Linux Foundation has historically been less than enthusiastic about GPL enforcement, and the SFC is funding a lawsuit against one of the Foundation’s members for violating the terms of the GPL. The timing may be coincidental, but it certainly looks like the Linux Foundation was willing to throw out any semblance of community representation just to ensure that there was no risk of someone in favour of GPL enforcement ending up on their board.”

Corporate hostility to copyleft and the GPL is nothing new, but it’s worth remembering how such companies have benefitted from the GPL. Linux grew from small beginnings.
Most of the early contributors to Linux and the projects that sprang from the early success of Linux were characterised by friends and enemies alike as hobbyists, and few had any great history in computing, yet after eight or ten years of development, GNU/Linux, which still had many technical shortcomings, was adopted by the traditional Unix companies as a replacement for Unix. During the 80s and 90s, Unix was touted as the universal operating system, and each of the Unix


Richard Hillesley

writes about art, music, digital rights, Linux and free software for a variety of publications

Corporate hostility to copyleft and the GPL is nothing new, but it’s worth remembering how such companies have benefitted from the GPL

companies poured vast resources into developing proprietary versions of the same operating system, at the expense of the hardware, services and userland software that were often their core business. Solaris, which was developed by Sun Microsystems, wasn’t quite like Irix (SGI), which wasn’t quite like HP-UX (HP), which wasn’t quite like AIX (IBM), which wasn’t quite like Tru64 (DEC). None of them played well together, which complicated the market for ISVs and inhibited the development of common interfaces and utilities. Each implementation of Unix was unique and proprietary. Proprietary operating systems cost vast amounts of money, and the shortcomings of maintaining a multiplicity of interfaces also created hurdles for other parts of the business. Software that ran on one version of Unix wouldn’t run on another.

The GPL changed the game, because it enforced the idea that there was a mutual advantage for companies to contribute back to a single code base, and share the rewards with others. The participation of corporate interests accelerated the development of Linux and ensured its success in the enterprise, which was also enhanced by its portability across a wide range of hardware. Copyleft insured against a Unix-like fragmentation of the code. To develop a proprietary operating system from scratch on the scale of GNU/Linux would have cost many billions of dollars. As the code is shared, so too are the costs and the technical enhancements.

Copyleft was the driver of change, and ensured the commonality of the code, protocols and standards which also made it easier to port software between different machines and architectures. Dozens of companies have contributed to the Linux kernel, and have benefitted from the contributions of others. They do not contribute out of generosity, but because this effect has made it possible to port anything to anything at a vastly reduced cost.
This makes it all the more surprising that the companies who ‘own’ so much of the Linux code, courtesy of the GPL, have not made it a part of their business to help SFC to defend and protect the licence that made it possible.



Warner Bros. cancels Linux port plans

Poor reception to Batman’s Windows outing to blame

Media giant Warner Bros. has cancelled plans to release Batman: Arkham Knight on Linux, following major problems with the game’s original release.

The latest entry in the Batman: Arkham franchise, Batman: Arkham Knight was due to be released on Linux and OS X shortly after its debut on Windows, PS4 and Xbox One. Unfortunately, crippling performance problems and game-crashing bugs on the Windows release saw publisher Warner Bros. pull the title from sale, restoring it only after several patches had been released to address the most serious issues.

These patches, however, did little to address the core issues of the title, leaving Warner Bros. to announce that it was offering all buyers of the

Windows release a blanket refund, regardless of how long they had owned the game. With Windows users claiming the refund in droves, concerns were raised about the planned Linux and OS X ports of the title – concerns that proved entirely accurate.

“We are very sorry to confirm that Batman: Arkham Knight will no longer be coming to Mac and Linux,” the company announced in a brief post to the Steam Community site. “If you have pre-ordered Batman: Arkham Knight for Mac or Linux, please apply for a refund via Steam.”

For Linux gamers, the loss of a triple-A title from a major publisher is a blow – and doubly so for Valve, which is relying on cross-publisher support for its Debian-based SteamOS gaming platform.


Linux Foundation launches Fast Data Project

The Linux Foundation has announced its latest collaborative effort: Fast Data, an open source project which seeks to provide a high-speed input-output services framework for next-generation network and storage software.

Initial contributions to the project include vector packet processing (VPP) capabilities in a fully-functional vSwitch/vRouter setup based around the Data Plane Development Kit (DPDK). This, the Foundation claims, offers high-performance hardware-independent IO, along with a full build, tooling, debug and development environment, plus an OpenDaylight management agent and a Honeycomb netconf/yang agent.

“The adoption of open source software has transformed the networking industry by reducing technology fragmentation and increasing user adoption,” claimed Jim Zemlin, executive director of the Linux Foundation, at the launch. “The project addresses a critical area needed for flexible and scalable IO services to meet the growing demands of today’s cloud computing environments.”

“[Fast Data] is more than just fast networking and fast storage,” says David Ward, chief technology officer at VPP contributor Cisco. “With the modular nature of VPP technology, flexible architecture and the inclusion of a dev, test and continuous performance toolset, the project was designed with a vibrant open source community of contributors in mind.”

The initial Fast Data release is live now, with full information available on the project’s website.

Linux Mint servers and ISO images hacked

The Linux Mint project – consistently the most popular Linux distribution on download trackers – has suffered an attack that compromised its website and left infected ISO images on its download mirrors.

The attack was acknowledged by the Linux Mint team in the early hours of Sunday 21 February; the team stated that “hackers made a modified Linux Mint ISO, with a backdoor in it, and managed to hack our website to point to it.” Gaining access via a PHP script in the Linux Mint website server’s WordPress installation, the hackers redirected visitors to their own server, which hosted the compromised version of a 64-bit Linux Mint 17.3 Cinnamon ISO. The Linux Mint team advises that anyone who downloaded the 64-bit Cinnamon edition on Saturday 20 February is probably infected with malware. At the time of writing, the Linux Mint website remains offline.

The malware itself appears to be Linux/Tsunami-A, also known as Kaiten. It is a relatively old tool that relies on the old-fashioned technique of connecting to an IRC server in order to receive further instructions. One way to establish whether or not your Linux Mint installation has been compromised is to look inside /var/lib/man.cy – if this directory is not empty, there is a good chance that you are infected.
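That check is easy to script. A minimal sketch based purely on the indicator described above (the directory name comes from the advisory; the messages and the non-empty-means-infected heuristic are as reported here):

```shell
#!/bin/sh
# Indicator-of-compromise check for the February 2016 Linux Mint ISO
# backdoor: the Tsunami/Kaiten payload reportedly left files under
# /var/lib/man.cy on infected systems.
DIR=/var/lib/man.cy

if [ -d "$DIR" ] && [ -n "$(ls -A "$DIR" 2>/dev/null)" ]; then
    echo "WARNING: $DIR is not empty - possible compromise"
else
    echo "no known indicator found"
fi
```

Remember this only tests one known indicator; a clean result here is not proof that a machine downloaded from the compromised mirror is safe.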

Above Only the Cinnamon edition of 17.3 was compromised by the attack on Linux Mint






Top 10 (average hits per day, 19 January – 18 February)

1. Linux Mint – 2,998
2. Debian – 2,317
3. Ubuntu – 1,718
4. openSUSE – 1,375
5. Zorin – 1,208
6. Fedora – 1,162
7. Android-x86 – 1,145
8. Manjaro – 986
9. Arch – 942
10. CentOS – 940

This month: stable releases (26); in development (12)

Android-x86 has increased in popularity dramatically this month, to the detriment of Mageia and Deepin, which slip off the top ten

Highlights

Android-x86

Android-x86 4.4-r5 brings with it fixes for a number of issues. The biggest changes include compatibility with official Microsoft Surface devices and full support for booting on legacy BIOS and UEFI machines.


RebeccaBlackOS

The Wayland demonstration distribution RebeccaBlackOS now includes a switch to Debian Testing for packages, compatibility with legacy and UEFI systems, and the ability to load a KDE Plasma session on top of Wayland.

Scientific Linux

The 7.2 release of Red Hat-based Scientific Linux includes an installer capable of quickly locating the fastest local network mirror and initial support for Scientific Linux Contexts.

Latest distros available:


Keybase gets encrypted, distributed file system

Aims to make file sharing both simple and secure

Keybase, the public-key, social-networking-powered cryptography service created by OkCupid co-founders Max Krohn and Chris Coyne, has a new string to its bow: an early implementation of a cloud-backed, fully-trusted shared file system.

Dubbed the Keybase File System, or KBFS, the new software allows Keybase users to automatically store signed files for public viewing, or encrypted private files which can be accessed by any of their Keybase-linked devices. These folders can also be created for multiple users: two or more users can share a dynamically-created public or private folder, for example, with files being signed and encrypted appropriately.

“Our goal: smack-dab in the middle of a public Reddit or HackerNews or Twitter conversation, you should be able to say ‘Hey, I threw those gifs/libraries/whatever in our encrypted Keybase folder’ without ever asking for more identifying info,” Coyne explained of the feature. “If that person hasn’t installed Keybase yet, your human work is still done.

Above KBFS silently requests user info, scrapes data, downloads blocks and presents plain files

They can join and access the data within seconds, and your device will quietly handle the verification and rekeying, without ever trusting Keybase’s servers.”

Available to selected Keybase users as an early alpha test, the Keybase File System provides 10GB of remote storage. Coyne has confirmed that additional paid storage will “likely” be made available once the service has been fully released.


Docker moves to Alpine Linux

Hires Natanael Copa to aid with the switch

The Docker project has confirmed that it is moving its official images from Canonical’s Ubuntu Linux to Alpine Linux, and that it has hired distribution founder Natanael Copa to assist with the transition.

“Today most people who use Docker build containers with unnecessary distro bits in them,” admitted Solomon Hykes in a post to Hacker News. “But before Docker, 99% of them didn’t build containers at all, and struggled with non-repeatable deployments and dependency hell. Even better, now all these people use a standardised Dockerfile which can

be changed and improved over time – for example to move away from a heavyweight distro like Fedora or Ubuntu, to something like Alpine or even more minimalistic.”

That move, made with a view to reducing the size of Docker images, is currently ongoing, with Hykes inviting pull requests on the official code repository to speed the change along. “Even one such pull request could go a long way to promoting smaller containers,” claimed Hykes, “since those images have been downloaded over half a billion times.”
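The kind of slimming Hykes describes usually amounts to swapping the base image and the package-manager calls in a Dockerfile. A hypothetical before-and-after (the application and file names here are invented for illustration, and are not one of Docker’s official images):

```dockerfile
# Before (heavyweight base, hundreds of megabytes):
#   FROM ubuntu:14.04
#   RUN apt-get update && apt-get install -y python
#
# After (Alpine base, a few megabytes):
FROM alpine:3.3

# apk is Alpine's package manager; --no-cache keeps the package index
# out of the image layer.
RUN apk add --no-cache python

COPY app.py /app.py
CMD ["python", "/app.py"]
```

The application layers are unchanged; only the base image and the install commands differ, which is why such pull requests are relatively cheap to make.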


Integer division

Dividing one number by another is simple… until you start turning the logic behind it into code! Find out how it’s done and why

Manually implementing integer division is not a particularly difficult problem per se, but there are many details that need to be taken care of. In a division operation you have two numbers: the dividend and the divisor. The result of a division is one number, called the quotient. If the division is not perfect, you also have a remainder. So you have the following relationship when dividing natural numbers:

dividend ÷ divisor = quotient, plus a remainder

This can also be written equivalently as:

dividend = quotient * divisor + remainder

We must decide whether or not an algorithm will support negative integers. This is not a big issue, but it is something that you should decide before you start programming. We will examine two programs here, one in C (division.c) and one in Perl. Each uses the same approach: subtract the divisor from the dividend until what is left of the dividend is smaller than the divisor. What is left of the dividend is the remainder.

The only thing you should definitely make sure of is that the value of the divisor is not zero, because you cannot divide a number by zero. Failing to make the appropriate check might crash your program in a very bad way if you use the usual division operator (/). However, in the case of the manual division implementation, the program will do something nastier than just crashing: it will run forever! This happens because, as you will see in the code, the condition for terminating the loop will never become false.

The C version uses a while loop whereas the Perl version uses a for loop. Although the while loop appears more natural for such a program, the for loop is used as proof that while and for loops have the same capabilities. The interesting part of the C code is the following:

while (dividend >= divisor) {
    quotient++;
    dividend = dividend - divisor;
}

The interesting part of the Perl code is the following:

for (;;) {
    if ( $a < $b ) { last; }  # stop before $a goes negative; $a now holds the remainder
    $res++;
    $a -= $b;
}

Mihalis Tsoukalos

is a UNIX administrator, a programmer, a DBA and a mathematician. He has used Linux since 1993

The only thing you should definitely make sure of is that the value of the divisor is not zero, because you cannot divide a number by zero

You will notice that the Perl program uses simpler variable names instead of more descriptive and clearer ones – although this is not good practice, there are times when using simpler variable names saves programming time. On the other hand, the C version uses more expressive but longer variable names that make reading the code easier and writing the code more tedious. Which style you prefer does not really matter for such small amounts of code; however, when you are dealing with bigger projects with more than one programmer, longer variable names are considered better practice.

Both programs take care of signed integers, print the appropriate sign when they finish, and use the same approach for finding out whether they are dealing with negative numbers. The approach used can be summarised in the following C code:

if ( (dividend < 0) || (divisor < 0) ) {
    if ( dividend * divisor < 0 )
        sign = 1;
    dividend = abs(dividend);
    divisor = abs(divisor);
}

This says that if at least one of the numbers is negative, the program must manually calculate the sign of the result. If there are no negative numbers, no special care is needed. As you can see, although taking care of signed integers looks easy at first, it is not, because you will also have to take care of it when printing the output. This is why argv[1] and argv[2] are used during printing instead of the dividend and divisor variables.

As a final exercise, try to implement division between floating point numbers. As usual, the source code is also available online.





Undefinable Keila

Keila Banks, 14, was a speaker at this year’s SCALE 14x and got a standing ovation at OSCON 2015. Meet the rising star of the open source world…

Keila Banks

is a young technologist, programmer, entrepreneur and open source advocate. Keila has been giving inspirational talks at conferences around the world since she was 11 years old.

Above This is Keila with her father, Phillip Banks. Phillip has been a SCALE organiser for years and introduced Keila to the community


So, tell us a little bit about yourself. Well, I just turned 14 about four months ago. I’ve been speaking at conferences since I was about 11 years old… [laughs] I’ve always been into computers, probably since I was about three years old. I started getting into coding when I was nine – that was when I wrote my first website.

How did you start to get involved with conferences? My dad was an organiser for SCALE, and he asked me if I wanted to go and talk, and I said that sounded great. Then we went to another talk, and I ended up going to more and more places, and then when I got to OSCON it kind of just blew up. That was my first keynote – before then I was just giving small talks at SCALE to groups of kids.

So you grew up surrounded by it? My dad was an IT consultant, and he always did his projects in his back office. I actually ended up finding his work book, and so that’s how I ended up getting into it. I wasn’t actually sure about it until I read the book, and then I started getting support and encouragement from my dad. I actually had to find it out myself.

What was it like after that? Well, I definitely got a lot more connections to people; a lot more people began to reach out to me instead of me having to reach out to them. MasterCard actually reached out to me – I was able to speak at the LAC [Latin America and Caribbean] Forum. It was a business conference in Miami, and they invited me to go there. That was just a wonderful opportunity, and it opened up more opportunities in the future, people reaching out to me.

What was your first Linux distro? My first Linux distro? I think it was a KDE because my dad used that one. Then when I went to the SCALE convention I got more into the different operating systems. I really liked Ubuntu, so that’s when I started getting into it.

How did it go this year at SCALE 14x? It was a little different. Actually, it didn’t go exactly as I’d planned, because my slides didn’t work, but I still just

Open source luminary Keila Banks has been giving talks at Linux and open source conferences for years, but her ‘Undefinable Me’ presentation at OSCON 2015 brought her into the wider public eye. Available to view on Prezi ( vcxglzmkv_xy/undefinableme), her excellent keynote is an investigation and celebration of identity in the community. Banks begins by discussing her appearance with the audience, asking them to consider how they would define her, and then challenges this definition again and again by revealing more about herself, her passions and her skills. By the end of the presentation, it’s impossible not to be impressed by her accomplishments, passion and drive. Do yourself a favour and watch it: watch?v=xkTcSoQ-q5Q.

talked. It seemed like it was just another talk, but Jon “Maddog” Hall and Jono Bacon were also on the stage with me. And they knew my name! That was something to brag about! [laughs]

What inspires you to give talks – for example, on the future of Linux at SCALE and your Undefinable Me talk? Well, since I’ve been at SCALE, it’s not really discussed much, but a lot of people know SCALE as the future. I’ve never seen a conference with so many teens and kids there who know all this stuff. Probably since SCALE 11x, there were a lot of kids, their moms, all trying to get into it, even some kids from my own school. I don’t really talk about it at school – when people think about coding, they just think it’s huge and complex, they think of people typing in code all night on their screens. So I kind of wanted to clear up these labels and encourage people to get into it. I really want people to see coding as something that’s easy – actually, not even something that’s easy to do, just something that they can do. It’s a really valuable skill.

Are you involved with many open source communities? Oh, yeah! My main two organisations are PyLadies and LinuxChix. I just went to the PyLadies’ Ruby on Rails workshop. But LinuxChix, they’re like my own family. They’re so cool, so close to me.

What do you think of free and open source software? It’s definitely important! I’ve definitely seen it enabling people. There are tons of people who photo-edit or want

When people think about coding, they just think it’s huge and complex… I wanted to clear up these labels and encourage people to get into it

to photo-edit, for example, and they think, ‘I don’t know where to start – I could probably buy Photoshop but it would be hundreds of dollars…’ Now a lot more people know about GIMP, and so it really advances them in wanting to photo-edit. I think it can really jump-start a career. Seeing this progression – seeing everybody know about it over the years – it really warms my heart, my open source heart. [laughs]

Do you have any favourite open source projects? Of course, I have to say GIMP. GIMP will always be my all-time favourite software – I use it probably at least once a week, because I do a lot of photo editing and I also make an online magazine. I also really like OpenShot. I can’t remember the person behind it at the moment, but I’ve spoken with him a few times before. I used to do a lot of video projects in third grade.

Are you working on anything at the moment? Actually, I’m developing an app right now. I can’t really talk too much about it, but I’m just letting you know. It’s going to be multi-platform – or, at least I’m really hopeful that it will be so. It’s probably going to be an HTML file.

Are you involved in the maker side of the community, too? Do you have a Raspberry Pi? Yeah, we do have a Raspberry Pi. Me and my dad were just talking about this last night – we were looking on Pinterest for some Raspberry Pi projects. Since we just moved into this new house and we’ve got a new TV,




Get into Linux Have you been inspired to attend your first conference or join a group? Well, a few interesting events are just around the corner. This year, the Girls in Tech Catalyst Conference is being held in Phoenix, Arizona, on 17-19 April (phoenix.catalyst.). With a focus on entrepreneurship and education, the event is packed with speakers and promises some excellent workshops and discussions. O’Reilly’s Open Source Convention, OSCON, will be held on 18-19 May in Austin, Texas; the event typically takes in a wide spectrum of open source technologies and has been known for groundbreaking project announcements. The PyLadies community will be down at PyCon 2016 to run a charity auction, so if you’d like to get involved with them then get to Portland, Oregon, for 28 May – 5 June. If you’re interested in LinuxChix, head over to their website and sign up to the mailing lists to join the conversation – once you’re on a list, you can then search for and join a local chapter. SCALE won’t be back until March 2017, but put LinuxCon in your calendar – 22-24 August in North America and 4-6 October in Europe.

Above The LinuxChix crew are often at community events like SCALE – why not say hi?

we were thinking about some projects to mount up, because we have this Mythbox – we were thinking about having a little router and some other stuff with the Raspberry Pi. We planned to do a lot of other Raspberry Pi projects, especially after we went to SCALE. That gave us a lot of ideas for the Raspberry Pi, so I’m really excited to be working on it.

Do you get to learn much about coding and computing at school? Oh, I don’t learn anything in school! Actually, in school I’m a whole different person. People don’t know anything about what I do unless they stalk me on the internet. [laughs] We do have this computer class, but even though it sounds like a computer class, we just go on YouTube and stuff like that. One time, a sub came in and we were actually talking about some interesting stuff. It was nothing too advanced, the teacher was comparing Android to Apple – and I know that’s a really small thing – but I hope they can really push that sort of thing forward.

What do you say to people who are interested in computers and open source but don’t really know where to start? The first thing I would say is to go to a conference. There’s so much. If you just go on the internet, you’ll see that all this stuff is everywhere. Go to a talk and there’s so much information. At SCALE, me and my dad were giving tours, and two ladies and their daughter came – she was a complete newbie, she didn’t know anything, she didn’t even know what an operating system was – but before she left she had her Ubuntu live CD and she was playing around with hacking. I think anyone interested should definitely go to a conference; it definitely just broadens your mind. Going to talks really just helps you to understand this stuff. That’s been one of the most important things for me.

So you’re mostly self-taught? Mostly No Starch Press, actually. I love the people there, and they love me back! They told me if I ever wanted to learn anything, they’d send me a book on it. If I ever want to learn anything, I get a book from No Starch Press or I go to W3Schools. There are a ton of different websites and workshops, all this great material out there.

Would you like to be able to learn more at school? I’ve been talking to one of the people at the offices. He found out about the video I did – I don’t know how – and they were talking about it. They asked if I’d be interested in teaching an elective after I graduate, so it wouldn’t be too awkward to do that with the people in my class. I think they’re really trying to push that forward, and I think that would be really interesting, teaching a computer class, doing innovative things in inner-city schools.


Are you planning to give any more talks soon? I will be speaking at OSCON. I’ll also be going to the Catalyst Women’s Conference in Phoenix, Arizona. For OSCON, I’ll be doing the same talk that I gave at SCALE, but for my future talks, I’m thinking of doing something a little different. I’m not actually 100 per cent sure yet, but I have a ton of ideas that are right now all in my mind.




The kernel column Jon Masters explores the latest happenings in the Linux kernel community as the merge window for Linux 4.5 closes

Jon Masters

is a Linux-kernel hacker who has been working on Linux for some 19 years, since he first attended university at the age of 13. Jon lives in Cambridge, Massachusetts, and works for a large enterprise Linux vendor, where he is driving the creation of standards for energy efficient ARM-powered servers


Linus Torvalds announced the first few release candidate (RC) kernels for what will become Linux 4.5. In his announcement, he noted that 4.5 was shaping up to be “a fairly normal release – neither unusually big or unusually small”. One element he singled out in the 4.5-rc1 announcement was the tremendous work done over the past five years by the 32-bit ARM Linux community towards “multi-platform” kernel support, which has culminated in 4.5. Linus giving the ARM community praise is a far cry from his outbursts just a few years ago about the state of platform support.

The 4.5 kernel includes many fixes, though not wholesale subsystem changes of the kind we have sometimes seen in previous cycles. Instead, there are incremental features, including support for the MADV_FREE application memory address space “shrinker” interface flag option to the “madvise” system call. This new feature allows applications to register some of their virtual memory ranges as “volatile”: such regions of memory may be arbitrarily reclaimed by the kernel when the system is running low on unused memory, without breaking the application. Many of us have been looking forward to and discussing such a feature (long a part of Windows) for years, so it is good to see it finally land in Linux 4.5.

Other promising features in Linux 4.5 include support for direct memory access to persistent memory devices – the newer non-volatile memory technologies being advanced and promoted chiefly by Intel – and the next wave of efforts to restrict root-level access from userspace applications to /dev/mem (system RAM). Such accesses, while requiring security privileges from the root user, can nonetheless be dangerous to system stability if not performed with great care. The advent of large, direct-mapped “persistent memory” devices means that the “mistake surface” for erroneous accesses to /dev/mem by errant applications is significantly increased.
To mitigate this, the latest set of patches will prevent access to IO memory regions via /dev/mem unless those regions are marked as “idle” (not associated with a device driver), making it impossible to accidentally write to persistent memory devices without unbinding the Linux persistent memory driver first. As we go to press, Linus has unleashed “a valentine for everybody” in the form of several more RCs. He

recommended that “in between romancing your significant other, go out and test”. If things remain on track, the final 4.5 kernel will come in mid-March, with the 4.6 merge window closing just in time for an Easter Sunday surprise.

Multi-platform kernels

The ARM Linux community has grown into one of the largest contingents of kernel developers, pumping out support for a wealth of innovative and exciting devices. At the same time, the advent of the 64-bit ARM architecture (part of ARMv8) has led to many new opportunities for ARM outside of its traditional embedded scope. Yet, for all the rise of 64-bit computing, there remain a great many 32-bit ARM devices on the market today, and many more are still to come. These ARMv6 and ARMv7 32-bit architectures (as well as, technically, the 32-bit AArch32 state of ARMv8) are supported by the kernel’s arch/arm directory (arch/arm64 contains the code for the newer, 64-bit architectural state).

Early 32-bit ARM devices were generally embedded machines for which a dedicated kernel was compiled, complete with a static configuration and many built-in assumptions about the specific platform upon which that kernel would later run. Over time, ARM devices became more complex and feature-rich, and users sought to run mainstream Linux distributions on those devices. But distros had a problem: they are used to shipping one kernel for a given architecture, not one kernel for each different shipping system (and shipping configuration). The latter gives rise to many hundreds of possible kernel builds, a number that is far too unwieldy. Combine this with a desire to make Linux platform configuration a runtime option and you will see some of the reasons for the creation of the “DeviceTree” specification (a derivation of OpenFirmware with many additional bindings that were not part of the original POWER/PowerPC specification), which leads to the “dts” and “dtb” files you may see on embedded ARM boards, including the RPi. A DeviceTree describes a 32-bit ARM system in a flexible markup language that the kernel interprets at boot time to determine how it should be configured.
Yet a DeviceTree alone won’t guarantee the desired “one kernel to rule them all” single binary.
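To make the DeviceTree idea concrete, here is a tiny, purely illustrative fragment of DeviceTree source – the board name and addresses are invented, not taken from any real platform:

```
/dts-v1/;
/ {
    model = "Acme Example Board";          /* hypothetical board */
    compatible = "acme,example-board";

    serial@101f0000 {
        compatible = "arm,pl011", "arm,primecell";
        reg = <0x101f0000 0x1000>;         /* MMIO base and size */
    };
};
```

The kernel parses the compiled (dtb) form of a file like this at boot and binds a matching driver to each node it finds, rather than assuming the hardware exists because the driver was built in.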

Enter the ARM multi-platform work. This effort, led by Arnd Bergmann et al, sought to clean up the ARM kernel code such that the many varied built-in assumptions around specific combinations of devices were removed. Instead of individual device drivers and core code assuming that the very fact they were built means that they should be used, the many thousands of changes over the past five years have led to robust, flexible kernel code that can determine at runtime whether it should bind to any specific devices or go unused on a given platform. It is a direct result of this work that you can now get single binary pre-built Linux distributions that support a wide range of ARMv6 and v7 devices.

Ongoing development

Planning is ongoing for the upcoming Linux Storage Filesystem and MM Summit (LSF/MM), to be held in Raleigh, North Carolina, USA, from April 18-19. A number of proposals have been made for session tracks via the Linux Kernel Mailing List. One proposal came from core VM developer Rik van Riel, who wanted to gauge potential interest in discussing Virtual Machine (VM) containers. The growth in container technology (such as Docker, and the broader Open Container Initiative, or OCI) has piqued interest in combining containers with virtual machines to get the best of both worlds – the isolation between OS instances that comes from true virtualisation, and the speed, low overhead and convenience of Linux application containers.

Lee Jones posted an updated patch building upon a longstanding series of conversations related to “critical clocks” in embedded systems. Such systems expose a lot of platform-specific information to the operating system, including information about all of the clock networks (pulses that drive the individual components on mobile System-on-Chip, or “SoC”, processors) connected to devices such as IO controllers. This information is included in the same DeviceTree structures mentioned under the multi-platform work above. A problem exists, however, on contemporary Linux systems: the kernel will generally attempt to save power by powering down the clocks that are connected to currently unused devices (known as clock gating). However, Linux doesn’t always know which of these

Above The Raspberry Pi uses DeviceTree to auto-configure HAT modules, among other things

The ARM Linux community has grown into one of the largest contingents of kernel developers, pumping out support for a wealth of exciting devices

clocks drive things that really cannot ever be shut down safely at runtime without crashing the system. The new patches add a CLK_IS_CRITICAL flag that tells Linux to leave certain clocks well alone.

Dave Airlie noted that problems persist with the handling of GPUs integrated into laptops (and other devices) running Windows 10. On these systems, the ACPI-driven methods for powering off the GPU differ from those used in previous OS releases, with the net result that many users are having problems with their hardware failing to power down correctly. Work is ongoing to address this through changes to the kernel graphics code.

On the subject of devices, Linus Walleij noted that Alan Cox is no longer maintaining the “Linux Assigned Numbers Authority” (LANA) that provides unique numbers for character devices used by the Linux kernel. He instead sent a patch which updates the Linux kernel documentation to reflect that this is now a collective document maintained by the overall community.

Finally, Andrey Ponomarenko posted to let everyone know he is working on a “new database of computer hardware configurations running Linux”. He has collected just shy of 5,000 entries so far and may use these to produce a coordinated catalogue of supported devices.
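Returning to the critical clocks work: setting the new flag is, in outline, a one-line change in a clock provider driver. The following kernel-side pseudocode sketch is assembled from the mainline clk framework’s registration structure – the clock name and the `foo_clk_ops` operations table are invented for illustration:

```c
/* Pseudocode sketch of a clock provider marking a clock as critical.
 * This builds against the in-kernel clk framework, not userspace. */
static const struct clk_init_data foo_init = {
        .name  = "foo_always_on",   /* hypothetical clock name */
        .ops   = &foo_clk_ops,      /* provider's clk_ops (assumed) */
        .flags = CLK_IS_CRITICAL,   /* never gate this clock at runtime */
};
```

With the flag set, the common clock framework keeps the clock enabled even when no consumer driver has claimed it, preventing the clock-gating crashes described above.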





Build the perfect network for your home or office, from assigning IP addresses through to monitoring web traffic and server uptime

With so many different devices filling our living rooms and our desks at work, it’s more important than ever to know how to get them all talking to each other. It makes sense, for example, to have your phone in touch with your home’s music library, so you can easily play your favourite album without having to switch to your desktop. And it makes sense to have one computer in your office in charge of backing up all the others on a regular basis. Sometimes you need to be able to fine-tune the permissions that various connected devices have on your network. All this and more is possible by simply building a network from scratch to include these features.


Over the next few pages we’re going to take you through a network setup one step at a time. Starting with the fundamentals, we’ll set up one node in our network of devices to act as a gateway to the internet and manage IP addresses for the rest. Then, you can simply add in the functionality that you need from the pages that follow – remotely accessing files from outside the network, streaming files and media inside the network, automatic backups and server monitoring are all covered, and there are plenty of extra tips on things like port forwarding, public DNS, diagnostic tools and more. Grab your ethernet cables and let’s get cracking.


Start setting up your control machine to provide DNS and DHCP servers for all devices across your network

Before we dive in, there are just a few key concepts to explain regarding the base setup that we’ll need for our fully-featured network. Any device needs some basic information to be part of a computer network:

IP (Internet Protocol) Address: An IP address is a unique identifier for a device on the network. This identifier is used by other devices on the network whenever they need to communicate or pass data. Currently we have two types of IP addresses: the commonly used IPv4 and the newer IPv6. An IPv4 address is a set of four decimal numbers separated by full stops (.), while an IPv6 address is a set of eight groups of hexadecimal digits separated by colons (:). IPv6 was created to address the limited number of addresses possible with the IPv4 scheme. For this feature we are focusing on the former style.

DNS Server Address: A DNS (domain name system) server is used to resolve human-friendly device addresses to machine-friendly IP addresses – basically, to translate memorable strings into fairly unmemorable numbers. For example, if we want to reach a website by name, we need to contact a DNS server

REMOTE SERVER ACCESS WITH PORT FORWARDING There will be times when you need to access your server from the internet but that server is not directly connected to the internet. In these cases, we can use a technique called port forwarding. Port forwarding allows you to forward network packets arriving at your router’s public IP and port to one of the LAN’s IPs and ports. For example, let’s assume you are running a web server on one of the computers on the LAN at port 80. With port forwarding in place, if somebody on the internet accesses port 80 on the router’s public IP, the router will automatically send the traffic on to the web server on the LAN. To enable port forwarding, go to your router’s config page and look for the port forwarding option. For example, on the Asus AC68U this option is available under WAN > Virtual Server / Port Forwarding. Here you can add the port number you want to forward (in this case 80) and the local IP address and port. It is even possible to serve a request arriving on port 80 via port 81 on the LAN, and vice versa.
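On a Linux box acting as the gateway itself, the same effect can be achieved with iptables. This is a hedged sketch using a placeholder WAN interface name and a placeholder LAN address, not a copy of any router’s rules:

```shell
# Forward inbound TCP port 80 on the WAN interface (assumed eth0)
# to a LAN web server at the placeholder address
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j DNAT --to-destination
iptables -A FORWARD -p tcp -d --dport 80 -j ACCEPT
```

The DNAT rule rewrites the destination of incoming packets, while the FORWARD rule permits them to cross from the WAN interface to the LAN.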

to know the IP address. Only when we have the IP address can we reach the website.

Gateway/Router Address: A gateway or router is used to reach devices that are part of outside networks – for example, website servers that we access via the internet. To continue with our previous example, once we have the IP address for the website, the device contacts the gateway to reach the server.

While we can configure these settings manually on each device, this is often very tedious and error-prone when you have a lot of devices on your network. A DHCP server, therefore, allows us to configure these settings automatically for all the devices on our computer network.


We will use Dnsmasq to provide the DHCP and DNS servers for our network. Dnsmasq is a lightweight piece of software that is very popular and easy to use on small networks. To get started, install the dnsmasq package on the Linux machine that will be acting as the server. On Debian-based distros you can use the following command to install the dnsmasq server:

$ sudo apt-get install dnsmasq Once installed you will need to edit the /etc/dnsmasq.conf file to configure the service.

PUBLIC DNS SERVICES One of the key bottlenecks in your internet speed is the DNS server. Any time you load a web page, your machine goes to the DNS server for an IP address, and if the DNS server takes too long to respond then the whole network feels slow. Public DNS services like Google DNS and OpenDNS provide fast DNS servers that can accelerate your complete internet experience, but before you switch your DNS you should benchmark your ISP’s DNS using namebench to see whether you will actually gain any benefit.

Above We’ll use DNS and DHCP servers, plus an internet gateway, to get our client devices online




CONFIGURE IP ADDRESSES AND SET UP AN INTERNET GATEWAY Now we’ve got Dnsmasq installed, let’s start assigning IP addresses to our network devices and getting them online

PROXY SERVERS If you have a large number of devices on your network, a proxy server can help you save bandwidth by caching regularly accessed content directly on the server. This is not the only use of proxy servers; they also allow you to control and monitor the internet traffic on your network. Squid is a popular proxy server that provides all these features. To install it:

$ sudo apt-get install squid

To configure Squid, edit /etc/squid3/squid.conf. Squid runs on port 3128 by default. You can change this by editing:

http_port 3128

The following lines will enable access for the network:

acl mylan
http_access allow mylan

On client devices, open the proxy settings and use http://<ServerIP Address>:3128 as the proxy server.

First, tell dnsmasq not to read /etc/resolv.conf, which contains the address of the DNS server used by the local system:

no-resolv

Dnsmasq will forward DNS queries to the following servers (Google’s public DNS servers) if it doesn’t know the address itself:

server=
server=

We can add custom DNS entries as follows:

address=/nas.lan/

Custom DNS entries get first priority and can be used to override a public DNS entry: if we add an address entry for a domain that also exists in public DNS, clients on the LAN will resolve that domain to our specified address instead of the public IP address. We can add as many entries as we want in the same format (the name and address above are placeholders – substitute your own). Dnsmasq also acts as a local cache for the upstream DNS server, allowing it to resolve repeat queries quickly instead of querying the upstream server every time. The following sets the dnsmasq query cache size to 1000 hosts:

cache-size=1000

DHCP server options

The following option enables the DHCP server component of dnsmasq and sets up the DHCP range, subnet mask of the network and lease time:

dhcp-range=

This will enable the DHCP server to assign addresses in the range given. The lease time of 12h means an address is leased for 12 hours and the client must renew it after that. To set up the gateway server, use:

dhcp-option=option:router,

If this option is not set, it is assumed that the gateway is the same machine where dnsmasq is running. To set up the DNS server, use:

dhcp-option=option:dns-server,

Sometimes we may want to reserve a particular IP address for a device. We can do this as follows:

dhcp-host=1a:22:33:44:55:66,

This option will hand the same reserved IP address to the device whose hardware (or MAC) address is 1a:22:33:44:55:66. To specify the NTP (network time protocol) server:

dhcp-option=option:ntp-server,

Right Webmin makes it simple to set up proxy servers – here are Squid’s configuration options broken out into a GUI
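Putting the options together, a complete minimal /etc/dnsmasq.conf might look like the following sketch. Every address here is a placeholder from the private range – substitute your own network’s values:

```
no-resolv
server=                # upstream DNS (Google public DNS)
server=
address=/nas.lan/   # custom local entry (example name)
cache-size=1000
dhcp-range=,,,12h
dhcp-option=option:router,
dhcp-option=option:dns-server,
dhcp-option=option:ntp-server,
dhcp-host=1a:22:33:44:55:66,    # reserve an IP for one device
```

Restart the service (for example with sudo systemctl restart dnsmasq) for the changes to take effect.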


Setting up an internet gateway

The internet gateway will provide the devices on our network with internet access. We can use the same machine where we installed dnsmasq as the gateway, or any other computer that’s available.

TOOLS TO DIAGNOSE YOUR NETWORK Sometimes you may need to troubleshoot a misbehaving client on the network, or want to optimise the network. Here are some tools that help with various aspects of networking:

ifconfig: ifconfig can be used to configure and monitor network interfaces. Without any arguments, ifconfig will list the details of all the active interfaces:

eth0   Link encap:Ethernet  HWaddr 00:01:2e:40:92:c1
       inet addr:  Bcast:  Mask:
       RX bytes:120825640761 (120.8 GB)  TX bytes:31932701206 (31.9 GB)

route: The route command will show you the kernel routing table of the system:

We will assume that you have two network cards: eth0 is connected to the ISP’s network or router while eth1 is connected to the local area network (LAN):

auto eth0
iface eth0 inet static
    address
    netmask
    network
    gateway

In the above we are configuring the ethernet card eth0 with a static IP address and setting its gateway. The gateway can be the internet service provider’s (ISP) gateway or the router’s IP address provided by the ISP. Similarly, we configure the ethernet card eth1, which is connected to our local network:

auto eth1
iface eth1 inet static
    address
    netmask
    network

The next step is to enable IP forwarding and masquerading, using iptables to configure the Linux kernel firewall:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -j ACCEPT

You can install and use a package called iptables-persistent to save these rules and load them at boot time. The last step is to enable packet forwarding in the Linux kernel. This can be done by editing /etc/sysctl.conf – add or uncomment the following line:

net.ipv4.ip_forward=1

To make this change without rebooting the system, use:

$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward

Note that a plain sudo echo 1 > /proc/sys/net/ipv4/ip_forward would fail, because the redirection is performed by your unprivileged shell rather than by sudo.

Above A network monitoring tool in action. We are monitoring the SSH traffic on LAN Left Port forwarding configuration in Asus routers. This method works similarly across other routers

Destination     Gateway     Genmask     Flags  Metric  Ref  Use  Iface
default                                 UG     0       0    0    eth0
                *                       U      0       0    0    eth0
                *                       U      0       0    0    vpn_azureasia
In the above output, the routing table says that traffic for one destination network should go via the eth0 interface without using any gateway, while traffic for another destination should go via the vpn_azureasia interface. The default gateway is used for every other destination.

traceroute: This command allows you to find out the route to a particular host on the network or internet:

$ traceroute
 1  0.275 ms  0.241 ms  0.240 ms
 2  1.984 ms  2.234 ms  2.237 ms
 ……..
 8  8.600 ms  8.552 ms  8.662 ms
 9  8.652 ms  8.674 ms  8.713 ms

Hop 1 is the LAN gateway and hop 2 is the ISP gateway, so we can see that there are a few servers in the middle before the traffic reaches the final destination at hop 9, which is the server for the site we queried.

ping: This command allows you to verify whether a host is reachable on the network or not. It also finds out how long it takes to get a response from the server. Use this command as ping <hostname/ipaddress>.

dig: Dig is a DNS lookup utility that can be used to verify that your DNS server is behaving correctly. The following command checks a specific DNS server (given after the @) to see if it resolves a domain:

$ dig @

nethogs: Nethogs is a tool to monitor the current network traffic by process. It is helpful for locating which processes are consuming all the bandwidth:

PID   USER   PROGRAM             DEV   SENT     RECEIVED
6403  kunal  sshd: kunal@pts/1   eth0  0.540    0.077 KB/sec
8487  root   /usr/bin/python     eth0  200.040  100.040 KB/sec

In the above output, we can see that the user root is running Python, which is consuming around 200 KB/sec. Unlike the other tools discussed here, nethogs will need to be installed separately.





Set up a smart network backup system that monitors specified folders and duplicates the data to your schedule

BACKING UP USING GIT Git is an amazing decentralised version control system, and Bup is an open source project that has harnessed the power of Git to provide an efficient backup solution. It uses Git’s packfile format to store backups, meaning you can access the data using Git tools directly. It supports par2 to create recovery blocks and recover backed-up data in case of corruption, and uses a rolling checksum algorithm to split large files.

Setting up a centralised backup solution on your network is very useful for your clients, as they don’t have to worry about doing it separately. You can conveniently back up to a central location and have a full report on backup status across the network. UrBackup is a no-frills open source backup server with support for both Linux and Windows clients. While UrBackup is minimal, it still has the essential features that matter: differential backups, automatic folder monitoring, space efficiency (if the same file is backed up from multiple clients then it keeps only one copy), reliable backup of files that are in use, and an easy-to-use web interface for managing backups. UrBackup can be installed on most Linux distros as well as on Windows and on NAS devices like Synology’s. We will install the UrBackup server on Ubuntu using the following PPA:

$ sudo add-apt-repository ppa:uroni/urbackup
$ sudo apt-get update
$ sudo apt-get install urbackup-server

During the installation, the installer will ask for the backup folder location. Choose a location with plenty of free space available. If you want to compress the backups then you should

set the location on either a Btrfs or ZFS formatted partition. For added data protection you should also consider storing backup files on RAID 5 so that in the event of disk failure, data can be easily recovered. After all, a centralised backup server should be as reliable as possible. Once the UrBackup server is installed you can access the web interface at http://localhost:55414. By default, the web interface gives admin privileges to everybody. To fix this go to Settings > Users, then click “create user”. Enter the admin user password here and click Create. If you want to receive notifications via email regarding backups, you can set up the SMTP server using the Settings > Mail tab. If you want to edit the backup folder location you can do so from Settings > General > Backup Storage Path. For everything else, the default is fine.

UrBackup client setup

UrBackup clients are available for Linux (with special builds for Fedora/CentOS and Arch Linux) and Windows; these are available at A Mac version of the client is currently in the works. UrBackup clients support auto-discovery of the backup server, and once the server is discovered you can specify the folders you want to back up, then schedule and start backups from the server's web interface.

BACKING UP WITH RSYNC
Rsync is an incremental network and remote file copy tool. You can use rsync to keep folders or files in sync between different directories or systems. For a given folder, rsync only transfers files that have not already been transferred, and for individual files it works out which parts have changed and transfers just those parts. These features make rsync a very popular choice for backup. To transfer files on the same computer you can use the following command:

$ rsync -Pr source destination

This command is similar to cp but with key differences: the -P option tells rsync to show progress and keep partial files, so if for some reason you cancel the transfer in the middle, it can be resumed automatically when you run the command again. We can also use rsync over SSH to transfer files over the network. Keep in mind that in this case, rsync needs to be installed on both the sending and receiving systems.

Above UrBackup displays the backup sources and status, and adds a handy client download option


$ rsync -Pr apps kunal@


Log in to your server via the command line from anywhere in the world, or share your desktop over the network to get a GUI SSH (Secure Shell) is a very common way to log in to a system remotely. OpenSSH is the most popular SSH implementation that is available across all the popular platforms: Linux, Windows, Android, iOS, etc. To install the OpenSSH server, use:


$ sudo apt-get install openssh-server

Once installed, openssh-server is ready for action without further configuration. Log in from a remote machine using:

$ ssh username@<server ip address/hostname>

The username should already exist on the server. By default, OpenSSH is configured for password authentication. A far more secure way to log in is to use public-key authentication. On the remote machine, generate the public/private key pair:

$ ssh-keygen -t rsa

Use the default options. You will have two files in the <YourHomeDirectory>/.ssh folder: id_rsa and id_rsa.pub. id_rsa is your private key – do not share this with anybody. Append the content of id_rsa.pub (your public key) to the <YourHomeDirectory>/.ssh/authorized_keys file on the remote server. Next time, the server will log you in without a password.

Above X forwarding enables you to use Linux software from OS X

Running GUI applications remotely

SSH can also run GUI programs remotely via X forwarding, so you can use software installed on your server from remote machines, without local installation. X forwarding is cross-platform, so you can send programs to any client with an X server installed. On Linux, the X window server is already installed. However, if you want to run software on OS X or Windows then you need to install an X server separately: on Mac, you can install XQuartz; on Windows, install MobaXterm. To start an X forwarding session, simply add the -X switch to your SSH login command, before the hostname.

TIGHTVNC FOR REMOTE DESKTOP SESSIONS VNC is a popular desktop sharing system implemented using the RFB protocol so you can share your current desktop over the network. VNC can be used to provide multiple users on the server with an isolated desktop environment simultaneously. This means they all can run their own apps in their own desktop environments. To install TightVNC:

$ sudo apt-get install tightvncserver xfce4 xfce4-goodies autocutsel

We are also installing a lightweight desktop environment called Xfce, which we will use to boot the GUI session for remote users. Autocutsel is a service that adds copy-paste support between the host and the client. We will need to configure the desktop environment, too. Create ~/.vnc/xstartup and add:

#!/bin/bash
startxfce4 &

Make this file executable with a quick chmod +x xstartup. For each user, a service file needs to be created, and this will set up the display number and resolution for the VNC server. If you want multiple users accessing their own desktops then you need to create an xstartup file in each user's home directory, and assign different display numbers and users in each service file. You can start the vncserver using the following command as the current user:

Almost all modern Windows operating systems come with the Remote Desktop Connection (commonly referred to as RDP) feature. You can use this service to access the Windows desktop of a remote computer, including features like the clipboard, sound, disks and printers. From Linux, you can use Remmina to connect over RDP. Remmina is actively developed and provides the most complete RDP feature set. To install Remmina:

sudo apt-add-repository ppa:remmina-ppa-team/remmina-next
sudo apt-get update
sudo apt-get install remmina remmina-plugin-rdp libfreerdp-plugins-standard

$ /usr/bin/vncserver :4 -geometry 1024x768 -depth 16 -pixelformat rgb565

Here, :4 is the display number on which the VNC server is running. The server listens on port 5900 plus the display number, so display :4 is available on port 5904, and display :1 would be on port 5901. Now all you need to do is use any VNC client and connect to the server, in this case at <serveripaddress>:5904.





Stream your movies, music and photo galleries around the house, and maintain a central sharing platform for documents

A file server also allows you to extend your own storage over the network. Modern file servers allow you to work on files on the remote server rather than downloading them to your local storage first (unlike FTP). One of the modern file servers that supports this feature is Samba. Because it implements the native Windows file sharing protocol, it works on all platforms including Linux, OS X, Windows, Android and iOS. This makes it a better choice than NFS, which is not as widely supported. To install the Samba server, install the following packages:

$ sudo apt-get install samba samba-common

You can configure the options for Samba by editing the file /etc/samba/smb.conf. Let's start with the global section. Here we are setting up the workgroup name used to identify the server:

[global]
workgroup = WORKGROUP
server string = Samba Server
map to guest = Bad User

To set up the log file:

log file = /var/log/samba/log

To set up the guest account:

guest account = myguestuser

To set up a guest share (with no password):

[SharedFolder]
path = /media/MySharedFolder

Above Plex can hook in web TV channels such as BBC iPlayer


Above Plex has a great UI that makes it easy to follow along with the setup instructions

MEDIA FILE FORMATS
Media file support varies across smart TVs, smartphones, tablets and games consoles. An MKV file that plays fine on your PC may not be supported everywhere. To get around this, Plex Media Server implements a technique called on-the-fly media transcoding. Based on the Plex client's request, Plex Media Server transcodes the requested file into something the client reports as playable and streams it. This process is CPU intensive, so make sure to use a multicore CPU if you are going to use this feature.

comment = My Shared Folder
writeable = yes
browseable = yes
guest ok = yes

Since this share will be mapped to the guest account you will need to make sure that myguestuser has full write permissions to /media/MySharedFolder. You can access the shared folder from Windows using the address \\<ServerIPAddress>\MySharedFolder. On Linux and Mac clients, this folder is accessible using the URL smb://<ServerIPAddress>/MySharedFolder. In order to create a password-protected share, add a group called smbgrp:

$ sudo addgroup smbgrp

Add the existing user to this group:

$ sudo usermod -aG smbgrp user1

Then create the Samba password for the user:

$ smbpasswd -a user1

Now create a folder that you want to share as user1. This way user1 will have all the necessary privileges to access the folder. The last step is to create the share in the smb.conf file:

[MyProtectedShare]
path = /home/user1/shared

REPLACE DROPBOX WITH SEAFILE
Dropbox is very popular with both individuals and businesses for its simple approach to sharing and syncing files over the internet between multiple devices. Seafile is an open source implementation of the same concept that allows you to keep the files on your own server. It runs on Linux, Mac and Windows; on mobile it runs on Android and iOS via free apps. One of the key features of Seafile is built-in file encryption, which allows you to encrypt a library using a password. This way even server admins or snoopers on the server will have no access to your files. To install Seafile, first install the supporting packages:

$ sudo apt-get install python2.7 libpython2.7 python-setuptools python-imaging python-ldap sqlite3

valid users = @smbgrp
guest ok = no
writable = yes
browseable = yes

Plex is a dedicated media server that provides an easy way to share files

Setting up a media server using Plex

You can easily store media files on the Samba share itself and most players will be able to pick them up and play them without any problems at all. While this solution works, it is not very elegant; it is like watching media files on your TV by browsing a pen drive you have connected to it. Plex is a dedicated media server that not only provides an easy way to share media files on the network, but also supports the largest number of clients, including not only the usual suspects like Windows, Linux, Android and iOS, but also Android TV, Xbox One, PS4, etc. Plex is officially available for the Ubuntu, Fedora and CentOS distros. You can obtain the downloads from downloads. Once downloaded, you can simply use the following command to install it:

$ sudo dpkg -i plexmediaserver*

Once installed, go to http://<serverIp>:32400/web to configure it further. You will see that the sidebar has Channels and Playlists. Click the + icon to start adding folders. You will be asked to select the media type; depending on the type of content, select from Movies, TV Shows, Music, Photos and Home Videos. After making the selection you can change the library name – the default is good in most cases – then click Next.

Download the server for generic Linux from https://www. After extracting the folder, run the guided setup program. This will install the server using an SQLite database, which is fine for personal use but not for large-scale deployments. If you want to use MySQL as the database instead, use the alternative MySQL setup script to install the server.

From the next screen, add the media folders. Each directory you add should contain only one type of media – do not mix and match movies, music, photos, etc, as it may confuse the indexing engine. After adding files, the media server will automatically refresh the content and download additional metadata (like posters, plot and actor details) from the internet. Using the same process, you can add additional media types as well. Plex can play media directly from its web client; visit http://<serverIp>:32400/web from any device to consume all the media. For a better experience with Plex, platform-specific versions are also available; for desktops you can download Plex Home Theater, for example. On Ubuntu you can obtain Plex Home Theater from a PPA repository:

Above With VNC, you can use your favourite Linux desktop from a tablet

$ sudo add-apt-repository ppa:plexapp/plexht
$ sudo apt-get update
$ sudo apt-get install plexhometheater

For devices like iOS, Android, Xbox One, PS4 and Roku, simply search the respective app stores to download the Plex client. Sometimes you may come across a device that does not have a Plex client on it (like a smart TV) – here, you can make use of DLNA to play your media. DLNA (Digital Living Network Alliance) is an industry standard for sharing media files over a home network. Lots of devices and media players support it, including TVs and games consoles. To enable the DLNA server on the Plex media server, go to Settings (the gear icon at the top-right), then select the Server tab and click DLNA on the sidebar, then check Enable DLNA Server. Save the changes to start it.





Ajenti provides one central location to configure and monitor your server and allows you to interact with the servers over the web Ajenti provides a consistent way to manage Linux systems across its supported distros, and boasts an easy-to-use dashboard that is customisable. Head to and choose your distro base, then follow the provided instructions to install the software. After installing it, start Ajenti with this:

$ sudo service ajenti restart

Ajenti is available at https://<ServerIP>:8000; the default username is root and the default password is admin. You will obviously need to change this as soon as possible – head to Password in the left navigation bar now and change your password to something much stronger. While installing, Ajenti generates a self-signed certificate, which is okay for a normal installation but not good for production. You can change the certificate by providing a pem file at Configure > General > SSL > Path to certificate. If you are already running a service at port 8000 then you may not be

able to start Ajenti; to change the port for Ajenti, edit /etc/ajenti/config.json and change the port in the bind section:

"bind": {
    "host": "",
    "port": 8001
},

To monitor your server, select Dashboard from the menu bar. Here you will see widgets that display data related to the system. To have the data automatically refresh, click the clock icon at the top-right of the dashboard. To configure the various system services, filesystems, networks, processes and users, you can go to the System section of the menu bar. To configure the additional software installed on the system, like the Apache web server, MySQL and Samba, go to the Software section of the menu bar. You can also manage the service state by clicking the Software > Services section.

MONITORING TOOLS
Htop is an interactive process viewer that uses ncurses for an accessible interface. You can scroll the process list and take action on selected processes, such as killing them. The lsof command lists open files. It is very handy when you are not able to unmount a disk and want to know which process is using it.





Here you have the option to configure Ajenti itself. Use this to configure ports, SSL and feedback. Plugins allows you to view the status of installed plugins.

All of the system-related configurations take place here, like cron, the firewall, hosts and filesystems. You can also use the Packages section to install and update packages.


Here you can find a list of installed services. You can use this to check on the status of a service, and start or stop a service.


Dashboard is a collection of widgets showing little pieces of information about the system. Additional widgets can be added using Add New Widgets. You also have the option to refresh or auto-refresh the widget data.

This provides you with easy access to a file manager, text editor, task manager and terminal from any web browser. These tools give a helping hand when the browser is all you have.


This is where you configure settings for additional server packages. Depending upon the installed servers, this list will change. Here we have Apache, MySQL, NFS, Netatalk and Samba.

Dedicated Servers Get 1Gbit/s connectivity

Unlimited monthly data transfer

Smart SSD technology NEW

UK data centres and 24/7 support

Dedicated Servers from:

£29.00 per month

£29.00 ex VAT for 3 months then £39.00 ex VAT per month. 12 month minimum term contract. One-off set up of £49.00 ex VAT applies. 6 months discount available on selected servers. See website for terms and conditions



Mihalis Tsoukalos

is a Unix administrator, a programmer (for Unix and iOS), a DBA and also a mathematician. He has been using Linux since 1993

Resources
Text editor
GCC compiler
Go compiler
Perl interpreter


Systems programming

Using server processes

Develop server processes that can be unchained from the terminal to run in the background as daemons

A server process is a special kind of process with unique characteristics. The most common feature of server processes is that they run in the background, which means that they run without having a controlling terminal associated with them. As usual, most of the presented code will be written in C. However, you are also going to see examples in Perl and Go because these languages allow you to create reliable server processes with the help of their existing modules and libraries. Please remember that if you are developing a server for a production system, the server process should behave perfectly; therefore, we do not recommend creating a server process that you plan to put on a production environment without proper planning, documentation and extensive testing.

What you need to know

Tutorial files available:


The general algorithm for a server process involves a number of specific steps:

• First fork()
• Second fork()
• Become session leader
• Orphan process group
• Define logging parameters
• Handle signals
• Create lock file
• Do the actual work

First of all, server processes usually call fork() to generate a child and terminate the parent process using exit(). Then, they call setsid() to create a new session, because a server process is normally detached from the terminal and runs in the background without the need for a terminal. Servers change their current working directory to the root directory in order to normalise the environment and be independent of the filesystem structure. They also define the PATH environment variable on their own. As you will also see, they usually write their process ID to a file in /var/run or in a similar location depending on the operating system. They use the logging facility of the system to write various types of messages to log files. Additionally, they handle various signals in order to interact with the system they run on. Finally, they change the creation mask to a known state using umask().

Left Here’s the logging.c script that we’re going to work towards in action

As you can imagine, a server process is most often programmed to run forever! The only way to stop a server process is by sending the appropriate signal to it, which requires that you can get the process ID of the server process. For security reasons, the root user does not own server processes unless it is absolutely necessary, which typically is not the case. Usually, each important server process, such as a web server, a database server or a system service like email, is owned by a specific and dedicated user account. Nevertheless, all server processes are started by root, which executes a setuid system call to change the active user of the process to the desired one. Depending on the port number you want to use when creating a server process that uses TCP/IP, you might need admin privileges to run it because port numbers 0-1024 are restricted and can only be used by the root user. Generally, it is better to avoid these port numbers and choose something else provided it is not already in use by another running process.

Developing a server process

The developed server process will do nothing useful at the moment. It will just behave like a proper server process and run in the background. The simple.c file contains the necessary C code, so download that from FileSilo and have a read through. When a controlling process, such as a shell, terminates, its UNIX terminal becomes available to other users and a new session can be established on it, which might be a problem when processes from an older session try to use this terminal. An orphaned process group allows processes from an older session to continue executing even after the process leader from the older session has been terminated. Additionally, you should know that when a process group becomes an orphan, its processes are automatically sent a SIGHUP signal by the system, which usually leads to their

termination. But if a program ignores the SIGHUP signal or has a different handler function for it, it can continue to run in the orphan process group even after its controlling process terminates. However, it can no longer access the controlling terminal, which is now free. The following output confirms that your process belongs to an orphaned process group, which is a result of the second fork() system call, and therefore cannot allocate a controlling terminal:

$ ./simple
Created child with pid = 1561
Created child with pid = 1562
$ ps -x -o pgid,pgrp,pid,ppid,command | grep simple
PGID PGRP  PID PPID COMMAND
1561 1561 1562    1 ./simple
$ ps ax | grep 1561 | grep -v grep

Logging facilities

A logging facility is like a category used for logging information. The following line of the /etc/rsyslog.conf configuration file defines that all logging information for the local7 facility will be stored in a file called /var/log/cisco.log:

local7.* /var/log/cisco.log

The value of the logging facility part can be any one of the following: auth, authpriv, cron, daemon, kern, lpr, mail, mark, news, syslog, user, UUCP, local0, local1, local2, local3, local4, local5, local6 and local7. The next line of code defines the logging facility in logging.c as well as the name of the program that will appear on the relevant log entries:

openlog("logging.C", LOG_PID, LOG_LOCAL7);




The good thing with the log-related system calls is that they can be used even if you are not developing server processes

Logging levels

A logging level or priority is a value that specifies the severity of the log entry. There exist various logging levels including debug, info, notice, warning, err, crit, alert and emerg, in reverse order of severity. As you can see in logging.c, the programmer defines both the priority and the facility that a program uses. The main configuration file of the rsyslogd(8) service is /etc/rsyslog.conf; it holds all the information regarding the logging activities of a Linux system.

As you can see, the command ./simple has a process group ID of 1561. However, the last command shows that there is no process with a process ID of 1561, which means that ./simple is an orphaned process group. Last, it is time to talk a little bit more about the setsid() system call. The setsid() system call creates a new session. The calling process is the session leader of the new session, is the process group leader of a new process group, and has no controlling terminal.

Terminating a server process

The C code used for this section is saved as killProcess.c (available in FileSilo along with the other scripts for this tutorial). It is based on simple.c with two additional features: the process will stop once it receives the USR1 signal – this is implemented with the help of a signal handler; additionally, it increases the value of the counter every time it receives the HUP signal. The killProcess.c program also writes its process ID in the /tmp directory in order to help you find it and kill it – this file is often called the locking file of a process. An additional benefit from this approach is that with the help of the locking file, you can make sure that only one instance of the server process is running at any given time. The following C code implements the required steps for writing the server process ID to a separate file inside /tmp:

#define LOCKFILE "/tmp/"
#define LOCKMODE (S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH)

int fd = open(LOCKFILE, O_RDWR|O_CREAT, LOCKMODE);
char buffer[32];
if (fd < 0)
    exit(0);
ftruncate(fd, 0);
sprintf(buffer, "%d", getpid());
write(fd, buffer, strlen(buffer)+1);
close(fd);

You should be pretty familiar with the presented code if you've been following the previous tutorials in our Systems Programming series.

Log files

This part will teach you how to send information from a server process to the syslog service and to log files. Despite the obvious fact that it is good to keep information stored, log files are necessary for server processes because there is no other way for a proper server process to send information to the outside world, as it has no terminal to send any output. As you might have guessed, there exist specific system calls that will help you: openlog() takes three parameters and opens a connection to the system logger for a program. Please see its


man page for more information about its parameters. Although the call to openlog() can be omitted, it is highly recommended to use it. The closelog() function closes a previously opened connection to the system logger. The syslog() system call helps you create a new log message, which will be distributed by the system service responsible for delivering and writing log messages. This used to be syslogd(8), but there is also rsyslogd(8), which is an improved and more reliable version of syslogd(8). All three system calls return void, which means that they return nothing. Last, the setlogmask() system call specifies which calls to syslog(3) will be logged and returns the previous log priority mask. The good thing with the log-related system calls is that they can be used even if you are not developing server processes. Generally speaking, using a log file is better than writing output on screen for two reasons: first, the output does not get lost as it is stored in a file; second, you are still free to print really important and urgent output on screen if this is possible. Another side effect is that you can search and process log files more easily using common Linux tools such as grep, awk and sed, which cannot be done when messages are printed on a terminal window. You can see a working example with the first three aforementioned functions in action inside logging.c.

Running a process in the background

If you put a special control operator (&) at the end of a command then the shell will automatically execute the command in the background. Additionally, the shell will not wait for the command to finish and will immediately return with a return status of 0. The bad thing is that if you kill the shell that was used for executing the command, the command will be terminated. Another unwelcome side effect is that the output from the original command will be printed on the terminal asynchronously. The nohup command invokes a utility with its arguments and sets the SIGHUP signal to be ignored. If the standard output is a terminal, the standard output is appended to the file nohup.out in the current directory. Similarly, if standard error is a terminal, it is directed to the same place as the standard output. If you combine nohup with & then you can run a command in the background. So, even when the user logs out, the command will keep running, which is similar to a server process. The image to the right shows a command executed in the background. The nohup command is usually used for executing Linux commands that take a long time to complete or server processes. Despite this capability, it is considered a better practice to design and develop a program as a server from the start instead of finding workarounds like the one presented here. You can use the jobs command to get a list of all processes that run in the background in the current shell session.

A more advanced example

This time, the developed server process will do something more useful and will have a full set of features. The advanced.c program accepts local UDP connections using port number 12345 that is hardcoded inside the script. Generally speaking, most server processes, including

web and mail servers, work using TCP/IP in order to accept network connections from the world. Don’t worry too much about this for now; more about TCP/IP network programming will be covered in forthcoming tutorials. The developed server process will perform another simple task: it will hold the total number of successful connections using a counter. The program writes a message to a log file each time it receives the HUP signal, which shows the current value of the COUNTER variable. The USR1 signal will terminate the server process. The full source code of advanced.c, which is based on logging.c, can be seen below. You can use advanced.c as a template when you are developing your own server processes.

#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <signal.h>
#include <fcntl.h>
#include <string.h>
#include <syslog.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int COUNTER = 0;

#define LOCKFILE "/tmp/"
#define LOCKMODE (S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH)
#define MAXSIZE 1000

int createUDP(int port)
{
    int socketfd;
    struct sockaddr_in server_address;

    socketfd = socket(AF_INET, SOCK_DGRAM, 0);
    bzero(&server_address, sizeof(server_address));
    server_address.sin_family = AF_INET;
    server_address.sin_addr.s_addr = htonl(INADDR_ANY);
    server_address.sin_port = htons(port);
    bind(socketfd, (struct sockaddr *)&server_address, sizeof(server_address));
    return socketfd;
}

static void handle_sighup(int signo)
{
    if (signo == SIGHUP)
        syslog(LOG_INFO, "Total number of connections is %i", COUNTER);
    else
        printf("Unknown signal!\n");
    return;
}

static void handle_sigusr1(int signo)
{
    if (signo == SIGUSR1) {
        // Delete lockfile
        // NEEDS TO BE IMPLEMENTED
        syslog(LOG_INFO, "Exiting...");
        closelog();
        exit(0);
    } else
        printf("Unknown signal!\n");
    return;

Logging with Go

The existing Go library, as implemented using packages, allows you to create server processes in a surprisingly easy way. The provided server program server.go shows how you can send logging information in Go by using the syslog package, which provides a simple interface to the system log service. Take a look at the Go code of server.go, which produces output of the following form:

Jan 21 18:01:39 mail server.go[32155]: 2016/01/21 18:01:39 Logging in Go!

Unfortunately, you do need to execute a Go server using the nohup workaround, which is a little outdated.

}

int main(int argc, char **argv)
{
    int pid;
    int fd0, fd1, fd2;

Left The nohup command enables you to run processes in the background when paired with &



Switching to Perl

Check out the source code of the supplied Perl script, which uses the TCP protocol with port number 12345 to communicate with the rest of the world. Once again, you should use the nohup utility to keep the script running even after you log out from your Linux machine. You should use the Sys::Syslog Perl package to be able to send logging information. The logging information has the following format:

Jan 21 17:45:02 mail[31614]: connection from

Bear in mind that as Perl is an interpreted programming language, everyone will be able to look at the source code of your server process.


struct sigaction signalAction; struct rlimit r; umask(0); if ( (pid = fork() ) <0 ) { printf(“First fork() failed!\n”); exit(0); } else if ( pid != 0 ) { printf(“Created child with pid = %i\n”, pid); exit(0); } setsid(); signalAction.sa_handler = SIG_IGN; sigemptyset(&signalAction.sa_mask); signalAction.sa_flags = 0; if ( sigaction(SIGHUP, &signalAction, NULL) < 0 ) { printf(“Cannot ignore SIGHUP!\n”); exit(0); }

// NEEDS TO BE IMPLEMENENTED int fd = open(LOCKFILE, O_RDWR|O_CREAT, LOCKMODE); char buffer[32]; if ( fd < 0 ) exit(0); ftruncate(fd, 0); sprintf(buffer, “%d”, getpid()); write(fd, buffer, strlen(buffer)+1); close(fd); if (signal(SIGHUP, handle_sighup) == SIG_ERR) { syslog(LOG_INFO, “There was an error while handling the SIGHUP signal.”); return -1; } if (signal(SIGUSR1, handle_sigusr1) == SIG_ERR) { syslog(LOG_INFO, “There was an error while handling the SIGUSR1 signal.”); return -1; }

if ( (pid = fork() ) < 0 ) exit(0); else if ( pid != 0 ) exit(0);

struct sockaddr_in client_address; int port = 12345; char line[MAXSIZE]; int socketfd; int n; socklen_t alen;

if ( chdir(“/”) < 0 ) exit(0); if (getrlimit(RLIMIT_NOFILE, &r) < 0) { exit(0); } if (r.rlim_max == RLIM_INFINITY) r.rlim_max = 1024; int i = 0; for (i = 0; i < r.rlim_max; i++) close(i); fd0 = open(“/dev/null”, O_RDWR); fd1 = dup(0); fd2 = dup(0);

socketfd = createUDP(port); syslog(LOG_INFO, “The iterative UDP server is running!”); while(1) { alen = sizeof(client_address); n = recvfrom(socketfd, line, MAXSIZE, 0, (struct sockaddr *)&client_address, &alen); syslog(LOG_INFO, “Got: %s”, line); sendto(socketfd, line, n, 0, (struct sockaddr *)&client_address, alen); COUNTER++; } return 0;

openlog(“advanced.C”, LOG_PID, LOG_LOCAL7); syslog(LOG_INFO, “Logging started!”); if ( fd0 != 0 || fd1 != 1 || fd2 != 2 ) { syslog(LOG_INFO, “There is an error with the standard file descriptors!”); exit(0); } // If the lockfile already exists, then exit
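Whichever language you pick, the nohup pattern for keeping a server alive after logout looks the same. Here is a minimal, self-contained sketch of it, with sleep standing in for the real server binary (for the programs above you would run, for example, nohup go run server.go & instead):

```shell
# "sleep 30" stands in for a real server process here; the pattern is
# identical for e.g. "nohup go run server.go > server.log 2>&1 &".
nohup sleep 30 > server.log 2>&1 &
pid=$!
# nohup makes the process immune to SIGHUP, so logging out will not kill it.
kill -0 "$pid" && echo "server still running as PID $pid"
kill "$pid"
rm -f server.log
```

Redirecting output explicitly, as above, also stops nohup from dropping a stray nohup.out file into the current directory.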



More about signal handling

Server processes frequently handle signals in a special way; the Apache web server makes a good example. The Apache parent process can handle the TERM, USR1, HUP and WINCH signals. The TERM signal makes the Apache parent process immediately try to kill all of its children, and should only be used in extreme situations; the parent process will then also exit. The USR1 signal causes the Apache parent process to advise the children to exit after serving their current request. After all children are done, the parent rereads its configuration files, reopens its log files and restarts its child processes. If the new Apache configuration file has errors in it, then Apache will not restart; it will exit with an error. The HUP signal does the same job as the USR1 signal but with a big difference: instead of waiting for the children to exit gracefully, it kills them. The WINCH or graceful-stop signal causes the Apache parent process to advise the children to exit after serving their current request, remove its pid file and stop listening to ports without quitting; after the termination of all children, it will also quit itself. This functionality is very helpful when you are upgrading Apache, but it can sometimes cause deadlocks and race conditions. The aforementioned approach is a very common practice and it is highly recommended that your server processes follow a similar method to handle signals. On a Debian 8 system, the process id of the Apache parent process can be found inside a plain text file:

Left A quick ls -l shows the process IDs stored inside the /var/run directory

$ cat /var/run/apache2/
2937

This helps programmers, advanced users and administrators to find the process id of Apache easily. As you can see in the image above, almost all server processes store their process IDs inside /var/run.
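The pid-file convention is easy to follow in your own daemons. A self-contained sketch, with a background sleep standing in for the daemon and an arbitrary demo path under /tmp:

```shell
# The "daemon" (a stand-in sleep) records its pid in a file, just as
# Apache does under /var/run; the path here is an arbitrary demo choice.
PIDFILE=/tmp/mydaemon.pid
sleep 30 &
echo $! > "$PIDFILE"
# An administrator can now signal the daemon without hunting for its pid:
kill -TERM "$(cat "$PIDFILE")"
rm -f "$PIDFILE"
```

The same pattern works with any signal your daemon handles, such as kill -HUP or kill -USR1 for the handlers shown earlier.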

The Erlang approach

Erlang is a programming language developed with reliability and high availability in mind. As a result, it offers great support for server processes. In Erlang you can easily develop a supervisor that watches over a server process. The important thing is that, by writing the required Erlang code to supervise a server process, if the server process goes down for some reason – unexpected events should be expected to occur on occasion – the supervisor process will automatically start a new server process! Another important thing to keep in mind is that the supervisor process is responsible for the server process without the server process knowing about it, and without the programmer having to change a single line of code inside the server process project files! Therefore, if you decide to develop a crucial server program, you should definitely consider writing it in Erlang in order to take advantage of the capabilities of this brilliant programming language.

Other points of note

A server process needs reliable hardware to run on, because there is no point in making your software robust and reliable and then running it on defective hardware. The daemon(3) function allows your programs to detach themselves from the controlling terminal and run in the background as system services. Nevertheless, daemon(3) is not defined in POSIX, so its implementation might behave differently on different UNIX platforms. Finally, on Linux systems with glibc, daemon(3) only executes one fork() call and does not use umask(2); the method presented in this tutorial therefore gives more control over the process and is preferred. If you decide not to use C, you might need to learn some new functions from a different programming language, but the general principles are the same. As a rule of thumb, only choose a different programming language if you are going to write less code, make your server process more robust in some way, or both. Apart from Erlang, Go and Perl, which we have covered in some detail in this tutorial, other good alternatives for developing efficient server processes are Python, Ruby, Haskell, Rust, Elixir and C++.



Atom editor

Hack the Atom editor An editor built on web technologies and inspired by the extensibility of Emacs? Time to investigate…

Richard Smedley

A Unix jack-of-all-trades, Richard always has a shell open, so learned scripting by osmosis. It’s not that he dislikes GUI apps – he just loves the command line. A lot.

Resources Atom

Atom Flight Manual

Right Atom is built for the web from web technologies, but its thousands of packages make it very versatile


For coders, writers and sysadmins – essentially anyone who spends a lot of their screen time entering words through a keyboard – the text editor is something of a sacred tool. Keyboard shortcuts are embedded in muscle memory, and the peculiarities of the environment – whether constant modal shifting in vi(m), or Escape-Meta-Alt-Control-Shifting in Emacs – second nature. Moving to another editor, even trying one out in a more than desultory fashion, is a serious undertaking, requiring real motivation. Big news, then, that in its short life (the main components were open-sourced less than two years ago, and version 1.0 reached just last summer), GitHub’s Atom editor has pulled in hundreds of thousands of regular users. How has it done it? Built from web technologies – JavaScript, in the form of the more elegant CoffeeScript; CSS, in the extended form of the Less preprocessor; HTML; and running on Node.js – extending Atom takes the same knowledge as modifying a webpage, while it’s great to use as-is with no modifications, and the settings can be changed through the GUI. Add in a good Fuzzy Finder search, Node.js APIs and packages for nearly everything, and maybe by now you’re ready for us to walk you through some of what makes Atom so appealing, in particular its hackability, in a little more detail.


Welcome (for 64-bit)

Your package manager has a recent Atom if you’re on 64-bit (on 32-bit you’ll need to compile it yourself, and ARM support – Raspberry Pi and some Chromebooks – is not quite there). It’s worth keeping in mind that Atom’s greed for resources means that it doesn’t run particularly well on an old PC, so factor that into whether Atom is going to be right for you. Our first instruction is the necessary:

apt-get install atom … or your distro’s equivalent, and you’re up and running. Pace of development for Atom is fast. Even over the duration of writing this piece, Atom has gone from 1.3.2 to 1.4.1, and 1.5.0 should be out by the time you read this. Our tutorial should apply equally to any of these versions. Open it up and on first run you’ll see the Welcome screen (which you can access again via the Help menu): the screen is split into two panes. The left pane links to documentation and help, while the right gives you choices for opening a project (not just a file – Atom is built for dealing with modular code), or hacking and customising straight away. It’s a declaration of intent from the developers.



Left Useful instructions are baked right into the Atom editor


Keyboard first

Learn some shortcuts is the last entry on the righthand pane, and recommends you try Ctrl+Shift+P – known as the Command Palette – which shows you shortcuts for everything from splitting the Window into more panes, to opening up the GitHub issues view, and lets you find all the commands by typing, rather than trawling a menu. Atom may be a GUI editor, but everything can be put within range of the keyboard. As well as defining shortcuts to some of the commands shown by the Command Palette, you can install an Emacs-mode or a vi-mode, to keep using the keyboard shortcuts you have built into your muscle memory. Use one of the shortcuts – Ctrl+Shift+O – to open a Project that you’re working on (just choose a folder with some code or text files in it). You’ll see that Atom spawns a new window for this project; this is the default behaviour, with each project having a Tree View of the files – you can toggle this on and off with Ctrl+\.


Taking panes

Like any decent editor, Atom lets you split the window into panes: in the screenshot above the next column you can see two vertical panes open, and the right-hand pane about to be split in two via the Command Palette. The keyboard shortcuts for panes all begin Ctrl+k; Ctrl+k with the right arrow will open a new pane on the right, for example. (If you install a package that reassigns Ctrl+k to kill a line, you’ll need to modify that or Atom’s ‘Pane: Split Right’ keyboard shortcut.) Each pane has the options you are already familiar with – there’s a lot of Chromium in there, remember. You can see the Tree View of the files to the left (the toolbar on the right in the screenshot is for PlatformIO, a useful Arduino plug-in). Incidentally, this talk of panes and buffers – the latter being the text content of a file (the file in Atom’s memory – not the same as that on the disk until you save) – is part of Atom’s inheritance of Emacs-inspired functionality. However, Atom’s extensibility, good looks and more modern outlook mean that Emacs-phobes won’t be automatically put off.


Web environment

Atom’s CoffeeScript/Less (CSS)/HTML construction and Node.js APIs make it easy to extend Atom, as we’ll see, but they also make themselves visible in the way the GUI settings menu is laid out, with different fields directly corresponding to CSS settings. The way panes are laid out is controlled by Flexbox. This is a CSS layout module that’s yet to be standardised for the web, but in a controlled environment like Electron, Atom’s developers get to use all of the latest web toys – part of the appeal of working on a project like this. Speaking of CSS, open the Welcome tab (it should still be in the right-hand pane) and click ‘Customize the Styling’, then ‘Open your Stylesheet’ – you are now editing (the initially empty) ~/.atom/styles.less, which gives you total control of the editor’s appearance. Ctrl+Alt+R (or View > Reload) reloads the editor so that all current windows can take advantage of new packages – including those installed in another Atom window.


What to avoid

Atom, although young, is developing rapidly – but naturally the volunteer effort tends to go to interesting or immediately useful changes and enhancements, and core improvements happen slowly, under a small team. This means that problems that have dogged Atom from the beginning – slowness, and an inability to work well with very large files – still persist. Although improvements can be seen, Atom is best used on modular, rather than monolithic, code bases.
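The styles.less file mentioned above takes ordinary Less. As a sketch (the atom-text-editor selector follows the Flight Manual's own examples; the colour choice is ours), this makes the caret easier to spot:

```less
// ~/.atom/styles.less – changes are applied as soon as you save.
atom-text-editor {
  .cursor {
    border-color: orange; // a high-visibility caret
  }
}
```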






Right The syntax highlighting is excellent and the dual-pane view a real boon while coding


Alternative editors Atom isn’t the only cross-platform text editor based on web technologies that’s calling for consideration. Light Table is also based upon Electron (formerly Atom Shell) but is written in ClojureScript. Billing itself as “the next generation code editor”, it has similar abilities to Atom, albeit with a smaller community and developer base, but feels faster. The Light Table developers are also working on the ambitious Eve: “a set of tools to help us think.”




You can jump straight to the Settings menu with Ctrl+, to change the appearance. You can also view and change the keyboard shortcuts (overriding them by editing ~/.atom/keymap.cson), see what packages are installed, search for and install more packages, and change the theme. Look at the installed packages and you’ll see that you already have plenty, listed under core, thanks to Atom’s modular design. Through the settings tab you can disable individual packages (such as Metrics, which reports usage information to Google Analytics; it does this to track performance and use, information that can be used to prioritise improvements). A few packages also allow changes to settings here. There are thousands of community-contributed packages for just about every possible tool and language. The Search function isn’t always brilliant at finding exactly the package you want, but a little persistence will unearth most plug-ins. One of the reasons that there are so many is that it’s so easy to write a package – as you’ll see shortly.
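Keymap overrides live in the same directory and use Atom's CSS-style selector scoping. A sketch entry (the key combination is an invented example; core:save is one of Atom's built-in commands):

```cson
# ~/.atom/keymap.cson – bindings are scoped by selector.
'atom-text-editor':
  'ctrl-alt-s': 'core:save'  # example only; pick a combination that is free
```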

Attractive code

Much has been made of Atom’s appearance. The default theme is certainly quite nice, the other available themes are okay, and the community themes are interesting, but once more it’s Atom’s Less (CSS) underpinnings that mean you can make it look however you want (see Step 4). While all of this may matter at least a little, beautiful code is far more important, and one of the first packages you should try is atom-beautify, which will beautify your HTML, CSS, Markdown, CoffeeScript, Golang, etc. Select a block of text, then open the Command Palette and type “Beautify” – or call Ctrl+Alt+B. If no block is selected, the whole file will be beautified. With the autopep8 package it will manage Python, too. For PEP8 validation, just head across to the settings and make sure that soft tabs is selected for spaces, and that spaces is set to 4. Linters are available, but the Python linter is not a very co-operative package at the moment – rapid updates mean that this may have been resolved by the time that you try it, however.


Version control

Given Atom’s origins, it is no surprise that Git and GitHub integration is quite strong. Ctrl+Alt+Z, for example, will sync the open file by checking out the HEAD revision – you will need to confirm before it overwrites your changes. Git status is replicated in Atom by Ctrl+Shift+B, which shows the project’s untracked and modified files. Git even gets its own chapter in the excellent user manual. If git merge doesn’t solve your conflicts automagically, install the Merge Conflicts package: it will detect the conflict markers when you run Alt+M > D, which is ‘Merge Conflicts: Detect’ from the Command Palette, and will give you controls for navigating through and resolving them.
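If you want a scratch repository in which to watch the Merge Conflicts package do its thing, a deliberate conflict takes seconds to manufacture (nothing below is Atom-specific; the names are arbitrary):

```shell
# Build a throwaway repo with two branches that edit the same line.
git init -q conflict-demo && cd conflict-demo
git config "Demo" && git config ""
echo base > file.txt && git add file.txt && git commit -qm base
git branch -M trunk                 # normalise the branch name
git checkout -qb side && echo side > file.txt && git commit -qam side
git checkout -q trunk && echo trunk > file.txt && git commit -qam trunk
git merge side || true              # conflicts, leaving <<<<<<< markers
grep '<<<<<<<' file.txt             # the markers Merge Conflicts detects
```

Open file.txt in Atom afterwards and run ‘Merge Conflicts: Detect’ to see the resolution controls.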



Left Dev tools can be brought up without needing to switch windows, and they enable you to hack the interface


Atom your way

You can extend Atom with JavaScript, but the developers recommend you use CoffeeScript, and Less for the CSS. If your skills here are rusty, Atom points you to the CoffeeScript and Less homepages, both of which contain excellent short tutorials. For simple customisations, (or init.js) in ~/.atom will be read upon startup (or reload). The simplest entry would be along the lines of atom.beep(), although atom.close() is not without a certain sense of humour. The manual features a more useful example of a snippet to generate Markdown links to URLs in the clipboard. Start typing “generate” into the Command Palette and “Package Generator: Generate Package” will come up, defaulting to a new package in ~/github/my-package/ that will open up in a new Atom window. There are a few good examples in the Atom documentation, including a text-to-ASCII-art converter. We also spent a little time putting a clock example into Atom – since that example was written, more things have been automated, such as the generation of the keyboard shortcut in the keymaps/[package-name].cson file.


Developer tools

Ctrl+Shift+I toggles Chrome’s developer tools: now you don’t have to jump between editor and browser. Atom’s Markdown and PDF previews might be matched elsewhere, but this is one area where Emacs or vi(m) don’t quite suffice. The dev tools are also especially useful for hacking changes on Atom itself, such as making themes – either for the user interface (UI theme) or for highlighting (syntax theme). Open the Command Palette and start typing “generate syntax theme” for the latter, then start modifying the .less files. For a UI theme, the recommendation is to fork one of the built-in themes – atom-dark-ui or atom-light-ui. Open up dev-mode:

atom --dev

… or hit View > Developer > Open in the menu to work on the forked file, give it a new -ui name in package.json, place or apm link it in ~/.atom/packages, then reload and enable it in settings. Being in dev mode, any changes you make from now on will appear as you make them, without you having to reload again.


Join in!

If you’re looking for something with much of the power and extensibility of Emacs in a modern, web-native text editor, Atom has a lot going for it. CoffeeScript is certainly easier than elisp when it comes to writing extensions. If you need to open larger files, want a non-US dictionary, or get too annoyed with Atom’s slight but persistent slowness, you might not want to jump exclusively into Atom use – but perhaps you’d like to contribute, and help make it the editor you want? The forum will help you get up to speed on the various issues, and there is a direct link to bugs and missing features that are suitable for beginner contributors at Before diving in, be sure to read the guidelines at, including the Code of Conduct. It is a model for an inclusive and welcoming FOSS project – particularly worthy when other high-profile projects have lost contributors through a brusque style of intercourse, making them unfriendly places for many who’d love to help the free software movement.




Apply textures in MonoGame for realistic models Bitmaps provide a significant boost to image quality while not increasing resource consumption Tam Hanna

had an Xbox 360 fall into his lap and soon after found himself prototyping new projects. This led to intimate familiarity with XNA, leading to knowledge that transfers across to MonoGame rather seamlessly

Resources MonoGame

Right Here’s the simple texture we’ve wrapped around our cube: a triangle, mirrored to form a square for each face

Tutorial files available:


The past few tutorials in this series have promoted a solid understanding of models, the rendering pipeline and the role of shaders. Essentially, a shader is a program that handles part of the projection process which transforms three-dimensional objects into two-dimensional scenes fit for display on a PC screen. Sadly, the creation of realistic models requires the deployment of a large amount of geometric detail. If colouring is to be achieved, the number of vertices needed grows sky-high – at some point, even the most high-performing GPU will declare itself beaten. Textures provide a nice workaround. Think of them as skins that wrap around the vertices: instead of rendering an entire part of a model in one colour, the colour values used are obtained from the texture bitmap. Combine this process with a lightning-fast, hardware-accelerated lookup process, and consider yourself in heaven. As this tutorial aims to teach texture handling, we will abandon the pre-manufactured models used in the last instalments of the tutorial. Instead, we will return to the manually-created cube that we used previously. Our example was based on the VertexPositionColor class. It consists of a vertex coordinate along with a colour:

public class CubeClass
{
    public VertexPositionColor[] myBones;

Texturing can be understood most easily if we start out with a little illustration. The left-hand image on the next page shows a plane described by three three-dimensional vertex coordinates (shown by the triangle with the gradient). In addition to that, each of the ‘edges’ is also equipped with a set of two-dimensional coordinates. These coordinates – referred to by experts as textural or texture coordinates – map each of the vertices to a place on the bitmap. When rendering, the GPU interpolates the position, thereby obtaining the correct colour automagically.
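To make the interpolation step concrete (our notation, not the article's): if a pixel lies inside the triangle at barycentric weights λ1, λ2, λ3 relative to the three vertices, the GPU samples the texture at the weighted average of the corner coordinates:

```latex
% Barycentric interpolation of texture coordinates (sketch)
(T_x, T_y) = \lambda_1\,(T_{x_1}, T_{y_1}) + \lambda_2\,(T_{x_2}, T_{y_2}) + \lambda_3\,(T_{x_3}, T_{y_3}),
\qquad \lambda_1 + \lambda_2 + \lambda_3 = 1,\quad \lambda_i \ge 0
```

Real GPUs perform this perspective-correctly (dividing the interpolated values by depth), but the principle is the one shown.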

Let’s texture!

Once the theory is out of the way, it is time to update our cube coordinates. In XNA and MonoGame, vertex information is stored in classes which implement the IVertexType interface – for our purposes, VertexPositionTexture shall suffice. Reflect it to receive the code shown:

namespace Microsoft.Xna.Framework.Graphics
{
    public struct VertexPositionTexture : IVertexType
    {
        public static readonly VertexDeclaration VertexDeclaration;
        public Vector3 Position;
        public Vector2 TextureCoordinate;
        VertexDeclaration IVertexType.VertexDeclaration { get; }
    }
}

In addition to the two Vector elements holding the actual coordinate information, we are also dealing with a method returning a VertexDeclaration. It is a data structure informing the graphics pipeline about how the data found in the type is to be handled – i.e. which flag maps to which field.

Get dicey

While very advanced shaders do require the creation of custom VertexDeclarations, we are fortunate as the basic Vertex data type completely suffices for our current needs. The bottom-right image below shows the effect we want to achieve along with the texture used. This information permits us to determine the texture coordinates to be assigned: as in issue 160, we will limit ourselves to printing the coordinates of one side in order to save space:

myBones[12] = new VertexPositionTexture(topLeftFront, redCorner);
myBones[13] = new VertexPositionTexture(topRightBack, whiteCorner);
myBones[14] = new VertexPositionTexture(topLeftBack, interCorner2);
myBones[15] = new VertexPositionTexture(topLeftFront, redCorner);
myBones[16] = new VertexPositionTexture(topRightFront, interCorner1);
myBones[17] = new VertexPositionTexture(topRightBack, whiteCorner);

Each of the sides of our cube is made up of two triangles, which are mapped onto the corners of the texture. The texturing process can be simplified by assigning friendly names to the individual parts of the texture. In the case of our example, the relevant block looks like this:

Vector2 redCorner = new Vector2(1.0f, 1.0f);
Vector2 whiteCorner = new Vector2(0f, 0f);
Vector2 interCorner1 = new Vector2(0f, 1.0f);
Vector2 interCorner2 = new Vector2(1.0f, 0f);

Missing your LibSDL? MonoGame's packaging and deployment process was changed quite a bit recently. Should you find your programs failing to start due to a libsdl-related error, sudo apt-get install monodevelop-monogame is the command to run!

Please be aware that texture coordinates don't necessarily need to be one or zero. You can also place elements in the ‘middle’ of a texture – simply use a scaling factor from 0 to 1 to describe the distance from the textural origin.

Shading time

As our example from issue 160 was based on the BasicEffect, we need to start out by changing the rendering pipeline to use a shader. Before we do that, however, we will quickly check the validity of our texturing by changing the body of the Draw() method so that the BasicEffect will respect the texture:

protected override void Draw (GameTime gameTime)
{
    ...
    cubeEffect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, graphics.GraphicsDevice.Viewport.AspectRatio, 1.0f, 1000.0f);

Below Left Here, we have three-dimensional vertex coordinates translated to two-dimensional texture coordinates (i.e. TX and TY for point 1) Below Right We use triangles of texture to make squares, then map these onto each face of our cube






Texture coordinates Casual observers might wonder why texture coordinates are housed in a value range of 0 to 1. This has a simple reason: the larger a texture, the heavier the load on its GPU. Using ‘scaled’ texture coordinates permits developers to swap textures out on the fly – far-away elements can be drawn with a simpler texture.


    cubeEffect.VertexColorEnabled = false;
    cubeEffect.Texture = myTexture;
    cubeEffect.TextureEnabled = true;

BasicEffect and transformations have been discussed excessively in the past – the only modification involves the fetching of the texture in the LoadContent method:

public class Game1 : Game
{
    . . .
    Texture2D myTexture;

    protected override void LoadContent ()
    {
        . . .
        myTexture = this.Content.Load<Texture2D>("Untitled.png");
    }

With that, it’s time to run the program for the first time – the cube will present itself in all of its white-reddish glory. Now the actual texturing shader needs to be created. Feel free to use one of the more advanced lighting ones – we will, for now, limit ourselves to the deployment of a very basic shader that harvests the colour to be used for display from the texture passed in. This means that its global parameters must be expanded by one variable containing the texture reference:

texture ModelTexture;

Sadly, simply passing in a texture is but part of the solution. Obtaining actual values from it requires the use of a sampler object, which is created via the following block of code:

sampler2D textureSampler = sampler_state {
    Texture = (ModelTexture);
    MinFilter = Linear;
    MagFilter = Linear;
    AddressU = Clamp;
    AddressV = Clamp;
};

Texture sampler objects are described by five parameters. Texture sets the name of the variable containing the textural data to be used for lookups. MinFilter and MagFilter describe the way data is sampled: the position of rendered pixels rarely lines up 1:1 with pixels in the texture bitmap. Passing in Linear uses an interpolation based on the well-known equation of kx+d – a good trade-off between speed and looks. Finally, AddressU and AddressV describe what the sampler should do if provided with coordinates outside of the legal range of 0 to 1. Passing Clamp makes the sampler throw away the excess – pass in 2, 3 or 4, and the shader will simply restrict itself to 1.

The first part of this tutorial was spent on creating a textured cube. Harvesting the texture information contained in the model requires changes to the input and output structures of the vertex shader:

struct VertexShaderInput
{
    float4 Position : SV_POSITION;
    float2 TextureCoordinate : TEXCOORD0;
};
struct VertexShaderOutput
{
    float4 Position : SV_POSITION;
    float2 TextureCoordinate : TEXCOORD1;
};

Texture coordinate information is to be stored in the TEXCOORD channel of the GPU memory. Attaching TEXCOORD0 to the TextureCoordinate field of the VertexShaderInput structure is helpful in that it instructs the pipeline to dump the contents of the 0th texture coordinate channel (basically, our hand-written texture coordinates) into that field. As our vertex shader does not do anything with the texture coordinates, it limits itself to writing the input value into the output structure with a simple copy:

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.TextureCoordinate = input.TextureCoordinate;
    return output;
}

Finally, the colour part of the shader needs to be modified so that it obtains the colour from the sampler by passing in the coordinates of the element at hand:

float4 PixelShaderFunction(VertexShaderOutput input) : SV_TARGET0
{
    float4 textureColor = tex2D(textureSampler, input.TextureCoordinate);
    textureColor.a = 1;
    return textureColor;
}

With that, we only need to adjust Draw() in order to populate the ModelTexture field of the shader:

protected override void Draw (GameTime gameTime)
{
    graphics.GraphicsDevice.Clear (Color.CornflowerBlue);
    myEffect.Parameters["World"].SetValue(Matrix.CreateTranslation (0, 0, 0) * Matrix.CreateRotationX (0.2f));
    myEffect.Parameters["View"].SetValue(Matrix.CreateLookAt(new Vector3(10,5,15), Vector3.Zero, Vector3.Up));
    myEffect.Parameters["Projection"].SetValue(Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, graphics.GraphicsDevice.Viewport.AspectRatio, 1.0f, 1000.0f));
    myEffect.Parameters["ModelTexture"].SetValue(myTexture);
    foreach (EffectPass pass in myEffect.CurrentTechnique.Passes)
    {
        pass.Apply ();
        graphics.GraphicsDevice.DrawUserPrimitives (PrimitiveType.TriangleList, myCube.myBones, 0, 12);
    }
    base.Draw (gameTime);
}

Calculating with textures

So far, our textures were passive containers for colour data. The information contained in them can also be used to power more advanced computations. Space constraints restrict us to soft blending between scenes – while the images would normally be carried in via shaders of their own, we will assume the second one to be an empty red screen. The handover between the two scenes will be accomplished via the rules set out in a texture. It can be used to create a smooth handover between two images by slowly increasing an arbitrary threshold variable. In the next step, the passover texture is sampled. If the returned value is below said threshold, image information from scene 1 is used – if not, we look at scene 2. For this, a new shader needs to be created. It will work on the entire screen, choosing the colour value written into the display memory according to the value in the switcher texture:

float Threshold;

texture PassoverTexture;
sampler2D passoverSampler = sampler_state {
    Texture = (PassoverTexture);
    MinFilter = Linear;
    MagFilter = Linear;
    AddressU = Clamp;
    AddressV = Clamp;
};

texture SourceTexture;
sampler2D sourceSampler = sampler_state {
    Texture = (SourceTexture);
    MinFilter = Linear;
    MagFilter = Linear;
    AddressU = Clamp;
    AddressV = Clamp;
};

First of all, two textures are now required along with the samplers. Do not fret about memory use: samplers are not memory-intensive, and having a group of them in memory is not an issue. The actual pixel shader is not particularly complex. Each pixel’s processing cycle starts out by sampling the main and the switch texture. If the grey level in the switch texture is smaller than the threshold, the scene colour pixel is returned – if not, we return a solid red area:

float4 PixelShaderFunction(float2 texCoo : TEXCOORD0) : COLOR0
{
    float4 sceneColor = tex2D(sourceSampler, texCoo);
    float4 switchColor = tex2D(passoverSampler, texCoo);
    if (switchColor.r < Threshold) {
        sceneColor.a = 1;
        return sceneColor;
    } else {
        sceneColor.r = 1;
        sceneColor.g = 0;
        sceneColor.b = 0;
        sceneColor.a = 1;
        return sceneColor;
    }
}

One interesting detail concerns the technique declaration. Our Passover technique does not affect vertices. Due to that, we don’t have a vertex shader – a state best described via a technique that has but a PixelShader block:

technique SmoothPassover
{
    pass Pass1
    {
        PixelShader = compile ps_3_0 PixelShaderFunction();
    }
}

Your turn!

Due to space constraints, the activation of the pixel shader does not fit within this instalment of the tutorial. Since it is a very interesting task, treat it as an exercise of sorts. Do not worry about getting it wrong, either – the sample we've uploaded contains a complete working version. Your mission starts with the creation of an alternative rendering target. The RenderTarget2D class allows you to redirect rendering into a texture, thereby populating the source texture. In the next step, use the stock graphics device to render the textures with the shader. When done, the cube should fade in and out in a James Bond-esque fashion – without any extra demands on your CPU.

Advanced shader techniques

If you have ever wondered how bitmap editors render effect previews with lightning speed, the likely answer is the use of advanced shader code. The next part will demonstrate using shaders for the creation of curves, colourisation filters and similar effects with minimal CPU load. This, of course, is but a small part of the power of shaders. The final instalment of our MonoGame tutorial will show even more impressive tricks – stay tuned!



Let’s Encrypt

Secure your web server with Let's Encrypt

Make setting up a secure web server easy with this simple way to manage HTTPS

Christian Cawley

is a former IT and software support engineer and since 2010 has provided advice and inspiration to computer and smartphone users online and in print. He covers the Ras Pi at

Resources Let’s Encrypt

Right You can use the Let’s Encrypt client to obtain free SSL certificates from the new certificate authority


Whether you manage your own Linux web server or you're in the process of setting one up (for live use or for testing and development), there is a strong possibility that you will need to configure HTTPS and acquire SSL certificates. The "secure sauce" of online shopping, banking and data management is secure encryption, but it has traditionally been characterised by a clunky, outdated set of procedures. Validation emails, configuration editing and avoiding the dreaded expired-certificate issue – a problem that can break secure websites – all inspire nothing but trepidation. Given the weaknesses in SSL certification – highlighted in 2015 with the Superfish malware scandal, and the problems maintaining OpenSSL – it is becoming increasingly important to ensure that certificates are genuine and trusted. Using Let's Encrypt enables local creation of private keys – a major advantage in the decentralisation of secure certification. Currently in beta, Let's Encrypt streamlines the process of acquiring and maintaining certificates, and can be installed relatively easily on your Linux web server. Once it is up and running, you can interact with the Let's Encrypt client in the terminal. Here, you'll be able to install plugins that configure the client based on your requirements, such as adding an HTTP-to-HTTPS redirect, and revoke certificates as and when needed.


Install Let’s Encrypt

For the best results, use Debian, with web server software already installed and configured. Some distros include letsencrypt in package form. Find this first with an update:

sudo apt-get update

Then:

sudo apt-get install letsencrypt python-letsencrypt-apache

The final python-letsencrypt-apache package can be omitted if you're not planning to use the Apache plugin. Once installed, use letsencrypt to run the client. You can also install the letsencrypt wrapper from GitHub, but to do this you will first need to have git installed on your system. Use sudo apt-get install git to do this, and tap Y to complete installation. Next, install the letsencrypt-auto wrapper, which will take the same commands as letsencrypt, directly from GitHub:

git clone letsencrypt

With this downloaded, cd to the letsencrypt folder and install:

./letsencrypt-auto --help

As this completes, you'll notice the --help text appears towards the end, providing information on how to use Let's Encrypt. Included here are the various commands that can be used, such as certonly ("certificate only") to obtain a certificate but not install it, and plugins, which will display information about the plugins that are installed.

Revoking a certificate

As easy as Let's Encrypt makes it for you to use the letsencrypt client and obtain secure certificates for your website, there may be times when it is necessary to revoke a certificate. In this situation, a single command can be input to revoke the certificate. This prevents it from being used as a means of establishing secure communication with your websites.

$ letsencrypt revoke --certificate-path certificate-name.pem

Install on other platforms

Let's Encrypt's client is available for Arch Linux, FreeBSD and OpenBSD as well as Debian and its forks. There are no surprises in the installation; to install on Arch Linux, use:

sudo pacman -S letsencrypt letsencrypt-apache

Meanwhile, FreeBSD users can install with:

pkg install py27-letsencrypt

Similarly, OpenBSD users should use:

pkg_add letsencrypt

Once installed, the letsencrypt client software runs the same on each platform, communicating with the Let's Encrypt servers to acquire free SSL certificates.

Choose your plugins

As mentioned, various plugins are installed with letsencrypt. These are apache (for obtaining and installing certificates with Apache 2.4), standalone (which uses a standalone webserver to obtain certificates), webroot (which writes to the webroot directory of a running webserver to obtain the certificate) and manual (which provides instructions to perform manual domain validation). Another plugin, nginx, is also available; more powerful than the others, it automatically obtains and installs a certificate, but at the time of writing it is experimental and is not installed with the letsencrypt-auto wrapper. Plugins are installed with letsencrypt, so they can be invoked at any time, simply by appending the plugin name to a command. For instance, to issue a command using the webroot plugin, use:

letsencrypt --webroot

Remember that if you're using the letsencrypt wrapper from GitHub, you will need to substitute the longer letsencrypt-auto as the first part of the instruction.


Manually obtain a certificate

While you can make use of the various plugins to obtain and manage certificates from Let’s Encrypt, you can also get your hands “dirty”, as it were, and obtain certificates manually. Even this method of certificate acquisition is more streamlined than what you may be used to, however. This makes use of the certonly condition, which ensures that only a certificate is requested. To begin, use this command:

letsencrypt certonly --manual

You'll then be required to input the domain name, agree to IP logging, and follow the instructions until the certificate is downloaded and saved to the appropriate directory on your server.





Right A full list of plugins can be found on GitHub, including Icecast2, a media streaming solution


Certify with nginx

While not installed with the GitHub version of letsencrypt, it is possible to automatically obtain and install certificates with very little input from you, thanks to the nginx plugin. Since it is experimental, however, it is only available with the package version of the letsencrypt client. To use it, simply enter:

$ letsencrypt --nginx

Other commands discussed on these pages, such as certonly, can be used alongside this.

With the webroot plugin you can obtain a certificate without stopping the server. This method essentially means that the certificate is requested, downloaded and written to the webroot directory. This is achieved by using the certonly and webroot conditions. So for instance, you might use:

letsencrypt certonly --webroot


Using Let’s Encrypt with Apache

There is a very strong chance that you're planning to use the letsencrypt certificate management software with a server running Apache. Using the apache plugin for letsencrypt is relatively straightforward, but it does require that the libaugeas0 package is available. Note that libaugeas0 is part of Debian-based operating systems; if you're working with an OS from a different branch, you might need to install it manually. To run letsencrypt with the Apache plugin, use:

./letsencrypt-auto --apache

You will then be prompted to begin some configuration steps, including entering one or more domain names (multiples can be listed with comma separation) and specifying an email address. There is also a ToS to agree to. Once you're done here, the credentials are saved in the /etc/letsencrypt directory, which you're advised to make a regular encrypted backup of, as it will also be the home of the certificates and private keys obtained with Let's Encrypt.


Obtain a certificate for webroot

This would issue the basic instruction that writes directly to the webroot directory. Of course, this might be inappropriate if you wish to specify a particular directory. As such, you would specify the name of the target directory:

letsencrypt certonly --webroot-path /var/www/html

Alternatively, you might have multiple domains you wish to obtain certificates for from Let's Encrypt. In this scenario, you can list the target domains and their directories as shown:

letsencrypt certonly --webroot -w /var/www/yoursite/ -d -w /var/www/mysite/ -d -d

Certificates would be written to the /var/www/yoursite webroot directory for the first domain, and the /var/www/mysite directory for the second and third domains. In order to use the webroot plugin successfully, ensure that your webserver is configured to allow the serving of files from hidden directories.


Find more plugins

Although the five specified plugins – apache, standalone, webroot, manual and nginx – are packaged with the letsencrypt client, other plugins, developed by the user community, are available for you to install. These include an extension for the web hosting automation software Plesk, Gandi simple hosting and even the Icecast2 streaming media solution. These plugins are invoked in the same way as the packaged plugins, and can therefore be used in the same way, such as using the --help switch to display all available options.

When renewing certificates, the /etc/letsencrypt/live directory updates with the latest versions. Note that these are symlinks, however – the originals are stored in /etc/letsencrypt/archive and /etc/letsencrypt/keys. Various files in .pem format are found in the /etc/letsencrypt/live/ directory. For instance, privkey.pem is stored there, the document that stores your private key for the certificate. As with any private key encryption system, this key must be kept totally secret, and never shared – Let's Encrypt even specifies that it must not be shared with them. Other .pem files include cert.pem, which is the server certificate; chain.pem, which stores certificates to be served by the browser; and fullchain.pem, where all certificates are stored.


Automate with configuration

A certain degree of automation is possible in letsencrypt with the use of a configuration file. With this setup you can, for instance, change the encryption type, specify domains and more. You can find out the various conditions that can be added by typing:

letsencrypt --help all
letsencrypt --help

You can see the full list of plugins on the letsencrypt/letsencrypt/wiki/Plugins page on GitHub, and naturally, anyone who has created or is developing a plugin for the letsencrypt client can add their own to the list.


rsa-key-size = 4096

Renew certificates

If you're using the Let's Encrypt certification authority service, you'll need to ensure that you renew your certs every 90 days. The service only issues short-lived certificates, so it would be wise to avoid a situation where expired certificates are stored on your server and served to visitors to your website. To renew certificates, all you should need to do is enter the letsencrypt directory and run the software, issuing a command as you have in the previous steps, or some variation thereof. Using the software prompts communication with the server, and refreshes certification once the same values you originally entered are submitted. As an alternative, you might also use the --renew-by-default flag at the end of the command to help automate renewal. Note that as the software and the Let's Encrypt service are currently in beta, automation is yet to be perfected. The use of crontab to automate renewal – perhaps monthly or every two months, as long as it occurs before the 90-day cutoff – is also an option.


… and take inspiration from the various options that are outlined. Among them you might see:

As you might guess, this ensures a 4096-bit RSA key is used, instead of a 2048-bit key. You might also include a list of your domains:

domains =, You might also use a configuration file to save time with the webroot file path, instead storing the path in the configuration file:

webroot-path = /usr/share/nginx/html

When you're done creating your config file in nano, emacs, vim or whatever your preference, save it as cli.ini in the /etc/letsencrypt directory.
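Putting the options discussed here together, a complete cli.ini might look like the following sketch – note that the domain names below are placeholders for your own:

```ini
; /etc/letsencrypt/cli.ini
; Use a 4096-bit RSA key instead of the 2048-bit default
rsa-key-size = 4096

; Placeholder domains - substitute your own
domains = example.com, www.example.com

; Save retyping the webroot path on every invocation
webroot-path = /usr/share/nginx/html
```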

Finding certificates on your server

While the use of Apache and nginx (see boxout) will automate the management of certificates to such an extent that you will barely need any exposure to them, if you're a bit more hands-on, or simply want to know where the certificates are stored, they can be found relatively simply: navigate to the /etc/letsencrypt/live directory on your web server.





Set up a Hadoop cluster Break into Big Data by setting up a network of Hadoop nodes to process vast swathes of data Ashish Sinha

is an author, developer, teacher, blogger and true open source enthusiast at heart. With five years of experience as a developer in an MNC, Ashish enjoys exploring new open source technology and tools

Resources Hadoop

With the advent of new technology – and a growing number of social networking sites, scientific research projects and new business models – available data is growing rapidly. This data is mined to churn out relevant information: DNA data is studied to analyse hereditary diseases; data from shopping sites is used to find the buying trends of customers and suggest new products to them; weather station reports are used for weather forecasting, and so on. Now, since this data is mostly unstructured and mammoth in size, it is hard for a traditional database to store and manipulate it. This led to the emergence of an open source technology called Hadoop. Hadoop stores the data across the several nodes of a cluster in HDFS (Hadoop Distributed File System) format. The data is replicated in HDFS to ensure high availability and fault tolerance. To store data that is terabytes and petabytes in size, we need to set up a cluster where standalone computers act as nodes. The benefits of Hadoop are that it uses commodity hardware, and is highly scalable, distributed, fast and resilient to failure. In this tutorial, you will learn how to install and configure a basic Hadoop cluster using the Apache Hadoop distribution on a Unix operating system.

Cluster specification and sizing

The prerequisite to set up a Hadoop cluster is that you must have some basic knowledge of Java and Unix. Hadoop runs both on Unix and Windows and requires Java to be installed.

Hadoop works with commodity hardware, but commodity does not mean "low-end": cheap machines have higher failure rates. Even large database-class machines are not recommended, as they stand low on the price/performance graph. Typically, the choice of machine for running HDFS and the node manager would have the specification below:

Processor: four hex/octo-core 5 GHz CPUs
Memory: 256-1024 GB ECC (Error Correcting Code) RAM (ECC is highly recommended)
Storage: 12-24 x 1-4 TB SATA disks
Network: Gigabit Ethernet with link aggregation

The size of the cluster depends upon the data you want to store in HDFS. If you have 5 TB of data with a replication factor of 3, and need an additional 30% of storage for intermediate files and log files, then you must have about 20 TB of space on HDFS. From the network perspective, a normal Hadoop cluster consists of a two-level network topology with 30-40 servers per rack, a 10Gb switch for the rack and an uplink to a core switch or router – a 'rack' is a storage area with all the data nodes put together.

Core layer → Distribution layer → Access layer

Right The distribution layer represents your physical server racks, controlled by a core layer and serving info to the access layer (ie, the client machines)

Java installation

In this article we will use a Linux machine. You can download the JDK for Linux from the Oracle Java SE downloads page (java/javase/downloads/jdk7-downloads-1880260.html). Now create the directory in root mode and install the JDK from the tar file using the commands below.

$ sudo su
$ mkdir /usr/lib/jvm
$ cp ./Desktop/jdk-7u79-linux-i586.tar.gz /usr/lib/jvm
$ cd /usr/lib/jvm
$ sudo tar xzvf jdk-7u79-linux-i586.tar.gz

Restart your terminal and then append the following lines to /etc/profile:

#java
JAVA_HOME=/usr/lib/jvm/jdk1.7.0_79
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
JRE_HOME=/usr/lib/jvm/jdk1.7.0_79/jre
PATH=$PATH:$HOME/bin:$JRE_HOME/bin
export JAVA_HOME
export JRE_HOME
export PATH

After appending the lines, run the commands below.

sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.7.0_79/jre/bin/java" 1
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.7.0_79/bin/javac" 1
sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/lib/jvm/jdk1.7.0_79/bin/javaws" 1
sudo update-alternatives --install "/usr/bin/jps" "jps" "/usr/lib/jvm/jdk1.7.0_79/bin/jps" 1
sudo update-alternatives --set java /usr/lib/jvm/jdk1.7.0_79/jre/bin/java
sudo update-alternatives --set javac /usr/lib/jvm/jdk1.7.0_79/bin/javac
sudo update-alternatives --set javaws /usr/lib/jvm/jdk1.7.0_79/bin/javaws

Once this is done, restart the terminal and run the command java -version to check the Java installation. This step needs to be done on all the machines of the cluster.

SSH configuration

After installing Java, you need to create a password-less connection between the nodes of the cluster by configuring SSH. Secure Shell (SSH) is a Unix-based command interface and protocol for accessing remote computers. Different users need password-less connections, and for this you need to generate a public/private key pair for every user and place it in an NFS location that is shared across the cluster. SSH works in client-server mode, so install the OpenSSH client on the client machines by running yum -y install openssh-clients. Since SSH uses port 22, make sure that it is open. On the master machine, install the OpenSSH server by running yum -y install openssh-server. Now run service sshd start in the terminal to start the SSH service.

Hadoop installation

The tarball for Hadoop can be downloaded from the Hadoop project's website. For this article we are using Hadoop-2.2.0.tar.gz; untar this file in the Hadoop installation

Storing data in HDFS

The data is copied from various resources and kept on HDFS. hadoop dfs -put <local_filesystem_path> <HDFS_filesystem_path> is the command to put a file on HDFS. Please note that Hadoop does not let you put files in an already-present directory; every time you add a file, you have to create a new folder, or else it will give the exception "Output directory already exists". Make a directory using the command hadoop dfs -mkdir /Data and then put the data on HDFS by executing hadoop dfs -put /usr/UN.txt /Data. The data can also be seen in the browser by browsing to the specified file.

Left All the environment variables are stored in



Set up for production

For a production installation, users should consider one of the Hadoop cluster management tools like Cloudera Manager or Apache Ambari, as they provide well-tested wizards for getting a working cluster in a short amount of time. Extra features like unified monitoring, log searches and rolling upgrades are also present, so that you can upgrade the cluster without experiencing downtime. Also, the balancer program redistributes data blocks by moving them from over-utilised DataNodes to under-utilised ones. RPM and Debian packages are available from the Apache Bigtop project, and they provide a consistent file system layout.

Right Hadoop’s configuration settings are held in core-site.xml



directory by executing sudo tar -xzvf Hadoop-2.2.0.tar.gz. The command export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk- needs to be added to the file that contains the environment variables needed to run Hadoop, to ensure the whole cluster uses the same version of Java. The value of the HADOOP_HEAPSIZE parameter is edited to set the memory for each Hadoop daemon, which is 1GB by default. HADOOP_HOME/logs is the default path for logs, but you can configure it by specifying a path in the HADOOP_LOG_DIR parameter, which ensures that logs are kept in one place even if the installation directory changes.

Configuration to run Hadoop daemons

To run the Hadoop daemons, three files need to be configured: core-site.xml, hdfs-site.xml and yarn-site.xml. The core-site.xml file contains the configuration settings for Hadoop, such as I/O settings that are common to HDFS, YARN and MapReduce. The IP address configured is the IP of the node which acts as the NameNode.

<configuration>
  <property>
    <name></name>
    <value>hdfs://</value>
  </property>
</configuration>

In order to store data in HDFS, the NameNode, the secondary NameNode and the DataNodes must be configured. The NameNode is the node in the cluster that stores the metadata of the data – like the number of blocks, which rack the data is stored on and more – whereas the DataNodes are the machines that store the blocks of data. In a multinode cluster there is only one NameNode and multiple DataNodes, which are physically different machines. Since the NameNode is a single point of failure, there is a Secondary NameNode, which can exist on a different machine; it stores the image of the primary NameNode from time to time and acts as a backup in case the NameNode fails. The data stored in DataNodes is also replicated to ensure the high availability of data in case any single DataNode goes down – the default replication factor is three. All these configurations are done in hdfs-site.xml. In order to get the Hadoop daemons up and running, the following three properties need to be set:

• – list of directories where the NameNode stores its persistent data; contains comma-separated directory names; default value is file://${hadoop.tmp.dir}/dfs/name
• – list of directories where the DataNodes store blocks; contains comma-separated directory names; default value is file://${hadoop.tmp.dir}/dfs/data
• dfs.namenode.checkpoint.dir – list of directories where the secondary NameNode stores checkpoints; contains comma-separated directory names; default value is file://${hadoop.tmp.dir}/dfs/namesecondary
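For reference, in stock Hadoop 2.x the NameNode address property in core-site.xml is fs.defaultFS, and the three directory properties described above live in hdfs-site.xml; the address and paths below are placeholders, not values from the article:

```xml
<!-- core-site.xml: the NameNode address (placeholder IP and port) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.1.10:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml: storage directories (example paths) -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///data/hdfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:///data/hdfs/namesecondary</value>
  </property>
</configuration>
```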

Moving on, we need to configure yarn-site.xml for the YARN daemons: a resource manager, a web-app proxy server and the node managers. YARN (Yet Another Resource Negotiator) is Hadoop's cluster resource management system, which decouples MapReduce's resource management and scheduling capabilities from the data processing component. YARN provides its core services through two daemons: the Resource Manager (one per cluster), which manages the resources across the cluster, and the Node Managers, which run on all the nodes and are used to launch and monitor containers. To run an application on YARN, the client contacts the resource manager; the resource manager in turn finds a node manager, which launches the application master in a container. The configured parameters are:

• yarn.resourcemanager.hostname – the hostname of the machine where the resource manager runs
• yarn.nodemanager.local-dirs – a list of directories where node managers allow the storage of intermediate data
• yarn.nodemanager.aux-services – a list of auxiliary services run by the node manager
• yarn.nodemanager.resource.memory-mb – amount of physical memory in MB that may be allocated to containers
• yarn.nodemanager.resource.cpu-vcores – number of CPU cores that may be allocated to containers

Left The YARN resource manager needs to be configured inside yarn-site.xml

The mapred-site.xml file is present in the same folder as yarn-site.xml. Copy the contents of mapred-site.xml into another file as a backup, then add the lines below to the configuration section of mapred-site.xml:

<configuration>
  <property>
    <name></name>
    <value>yarn</value>
  </property>
</configuration>

Open the bashrc file by typing the command vi ~/.bashrc in the terminal and update it as shown. Check the Hadoop version by executing hadoop version at the prompt.
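In stock Hadoop 2.x, the empty <name> element in the fragment above is mapreduce.framework.name, the property that selects YARN as the MapReduce execution framework; filled in, it reads:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```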

Formatting, starting and stopping the daemons

A brand new HDFS installation needs to be formatted before use, as the formatting process creates an empty file system by creating the storage directories; the command is hdfs namenode -format. After formatting, we need to start and stop the daemons across the nodes of the cluster through scripts, but before that you need to tell Hadoop which machines are part of the cluster. The 'slaves' file in Hadoop's configuration directory is used for this: simply populate it with a list of the machine hostnames or IP addresses that are part of the cluster, one per line. The slaves file also lists the machines that the DataNodes and node managers should run on. The HDFS daemons are started with a start script. This script starts the NameNode on each machine returned by executing hdfs getconf -namenodes; it starts the DataNode on each machine listed in the slaves file; and it also starts the Secondary NameNode on the machines returned by executing the hdfs getconf -secondarynamenodes command. In a similar way, the YARN daemons are also started by a script, which starts the resource manager on the local machine and the node manager on all the machines listed in the slaves file. Corresponding stop scripts are used to stop the active daemons.

Other properties

If the need arises to add or remove nodes from the cluster, you need to configure them in a file that contains a list of authorised machines that may join the cluster. The files are specified using the dfs.hosts and dfs.hosts.exclude properties. The default buffer size for Hadoop is 4KB, but this can be set by passing a value in bytes using the io.file.buffer.size property in core-site.xml. Hadoop also has a trash facility, where deleted files remain for a minimum period before being permanently deleted; the fs.trash.interval property, also in core-site.xml, is used to specify the time interval.
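As a sketch, the two core-site.xml tuning properties just mentioned might be set as follows; the values are examples of our own, and fs.trash.interval is given in minutes:

```xml
<configuration>
  <property>
    <name>io.file.buffer.size</name>
    <!-- 128KB I/O buffer instead of the 4KB default -->
    <value>131072</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <!-- keep deleted files in the trash for one day (1440 minutes) -->
    <value>1440</value>
  </property>
</configuration>
```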



Mihalis Tsoukalos

is a Unix administrator, a programmer (for Unix and iOS), a DBA and also a mathematician. He has been using Linux since 1993

Resources
Text editor
GCC compiler
Python compiler
Perl interpreter

Binary trees

Computer science

Understanding binary trees

Find out how to develop and use the flexible and good-looking tree data structure in your scripts

Here we are going to go through the structure of binary trees, as this is the most frequently used kind of tree. This tutorial will present the necessary knowledge and code that you need in order to begin using the tree data structure, and will help you understand how a tree works and why it is so fast. The presented code will be mostly written in C, but you are also going to see examples in both Perl and Python. The Go and Rust methods of creating binary trees will also be discussed. The main benefit of using a tree is that you can find out whether an element is present or not in a very short time. Trees are also appropriate for modelling relationships and hierarchical data.

The theory

Tutorial files available:


Strictly speaking, a 'tree' is a directed acyclic graph that satisfies the following three principles: it has a root node that is the entry point to the tree; every vertex, except the root, has one and only one entry point; and there is a path that connects the root with each vertex. A directed graph is a graph where the edges have a direction associated with them. A directed acyclic graph is a directed graph with no cycles. The root of a tree is the first node of the tree. Each node can be connected to one or more nodes depending on the tree type. If each node leads to one and only one other node, then the tree is a linked list! A leaf is a node with no children – leaves are also called external nodes, whereas a node with at least one child is called an internal node.

The structure of tree data

A binary tree is a tree where underneath each node there exist at most two other nodes. “At most” means that it can be connected to one node, two nodes or no other node. The depth of a tree, which is also called the height of a tree, is defined as the longest path from the root to a node, whereas the depth of a node is the number of edges from the node to the root node of the tree. A tree is considered balanced when the longest length from the root to a node is at most one more than the shortest such length. Balancing a tree might be a difficult and slow operation, so it is better to keep your tree balanced from the beginning than trying to balance it after you have created it.

If a binary tree is balanced, its search, insert and delete operations take about log(n) steps, where n is the total number of elements in the tree. Additionally, the height of a balanced binary tree is approximately log2(n), which means that a balanced tree with 10,000 elements has a height of about 14, which is remarkably small. The height of a balanced tree with 100,000 elements will be about 17, and the height of a balanced tree with 1,000,000 elements will be about 20! Putting a significantly large number of elements into a balanced binary tree does not change the speed of the tree in an extreme way: you can reach any node in a tree with 1,000,000 total nodes in under 20 steps! The adjacent diagram shows a graphical representation of a small binary tree. You can see that all values that can be found in the sub-tree that has the node with the value of 2 as its root are smaller than all values in the sub-tree with 18 as its root. In other words, all keys in a child node of a binary tree have values between their left and right parent keys – so binary trees are ordered by design. The good thing is that you do not have to take special care over ordering them; putting each element in its right place keeps them ordered. As you will find out, deleting an element from a tree is not always trivial because of the way that trees are constructed. The depth of the example tree is 8 because in order to reach either element 7 or element 10 you will need to pass over 8 other nodes (16 > 2 > 13 > 4 > 12 > 11 > 6 > 9 > 7 or 10). It is easy to see just by looking at the presented tree whether it is unbalanced. Another important point is that a tree with the same elements but with the 15 key as the root node would look totally different! See the two illustrations on page 54 for an example of how the same elements can form two different kinds of trees.
You should also be aware that there exist trees that have nodes with more than two children.

Creating a binary tree

As it happened with linked lists, you will need a special C structure to hold the data of a tree node. For a binary tree, each node should be able to reference two more nodes. Therefore, the C structure used for describing the nodes of a binary tree is the following:

struct node {
    int val;
    struct node * right;
    struct node * left;
};




















Above Each node can connect to a maximum of one node above and two below



Binary trees

[Diagram: Binary Tree A and Binary Tree B]

Right While they contain the same elements, these are different trees with different key nodes

Binary trees in Go Go can be used to program many things, including binary trees. In fact, there is a Go implementation of a binary tree that is offered by the Go developers. Head over to https://tour and take a look at the tree.go file. Each node is defined as follows, which is what you would expect:

type Tree struct {
	Left  *Tree
	Value int
	Right *Tree
}



The data that each tree node holds – also called the key – is an integer; nevertheless, you can use any data type you want, even another C structure! If you want to make your implementation more sophisticated, you can add a variable to the node structure that will keep the count of a node – the count is the number of times this element was “inserted” into the binary tree. You have no obligation to use the count variable, but it is good to have such a capability – this functionality prohibits the insertion of duplicate nodes into the binary tree but requires extra code, because you will have to consider the value of the count variable before deleting an element and when inserting an existing element. The code of binTree.c can be seen below. This implements the insertNode() and traverseTree() functions used for inserting elements into a binary tree and visiting all its nodes, respectively.


#include <stdio.h>
#include <stdlib.h>

struct node {
    int val;
    struct node *left;
    struct node *right;
};

// A utility function to create a new BST node
struct node *newNode(int item)
{
    struct node *temp = (struct node *) malloc(sizeof(struct node));
    temp->val = item;
    temp->left = temp->right = NULL;
    return temp;
}

// A utility function to do an inorder traversal of the BST
void traverseTree(struct node *root)
{
    if (root != NULL) {
        traverseTree(root->left);
        printf("%d ", root->val);
        traverseTree(root->right);
    }
}

int findNode(struct node *root, int key)
{
    if (root != NULL) {
        if (findNode(root->left, key)) return 1;
        if (root->val == key) return 1;
        if (findNode(root->right, key)) return 1;
    }
    return 0;
}

// Insert a new node with the given key (val)
struct node* insertNode(struct node* node, int val)
{
    // If the tree is empty, create a new node and return it.
    if (node == NULL)
        return newNode(val);
    // If the key already exists, return!
    if (val == node->val)
        return node;
    // Otherwise, search the rest of the tree
    if (val < node->val)
        node->left = insertNode(node->left, val);
    else
        node->right = insertNode(node->right, val);
    return node;
}

// Return the node with the minimum key of the tree
struct node *findMinKey(struct node* node)
{
    struct node* current = node;
    // Find the leftmost leaf
    while (current->left != NULL)
        current = current->left;
    return current;
}

void treeSize(struct node *root, int *count)
{
    (*count)++;
    if (root->left) treeSize(root->left, count);
    if (root->right) treeSize(root->right, count);
}

int main(int argc, char **argv)
{
    struct node *root = NULL;
    root = insertNode(root, 4);
    root = insertNode(root, 7);
    root = insertNode(root, 2);
    root = insertNode(root, 3);
    root = insertNode(root, 5);
    root = insertNode(root, 1);
    root = insertNode(root, 6);
    root = insertNode(root, 8);

    int count = 0;
    treeSize(root, &count);
    printf("Tree size is %d.\n", count);

    if (findNode(root, 1))
        printf("1 was found in the tree!\n");
    if (findNode(root, 8))
        printf("8 was found in the tree!\n");

    printf("Inorder traversal of the given tree\n");
    traverseTree(root);
    printf("\n");
    return 0;
}

[Diagram: Balanced Binary Tree and Unbalanced Binary Tree]

Above For a binary tree to be balanced, the longest length from the root to a node can only be one node longer than the shortest root-to-node length
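The count variable suggested earlier could be bolted on like this. The names cnode and insertCounted() are illustrative – this is a sketch of the idea, not code from binTree.c:

```c
#include <stdlib.h>

/* Sketch of a node that records how many times its key was inserted,
 * so duplicates never create new nodes. Illustrative names, not part
 * of binTree.c. */
struct cnode {
    int val;
    int count;                  /* times val has been inserted */
    struct cnode *left, *right;
};

struct cnode *insertCounted(struct cnode *node, int val)
{
    if (node == NULL) {
        struct cnode *temp = (struct cnode *) malloc(sizeof(struct cnode));
        temp->val = val;
        temp->count = 1;
        temp->left = temp->right = NULL;
        return temp;
    }
    if (val == node->val)
        node->count++;          /* duplicate: just bump the counter */
    else if (val < node->val)
        node->left = insertCounted(node->left, val);
    else
        node->right = insertCounted(node->right, val);
    return node;
}
```

A matching delete function would then decrement count and only unlink the node once the counter reaches zero.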

As you might have guessed, you will have to deal with memory allocation when you create a new node. The utility function that deals with memory allocation is the following:

struct node *newNode(int item)
{
    struct node *temp = (struct node *) malloc(sizeof(struct node));
    temp->val = item;
    temp->left = temp->right = NULL;
    return temp;
}

The newNode() function returns a pointer to the node structure it just created. Internally, it allocates the appropriate memory space using malloc() and creates a node with the given key. The returned node has no children because it is the job of the insertNode() function to determine if it has to add any children to it or not. You can tell whether a node is a leaf or not by examining the left and right pointers of the node C structure: if both of them point to NULL then the node is a leaf. The traverseTree() function is the perfect example of a recursive function because it keeps calling itself until it finds the end of each possible path, as you can see with the following:

void traverseTree(struct node *root)
{
    if (root != NULL) {
        traverseTree(root->left);
        printf("%d ", root->val);
        traverseTree(root->right);
    }
}

traverseTree() does not return any value, because this is not needed. The recursive nature of traverseTree() makes the code simpler and easier to understand – it would be incredibly difficult to write this function otherwise. There is also another important function that we should discuss at this point named findNode(). This function can determine for us whether an element belongs to the binary tree or not. The findNode() function will be used by the main() function because it allows main() to determine whether or not a key is already present in the binary tree, in order to avoid introducing duplicates to our tree. If you look carefully and closely at the implementations of both findNode() and traverseTree(), you can see that findNode() is actually based on traverseTree().

There is another utility function that returns the node with the minimum key value, and it is implemented as follows:

struct node *findMinKey(struct node* node)
{
    struct node* current = node;
    // Find the leftmost leaf
    while (current->left != NULL)
        current = current->left;
    return current;
}

Due to the way a binary tree is constructed, you do not need to search the entire binary tree in order to find the node with the minimum key value – at most, you will need to visit as many nodes as the height of the binary tree! Another utility function is the treeSize() function, which calculates the size of the tree, i.e. the total number of its nodes. The treeSize() function is implemented as follows:

void treeSize(struct node *root, int *count)
{
    (*count)++;
    if (root->left) treeSize(root->left, count);
    if (root->right) treeSize(root->right, count);
}

Creating a tree in Perl The Perl script shows how to implement a binary tree in Perl, as well as how to traverse it and print its contents on-screen. The good thing with Perl, and the main reason for using it, is that it has modules that can do almost anything you want; there exist various modules that can generate trees without you having to implement anything. Find the script in your resources pack and run it to see the beautiful output that is produced by Perl. The one drawback is that you will have to manually generate the tree structure before using it. The most important code of the script is the following:

my $root = Tree::DAG_Node->lol_to_tree($tree);
print map {"$_\n"} @{ $root->draw_ascii_tree };

The first command creates the tree and the second command draws it using ASCII characters. Should you wish to go deeper into the Perl binary tree implementation, you should carefully read the source code of Tree::DAG_Node and implement what is missing. Another alternative is the Tree::RedBlack module – this is an implementation of a red/black tree, which is a binary tree that remains balanced all the time, so no operation takes more than O(log(n)) time.
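One more helper worth having: as noted earlier, a node is a leaf when both of its child pointers are NULL. That test can be wrapped in a one-line function – isLeaf() is an illustrative addition, not part of binTree.c:

```c
#include <stddef.h>

struct node {
    int val;
    struct node *left;
    struct node *right;
};

/* A node is a leaf when it has no children at all. */
int isLeaf(const struct node *n)
{
    return n != NULL && n->left == NULL && n->right == NULL;
}
```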

Inserting a node into a binary tree

The upper-left image on page 54 shows a graphical representation of the binary tree using a given order of elements. To understand how important the order in which you insert nodes into a binary tree is, the upper-right image on page 54 shows a graphical representation of a binary tree with the same elements inserted in a different order. As you can see from both figures, the created trees do not really look like binary trees because the order of insertion is not appropriate – this is the biggest disadvantage of binary trees: the insertion order of the elements is very important. However, the way a binary tree looks does not affect its operation; it might just make it run a little slower.

Deleting a node from a binary tree

Deleting a node from a tree is a relatively difficult task, especially when you compare it with deleting a node from a linked list. In the rare case where the node you want to delete is a leaf, the task is straightforward. To see the C code of the deleteNode() function in action, open up deleteNode.c, provided in your resources pack. The most important task of deleteNode() is discovering the value that you want to delete inside the tree. After finding the requested node, deleteNode() has to take the placement of the node into consideration before deciding what to do. The deleteNode() function begins its real work as soon as it reaches the node that contains the key that is going to be deleted – once again, the searching happens recursively. An important part of the deleteNode() implementation is the else block, because this is where the action happens. The code examines whether the node that is going to be deleted has two children, one child or none, because the actions that will be taken depend on the number of children the node has. The reason that deleteNode() returns a pointer to a node structure is that the root node of a binary tree might change after deleting a node.
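The logic just described can be sketched in C. This is not the code from deleteNode.c, just one reasonable implementation of the same idea, with a minimal insertNode() included so the sketch is self-contained:

```c
#include <stdlib.h>

struct node {
    int val;
    struct node *left;
    struct node *right;
};

struct node *insertNode(struct node *node, int val)
{
    if (node == NULL) {
        struct node *temp = (struct node *) malloc(sizeof(struct node));
        temp->val = val;
        temp->left = temp->right = NULL;
        return temp;
    }
    if (val < node->val)
        node->left = insertNode(node->left, val);
    else if (val > node->val)
        node->right = insertNode(node->right, val);
    return node;
}

/* Delete key from the tree and return the (possibly new) root. */
struct node *deleteNode(struct node *root, int key)
{
    if (root == NULL)
        return NULL;                       /* key not found */
    if (key < root->val) {
        root->left = deleteNode(root->left, key);
    } else if (key > root->val) {
        root->right = deleteNode(root->right, key);
    } else if (root->left == NULL) {       /* leaf, or right child only */
        struct node *temp = root->right;
        free(root);
        return temp;
    } else if (root->right == NULL) {      /* left child only */
        struct node *temp = root->left;
        free(root);
        return temp;
    } else {                               /* two children */
        struct node *min = root->right;    /* find the in-order successor */
        while (min->left != NULL)
            min = min->left;
        root->val = min->val;              /* copy the successor's key... */
        root->right = deleteNode(root->right, min->val); /* ...then remove it */
    }
    return root;
}
```

Note how the two-children case reduces the problem to deleting the in-order successor, which by construction has at most one child.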

Balancing a binary tree

There is no point in sorting a binary tree because trees are sorted by default in their own way. What is very important, however, is to have your binary trees balanced to speed up their operations. The most crucial question is deciding whether an existing binary tree is already balanced or not. This means that you will have to traverse it, find both its minimum and maximum lengths, then compare them before deciding whether your tree needs to be balanced or not. As this might be a difficult task, the presented C program takes a different but easier to understand approach: it can create a balanced tree if you give it a sorted array of elements. Sorting values is a relatively easy task, so this technique will work pretty well in most situations. The trick that balanceTree.c uses to generate a balanced tree is that it puts the element in the middle of the sorted array in the root node – it also does the same for all sub-trees. Open up the file and take a look at the C code of the createBBT() function – once again, the createBBT() function works recursively. Executing balanceTree.c will produce the following output:

The given sorted array is
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
The preorder traversal of binary search tree is as follows
8->4->2->1->3->6->5->7->12->10->9->11->14->13->15->
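The middle-element trick is easy to sketch. The following is an illustrative reimplementation, not the exact code from balanceTree.c, but it builds the same balanced tree from the sorted array and its preorder traversal matches the output shown above:

```c
#include <stdio.h>
#include <stdlib.h>

struct node {
    int val;
    struct node *left;
    struct node *right;
};

/* Build a balanced BST from a sorted array by putting the middle
 * element in the root, then doing the same for both halves. This is
 * a sketch of the createBBT() idea, not the code from balanceTree.c. */
struct node *createBBT(int a[], int lo, int hi)
{
    if (lo > hi)
        return NULL;
    int mid = (lo + hi) / 2;
    struct node *root = (struct node *) malloc(sizeof(struct node));
    root->val = a[mid];
    root->left = createBBT(a, lo, mid - 1);
    root->right = createBBT(a, mid + 1, hi);
    return root;
}

void preorder(struct node *root)
{
    if (root == NULL)
        return;
    printf("%d->", root->val);
    preorder(root->left);
    preorder(root->right);
}
```

Calling preorder(createBBT(a, 0, 14)) on the sorted array 1 to 15 prints 8->4->2->1->3->6->5->7->12->10->9->11->14->13->15-> – exactly the traversal above.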

Advantages and disadvantages

Despite their usefulness, trees are not a catch-all solution and should certainly not be used without consideration. You cannot beat a tree when you want to represent hierarchical data – this is a rare case where balancing a tree will destroy its usefulness. They can also be used for building efficient models for real life problems. Trees are extensively used in compilers when parsing computer programs – this is another case where balancing a tree is not good. Finally, trees can be traversed very efficiently. However, it’s not all good news. A major disadvantage of binary trees is that the shape of the tree depends on the order in which its elements were inserted. If the keys of a tree are long or complex, then inserting or searching for an element might be slow due to the large number of comparisons required. If a tree is not balanced, the behaviour of the tree is unpredictable. Although you can create a linked list or an array quicker than a binary tree, the flexibility that a binary tree offers in searching operations might be worth the extra overhead. Next issue, we will create a simple programming language where you will see trees in action, so look out for that!


From the makers of

The Python Book

Discover this exciting and versatile programming language with the new edition of The Python Book. You’ll find a complete guide for new programmers, great projects designed to build your knowledge and tips on how to use Python with the Raspberry Pi – everything you need to master Python.

Also available…

A world of content at your fingertips Whether you love gaming, history, animals, photography, Photoshop, sci-fi or anything in between, every magazine and bookazine from Imagine Publishing is packed with expert advice and fascinating facts.


Print edition available at Digital edition available at



Raspberry Pi 60

“The trick here is to find a games controller that has enough space inside for the Zero. We’re going to be using the original Xbox controller”

Contents 60

Hack together an Xbox Zero arcade


Self-driving RC car


Motion-tracking with Python


Set up a Ras Pi 2 Docker Swarm


Minesweeper in Minecraft




Xbox Zero


ARCADE Let’s make a self-contained arcade machine out of old bits of kit, a spare Xbox pad and a Pi Zero! The Raspberry Pi Zero is tiny, ridiculously tiny. It’s a wonderful, impressive piece of tech, but we can’t help wondering – because we’re terribly serious adults, honestly – where can we stuff the Zero for maximum fun? It’s small enough to be hidden in a variety of household objects in order to enhance their capabilities. Whatever you can find to fit it in, you can turn into some kind of smart machine. Okay, wiring it in to your vacuum cleaner in the hope of making an ersatz Roomba is probably a little tricky – but there are all sorts of electronic gadgets lying around your home which can be brought back to life with a little bit of effort. Take old game controllers. If you’re anything like us you’ve probably got a couple of boxes full of old computer

equipment you just can’t bear to throw away – an Atari Jaguar that hasn’t been touched since the 90s, a Sega Dreamcast which you’re sure you’ll plug in again one day, an old Xbox that lies languishing since you picked up something bigger and better. Turns out it actually was useful to keep them around – it’s time to bring these old systems back to life! We’re going to show you how to gut an old videogames controller, replace its innards with a Raspberry Pi Zero, and then load it up with a treasure trove of retro games. From start to finish, this project should take you under an hour to complete – and then you’ll be able to load up the ROMs you legally own on your new console and enjoy them from the comfort of your sofa.

Terence Eden

has been playing with Raspberry Pis since just before they were sold to the public. If you’d like to see what he’s building next, visit his blog at or follow him on Twitter @edent



What you’ll need
■ Original Xbox controller
■ Micro USB OTG cable
■ Cross-head screwdriver
■ Wire cutters
■ Raspberry Pi Zero
■ Craft knife
■ Isopropyl alcohol swabs
■ Electrical tape
■ 2A micro USB power supply
■ Micro SD card
■ Mini HDMI cable/adapter


Lots of little bits When taking apart electronics, keep a few small bowls or containers nearby. Put each type of screw in its own separate container so you don’t accidentally mix them up. With the Xbox controllers, you’ll find that the buttons especially have a habit of rolling away from you, so stash them somewhere safe as well. Keep track of any random bits of plastic or rubber which may be useful in re-assembling.


Gather your equipment



While the Zero doesn’t take up much space, videogame controllers are often stuffed full of delicate electronics. The trick here is to find a games controller which has enough space inside for the Zero. We’re going to be using the original Xbox controller, nicknamed The Duke. If you don’t have one to hand, they can be picked up for a couple of quid from most second-hand electronics shops. If you can’t find one, you can use newer USB game pads that are designed to look like controllers for classic systems like the SNES and Mega Drive. Make sure you choose a controller that has enough buttons for the games you want to play – some classic fighting games, for example, really can’t be played on a two-button NES controller!

Working with electrical items and sharp objects can be dangerous. You risk damaging yourself or, worse, breaking your toys. Please ensure everything is unplugged from electrical supplies before attempting this project. As with any electronics projects, you should also take care to fully ground yourself before playing around with sensitive components – the static electricity from your body can ruin them. Anti-static wrist straps or a few taps on a radiator should do the trick.


The build



See the image to the right – this is the controller we’ll be working with. It has dual joysticks, six buttons, a D-Pad and two triggers – it’s compatible with most retro games systems.

If you’re using a different controller, double-check that the Pi is likely to fit inside before you crack it open. As you can see, the Pi nestles neatly between the triggers of this controller – the original Xbox controller is one of the largest.



The controller is held together by half a dozen crosshead screws. Be careful when opening the case as the buttons and rubber contacts are loose within the controller – they will spill everywhere!




Gently does it


Cut to fit

With the shell removed, you should be able to undo the screws holding the main circuit board in place. There are also a couple of connectors which power the vibration motors – gently unclip them in order to completely remove the board. You might find it easier to use a pair of pliers for this – just be very gentle as you pull!

You can see for yourself just how well the Pi fits here; it can be squeezed under the memory card slot. If you want to hold it firmly in place, use some BluTak as a temporary solution. Also, if you’re using an older controller, it’s worth giving it a bit of a clean. Remove the rubber contacts and gently swab under them using the isopropyl alcohol swabs.

Depending on the model of controller, you may find that the Pi blocks one of the internal plastic struts. The plastic is soft enough that a craft knife will easily cut it down to size, though. Start with small strokes, shaving off a tiny bit at a time until you have enough room. Make sure the plastic dust is cleaned out before you reassemble the controller. If you have a can of compressed air, you can use it to easily blow away the shavings.


Connecting it up

If you’re using a controller that has a regular USB port on it, you can just plug it into the Pi via a USB OTG converter. If you’re using the original Xbox Controller, it’s slightly tricky. Microsoft, in its infinite wisdom, has decided that the original Xbox should use USB – but with an incompatible plug design. This means, in order to connect the controller to the Pi, we need to do some wire stripping. Fun! The wiring inside the Xbox controller’s cable uses bog-standard USB wiring colours, so once you’ve chopped the plugs off the controller and the OTG cable, it’s pretty straightforward to connect them together.

Strip the wires by a couple of centimetres and then connect them together. You should have Red, Green, White, and Black. The Xbox cable also has a Yellow wire which you can ignore. It is worth noting at this point that you need to be sure that you have a USB data transfer cable and not just a plain old power cable – the former will look like the photo below, but power cables will be missing the two data wires. With the wires stripped, we temporarily used regular sticky tape to make the connections between the OTG cable and the controller – for a more permanent installation, you can use electrical tape or simply solder the wires together.

USB Wiring

The older USB 1.0 and 2.0 cables have fewer wires than the newer 3.0 – here’s a quick guide

White This wire is one of two used for differential data signals. The white wire is the positive component and the green wire is the negative one

Red The red wire is one of two handling power. This one is a 5V power line that provides voltage to the circuit

Green The other data wire – the negative component. The circuit can get the difference between the two data signals rather than between a single wire and ground – it’s a more effective transmission

Yellow USB mini/micro cables will also have an additional wire that isn’t required for our particular project

Black This is the other wire associated with the power – the ground wire, which is the counterpart to the 5V wire








One thing to note: you’ll need to insulate the bottom of the Pi against all the contacts on the controller. For this quick hack, we’ve used some of the cardboard packaging – but any nonconductive material will do. From there, it’s as simple as screwing the case back together. Make sure that the controller’s buttons and joysticks don’t slip out of alignment. Keep track of which coloured buttons go where and you should be fine.


Wiring up

The Pi will need three wires connected to it in order to work. The controller cable needs to be connected to the USB OTG port. An HDMI cable goes from your TV to the mini HDMI port on the Pi. Finally, a 2A micro USB power supply needs to be plugged into the Pi’s power socket. We’ve used a standard mobile phone charger, but you can use a USB battery pack if you want to reduce the number of wires trailing around your room.

A word about power

You might be wondering whether it’s possible to get the HDMI cable to supply power from the TV to the controller. Sadly, the HDMI specification doesn’t permit power to flow in that direction. If your TV has a USB socket on it, you could use that to supply the Pi with power – just make sure the socket itself is powerful enough. The Pi needs at least 1 Amp, and ideally 2 Amps. Many TVs will only output 500mA, which isn’t enough to run the Pi.

Let’s play!

Okay! It’s looking good – you’re nearly ready to play. The next step is to get some emulation software on this thing!


The right controller Second-hand stores like CEX or GAME often have some older, obsolete consoles and accessories out of public view, as they aren’t particularly high-selling these days. It’s worth asking the staff what they have if you can’t see what you need on display. Some charity shops also have old consoles for sale. Failing that, local car boot sales or simply asking your gamer friends are both excellent ways to grab inexpensive controllers for all sorts of consoles.


Installing and configuring the RetroPie emulator What good is hardware without software? It’s not as difficult as you might think to run retro software through an emulator Left RetroPie can be restored straight to SD if you don’t need Raspbian as well

Right, you’ve got your Pi safely ensconced in a controller – all you need now are some videogames to play! We’re going to be using the RetroPie emulator. By the end of this tutorial, you’ll be able to play games directly from your Raspberry Pi, provided that you legally own the ROM files. It’s as easy as installing the software onto your SD card and then copying across any games that you want to play. If you’ve already got Raspbian installed on your Pi, you can install RetroPie alongside it – or you can dedicate the whole disk to the software.


Bottom left If you see a splash screen like this when you power on again, the installation worked!

Install RetroPie inside Raspbian

If you’ve already started using your Pi and want to add RetroPie to it, you’ll need to install the software from GitHub. The latest instructions can be found at RetroPie-Setup. Open up a terminal on your Pi (for example, by SSHing into it from another machine, or by logging in directly to the Pi). Update your repositories and make sure the latest version of the Git software is installed:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install git

Download the latest version of the RetroPie setup script:

git clone --depth=1 RetroPie-Setup.git

If you’re security-conscious, it’s a good idea to check what the script does before running it. Once you’re ready, you can install it by changing into the correct directory and executing the script:

cd RetroPie-Setup
sudo ./

The script will take several minutes to run, depending on the speed of your internet connection. It may ask you for permission to install extra software that is needed – you should allow this. Once fully installed, you will need to reboot your Pi:

sudo reboot

RetroPie can now be run by typing emulationstation. We’ll come on to configuring your setup in just a moment.


Install RetroPie onto a blank SD card

If you want your Raspberry Pi Zero to be used solely as a RetroPie machine, this is the choice for you. Be warned: it will completely wipe a micro SD card, so if you’re using one you’ve used before, make sure you back up any important data before starting. Download the latest version of the software from Make sure you download the correct SD card image for your machine – the image for the Raspberry Pi 2 is not compatible with the Raspberry Pi Zero. Download the Standard version (not the BerryBoot version). The download is an 800MB .gz file. Unzip it and extract the .img file, which will be around 2.6GB. You’ll now need to write this image file onto your micro SD card. This is done in the same way that you would install a normal Raspberry Pi image onto a card. There are slightly different instructions for Linux, Mac and Windows.





Use the Disk Manager to select the image file and the micro SD card. Follow the on-screen instructions until the image has been fully written to the card.

Download the ApplePi Baker from www.tweaking4all.com/hardware/raspberry-pi/macosx-apple-pi-baker. Once it is installed, you can select the image file and the micro SD card. Follow the on-screen instructions.

What is an emulator? An emulator is software which lets your computer pretend to be a different sort of computer. It will allow a Raspberry Pi Zero to run software originally designed for the Sega Mega Drive, or Nintendo N64, old DOS-based PCs, etc. Emulators aren’t without their problems, though – it’s nearly impossible to perfectly recreate a games console in software. Keep in mind that older games may have bugs ranging from minor sound and graphical glitches to full-blown crashes.


Feature Where do I get ROMs? Many older games have, effectively, been abandoned. The original publishers are defunct and it’s not clear legally who owns the rights. There are several sites which claim to have permission from the original creators to distribute their games – but it’s not always easy to tell how legitimate they are. You should ensure that you either buy legitimate copies or download from organisations with the legal right to distribute them.



Download the Win32 DiskImager from http:// Once installed, you can select the image file and the micro SD card. Follow the on-screen instructions until the image has been fully written to the card.

Set up the disk

Right – you’re almost ready to play. Put the micro SD card into the Raspberry Pi Zero, hook up the controller USB cable and the HDMI cable. Finally, plug the Pi into the power. It should boot up automatically and, after a few seconds, you’ll be greeted with a configuration screen. RetroPie should automatically detect any connected USB game pads and step you through setting up the buttons. Once you’ve finished, you’ll be presented with a screen showing all the choices you made.

Before we get to playing any games, we need to make sure that RetroPie is able to use all the space on the micro SD card. This will allow you to store ROMs and save your games. Select “RetroPie” from the menu. You’ll be presented with several configuration options. Select “Raspberry Pi Configuration Tool RASPI-CONFIG”. You can come back here later if you want to change the default username and password; for now just use the controller to select “Expand Filesystem”. Once highlighted, press right until the “Select” button is highlighted. Click on it. After a short delay, you will see a success screen – press OK and you’ll be taken back to the configuration screen. Press right until “Finish” is highlighted, then click on it. You should now reboot your Raspberry Pi.

Adding ROMs

The final step is adding new ROMs. Once you’ve legally purchased and downloaded ROMs from the internet, you’ll need to copy them onto the micro SD card. ROMs are stored in a separate folder for each system. So, for example, you need to place your Sega Master System ROMs in ~/RetroPie/roms/mastersystem/. Once you’ve installed ROMs, the systems will appear in the main menu. You’re now ready to play!

Once booted, you’ll see a menu with all the available games systems on it. Some emulators will only show up once game ROMs for that system are installed. Scroll until you find the game you want to play – then let rip! You can always return back to RetroPie if you want to change any of the configuration options, or update the software. And that’s all there is to it! Time to sit back and play some games. If you want to find out more about the RetroPie software, visit

Emulation on Raspberry Pi and Pi 2



Camera The camera module is the focal point for object detection. Input data is collated then put through a client program running on the Pi.

Ultrasonic sensor This senses angles and surface conditions to determine the stopping distance relative to an oncoming object.

Right Once an object is detected, the ultrasonic sensor relays this information and helps the RC car come to a stop Far right The Arduino board simulates button presses, helping the RC car drive on its own


Components list ■ Raspberry Pi B+ ■ Arduino ■ Camera module ■ HC-SR04 ultrasonic sensor ■ OpenCV

Arduino This simulates the button presses of the RC car controller. Four pins connect to pins on the controller, for forward, reverse, left and right.

My Pi project

Self-driving RC car

Zheng Wang turns the tables on Google with his very own fully-functioning self-driving car Where did the idea to develop a self-driving car come from? Believe it or not, I actually did this for a school project. A lot of my interests centre around machine learning, so I decided to do something that heavily involves machine learning and the concepts that surround it. I did some research online and found a very inspiring self-driving car project made by David Singleton, which showcased what he was able to achieve with just an Arduino board and a few other items. I was amazed to see that the RC car can drive itself along the track without aid and wondered if I could replicate a similar project. After that, I took out my Raspberry Pi and made up my mind to attempt to build my own self-driving RC car that could do even more. The aim was to include things like front collision avoidance, stop sign and traffic light detection. It took me a while to develop the project to anything more than an idea, just because there are so many factors that needed to be considered.

Could you give us an overview of how the self-driving system works? The crux of the system consists of three subsystems that work seamlessly in sync together. These systems consist of an input unit for controlling the camera and ultrasonic sensor, a processing unit and also the main RC car control unit. Firstly, live video and ultrasonic sensor data are streamed directly from the Raspberry Pi to the computer via a strong Wi-Fi connection. I was quick to recognise that it was imperative to create as little latency as possible in the streaming, so in order to achieve these goals, the video resolution is dramatically scaled down to QVGA (320×240). It provides that smooth streaming experience that I was after. The next step is for the colour images received on the computer to be converted into greyscale and then fed into a pretrained neural network to make predictions for the car; so whether it should go straight ahead, or make a left or right turn at the correct moment. These same images are used to calculate the stopping distance between the car and the stop signs, while the Raspberry Pi alerts the system of the distance to an upcoming obstacle. The object detection in this project is primarily learning based. The final part of the system consists of outputs from the artificial neural network that are sent to the Arduino via USB, which is connected directly to the RC controller. The Arduino reads the commands and writes out LOW or HIGH signals, simulating button-press actions to drive the RC car. With so many sensors and data feeds consistently taking place, there was a lot of initial trial and error involved, but it didn't take me an overly long period of time to get the project running completely independently.

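As a rough illustration of the final stage Zheng describes, turning the network's outputs into one of a handful of steering commands, here is a hedged, pure-Python sketch; the function name and the class ordering are assumptions for this example, not Zheng's actual code.

```python
# Illustrative only: map a neural network's three output activations to a
# steering command by picking the strongest one (an argmax). The class
# order ["left", "straight", "right"] is an assumption for this sketch.
def predict_command(outputs):
    commands = ["left", "straight", "right"]
    return commands[outputs.index(max(outputs))]

predict_command([0.1, 0.7, 0.2])  # -> "straight"
```

In the real system a command like this would be sent over USB to the Arduino, which then pulls the matching controller pin HIGH or LOW.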
What sort of role did the Raspberry Pi play in the grand scheme of things for your self-driving car? The main benefit of using the Raspberry Pi was that it’s the perfect piece of apparatus to help collect input data, which is a massive part of this project. With the Raspberry Pi in place, I connected a Pi camera module and an ultrasonic sensor, which work in tandem to help the Pi collate its data. There are also two client programs running on the Raspberry Pi that help with the streaming side of things. One is solely for video streaming and the other is for the data streaming from the ultrasonic sensor. To be honest, I didn’t stray too far from the official picamera documentation when using it, as all the guidelines for video streaming are all in there. When I needed some

help with measuring distance with the ultrasonic sensor, there were some handy tutorials on the web for fellow enthusiasts to follow and there's other reference material all over the place. Can you tell us more about the ultrasonic sensor? Can it detect collisions at a full 360 degrees? For this project, I chose to use the HC-SR04 ultrasonic sensor, as it's one of the most cost-effective and user-friendly pieces of kit on the market. It can be a bit fiddly to set up from scratch, but as I mentioned previously, I was able to source help from the internet whenever I had a problem that I needed solving. For this sensor in particular, the manual lists its best detection as being within 30 degrees, which would seem about right based on the tests that I have run with it. There are numerous sensors on the market, so a complete 360-degree detection seems like something that would be plausible. How do you see yourself taking this project further? Perhaps you'll want to scale up to a bigger model? There are a lot of areas that I'd like to explore further to really take my self-driving car to the next level. For one, I'd like to eliminate the use of the ultrasonic sensor and instead implement a stereo camera for measuring the distances. The results are far more accurate than what the ultrasonic sensor can offer. If I get into the situation where I've got more spare time on my hands, perhaps I'll look to add new behavioural features. It would be intriguing to see if I can implement things like lane changing and overtaking into the project. Outside of this project, I'm not working on any other Raspberry Pi projects currently, but I'm always on the hunt for new inspiration – and the Pi is an amazing piece of kit that I love to work with.

Zheng Wang has an academic background in electrical engineering and has put what he has learned into action with his Pi car project.

Like it?

Zheng has documented the entire process over on his blog at bit.ly/22Zh5uq. For those who want to replicate it for themselves, Zheng has also outlined the equations needed to get the object detection working correctly.

Further reading

David Singleton was where Zheng took his inspiration from, and we recommend you go check out his guide on how he built a neural network RC car at blog.davidsingleton.org/nnrccar. The process is a little different from Zheng's, and it's certainly an interesting read.


Python column

Motion tracking with your Pi This month, you will learn how to track motions with your Raspberry Pi, a camera and some Python code

Joey Bernard

is a true renaissance man, splitting his time between building furniture, helping researchers with scientific computing problems and writing Android apps

Why Python? It’s the official language of the Raspberry Pi. Read the docs at

In a previous article, we looked at how you can capture images using a camera and a Raspberry Pi. This let you include image capture functionality within your own Python program, but there is so much more you can do once you add vision to your code. This month, we will look at how you can add motion detection to your Python program. This kind of advanced image processing is extremely difficult to do, so we will definitely be building on the hard work of others. Specifically, we will be using the excellent OpenCV Python package. This package is constantly being improved, with more functionality being added with every update. The first thing you will need to do is install the various Python packages that you will need to talk to the camera and use OpenCV. Installing the packages can be done with:

sudo apt-get install python-picamera python-opencv

This will also install all of the required dependencies. This project will assume that you will use the camera module for the Raspberry Pi. Check out the boxout to the right for other options if you want to try using a USB webcam. To talk to the camera module, you need to import the PiCamera class from the picamera Python module. You will also need the PiRGBArray class so that you can store the raw data from the camera. To talk to the camera, you instantiate a new instance of the PiCamera class. You can then set the resolution and frame rate before you start capturing images.

from picamera import PiCamera
from picamera import PiRGBArray

camera = PiCamera()
camera.resolution = tuple([640, 480])
camera.framerate = 16
rawImage = PiRGBArray(camera, tuple([640, 480]))

You now have your camera ready, and a memory buffer available to store the


captured images in. There are several different methods that you can use to do motion tracking. One of the simpler ones is to try and notice when something within the image field changes. There is a Python module, called imutils, that provides several basic image processing functions that are useful in the preprocessing steps. There is no package for it within Raspbian, however, so you will want to install it with:

sudo pip install imutils

To look at image changes, we need to see what the background image looks like. You can take a series of images and look at the average of them to get an idea of the general background. Then, if a new image differs from the averaged background, we know that something has changed. This change is most probably due to something moving within the field of the image. To simplify the process, we will greyscale the image and then blur it slightly to get rid of any high-contrast regions. You will then want to simply run a continuous loop, pulling an image from the camera and running this process:

import imutils
import cv2

for f in camera.capture_continuous(rawImage, format='bgr', use_video_port=True):
    frame = imutils.resize(f.array, width=500)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

Here we start using the OpenCV functions to handle the image processing steps. You may have noticed that we are actually working with the array representation of the raw image data for the captured frame. There is no meta-data wrapping this image information, so it is your responsibility to remember what you are working with. The next step within the loop is to check whether we have an averaged image yet, and to initialise it if

we don’t. So the first time through the loop, the following code will execute

if avg is None:
    avg = gray.copy().astype("float")
    rawImage.truncate(0)
    continue

Remember to initialise avg to None before entering the loop. Now that we have an averaged image, we can add every subsequent captured image to the weighted average. We also need to find how different the current image is from this weighted average.
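The arithmetic behind this weighted average is simple enough to sketch in plain Python. This is a per-pixel illustration of what cv2.accumulateWeighted and cv2.absdiff do, not the OpenCV implementation itself.

```python
# cv2.accumulateWeighted(src, dst, alpha) maintains, per pixel:
#     dst = alpha * src + (1 - alpha) * dst
# and cv2.absdiff then measures how far the new frame strays from it.
def accumulate_weighted(frame, avg, alpha):
    return [alpha * f + (1 - alpha) * a for f, a in zip(frame, avg)]

avg = [100.0, 100.0, 100.0]    # settled background (three pixels)
frame = [100.0, 100.0, 200.0]  # one pixel suddenly changes
avg = accumulate_weighted(frame, avg, 0.5)
diff = [abs(f - a) for f, a in zip(frame, avg)]  # like cv2.absdiff
```

Unchanged pixels produce a difference of zero, while a genuine change only bleeds into the average gradually, which is what suppresses slow lighting drift.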

cv2.accumulateWeighted(gray, avg, 0.5)
imgDiff = cv2.absdiff(gray, cv2.convertScaleAbs(avg))

By using this weighted average, we should be able to deal with false positive hits due to environment changes like fluctuations in the lighting. Now that you have what is different from the average, what can you do with it? How do you decide how different it is from the average? We need to set some threshold difference that signifies a "real" difference in the image from the average. If you then dilate this thresholded image, you can apply the findContours function to identify the contours of the objects that are different from the calculated averaged background:

imgThresh = cv2.threshold(imgDiff, 5, 255, cv2.THRESH_BINARY)[1]
imgThresh = cv2.dilate(imgThresh, None, iterations=2)
(conts, _) = cv2.findContours(imgThresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

This dumps all of the contours from the current image into the list 'conts'. You probably aren't very interested in tiny objects within the list of contours. These might simply be artifacts within the image data. You should loop through each of these and ignore any that are below some area limit. You probably want to highlight any remaining object contours by placing a bounding box around them. Luckily,


OpenCV provides a function that will give the corner coordinates and the width and height. You can then draw a box on the image using this information:

for c in conts:
    if cv2.contourArea(c) < 5000:
        continue
    (x, y, w, h) = cv2.boundingRect(c)
    cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 0, 255), 2)

You should now have an image with all of the moving objects highlighted by red bounding boxes (OpenCV colours are BGR tuples, so (0, 0, 255) is red). What can you do with these annotated images, though? If you have a graphical environment available, you can display these results directly on the screen. OpenCV includes several functions to display the results of your image analysis. The simplest is to use imshow(), which will pop up a window to display the image and also add a title.

cv2.imshow("Motion detected", frame)

If you aren't monitoring the results of your motion detector in real time, you probably still want to capture images when something moves in the environment. Luckily, OpenCV also includes a pretty exhaustive list of IO functions. You will probably want to timestamp these images first, though. Using the Python module datetime and the function putText(), you can get the current time and date and add it to the image itself with:

import datetime

ts = datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p")
cv2.putText(frame, ts, (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

Now you have an image with the current time and date on it, and the parts of the image that show up as having movement bounded in red boxes. You can use the OpenCV IO functions to write out these images so that you can check them out later. The following code is an example:

cv2.imwrite("filename.jpg", frame)

The function imwrite() uses the file name extension in order to figure out what format to use when writing out the image. It can handle JPEG, PNG, PBM, PGM, PPM and TIFF. If the particular format you want to use also takes options, you can include them in the call to imwrite() as well. For example, you can set the JPEG quality by including CV_IMWRITE_JPEG_QUALITY and then setting it to some value between 0 and 100. Everything we have looked at has been focused on the idea of analysing the images in real time, and this is great if you can put the Raspberry Pi in the same location as the camera. If you can't fit it in, though, you can still use the ideas here to post-process the video recorded by your micro-camera. You can use the same OpenCV IO functions to load the video file with:

camera = cv2.VideoCapture("filename.avi")

You can then run through the same process to analyse each of the image frames within the video file. The VideoCapture() function can also read in a series of image files if your camera is simply grabbing a series of still images rather than a video. Once your program finishes, you need to remember to clean up after yourself. You should release the camera that you were using, and if you had OpenCV display any images on the desktop then you should clean those up, too.

camera.release()
cv2.destroyAllWindows()

You should have enough information now to be able to add some basic motion detection to your own Python programs. If you explore the OpenCV documentation, you will find many other, more complex, image processing and analysing tools that are available to play with. Also, more functionality is constantly being added to the OpenCV project.

What about webcams? In the main article, we have been using the Raspberry Pi camera module that plugs into the Pi's camera interface. But what if you don't have easy access to one of these? Almost everyone has an old webcam sitting around the house somewhere, and the Raspberry Pi has a perfectly useful USB port. The image quality and frames-per-second count are not as good as what you can get with the actual Pi module, though. The key is getting the image data off the camera in the format that the OpenCV image analysis functions are expecting. The VideoCapture() function can not only take a video file name to read in, but can also take device IDs for cameras attached to the Raspberry Pi. Assuming that you only have one camera attached, you can connect to it with:

camera = cv2.VideoCapture(0)

Making sure that your USB webcam is correctly connected and that Linux can properly talk to it is the usual place where you may run into issues. But if everything works the way it should, you can use all of the ideas from the main body of the article to use it for motion detection. While OpenCV has some capabilities to interact with the user, you may want to use some other framework to handle this. A good framework that is also very fast is pygame. You can use OpenCV to handle all of the image processing steps and build your user interface with pygame. The only issue is that the internal formats used by OpenCV and pygame to store image data are different, so you will need to do a translation back and forth. You only really need to worry about translating from OpenCV to pygame, since that is the direction that information will flow. There are a few helper functions that you can use to convert the OpenCV image to a string format, and then a pygame function to import this string into a pygame image. As an example, you could use something like

pygameImg = pygame.image.frombuffer(cv2Img.tostring(), cv2Img.shape[1::-1], "RGB")

This takes images from OpenCV (stored in cv2Img) into a pygame format (stored in pygameImg). If you have to, you can do a similar transformation using strings back from pygame to OpenCV format.



Build a Pi cluster with Docker Swarm

Combine the power and resources of your Raspberry Pis by building a Swarm with Docker

Alex Ellis

is a professional developer who got inspired by Linux and the Raspberry Pi and has never looked back. He is always setting up sensors, tinkering with robots, writing tutorials or simply cutting code.

Docker is a framework and toolchain used to configure, build and deploy containers on Linux. Containers provide a means to package up an application and all its dependencies into a single unit. This makes them easy to share and ship anywhere, giving a lightweight and repeatable environment. Each application runs in its own isolated space sharing the host’s kernel and resources, in contrast to a virtual machine which needs to ship with a full operating system. A Docker container can be started or stopped within a second, and can scale to large numbers while having minimum overhead on the host’s resources. The Docker community has built out a clustering solution called Swarm which, as of version 1.0, is claimed to be “production ready”. Our single Raspberry Pi has 1GB RAM and four cores, but given five boards we have 20 cores and 5GB RAM available. Swarm can help us distribute our load across them. Get ready to install Arch Linux, compile Docker 1.9.1 from source, build some images and then start up your own swarm for the first time.

What you’ll need ■ Github repository

■ Arch Linux for ARM



Install Arch Linux to an SD card


Configure the users

Go to Arch Linux ARM's landing page for the Pi 2 and click the Installation tab. You will need to carry out some manual steps on a Linux computer. Follow the instructions to first download the base system tar.gz archive. Next, partition the card and create vfat (boot) and ext4 (root) filesystems. Then, expand the base system onto the card. Finally, unmount the partitions. This will take a while as the card finishes syncing.

Once the Pi has booted up you can log in with a keyboard as root/root and then change the password. You may also want to remove the standard user account called “alarm” and create your own. Here we’ve used “lud” as our account name:

# passwd root
# useradd lud -m -s /bin/bash -G wheel
# passwd lud
# userdel alarm

Pi cluster

Left Arch Linux is an excellent choice for projects that need a lightweight, bleeding-edge software base


Set a static IP address

Now set a static IP address so you can easily connect to each Pi without any guesswork. The OS uses systemd for service configuration. Edit the network configuration file at: /etc/systemd/network/ and then reboot:

[Match]
Name=eth0

[Network]
Address=
Gateway=
DNS=
IPForward=ipv4

If you would prefer to move over to a laptop or PC, you can now connect via SSH. In our swarm there are five nodes, so the addresses range


Install tools and utilities

Arch Linux runs on a rolling-release model, so system upgrades are incremental and packages are bleeding-edge. We will use the pacman package manager to install some essentials and upgrade the system at the same time:

# pacman -Syu --noconfirm base-devel wget git sudo screen bridge-utils device-mapper apache


Enable sudo

Configure your new user for sudo access by editing the /etc/sudoers list and then removing the comment from the line below:

## Same thing without a password
# %wheel ALL=(ALL) NOPASSWD: ALL

This enables all users in the "wheel" group to use sudo. We configured our user's primary group as "wheel" in the earlier useradd command.

Arch Linux runs on a rolling-release model, which means system upgrades are incremental and packages are bleeding edge


Clone the article’s Git repository

We’ve put together a git repository containing some essential scripts, configuration and a pre-built version of the Docker Swarm for ARM. Log in as your regular user account and clone the repository from Github into your home directory:

# cd ~
# git clone


Install Docker 1.7.1

Docker 1.9.1 exists in the Arch Linux package system but is currently broken, so we will install the last working version and then compile it ourselves using the official build scripts:

# sudo pacman -U ~/docker-arm/pkg/docker-1:1.7.1-2-armv7h.pkg.tar.xz --noconfirm
# sudo cp ~/docker-arm/pkg/docker.service /usr/lib/systemd/system/docker.service
# sudo systemctl enable docker
# sudo systemctl start docker
# sudo usermod lud -aG docker
# sudo reboot

Arch Linux ARM One of the attractions of Arch Linux is that it ships as a minimal base system, leaving you to decide exactly which packages you need. The boot time is much quicker than Raspbian's, which has to appeal to a wider audience. The system runs a rolling release through the pacman tool, keeping all your packages up to date with the development community. Be aware that under this release model you can only install the latest version of a package.

Now log in again and check that the installation was successful:

# docker info

Now we will add an exclusion to /etc/pacman.conf to stop our changes being overwritten by system updates:

sudo ~/docker-arm/pkg/


Tutorial Raw materials Here we have focused on distributing a web application across a cluster, but the Pi is also perfect for including hardware and sensing at scale. The Pi has 4 USB ports, 40 GPIO pins, audio output and a camera interface. You have all the raw materials to do something really unique. Could you extend the expressredis4.x image to light up an LED when it is processing a request, perhaps?


Build Docker on Docker! Now we have the working version, we need to compile it:

# cd ~/docker-arm/images/docker-arm
# ./

For the next 30-60 minutes a Docker development image will be set up for us, the code will be built and patched for ARM, and will then be copied into a local folder on our Pi. When the script finishes running you should get a message as below:

*Created binary: bundles/1.9.1/binary/docker-1.9.1* If the output is successful then go ahead and install the changes:

# cd ~/docker-arm/images/docker-arm
# sudo ./
# sudo systemctl start docker


Build Docker Swarm image

There is an official Swarm image available in the public registry, but we cannot use this because it was built for x86_64 architecture – i.e. a regular PC. So let’s build our own image:

# cd ~/docker-arm/images/swarm-arm
# ./
# docker run alexellis2/swarm-arm --version


Additional nodes

At this point you can either duplicate the SD card or run through the instructions again on each Pi. With either method, /etc/hostname and the IP address need to be updated on all Pis. If duplicating cards then make sure you delete /etc/docker

Right Docker has fast become the industry standard for container tech


/key.json to avoid clashes in the swarm. There are a number of additional images in the repository – you can build them as you need them, or use the script (recommended). This could take a while to run.


Start the primary node

We are going to dedicate the first node to managing the swarm on port 4000 and handling service discovery through Consul on port 8500. Both services will be running on the local Docker instance.

# cd ~/docker-arm/images/consul-arm
# ./
# ~/docker-arm/script/
# ~/docker-arm/script/

If you built the consul-arm container earlier, you will see that it is much quicker this time around because Docker caches the steps, so only what changes between builds needs to be re-built.


Join the swarm

Connect to one of the nodes, i.e., and start the script. This will query the IP address of eth0 and then advertise that to consul and the swarm manager.

# ~/docker-arm/script/

You will now see the swarm agent running under docker ps. Type in docker logs join if you want to see its output. Repeat this step on each of the remaining nodes.

Now start an equal number of node.js containers linking them to the redis containers.

# docker run -p 3000:3000 -d \
    --label='node_redis' \
    --link redis_1:redis \
    expressredis4.x

Finally run the load balancer on the primary node:

# DOCKER_HOST="" docker run -d --name=balancer -p 80:80 nginx_dynamic


Query the swarm

Log into the primary node and run the swarm-arm image passing in the address of the consul service:

# docker run alexellis2/swarm-arm list consul://

To start using the docker command with the swarm itself, set the DOCKER_HOST environment variable to the address of the swarm manager:


Run Apache Bench

We’ll start Apache Bench with 10 concurrent threads and 1000 requests in total. We started our application on six swarm agents after setting up two additional Pis.

# ab -n 1000 -c 10
...
Concurrency Level: 10
Time taken for tests: 2.593 seconds
Requests per second: 385.65 [#/sec] (mean)
...

# export DOCKER_HOST=tcp://

Now find out how many pooled resources we have:

# docker info
...
Nodes: 4
...
CPUs: 20
Total Memory: 3.785 GiB


Example: distributed web application

Let’s now set up a distributed web application that increments a hit-counter in a Redis database every time we hit it. We will run several instances of this and use an Nginx load balance in front of them. We can also use Apache Bench to get some metrics. These containers need to be started in the correct order, starting with Redis, then Node and finally Nginx.


Start the redis and node containers

First start all the Redis containers, giving them names from redis_1 to redis_5:

# docker run -p 6379:6379 -d --name redis_1 alexellis2/redis-arm

Repeating the experiment with a single Pi gave only 88.06 requests per second and took 11.356 seconds in total. You could also try increasing the concurrency (-c) value to 100.
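From the two Apache Bench figures quoted above, the rough speedup of the six-agent swarm over a single Pi can be computed directly:

```python
# Figures from the article's two benchmark runs
cluster_rps = 385.65   # requests/sec across six swarm agents
single_rps = 88.06     # requests/sec on one Pi

speedup = cluster_rps / single_rps
print(round(speedup, 1))  # roughly 4.4x
```

The scaling is not perfectly linear, which is expected: the single Nginx balancer and the shared Redis containers become the bottleneck as agents are added.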


Direct the swarm from your PC

If you pull down the binary of the Docker client on its own, you can then use the DOCKER_HOST variable to point at your swarm manager, saving you from having to log into the Pis with SSH. Docker client binary releases can be found at https://docs.

# wget docker-1.9.1
# chmod +x docker-1.9.1
# export DOCKER_HOST=tcp://
# ./docker-1.9.1 info


Wrapping up

You can repeat the steps in the tutorial until you have enough swarm agents in your cluster. One of the benefits of clustering is the distribution of work across nodes, but Swarm also provides us with linking to handle coordination between nodes. To take this idea further in your own Raspberry Pi creations, why not connect some sensors through the GPIO pins and take advantage of the Pi's hardware capabilities?

Docker Compose Docker Compose is a tool that reads a YML file and links together containers transparently, enabling you to bring up a web service spanning more than one container and saving many keystrokes.

nodejs_1:
  image: nodecounter
  ports:
    - "3000"
  links:
    - redis_1
redis_1:
  image: redis
  ports:
    - "6379"
nginx_1:
  image: nginx
  links:
    - nodejs_1
  ports:
    - "80:80"



Create a Minecraft Minesweeper game Use your Raspberry Pi and Python knowledge to code a simple mini-game in Minecraft Dan Aldred

is a Raspberry Pi Certified Educator and a Lead School teacher for CAS. He recently led a winning team of the Astro Pi secondary school contest


You may remember or have even played the classic Minesweeper PC game that originally dates back to the 60s. Over the years it has been bundled with most operating systems, appeared on mobile phones, and even featured as a mini-game variation on Super Mario Bros. This project will walk you through how to create a simple version in Minecraft: it's Minecraft Minesweeper! You will code a program that sets out an arena of blocks and turns one of these blocks into a mine. To play the game, guide your player around the board. Each time you stand on a block you turn it to gold and collect points, but watch out for the mine as it will end the game and cover you in lava!


Update and install To update your Raspberry Pi, open the terminal and type:

sudo apt-get update
sudo apt-get upgrade

The new Raspberry Pi OS image already has Minecraft and Python installed. The Minecraft API, which enables you to interact with Minecraft using Python, is also pre-installed. If you are using an old OS version, it will be worth downloading and updating to either the new Jessie or Raspbian image downloadable here:


Left The safe blocks have been turned into gold – the rest are potential mines!


Switching to the shell

Importing the modules

Load up your preferred Python editor and start a new window. You need to import the following modules: import random to calculate and create the random location of the mine, and import time to add pauses and delays to the program. Next, add a further two lines of code: from mcpi import minecraft and mc = minecraft.Minecraft.create(). These create the program link between Minecraft and Python. The mc variable enables you to write “mc” instead of “minecraft.Minecraft.create()”.

import random
import time
from mcpi import minecraft

mc = minecraft.Minecraft.create()


Grow some flowers

Using Python to manipulate Minecraft is easy; create the program below to test it is working. Each block has its own ID number, and flowers are 38. The x, y, z = mc.player.getPos() line gets the player’s current position in the world and returns it as a set of coordinates: x, y, z. Now you know where you are standing in the world, blocks can be placed using mc.setBlock(x, y, z, flower). Save your program, open MC and create a new world.

flower = 38
while True:
    x, y, z = mc.player.getPos()
    mc.setBlock(x, y, z, flower)
    time.sleep(0.1)


Running the code

Reducing the size of the MC window will make it easier for you to see both the code and the program running; switching between both can be frustrating. The Tab key will release the keyboard and mouse from the MC window. Run the Python program and wait for it to load – as you walk around, you’ll drop flowers! Change the ID number in line 1 to change the block type, so instead of flowers, try planting gold, water or even melons.


Posting a message to the Minecraft world


Create the board

It is also possible to post messages to the Minecraft world. This is used later in the game to keep the player informed that the game has started and also of their current score. In your previous program add the following line of code under the flower = 38 line, making this line 2: mc.postToChat("I grew some flowers with code"). Now save and run the program by pressing F5 – you will see the message pop up. You can try changing your message, or move to the next step to start the game.

Switching between the Python shell and Minecraft window can be frustrating, especially as MC overlays the Python window. The best solution is to tile the two windows side by side across the screen. (Don't run MC full-screen, as the mouse coordinates are off.) Use the Tab key to release the keyboard and mouse from the MC window.

The game takes place on a board created where the player is currently standing, so it is advisable to fly into the air or find a space with flat terrain before running your final program. To create the board you need to find the player’s current location in the world using the code x, y, z = mc.player.getPos(). Then use the mc.setBlocks code in order to place the blocks which make up the board:

mc.setBlocks(x, y-1, z, x+20, y-1, z+20, 58)

The number 58 is the ID of the crafting table block. You can increase or decrease the size of the board by changing the +20. In the code example above, the board size is 20 x 20 blocks, which gives you a 400-block arena to play within.



Right Nothing says “game over” quite like a huge eruption of lava


Creating the mine

In the previous step you found the player's location on the board. This x, y, z data can be reused to place the mine on the board. The code mine = random.randrange(0, 11, 1) generates a random number between 0 and 10. Combine this with the player's current position – adding the random number to the x and z coordinates creates a random mine block on the board.

mine_x = int(x + mine)
mine_y = int(y - 1)
mine_z = int(z + mine)

Use setBlock to place the mine: mc.setBlock(mine_x, mine_y, mine_z, 58). Using y-1 ensures that the block is placed on the same level as the board and is therefore hidden. The number 58 is the block ID, which you can change if you wish to see where the mine is; this is useful for testing that the rest of the code is working correctly. Remember to change it back before you play!
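Note that the same random offset is added to both the x and z axes, which keeps the mine on the board's diagonal. A variation that covers the whole board, not the article's original code, is to draw the two offsets independently:

```python
import random

# Variation on the tutorial's mine placement: separate random offsets for
# the x and z axes let the mine land anywhere on the 20 x 20 board.
x, y, z = 0, 0, 0  # stand-ins for mc.player.getPos() in this sketch
mine_x = int(x + random.randrange(0, 21))
mine_y = int(y - 1)
mine_z = int(z + random.randrange(0, 21))
```

Either way, the rest of the game logic is unchanged, since it only compares the player's position against (mine_x, mine_y, mine_z).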


Create a score variable

Each second that you remain alive within the game, a point is added to your score. Create a variable to store the current score, setting it to a value of zero at the beginning of the game. Use the postToChat code to announce the score at the beginning of the game. Note that MC cannot print a value to chat, so the score is first converted into a string before it is displayed.
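The conversion is ordinary Python string building; a minimal sketch of the message that gets posted (the helper name is our own):

```python
# postToChat expects a string, so the integer score is converted with str()
# before being concatenated into the chat message.
def score_message(score):
    return "Score is " + str(score)
```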

Since the player’s coordinate position was used to build the original board and place the mine, you have to find the player’s position again and store it as new variables – x1, y1 and z1 – otherwise the board would shift around as the player moves.

while True:
    x1, y1, z1 = mc.player.getTilePos()


One point, please

Now that the player has moved one square they are awarded a point. This is a simple action of adding the value one to the existing score value. This is achieved using score = score + 1. Since it sits inside a loop, it will add one point each time the player moves.

    time.sleep(0.1)
    score = score + 1


The tension increases…

Once you have been awarded the point, the next stage of the game is to check whether the block you are standing on is a safe block or if it is the mine. This uses a conditional to compare the coordinates of the block beneath you – x1, y1-1, z1 – with the mine_x, mine_y, mine_z position of the mine. If they are equal then you are standing on the mine. In the next step you will code the explosion:

if (x1, y1-1, z1) == (mine_x, mine_y, mine_z):
    score = 0
    mc.postToChat("Score is " + str(score))
    time.sleep(10)
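The check itself is a plain tuple comparison, which can be tested without a server. A small sketch (the function name is our own, not the tutorial’s):

```python
# Compare the block under the player's feet (y - 1) with the mine's
# stored coordinates; equal tuples mean the player is on the mine.
def standing_on_mine(player_tile, mine_pos):
    x1, y1, z1 = player_tile
    return (x1, y1 - 1, z1) == mine_pos
```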


Check the player’s position on the board

So far you have created a board that includes a randomly placed mine the same colour as the board, so you can’t see it! Next, you need to check the player’s position on the board and see if they are standing on the mine. This uses a while loop to continually check that the player’s position is safe – no mine – or else it’s game over.



Setting the mine off

In the previous step a conditional checks whether you are standing on the mine or a safe block. If it is the mine then it will explode. To create this, use lava blocks, which will flow and engulf the player. You can use the mc.setBlocks code to set blocks between two points. Lava blocks are affected by gravity, so setting them higher than the player means that the lava flows down over the player.

mc.setBlocks(x-5, y+1, z-5, x+5, y+2, z+5, 10)

Other Minecraft hacks

If you enjoy programming and manipulating Minecraft then there are more great Raspberry Pi-based projects for you to check out. Our expert has a bunch of them at tecoed.html. The folks behind Adventures In Minecraft have some great guides over at stuffaboutcode.com/p/minecraft.html, as well.

Left Once finished, our mini-game uses the chat console to report your score


Game over

If you do stand on the mine, the game is over. Use the postToChat code to display a “Game Over” message in the Minecraft world.

mc.postToChat(“G A M E O V E R”)


Final score

The last part of the game is to give a score. This uses the score variable that you created in Step 8 and then uses the mc.postToChat code. Convert the score to a string first so that it can be printed on the screen. Since your turn has ended, add a break statement to end the loop and stop the code from running.


Safe block

But what if you missed the mine? The game continues, and you’ll need to know where you have previously been on the board. Use the code mc.setBlock(x1, y1-1, z1, 41) to change the block you are standing on into gold, or another material of your choice. In the code, the y position is y-1, which selects the block beneath the player’s feet.


Increment the score

As well as living to play another turn, you also gain a point. This is achieved by incrementing the score variable by one each time you turn the block gold and return to the beginning of the loop to check the status of the next block you step on. The postToChat call tells you that you have survived another move!

    score = score + 1
    mc.postToChat("You are safe")


Run the game

That completes the code for the program. Save it and then start a Minecraft game. Once the world has been created, run the Python program. Move back to the Minecraft window and you will see the board created in front of you. Watch out for that mine!

Full code listing

import random
import time
from mcpi import minecraft

mc = minecraft.Minecraft.create()

### Creates the board ###
mc.postToChat("Welcome to Minecraft MineSweeper")
x, y, z = mc.player.getPos()
mc.setBlocks(x, y-1, z, x+20, y-1, z+20, 58)

### Places the mine ###
mine = random.randrange(0, 11, 1)
mine_x = int(x+mine)
mine_y = int(y-1)
mine_z = int(z+mine)
mc.setBlock(mine_x, mine_y, mine_z, 58)

score = 0
mc.postToChat("Score is " + str(score))
time.sleep(5)

while True:
    ### Test if you are standing on the mine ###
    x1, y1, z1 = mc.player.getTilePos()
    # print(x1, y1, z1)  # test
    time.sleep(0.1)
    score = score + 1
    if (x1, y1-1, z1) == (mine_x, mine_y, mine_z):
        mc.setBlocks(x-5, y+1, z-5, x+5, y+2, z+5, 10)
        mc.postToChat("G A M E O V E R")
        mc.postToChat("Score is " + str(score))
        break
    else:
        mc.setBlock(x1, y1-1, z1, 41)

mc.postToChat("GAME OVER")
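Because the listing talks to a live Minecraft server, its logic is awkward to test. One approach – our own refactor, not part of the tutorial – is to pass the connection object into a function, so a small fake exposing the same three methods can drive the game loop (the time.sleep calls are dropped for speed):

```python
# Hypothetical refactor: the game loop takes any object exposing
# player.getTilePos(), postToChat() and setBlock().
def play(mc, mine_pos, max_turns=100):
    score = 0
    for _ in range(max_turns):
        x1, y1, z1 = mc.player.getTilePos()
        score += 1
        if (x1, y1 - 1, z1) == mine_pos:
            mc.postToChat("G A M E O V E R")
            mc.postToChat("Score is " + str(score))
            return score
        mc.setBlock(x1, y1 - 1, z1, 41)  # turn the safe block gold
    return score

class FakePlayer:
    """Replays a scripted walk instead of querying a real player."""
    def __init__(self, path):
        self._path = iter(path)
    def getTilePos(self):
        return next(self._path)

class FakeMinecraft:
    """Stand-in for mcpi's connection object, recording chat output."""
    def __init__(self, path):
        self.player = FakePlayer(path)
        self.chat = []
    def postToChat(self, message):
        self.chat.append(message)
    def setBlock(self, *args):
        pass  # block changes are ignored in tests
```

On the Pi you would call play(minecraft.Minecraft.create(), (mine_x, mine_y, mine_z)); in a test you pass a FakeMinecraft with a scripted walk instead.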


Special offer for readers in North America

6 issues FREE

FREE resource downloads in every issue

When you subscribe

The open source authority for professionals and developers

Order hotline +44 (0)1795 418661 Online at *Terms and conditions This is a US subscription offer. You will actually be charged £80 sterling for an annual subscription.

This is equivalent to $120 at the time of writing – exchange rate may vary. 6 free issues refers to the USA newsstand price of $16.99 for 13 issues being $220.87, compared with $120 for a subscription. Your subscription starts from the next available issue and will run for 13 issues. This offer expires 30 June 2016.



for this exclusive offer!

Group test | Genuino 101 | Tails 2.0 | Free software

Firefox Hello


Google Hangouts

KDE Telepathy


Video calling software

How do you prefer to make video calls to your friends and family? There’s already a choice of different tech for Linux – we find out the best options

Firefox Hello


Google Hangouts

KDE Telepathy

The Mozilla team rolled out Firefox Hello in version 34 of its flagship web browser. Since then, users have been able to chat with each other using text or make video calls with a few mouse clicks. Hello works via WebRTC (Real-Time Communication) and relies on servers belonging to the carrier Telefónica, with which Mozilla started partnering in 2014.

Skype continues to be the most recognised VoIP (Voice over IP) and chat solution in the world, and perhaps the most popular proprietary Linux app. Skype originally featured a hybrid peer-to-peer and client–server system, but since its acquisition by Microsoft in 2012 it has been powered by the supernode cluster controlled by Redmond.

Google launched Hangouts nearly three years ago as a successor to Google Talk, which was discontinued. Hangouts is a communication feature tightly integrated with Gmail. Hangouts has text and voice chats as well as video calls, and even allows pre-paid calls to landline phones. Google uses proprietary tech and processes all calls on its own servers.

There have been different attempts to bring video calls to the KDE desktop, with the first one registered in 2009 as KCall. Three years later, initial drafts came to life as KDE Telepathy – a feature-rich desktop IM client that supported VoIP and video chats. Nowadays, Telepathy can be found in almost any Linux distro under the ‘ktp’ name.







Video calling software


Firefox Hello

Is Firefox’s video calls feature Hello really as good as advertised?

It’s the heavy hitter, but it’s still in need of some vital performance tweaks

■ Hello is already baked into the Firefox browser – just click on the smiley icon in the top menu bar

■ Skype for Linux works faster than on Windows and has plenty of settings that you can change

Ease of setup

Ease of setup

All you have to do is click the chat button and then click on the Get started label. Firefox will generate a unique link that you should pass to your buddy in order to connect with them. If you have a Firefox account, things are even more convenient: once you’re signed in, you can add people to your contacts list and connect to them instantly without any links.

Getting Skype to work is trickier than Hello. Microsoft offers several builds of its proprietary app in the form of DEB and RPM packages, and a statically built tar.gz. All builds are 32-bit and haven’t been updated since summer 2014. Linux distros cannot include Skype directly due to licence restrictions, but there are guides for all major Linux flavours on the web.



Hello delivers a very robust performance with high-quality audio and video on any WebRTC-enabled browser (most modern browsers qualify). Mozilla also collects anonymous metrics from Firefox users and uses this information to work on browser performance.

Skype used to have a lot of glitches with early versions of Pulseaudio and V4L (Video for Linux) tools, but all these bugs are mostly gone. Skype uses the high-quality open source Silk codec for audio, and adjusts video quality depending on network bandwidth.



It’s missing multiparty mode and won’t work for business collaboration, but text chat and screen sharing work fine. If you want to avoid calls, there’s a Do Not Disturb option that blocks all call notifications, and there are essential switches for the webcam and mic, plus volume control and an exit button.


Skype 4.3 has a lot of features, like rich video and audio preferences, cloud-based group chat, file transfers, call history and the very useful Skype Test Call – a helpful aid for checking that your video and audio setup works correctly. There are also a number of paid services, like Skype WiFi – it takes you online with a pay-as-you-go approach and lets you use your Skype credit account.


The biggest advantage of Hello is that it has few complications. There’s no need for any browser plugins, no system-wide codec dependencies, and no direct need for accounts. Mozilla notes that all you need is a link to share with your buddy – neither of you is obliged to have a Mozilla account. In other words, you just need the Firefox browser, and to let it access your webcam and mic.

As Skype is 32-bit only, it pulls a lot of 32-bit dependencies on 64-bit Linux distros in order to play sound using Pulseaudio, access your webcam and mic, and use your preferred widget style. The Linux version feature set is limited compared to Windows: there is no support for group video chat, but Linux users can participate in Skype conferencing in audio-only mode.



If you trust Mozilla and its partners that provided the infrastructure behind Hello, then there’s nothing to complain about. Hello is stable, robust and smooth, and it only takes seconds to get going with it.



The Linux version of Skype is stable enough and fine to use once you overcome its oddities, and the extra features convinced us that the package is worth using despite the limitations compared to the Windows version.


Google Hangouts

As apps turn into web-based services, Google’s at the head of that trend

KDE Telepathy

The communication platform that offers native desktop components

■ Google’s attention to user interface design makes calls and conferences in Hangouts a real pleasure

■ Telepathy is the combination of several apps and libraries, and it tries to make them work together smoothly

Ease of setup

Ease of setup

You will need a Google account to gain access to Hangouts. You should then be able to talk to any other contact (search by name or email) who also has a Google account. For browsers other than Chrome, the ‘googletalk’ plugin is required, and once you install it you’ll also have to go through the boring routine of unblocking the plugin and granting access to your hardware.


Hangouts delivers excellent performance and provides pleasing video with a crisp sound. The quality depends on your network, and Hangouts adapts its streaming settings accordingly. The interface has some preferences under the gear button, like the ability to set bandwidth for incoming and outgoing video streams separately, and choose audio and video devices by hand.


Making a video call between two KDE desktops was painful and consumed a lot of time because of the terrible initial arrangements. Further, most users will have their Video call buttons greyed out, simply because either the telepathy-kde-call-ui or the Farstream Gstreamer codec, or its Qt bindings, will be missing. Sadly, Telepathy video calls never work out of the box.


You can use different IM accounts to connect to your friends, but this affects performance. All traffic goes through the server – in our case it was one of the Jabber servers. Because of this, it took more time to establish a connection between two PCs in the same local subnet than with peer-to-peer competitors. Nevertheless, the video and audio quality was fine.


Hangouts is the only viable solution for Linux that supports video multichat and thus enables conferencing (up to ten people). Google supports Hangouts on mobile platforms as well. Linux users can connect to any other Hangouts users with their browsers just fine.

The interface of the established video calls depends on the protocol used and the software of your recipient. In our test case, the only way to get the call working on two KDE machines was to install Empathy (a Gnome app), which negotiates data flow using the Farsight library and depends on Farstream.



If you don’t use Chrome then you’ll have to install Google’s ‘googletalk’ browser plugin, which exists only as DEB and RPM packages. Hangouts also uses a proprietary backend nowadays instead of the libre XMPP protocol, which broke compatibility with third-party clients some time ago. Plus, you can do nothing without a Google account, which won’t appeal to everyone.

While Telepathy’s text chats work great, the Audio Call and Video Call buttons are not going to be usable for many users, and it takes a lot of research to find out the root cause of an issue. Of course, there is also no group video chat – even two-sided calls are often slow due to the limited resources of the service providers.



Hangouts is very advanced in terms of the technology underpinning it, but the tool is specifically tailored for Google Chrome (and Chrome OS). Setting it up outside of Chrome is a hassle, and a Google account is still required.


The Telepathy technologies are open – which is a key appeal for us – but require far too much effort and willpower to set them up correctly. We want to like it, but Telepathy is sadly unconvincing in terms of VoIP calls.




Video calling software

In brief: compare and contrast our verdicts

Firefox Hello

Ease of setup

Hello already works if you use Firefox, and all you need to do is share a URL to invite a chat


Smooth picture and a fast connection, with high-quality audio. Very solid


The necessary set is there but group calls and advanced features are missing


There are no limitations at all – you don’t even need an account to use this


Firefox Hello is clearly the best choice for Skype switchers who don’t need group calls



Skype

Takes a bit of effort for users of 64-bit distros, and none of the packages are in repos


Not ideal but certainly better than it was years ago – good enough for family calls


Lots of advanced and commercial-grade tools, including a credit account for phone calls


Proprietary, and limited compared to the Windows spin. Needs 32-bit dependencies


It’s a little bit ugly, but overall Skype is a very usable and stable app with useful features

Google Hangouts


Easy if you use Chrome, but requires a browser plugin and some tweaking if you don’t


Very robust and smooth. You can set streaming quality for audio and video separately


This is the only software in this test that supports conference calls, so is a must-have for business


Hangouts depends on Chrome and a Google account, and there’s no open source XMPP


Very good despite the strong focus on Google’s ecosystem, and enables group calls

KDE Telepathy


A nightmare for Linux newbies that never works out of the box. What a shame!



Server limitations mean a slower connection speed, but streaming quality is acceptable



There is only the set of controls for making video calls – Telepathy requires other software



A KDE-only feature that is quite hard to set up. Many users will miss out on this by default



Telepathy is, unfortunately, far too unstable and too hard to get up and running


AND THE WINNER IS… Firefox Hello

We had two classic desktop apps and two web services (or cloud-based ones, if you like). Modern web technologies clearly show that some websites can do things better than traditional offline apps. Firefox Hello has emerged as a legitimate rival to Skype, lowering the entry barrier for video calling so that even non tech-savvy users can chat to each other and see each other live. The best thing about Hello is that it doesn’t insist that you have a Mozilla account (even though one is recommended) – you can just generate and share links to establish a video call. Hello constantly evolves and can already share your screen and let you chat in text.

Skype on Linux is an original desktop app – a limited version of Skype for Windows, without some extra features but also running ad-free. Sometimes Skype requires you to adjust your Pulseaudio settings or export auxiliary variables in a shell, but generally this software works decently.

Of course, we cannot say that Google Hangouts is bad – the technology is very advanced, and it’s really the only option for Linux users if they need to have a video multiparty, making it definitely


■ Starting a video conversation is just a few mouse clicks away!

a useful tool as well. The only concern about Hangouts is that Google applies soft power to make people switch to its ecosystem entirely. Hangouts is tied to a Google account and works best in Google’s flagship Chrome browser. It’s not exactly in the spirit of the freedom of choice that Linux is about.

Finally, KDE Telepathy is hardly usable for video calls, but sometimes it works and Linux enthusiasts may find it interesting to fight and fix non-working features for the sake of investigation. Firefox Hello is by far its superior and is the service we recommend. Alexander Tolstoy

Classified Advertising 01202 586442




of Hosting Come Celebrate with us and scan the QR Code to grab

your birthday treat!

0800 808 5450

Domains : Hosting - Cloud - Servers







IQaudIO Audiophile accessories for the Raspberry Pi

• Raspberry Pi HAT, no soldering required
• Full-HD Audio (up to 24bit/192kHz)
• Texas Instruments PCM5122
• Variable output to 2.1v RMS
• Headphone Amplifier / 3.5mm socket
• Out-of-the-box Raspbian support
• Integrated hardware volume control
• Access to Raspberry Pi GPIO
• Connect to your own Hi-Fi's line-in/aux
• Industry standard Phono (RCA) sockets
• Supports the Pi-AMP+


• Pi-DAC+ accessory, no soldering required
• Full-HD Audio (up to 24bit/192kHz)
• Texas Instruments TPA3118
• Up to 2x35w of stereo amplification
• Provides power to the Raspberry Pi
• Software mute on GPIO22
• Auto-Mute when using Pi-DAC+ headphones
• Input voltage 12-19v
• Supports speakers from 4-8ohm


• Raspberry Pi HAT, no soldering required
• Full-HD Audio (up to 24bit/192kHz)
• Texas Instruments TAS5756M
• Up to 2x35w of stereo amplification
• Out-of-the-box Raspbian support
• Integrated hardware volume control
• Provides power to the Raspberry Pi
• Software mute on GPIO22
• I/O (i2c, 3v, 5v, 0v, GPIO22/23/24/25)
• Just add speakers for a complete Hi-Fi
• Input voltage 12-19v
• Supports speakers from 4-8ohm


Twitter: @IQ_audio Email:


IQaudio Limited, Swindon, Wiltshire. Company No.: 9461908


Genuino 101


Genuino 101

SoC: Intel Curie
CPU: Single-core 32MHz Intel Quark
Co-processor: Single-core 32MHz Argonaut RISC Core (ARC)
RAM: 80KB SRAM (24KB user-accessible)
Storage: 196KB Flash
I/O: 14x digital (4x PWM), 6x analogue
Sensors: 6-axis accelerometer and gyroscope
Connectivity: Bluetooth Low Energy (BLE)
Logic: 3.3V (5V safe)
Price: £28 (


For Intel’s desire to break into the maker market dominated by ARM and microcontrollers, is the third time the charm? Intel has been trying to break into the maker market for a few years now, having seen the success of the Arduino project and the Raspberry Pi. The Intel Galileo (reviewed in LU&D Issue 138, 4/5) proved unpopular thanks to poor IO performance from the Quark processor. Its successor, the Edison (reviewed in LU&D Issue 151, 4/5), added an Atom processor to address performance issues but its odd form factor and high-density connectors were off-putting. The Genuino 101 is Intel’s third crack of the whip. Abandoning its previous approach of producing an Arduino-compatible microcontroller, the company has partnered with the Arduino project to create a fully official Arduino board – known as the Arduino 101 in the US and Genuino 101 elsewhere thanks to ongoing trademark issues. The result is a device

which, at a casual glance, looks just like the classic microcontroller-based Arduino Uno. Yet, while it shares the Uno’s layout, the Genuino 101 is a different beast. At its heart is the Curie module, an ultra-compact, low-power system-on-chip (SoC) designed primarily for wearable projects. Inside this chip is a pair of processors: a 32MHz Quark core acts as the central processor, while a 32MHz Argonaut RISC Core (ARC) is present as a co-processor – a tacit admission from Intel that the Quark’s ability to directly drive IO pins isn’t quite where it should be. The presence of two processors splits the Curie module in an interesting manner. The Quark, which is a fully compatible x86 architecture processor based on the company’s old Pentium microarchitecture, runs a real-time operating system (RTOS); the ARC

Left The Curie module contains two cores: an x86 (Quark) and a 32-bit ARC core

Pros Fast performance, on-board

accelerometer and Bluetooth, x86 core may allow for future expansion

is used to execute whatever Arduino program you care to upload, and in a manner which is theoretically indistinguishable from any other Arduino. In theory, this split architecture offers the best of both worlds: the flexibility and power of a microcomputer with the real-time operation and low power of a microcontroller. Sadly, at present, the RTOS is locked-down and closed-source. Intel has pledged to open-source the RTOS in March 2016, at which point the true potential of the Genuino 101 should become apparent – such as the ability to offload your own tasks to the Quark, or even to install and run a cut-down Linux directly on the device. It would, however, have to be cut-down indeed. While the 80KB of SRAM on the Curie module is impressive compared to the 2KB of an Arduino Uno, the memory is shared between the two processors: an Arduino sketch uploaded to the ARC processor has access to only 24KB, with the remaining memory locked up by the RTOS. However, the Quark core is

responsible for handling the on-board Bluetooth Low Energy (BLE) radio, handy to have when building an Arduino-powered device. The module also includes a six-axis accelerometer, evidence of the Curie’s design for wearable computing projects, with both the sensor and the radio accessible without tying up the IO pins on the board itself. By microcontroller standards, performance of the Genuino 101 is good: the board can toggle an IO pin at 169.9kHz to an Arduino Nano’s 94.1kHz, and completes the Dhrystone benchmark with a score of 27.69 MIPS to the Nano’s 6.25 MIPS. Floating-point performance, however, is poor: running on the ARC, the Whetstone benchmark gets a mere 0.765 MIPS to the Nano’s 1.17 MIPS – a possible sign of poor optimisation in the compiler. While slower, the ARC does at least offer the option of using true double-precision mode – something the 8-bit ATmega328 on the Nano can’t handle. Gareth Halfacree

Cons Poor floating-point performance,

less common 3.3V logic, no way to access the x86 core directly

Summary The Genuino 101 is a clever device, and a vast improvement on Intel’s previous maker-centric creations. Its true potential is locked behind a closedsource RTOS, though, and will only become apparent if Intel fulfils its pledge to release the source code in March 2016. Intel’s third venture into the maker market is, however, a solid offering.



Review Tails 2.0


Tails 2.0

The ultra-private distro beefs up its security with a big step up to Debian 8 and systemd-powered lockdown

CPU x86


Storage 1 or 2 USB drives


Bug fixes and low-level improvements were streaming through regularly in 2015, but Tails 2.0 brings with it a few major changes that really improve both the usability and security of the distro. To start with, Tails is now based on Debian 8 Jessie and ships with many of its newer packages. The transition has brought one of Tails’ most user-visible changes: the default desktop environment is now GNOME Shell in the Classic mode, which is a nicely GNOME 2-esque experience but with the updated stack. Classic mode preserves Places along with Applications in the top-left of your menu bar, with important applets such as Vidalia (for Tor management), the Florence virtual keyboard and the OpenPGP clipboard encryption applet all displayed along the top-right. The desktop is a refreshing and

more modern change, although it has meant that the Windows camouflage mode is now broken – the devs removed it from this release and have put out a call for help on the fixes. It’s not an issue for people using Tails in secluded areas, but for those working in public spaces or on a public computer, this is going to be a bit of a problem – vanilla Tails looks as stark as ever with that bare blue background, and the differences from Windows and OS X are quite apparent. Upgrading to GNOME 3.14 has brought many other benefits, however – especially considering that many of the core packages have been updated from GNOME 3.4: Files, Videos, Disks and the like. Notifications are looking much less obtrusive, too. The mainstay, Tor Browser, has been updated to version 5.5, itself based on the Firefox 38.6.0 extended support release, and

Above The new installer is very thorough, documenting the different steps required depending on your current OS

many other tools have also been improved – Git has climbed from version 1.7.10 to 2.1.4, for example. Another big change for Tails 2.0 is the main email client – Claws Mail is now gone and has been replaced with Debian’s rebranded version of Mozilla Thunderbird: Icedove. Icedove has been included for the last couple of releases and the removal of Claws Mail marks the end of the dev team’s transition period to what they consider a simpler and more widely used (and less fingerprintable) email client than Claws. More important than these userland packages is the other big change brought by Jessie: systemd. Now that Tails is on Debian 8, it is using systemd as its init system and is taking advantage of a few of its features. For one thing, lots of services have been sandboxed by manipulating namespaces. The launching of Tor and the memory wipe on shutdown have also been made more robust, and the use of systemd has meant that the devs have been able to replace lots of their custom scripts, for generally cleaner code. A good few bugs have been squashed as well – the full list is still being maintained but some important security holes introduced in Tails 1.8.2 have been fixed; plus, HiDPI displays now have better support and Videos

can now access the DVD drive. As always, a new bug has been introduced but thankfully it is minor: the Tor Browser includes a new feature that protects against fingerprinting by manipulating the fonts, but this hasn’t yet been enabled in Tails 2.0. Beyond Jessie and the new GNOME Shell, there’s one other important change worth mentioning here: the Tails Installation Assistant. The installation process for Tails has now been completely reworked, requiring you to visit en.html and then identify your host OS when prompted. Depending on your response, you will be given slightly different instructions and require either one or two USB sticks. Essentially, you set up an intermediate Tails system, then run the Tails Installer program from within that system to create the final bootable USB stick or disc. We must admit, it was a bit of a faff – and Tails Installer failed to work on early attempts, although this seems to be another known bug (see – but generally speaking, this new process should help to further verify and guarantee the security of your Tails USB. Gavin Thomas

Pros Systemd improves Tails’ operating

security; a new installation process should improve setup security; much-needed package updates

Cons Over-engineered Tails Installation

Assistant; problems using the new Tails Installer software; camouflage mode is no longer working

Summary The new Jessie base is a real boon to Tails and there are many user-visible changes to enjoy here, particularly the new desktop environment. Beneath it all, the greater control over application isolation afforded by systemd and the less-visible raft of minor changes and bug fixes are all very welcome indeed.




Free software


Fgallery 1.8.1

Complement your blog with a static photo gallery

In reaction both to the growing size of average web pages and to the bloat and security risks in CMS and blog platforms, static blogging from Markdown text files is becoming increasingly popular. But what if you want to throw up a quick picture gallery alongside your well-crafted words? Not every static blog generator can do this well, and writing your own seems an unnecessary labour; this is when fgallery can do the job for you. Fgallery takes a directory of images as its input and gives you a directory containing an attractive, minimalist gallery, built from CSS, HTML and JavaScript, and relying on nothing but your server. If you don’t have much control over what code you can run on your web server then it’s ideal – just upload the directory that it generates. Installation is not strictly necessary, as fgallery (not to be confused with the WordPress plugin of the same name) is an executable Perl script that will run from wherever you download it to: just follow the instructions to install the dependencies. The optional facedetect script uses Python and OpenCV to centre thumbnails on the faces in the pictures, and is very effective. The project’s website has an interesting section on colour management.

Above A quick-loading, fuss-free JavaScript gallery of pictures, ready to upload to any web hosting service

Pros A refreshingly simple way of

getting your pictures online, without surrendering control to an internet giant

Cons You’ll need to tell people where

your pictures are, instead of having Facebook automatically announce your new images

Great for…

Taking back control of your online image uploads


Lollypop 0.9.82

A simpler, cleaner and much faster GNOME music player

Yes, there are a lot of media players around, and collection managers, too – there are even plenty of apps that claim to do both jobs well, but they all have different niggles so we’re always prepared to take a look at another one. Lollypop is quite pared down, handling just locally stored songs and a Last.FM account, but aiming to do these simple things well. Lollypop is edging towards a 1.0 release and advertises itself as a GNOME music player, and it does look good on GNOME – but we ran it on LXDE as well and just enjoyed the music. Well, not just music; we tried its sorting abilities on a week of BBC radio programmes downloaded with the get-iplayer script. This showed up


one disadvantage common to almost all media players: they’re built on the assumption that you’re listening to a four-minute pop song, not a 90-minute drama, for which there is no “cover art” to find. Still, we won’t hold this against Lollypop, as it shone in other areas. Easily installed from repositories for most distros, including an Ubuntu PPA, you’ll soon be adding songs to playlists and appreciating its speed – particularly if you’re used to one of the other GNOME players that rather struggle with responsiveness. The speed is also very noticeable in the search function, and if you don’t need lots of features that slow down other players – just a fuss-free way of handling playlists, genres and artist info – its speed and svelteness appeal.

Pros Fast, fuss-free and lean – fairly

unusual for a GNOME app! – and built with Python 3, too

Cons Interface built for songs, not longer audio, but without the features of similar apps

Great for…

Getting the party started with an easy music player


Tabview 1.4.2

Get a quick view of CSV files in a terminal or SSH session Tabview is a command line viewer for CSV and tabular data, written in Python, which uses the familiar ncurses library for a spreadsheet-like display. Installation is a simple matter of:

pip install tabview

…with other options available for non-Pip users. That said, you really should embrace Python’s Pip installer, which has recently been updated to 8.0.0. If you wish to use tabview as a pager in MySQL, edit /etc/my.cnf to include:

pager=tabview -d '\t' --quoting QUOTE_NONE silent

It can also be called as a module from an interactive Python 3 session, for a useful tabular look at any data structure you want to check. But it is as a command line tool that tabview could have its widest audience, both for those who don’t wish to open up an office suite just to look at a CSV file, and for those who cannot because, for example, they’re in an SSH session on their VPS. Tabview will usually autodetect the delimiter and the character set upon startup, but they can be specified at the command line, as can the starting position – column, or individual cell – and column width. Keyboard shortcuts are mostly for sorting table and column order, and for (vim-like) navigation. Cell contents can also be yanked to the clipboard, making tabview a small but very useful utility.
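Overriding the autodetection from the command line might look like the following – the filenames are placeholders, and you should confirm the exact flag names against tabview --help on your installed version:

```shell
# Open a tab-separated file, forcing the delimiter
# rather than relying on autodetection
tabview results.tsv -d $'\t'

# Force the character set when autodetection guesses wrongly
tabview data.csv --encoding latin-1
```

Both commands drop you into the interactive ncurses view; quit with the usual q.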

Pros Very speedy access to CSV

files without having to open an office suite

Cons Not particularly surprising; as you would expect from a viewer, you cannot edit the CSV file

Great for…

A quick look at data when SSHed into your server


SpaceFM 1.0.5 Multipaned, tabbed, powerful but light

Unless you rely entirely on the command line to manage your files and navigate around your crowded hard drive, the file manager is a crucial part of day-to-day computer use, yet many stick with their distro’s default manager despite multiple annoyances. Trying out a new file manager to see if it’s a better fit for you can be well worth the effort, and SpaceFM – a fork of the original, lighter-weight PCManFM – has plenty to offer. SpaceFM is usable as the kind of file browser you’re used to, but you can open extra panes (up to four in total) in the window – each with as many tabs as you like – with additional side panes for a tree view of directory structures or bookmarks; you can customise further through Design Mode. Lightweight, low-dependency code runs speedily even on older hardware, and SpaceFM lets you carry on with other tasks while it gets busy copying large files. SpaceFM works easily across networks with nfs://, ftp://, smb:// and ssh:// as well as local ISO files, and its ad hoc network share support allows connections via a variety of plugins. Its abilities with icons and wallpapers also make it useful for desktop management of lighter window managers.

Above Behind its simple appearance lurks power to manage your devices – you may need the manual!

Pros Lightweight and very fast,

even when running on older hardware – SpaceFM is powerful and very extensible

Cons While some power features are easily accessible, there’s also a lot that you’ll have to take the time to learn!

Great for…

Power users and those after alternative window managers


Get your listing in our directory


To advertise here, contact Luke | +44 (0)1202 586431


Hosting listings

Featured host:

ElasticHosts | 02071 838250

About us

ElasticHosts offers simple, flexible and cost-effective cloud infrastructure services with high performance, availability and scalability for businesses worldwide. ElasticHosts’ global team of engineers provides excellent support, around the clock by phone, email, and ticketing system. The company is true to its name; ElasticHosts servers can be resized at will,

What we offer

• Cloud hosting – reliable cloud servers with any specification and OS you can imagine • Affordable scalability – Linux containers with two perks: auto-scaling to demand and usage-based billing

run any OS, and are managed through a user-friendly web control panel with instant VNC access. ElasticHosts offers ten data centre locations around the world, including the UK and Europe. After seven years of helping development shops, systems integrators and tech startups run their businesses more efficiently, ElasticHosts has developed an excellent reputation.

Simple, flexible and cost-effective cloud infrastructure services with high performance, availability and scalability

• Free support – best-in-class 24/7 support via phone, email and ticketing system – available to customers for free • Simple management – easy-to-use control panel to deploy and configure your servers in a minute.

5 Tips from the pros


Data centre location matters If you’re in the UK, choose a British or European hosting provider that has data centres in the region to keep your server’s latency low and your customers’ data safe from unwanted state surveillance. International readers should look to providers close to home.


Server sizing is important Hosting providers often offer fixed configurations, but these don’t always match your needs. Instead, use ElasticHosts and set the specs yourself to your requirements and easily change them at any time.


Test the support team Having “excellent service” or “expert support” is easier said than done. Start a trial, then call or email our team at any time to experience the personal and knowledgeable ElasticHosts customer service for yourself.


Look for the SLA A reliable hosting provider should offer a Service Level Agreement (SLA) to customers. Ours includes a guaranteed 100% service availability and a generous credit return policy.


Run a trial You can’t judge the customer experience without running a trial. Sign up today and learn how easy it is to manage your servers with ElasticHosts’ user-friendly web control panel.


Greg Davis, NineByte Pty Ltd “The moment we began our migration, we established an understanding of how the billing model just rocks. The VM management user interface is just brilliant compared to the current crop.” Jack Lindsay, SitelinQ “The support experience I’ve had as a trial user has blown me away. You’ve all done everything perfectly and helped me every step of the way.” Jeremy Curtis, Flint IT “ElasticHosts provided a UK-based cloud solution and it took minutes to get a server running. Being able to buy the exact capacity I need has been very useful.”

Supreme hosting

SSD Web hosting 0800 1 777 000 0843 289 2681

CWCS Managed Hosting is the UK’s leading hosting specialist. They offer a fully comprehensive range of hosting products, services and support. Their highly trained staff are not only hosting experts, they’re also committed to delivering a great customer experience and passionate about what they do.

Since 2001 Bargain Host have campaigned to offer the lowest possible priced hosting in the UK. They have achieved this goal successfully and built up a large client database which includes many repeat customers. They have also won several awards for providing an outstanding hosting service.

• Colocation hosting • VPS • 100% Network uptime

Value hosting 02070 841810

UK-based hosting | 0845 5279 345 Cyber Host Pro are committed to providing the best cloud server hosting in the UK – they are obsessed with automation. They’ve grown year on year and love their solid, growing customer base who trust them to keep their business’ cloud online! If you’re looking for a

hosting provider who will provide you with the quality you need to help your business grow, then look no further than CyberHost Pro. • Cloud VPS Servers • Reseller hosting • Dedicated Servers

• Shared hosting • Cloud servers • Domain names

Value Linux hosting 01642 424 237

GoDaddy, the world’s largest web hosting provider, hosts more than 5 million websites and is hyper-focused on delivering fast, reliable and easy-to-use web hosting. GoDaddy offers Shared, Dedicated and Virtual Private Servers in addition to a very popular, managed WordPress service.

Linux hosting is a great solution for home users, business users and web designers looking for cost-effective and powerful hosting. Whether you are building a single-page portfolio, or you are running a database-driven ecommerce website, there is a Linux hosting solution for you.

• Domain names • Cloud servers • Web security

• Student hosting deals • Site designer • Domain names

Small business host

Fast, reliable hosting 0800 051 7126 HostPapa is an award-winning web hosting service and a leader in green hosting. They offer one of the most fully featured hosting packages on the market, along with 24/7 customer support, learning resources, as well as outstanding reliability. • Website builder • Budget prices • Unlimited databases

Enterprise hosting: 01904 890 890 | 0800 808 5450 Formed in 1996, Netcetera is one of Europe’s leading web hosting service providers, with customers in over 75 countries worldwide. As the premier provider of data centre colocation, cloud hosting, dedicated servers and managed web hosting services in the UK, Netcetera offers an array of

services to effectively manage IT infrastructures. A state-of-the-art data centre enables Netcetera to offer your business enterprise-level solutions.

Founded in 2002, Bytemark are “the UK experts in cloud & dedicated hosting”. Their manifesto includes in-house expertise, transparent pricing, free software support, keeping promises made by support staff and top-quality hosting hardware at fair prices.

• Managed and cloud hosting • Data centre colocation • Dedicated servers

• Managed hosting • UK cloud hosting • Linux hosting



Your source of Linux news & views

Contact us…


Your letters

Questions and opinions about the mag, Linux and open source

GPIO current draw I have a general query on the Raspberry Pi GPIO power pins, and that is: what is the maximum rated current that can be drawn on the 3V and 5V lines respectively when powering my external projects? I have a model B myself, but I guess the general rules will apply across the Pi variants? Carl Levick


Hi, Carl. With the 3V3 rail, things aren’t exactly clear-cut. There is a 50mA maximum current draw for all of the 3V3 pins. There’s a limit of approximately 12-15mA per pin (figures aren’t certain, so be careful if experimenting!) – so basically, spread your 50mA allowance over no more than three or four pins to be safe, depending on the draw of each pin. With low power draw – ie no Ethernet, no HDMI, etc – some users think that the overall limit could increase by as much as 100mA, giving you 150mA to play

with. This has not been tested, however, and we recommend limiting yourself to the 50mA max – as we said, it’s not a totally clear area. With the 5V rail, your maximum current draw is the USB power input minus the current power draw across the board. So for your model B that would be 1000 – 700 = 300mA max. This figure will vary between different Pi models, since the typical bare-board current consumptions differ: the model A uses about 200mA, the A+ uses just 180mA, while the B+ gets through 330mA. These numbers were taken from the official FAQ page, and while it doesn’t list the Pi 2’s current consumption we reckon it should be about the same as the B+. Hope that helps!
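The same back-of-the-envelope sum works for any model: subtract the bare-board draw from your supply’s rated output. As a quick shell sketch, using the figures quoted above (all in mA):

```shell
# 5V rail headroom = PSU output minus bare-board current draw
psu=1000          # a typical 1A USB power supply
board_b=700       # model B bare-board draw
board_bplus=330   # B+ bare-board draw

echo "Model B 5V headroom: $(( psu - board_b ))mA"     # prints 300mA
echo "B+ 5V headroom: $(( psu - board_bplus ))mA"      # prints 670mA
```

Swap in your own PSU rating and board figure to size the budget for your project.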



So long, Dropbox

Hello team! I’m looking to finally replace Dropbox – I’ve been using it pretty much since it first launched. At this point, the functionality is rather indispensable, but it’s almost the only bit of non-free software that I’ve got running at the moment, and migrating away from proprietary software is something I’ve been gradually doing over the last few years. So, any decent FOSS recommendations for me? Hertha Hello, Hertha! Yep, we can certainly help you out here. Depending on exactly which functionality you’re talking about, there are a couple of different suggestions for you here. Basically, it depends on whether it’s the ability to share files or the fact that they are backed up somewhere off-site. If it’s the former then we definitely recommend giving BitTorrent Sync a shot – there are no folder size limits, it’s blazing fast, and also supports things like WAN syncing and UPnP port mapping. The only catch is that you’ll need to pay for a pro subscription tier if you want to remove the cap on individual file

sizes and take advantage of the selective sync feature. Another option to consider, if you’re not interested in going with a paid tier, is Syncthing – it’s very similar to BitTorrent Sync but a bit more basic. Now, if it’s important to have your data stored somewhere off-site like it is in Dropbox, as a backup beyond your connected machines, then ownCloud is a great option here. The advantage with ownCloud, beyond its ease of use, is that it actually provides you with a cloud suite of tools that covers documents (including collaborative editing), galleries, notifications, an activity feed, music and movies, contacts and a calendar – the whole kitchen sink, basically. If that seems a bit much for your needs and you want to pare it back to the core file backup and sharing functionality, then a great alternative would be SpiderOak. This one is primarily about privacy and encryption for your backups – these folks take it very seriously – but there is also a syncing feature that does give it that Dropbox functionality. So, the choice is yours! Give us a shout if you have any questions about these.

Above BitTorrent Sync is cross-platform, boasting incremental file updates and data transfer acceleration


Linux User & Developer

Ubuntu or Mint?

Where do you all land on the Ubuntu versus Linux Mint debate? I always find myself recommending Ubuntu to people interested in switching away from Microsoft, mainly because that was the best recommendation a few years ago and it’s spawned so many derivatives. But Linux Mint has gotten better and better, and is consistently popular on DistroWatch, so I’m wondering if it’s time to switch up my official recommendation. My cousin is looking to switch some time soon and I’d like to make my mind up before the big data migration weekend! Scott Hi, Scott. To an extent, this is very much a question of taste and preference. That said, however, there are a couple of things worth mentioning. Full disclosure: none of us are Unity fans! So with Ubuntu, you are right in that it is an established distro with a good track record for stability and long-term support. However, going to the source doesn’t necessarily make for a better experience – vanilla Ubuntu’s Unity desktop environment can be very disorienting to newcomers, with things like the sidebar, the lack of draggable and positionable windows, the search overlay screen, etc. For anyone who has worked on Windows PCs up until now, or even OS X, it’s worth looking at something with a more recognisable desktop metaphor, and that’s where the flavours step in: Kubuntu (KDE) and Lubuntu (LXDE) are our two mainstays, though Ubuntu MATE is fast becoming a favourite. Again, going to the source isn’t always best – Ubuntu, even with Unity, can be a lot less overwhelming for new users than diving off Windows straight into the world of Debian, from which Ubuntu is derived. With Linux Mint, you have that familiarity in the desktop environment. Plus, it is based on Ubuntu so you do still have an excellent range of packages.
In fact, and this is where the opinion comes into play, it is arguably far easier for someone to get to grips with Mint/Cinnamon than Ubuntu/Unity – the cosmetics are different but not really new, so you’ve reduced the learning curve to things like the command line and package management, rather than throwing in the odd Unity keyboard shortcuts and awkward app switching. Plus, there’s that lovely intro screen in Mint that should be a welcome sight for fresh arrivals!






Tails 2.0

Raspbian 2016-02-09

» Four latest distros, including Tails 2.0, plus the new Raspbian featuring an experimental OpenGL driver. » 20 excellent FOSS packages, including everything you’ll need for the network feature, plus a few extras. » Watch The Linux Foundation, Red Hat and Raspberry Pi video guides, tutorials and webinars. » Code and assets for this issue’s tutorials, including everything you need for the Raspberry Pi feature.

Zorin 11


Scientific Linux 7.2




FILESILO – THE HOME OF PRO RESOURCES DISCOVER YOUR FREE ONLINE ASSETS • A rapidly growing library • Updated continually with cool resources • Lets you keep your downloads organised • Browse and access your content from anywhere • No more torn disc pages to ruin your magazines

• No more broken discs • Print subscribers get all the content • Digital magazine owners get all the content too! • Each issue’s content is free with your magazine • Secure online access to your free resources

This is the new FileSilo site that replaces your disc. You’ll find it by visiting the link on the following page. The first time you use FileSilo you’ll need to register. After that, you can use the email address and password you provided to log in.

The most popular downloads are shown in this carousel, so see what your fellow readers are enjoying!

If you’re looking for a particular type of content like distros or Python files, use these filters to refine your search.

Green open padlocks show the issues you have accessed. Red closed padlocks show the ones you need to buy or unlock. Top Downloads are listed here, so you can get an instant look at the most popular downloaded content. Check out the Highest Rated list to see the resources that other readers have voted for as the best!

Find out more about our online stores, and useful FAQs like our cookie and privacy policies and contact details.

Discover our amazing sister magazines and the wealth of content and information that they provide.





To access FileSilo, please visit


Follow the instructions on-screen to create an account with our secure FileSilo system, then log in and unlock the issue by answering a simple question about the magazine. You can access the content for free with your issue.


If you’re a print subscriber, you can easily unlock all the content by entering your unique Web ID. Your Web ID is the eight-digit alphanumeric code printed above your address details on the mailing label of your subscription copies. It can also be found on your renewal letters.


You can access FileSilo on any desktop, tablet or smartphone device using any popular browser (such as Firefox, Chrome or Safari). However, we recommend that you use a desktop to download content, as you may not be able to download files to your phone or tablet.


If you have any problems with accessing content on FileSilo, or with the registration process, take a look at the FAQs online or email filesilohelp@


Finished reading this issue? There’s plenty more free and open source goodness waiting for you on the Linux User & Developer website. Features, tutorials, reviews, opinion pieces and the best open source news are uploaded on a daily basis, covering Linux kernel development, the hottest new distros and FOSS, Raspberry Pi projects and interviews, programming guides and more. Join our burgeoning community of Linux users and developers and discover new Linux tools today. Issue 164 of

is on sale 7 April 2016 from

Linux Server Hosting from UK Specialists

24/7 UK Support • ISO 27001 Certified • Free Migrations

Managed Hosting • Cloud Hosting • Dedicated Servers

Supreme Hosting. Supreme Support.