





• Everyday Linux • Windows • Enterprise solutions • Specialist distros

MASTER VIRTUALISATION Set up a host and run virtual machines
COMPILE SOURCE CODE SAFELY Harden binaries against memory corruption exploits


Manage enterprise-level multiuser projects




Modify a script’s execution sequence



Looking for an alternative to Raspberry Pi? Try one of these

Install your favourite OS in a chroot on your phone


The laptop with Linux on board

Make your Pi do the hard work


» Use Go packages » Office suites » Hack a toy with the Pi Zero






Imagine Publishing Ltd Richmond House, 33 Richmond Hill Bournemouth, Dorset, BH2 6EZ ☎ +44 (0) 1202 586200 Web:

Welcome to issue 168 of Linux User & Developer

Magazine team Editor April Madden ☎ 01202 586218 Designer Rebekka Hearl Photographer James Sheppard Senior Art Editor Andy Downes Editor in Chief Dan Hutchinson Publishing Director Aaron Asadi Head of Design Ross Andrews

This issue

Contributors Dan Aldred, Joey Bernard, Toni Castillo Girona, Christian Cawley, Sanne De Boer, Kunal Deo, Alex Ellis, Tam Hanna, Oliver Hill, Phil King, Jon Masters, Paul O’Brien, Swayam Prakasha, Richard Smedley, Jasmin Snook, Nitish Tiwari and Mihalis Tsoukalos


Digital or printed media packs are available on request. Head of Sales Hang Deretz ☎ 01202 586442 Sales Executive Luke Biddiscombe ☎ 01202 586431

Assets and resource files for this magazine can now be found on this website. Support


Linux User & Developer is available for licensing. Head of International Licensing Cathy Blackman ☎ +44 (0) 1202 586401


For all subscriptions enquiries ☎ UK 0844 249 0282 ☎ Overseas +44 (0) 1795 418661 Head of Subscriptions Sharon Todd


Circulation Director Darren Pearce ☎ 01202 586200

Look for issue 169 on 25 Aug


Production Director Jane Hawkins ☎ 01202 586200


Finance Director Marco Peroni

Want it sooner? Subscribe today!


Group Managing Director Damian Butt

Printing & Distribution


Printed by William Gibbons, 26 Planetary Road, Willenhall, West Midlands, WV13 3XT Distributed in the UK, Eire & the Rest of the World by: Marketforce, 5 Churchill Place, Canary Wharf London, E14 5HU ☎ 0203 148 3300 Distributed in Australia by: Gordon & Gotch Australia Pty Ltd 26 Rodborough Road Frenchs Forest, New South Wales 2086, Australia ☎ +61 2 9972 8800


The publisher cannot accept responsibility for any unsolicited material lost or damaged in the post. All text and layout is the copyright of Imagine Publishing Ltd. Nothing in this magazine may be reproduced in whole or part without the written permission of the publisher. All copyrights are recognised and used specifically for the purpose of criticism and review. Although the magazine has endeavoured to ensure all information is correct at time of print, prices and availability may change. This magazine is fully independent and not affiliated in any way with the companies mentioned herein. If you submit material to Imagine Publishing via post, email, social network or any other means, you automatically grant Imagine Publishing an irrevocable, perpetual, royalty-free license to use the material across its entire portfolio, in print, online and digital, and to deliver the material to existing and future clients, including but not limited to international licensees for reproduction in international, licensed editions of Imagine products. Any material you submit is sent at your risk and, although every care is taken, neither Imagine Publishing nor its employees, agents or subcontractors shall be liable for the loss or damage.

© Imagine Publishing Ltd 2016

» Multi boot your system » Master virtualisation » 10 best boards for makers » Run Linux on an Android device

Welcome to the latest issue of Linux User & Developer, the UK and America’s favourite Linux and open source magazine. Wouldn’t it be nice to be able to run every operating system or distro you fancied? The trouble is, affording the hardware to do so might be a bit of a problem – unless, of course, you multi boot your machine. Partitioning your hard drive allows you to run two or more distros or operating systems on one computer. Each one gets to take full advantage of the system’s resources, and you can flip in and out of them with the flick of a restart switch.

In this issue, we’ve got the complete guide to multi booting, from backing up your data and partitioning your hard drive to how to run Windows and Linux on the same machine, how to run multiple Linux distros and how to configure, edit and even prettify the GRUB bootloader that controls the whole thing. Check it out on p16.

Also in this issue we take a look beyond the Raspberry Pi and round up the ten best boards for makers. Whether you’re looking at the best option for getting kids interested in making and coding or whether you’re after the ultimate board for Internet of Things projects, quick deployment or x86 architecture, we’ve got you covered. Take a look on p56, and if we haven’t featured your favourite board, be sure to give us a shout on the links below!

April Madden, Editor

Get in touch with the team: Facebook:

Linux User & Developer


Buy online


Visit us online for more news, opinion, tutorials and reviews:

ISSN 2041-3270


Contents

Subscribe & save!

16 Multi boot your system
Get three distros running on one machine – using multiple operating systems increases efficiency

56 10 best boards for makers
Alternatives to the Pi

OpenSource

08 News
The biggest stories from the open source world

10 Interview
Reynold Xin of Apache Spark talks data

14 Kernel column
The latest on the Linux kernel with Jon Masters

94 Letters
Your questions answered, from tech help to advice

Tutorials

28 Bash masterclass: Script execution
Bash provides a set of constructs that modify a script’s execution sequence

32 Compile software using modern protections
Hardening binaries can prevent memory corruption-based exploits from taking over

36 Run a Linux chroot on Android
Android uses a Linux kernel, making it ideal for hosting a chroot

40 Organise collaborative projects with Ganib
Install Ganib and use it to manage and simplify your collaborative projects

44 Set up a virtualisation host and virtual machines
Take a closer look at setting up virtualisation hosts and virtual machines on Ubuntu

48 Learn Go: Explore, create and use Go packages
Learn how to develop and use Go packages

55 Practical Raspberry Pi
Continue our Explorer robot series, check out a gesture-based remote control, get Arduinos talking to the Pi in Python, hack a toy and compile programs with distcc

Reviews

81 Office suites
How do these free alternatives, including WPS Office, stack up in features and usability?

86 Dell XPS 13 9350 Developer Edition
A laptop with Linux out of the box

88 deepin 15.2
The Debian-based distro’s latest update promises big things...

90 Free software
Richard Smedley recommends some excellent FOSS packages for you to try

97 Free downloads
Find out what we’ve uploaded to our digital content hub FileSilo for you this month

Check out our great new offer! US customers can subscribe on page 80

Join us online for more Linux news, opinion and reviews


Open Source On the disc

On your free DVD this issue
Find out what’s on your free disc

Welcome to the Linux User & Developer DVD. This issue we’ll help you multi boot your machine. Whether you want to combine Windows and Linux or several distros, we’ve got everything you

need, including essential backup software, the best open source partitioning utility and a choice of distros. Follow the instructions to live boot or install your software and distros from the disc.

Featured software:

Ubuntu 16.04

The perfect everyday distro for all-round use, the latest version of Ubuntu offers unparalleled convergence across your desktop and mobile devices. This distro live boots from the disc.

Arch Linux

Want to explore a command-line interface on your Linux PC? Arch Linux offers you exactly that. It’s the perfect introduction to command-line computing as it aims to offer a lightweight and flexible distro that keeps things simple. This distro live boots from the disc.


GParted

GParted is the ultimate partitioning utility for your Linux PC. Whether you want to combine Linux with Windows or mix it up with an everyday Linux install plus a specialist distro, GParted allows you to partition your hard drive into sections. It also live boots from the disc so that you can easily partition your PC at any time.

CentOS 7

Get all the benefits of Red Hat for free with this community-driven open source ecosystem. Please note that due to CentOS’s ISO structure this distro does not live boot from the disc – follow the instructions for installation from the disc interface.

Live boot

To live-boot into the distros supplied on this disc, insert the disc into your disc drive and reboot your computer.

Please note: • You will need to ensure that your computer is set up to boot from disc (press F9 on your computer’s BIOS screen to change Boot Options). • Some computers require you to press a key to enable booting from disc – check your manual or the manufacturer’s website to find out if this is the case on your PC. • Live-booting distros are read from the disc: they will not be installed permanently on your computer unless you choose to do so.

To access software and tutorial files, simply insert the disc into your computer and double-click the icon.

For best results: This disc has been optimised for modern browsers capable of rendering recent updates to the HTML and CSS standards. So to get the best experience we recommend you use:

• Internet Explorer 8 or higher • Firefox 3 or higher • Safari 4 or higher • Chrome 5 or higher


It’s essential that you back up all data on your computer (including images of your current operating system) to optical media, network attached storage or to an external HDD or SSD before you begin the process of partitioning your PC’s hard drive and installing one or more distros. This highly rated open source backup utility, BackupPC, provides everything you need to ensure that your data is secure before you get started.

Problems with the disc?

Send us an email at linuxuser@. Please note, however, that if you are having problems using the programs or resources provided, then please contact the relevant software companies.

Disclaimer
Important information
Check this before installing or using the disc

For the purpose of this disclaimer statement the phrase ‘this disc’ refers to all software and resources supplied on the disc as well as the physical disc itself. You must agree to the following terms and conditions before using ‘this disc’:

Loss of data

In no event will Imagine Publishing Limited accept liability or be held responsible for any damage, disruption and/or loss to data or computer systems as a result of using ‘this disc’. Imagine Publishing Limited makes every effort to ensure that ‘this disc’ is delivered to you free from viruses and spyware. We do still strongly recommend that you run a virus checker over ‘this disc’ before use and that you have an up-to-date backup of your hard drive before using ‘this disc’.


Imagine Publishing Limited does not accept any liability for content that may appear as a result of visiting hyperlinks published in ‘this disc’. At the time of production, all hyperlinks on ‘this disc’ linked to the desired destination. Imagine Publishing Limited cannot guarantee that at the time of use these hyperlinks direct to that same intended content as Imagine Publishing Limited has no control over the content delivered on these hyperlinks.

Software Licensing

Software is licensed under different terms; please check that you know which one a program uses before you install it.

Live boot


Insert the disc into your computer and reboot. You will need to make sure that your computer is set up to boot from disc


Load DVD

Insert the disc into your computer and double-click on the icon or Launch Disc file to explore the contents

Distros can be live booted so that you can try a new operating system instantly without making permanent changes to your computer


Alternatively you can insert and run the disc to explore the interface and content

• Shareware: If you continue to use the program you should register it with the author • Freeware: You can use the program free of charge • Trials/Demos: These are either time-limited or have some functions/features disabled • Open source/GPL: Free to use, but for more details please visit gpl-license Unless otherwise stated you do not have permission to duplicate and distribute ‘this disc’.


08 News & Opinion | 10 Interview | 94 Letters | 96 FileSilo DISTRO

Fedora 24 shows off three new editions
The latest Fedora release is available for server, workstation and the cloud

Despite not receiving the mainstream attention that Ubuntu gets, Fedora remains one of the most popular open source distributions out there for both developers and end users. So it’s with much fanfare that Red Hat’s distribution has finally hit its 24th release. Fedora 24 predominantly keeps up the patterns of its previous releases, consisting of three base packages that combine to form three separate editions: Fedora 24 Server, Fedora 24 Workstation and Fedora 24 Cloud. Users can choose to download whichever one caters to their needs the best. One of the most pleasing aspects of Fedora 24 is that it’s based on the 4.5.7 Linux kernel, with an emphasis on new tools for developers to get more from their Linux containers. Of course, you can expect a wave of enhancements and a couple of critical fixes to boot. The team has been quick to point out that it isn’t the finished article and to expect incremental updates to hit soon. There’s also a marked improvement for transitioning Linux into the cloud. Fedora 24 Cloud now includes OpenShift Origin, an on-board cloud distribution, which has been heavily optimised for developing applications

Fedora 24 Cloud has OpenShift Origin, an onboard cloud distribution, which has been heavily optimised for developing applications and managing containers

Above GNOME 3.20 provides a new look for your stored software

and managing containers. The Cloud edition has also been revamped to improve Docker integration, a problem that many were facing in previous updates. Server roles have consistently played a big part in the Server edition of Fedora, so we’re excited to see a deeper implementation of rolekit introduced into Fedora 24 Server. For end users, this should make it considerably easier to set up server roles. Also included is a new identity management program, called FreeIPA 4.3. While for many it won’t be a household name, it’s a great tool for overall domain control, but certainly requires some advanced knowledge to use. Where the bulk of the changes have come into place, however, is through Fedora 24 Workstation. The biggest of the bunch is the preview of Wayland, a replacement for the ageing X display server. Although it’s very much an early preview, it’s an exciting addition and one that Fedora plans to implement as its default graphics server in the future. GNOME users will also want to pay close attention to the inclusion of GNOME

3.20 Delhi. Again, this is another preview, but the overall desktop environment has some noticeable tweaks and still boasts that ease of use we’ve come to expect. If GNOME isn’t for you, then there’s still a myriad of desktops to enjoy, including Cinnamon 2.6, MATE 1.14, KDE Plasma 5 and Xfce 4.12. On a smaller scale, eagle-eyed users will also notice some upgrades to Fedora’s Software app. There’s now the capability to perform a full system upgrade from the comfort of your desktop, as well as leave reviews on available software. It’s these little touches that make Fedora 24 one of the biggest updates yet. All three editions of Fedora 24 are now available to download from



Top 10
(Average hits per day, 31 May – 30 June)

Sony to compensate users for Linux on PS3 debacle
Sony will pay out millions to those affected by the OtherOS issue

One of the gleaming features of the PS3 when it first launched was the introduction of the ‘OtherOS’ feature. At the time, it caused quite a stir in the open source community. This momentous feature would allow users to install Linux on their machine and access all the extra functionality that it brings with it. Unfortunately the feature was relatively short-lived, with a software update essentially killing off the feature back in 2009. While the update wasn’t mandatory, it all but disabled several of the PS3’s key features until it was applied, preventing Linux users from connecting to the PlayStation Network or playing games online. A lawsuit was soon raised, stemming from Sony’s advertised claims of third-party OS capabilities on its console. Lawyers representing as many as 10 million console owners on one side, and Sony on the other, have now reached a deal which would see Sony make a mandatory multi-million dollar payout to consumers. The proposed settlement will see gamers eligible to receive $55 if they ever used Linux on their console, while users who

bought a PS3 for the ‘OtherOS’ functionality will be able to claim $9. The current settlement, which still needs to be approved by a federal judge in the US, is currently only available for those residing in the United States who bought a fat PS3 between November 1st 2006 and April 1st 2010, but it’s likely that the ruling could go much further than that. During the litigation, it was alleged that piracy was one of the key reasons behind the removal of the feature. Sony also added that its terms of service, which gamers agree to when initially setting up their console, allowed it to remove the OtherOS feature, and that it wasn’t as widely used as the company initially thought it would be. To claim their $55 payout – and any subsequent payout if the case goes wider – gamers must provide proof of their purchase and their PlayStation Sign-In ID, and submit proof of their use of the OtherOS functionality. Sony has also agreed to use its PlayStation Network database to notify everyone potentially affected, with the steps they need to take to make their claim and seek compensation from the console giant.

1. Linux Mint – 2,949
2. Debian – 1,892
3. Manjaro – 1,393
4. Ubuntu – 1,323
5. openSUSE – 1,121
6. Deepin – 1,033
7. Fedora – 1,028
8. Zorin – 1,015
9. ElementaryOS – 1,009
10. CentOS – 980

This month ■ Stable releases (18) ■ In development (10) While there’s nobody particularly new in the top 10 this month, the latest Linux Mint update has seen it rocket to the top of the download list.

Highlights

Mint

The beta release for Linux Mint 18 ‘Sarah’ launched to a lot of fanfare, promising widespread changes in its design and overall usability. For new users, it remains one of the easiest distributions to get your head around.


Fedora

Expect Fedora to go dramatically up the rankings over the coming months with the release of Fedora 24. As usual, there are three different editions to test, with each bringing a unique set of features.


elementaryOS

The stylish elementaryOS also released a big update this month, with its Loki beta providing a plethora of bug fixes and the team claiming over 800 issues have been solved.

Latest distros available:



Your source of Linux news & views


Data processing made easy
Apache Spark is one of the most highly regarded engines for data processing on the market, but what makes it so good? Reynold Xin explains all

Reynold Xin

Reynold Xin is a member of the Apache Spark Project Management Committee and chief architect and co-founder at Databricks. He set the 2014 world record in 100TB sorting, beating the previous record held by Apache Hadoop with 30x higher per-node efficiency.

Apache Spark is a fast and general engine for data processing, used primarily by data engineers, data scientists and business analysts. It provides high-level APIs in Scala, Java, Python, R and SQL. It also supports a rich set of higherlevel tools that make it attractive for machine learning, graph computation, stream data processing, ETL and business intelligence. In a way, it’s the Swiss army knife of data processing. Spark was originally started by Matei Zaharia at UC Berkeley in 2009, and was donated to the Apache Software Foundation in June 2013. Spark became an Apache Top-level Project (TLP) in February 2014. Apache Spark has seen rapid adoption by enterprises across a wide range of industries. Internet powerhouses such as Netflix, Yahoo and Tencent have eagerly deployed Spark at massive scale, collectively processing multiple petabytes of data. When you consider that some of the biggest companies in the world have adopted Apache Spark, it just shows how many people believe in it. It has quickly become the largest open source community in big data, with over 1,000 code contributors and with over 187,000 members in 420 Apache Spark Meetup groups.


However, we are always looking at ways to continuously expand our community, and there are plenty of ways that those interested can get involved with the future development of Apache Spark. An interesting titbit is that according to StackOverflow’s 2016 survey, Apache Spark has become the top-paying technical skill in the United States (developers that know Spark are more likely to make the highest amount of money). That’s a pretty staggering fact if you ask me.

How easy is it for users to implement Apache Spark into their systems?
There are primarily two ways to use Spark. The first is to use the engine itself interactively. The second is to embed Spark as a library in data applications. They both require very different tools and attributes. Although Spark has been used in the past mostly as a big data processing engine, the engine also runs smoothly on a single machine. Many users use Spark on their laptops, either to unit-test their data applications or to directly analyse small amounts of data interactively. To do this, users can simply download the tarball from

the project’s website, untar it and launch the system without any configuration. We’ve looked to make the process as simple as possible, and we provide help throughout our site for those who are still struggling to get to grips with it. Spark can also run on a cluster of machines. Many vendors offer support for running Spark in private data centres, and many cloud providers offer Spark support. One way to learn Spark is through the Databricks Community Edition, which features award-winning MOOC classes offered by a collaboration of UC Berkeley, UCLA, edX and Databricks. We implore users to read up on this over at and

How exactly does Apache Spark run on clusters?
Apache Spark can run on a cluster via a cluster resource manager. There are three cluster resource managers available: 1) standalone, 2) Apache Hadoop YARN, and 3) Apache Mesos. The standalone mode is the easiest way to set up, and many Spark users start with this mode. According to a survey conducted by Databricks in 2015, 48 per cent of users run Spark on standalone mode, 40 per cent on YARN, and 11 per cent on Mesos. We appreciate that our user base is particularly diverse in its skill set, so we’ve had to cater for it with these different resource managers. Of course, we’re always interested in exploring ways that we can expand on these managers, but it requires a lot of hard work and manpower.

When running in cluster mode, a Spark cluster includes a single driver process and a number of executors. The driver process is responsible for coordination, eg tracking metadata and performing task scheduling. The executor processes do the actual heavy lifting of data processing, and they talk to each other to exchange data in order to execute the data flows specified by the users.
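To make the “download the tarball, untar it and launch” route concrete, a minimal local try-out looks roughly like this – the release file name below is only an assumption, so substitute whichever pre-built tarball you grab from the project’s downloads page:

# Unpack a pre-built Spark release and start an interactive shell on one machine
tar xzf spark-2.0.0-bin-hadoop2.7.tgz    # assumed file name – use the tarball you actually downloaded
cd spark-2.0.0-bin-hadoop2.7
./bin/spark-shell                        # Scala shell; ./bin/pyspark starts the Python equivalent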

Exploring the Spark Summit Due to its immense popularity, Apache Spark also hosts the Spark Summit, an event that brings together the world’s best scientists, engineers, analysts and executives to share and receive knowledge on Apache Spark. In previous events, there have been some key developments made towards the future of Apache Spark, as well as in-depth Q&A sessions where users can discuss their issues and grievances. Each Spark Summit also includes a series of training sessions hosted by Databricks. While their sole aim is to educate, they also form the building blocks for the expansion of Apache Spark into other markets and audiences. And to say the open source community is taking notice would be a massive understatement. Attendance at the Spark Summit events has increased dramatically over the past 24 months, with more demand than ever before for what the Spark team has to offer. In the latest Spark Summit East, held in New York City earlier this year, some of the leading business professionals and analysts took to the stage. These included Chris D’Agostino, VP of technology at Capital One, and Seshu Adunuthula, head of analytics infrastructure at eBay, with the message being how important data science is, and how Apache Spark could be the answer to issues within the industry.





Are the security protocols in place to keep Apache Spark safe when in use?
Yes, absolutely. Apache Spark itself has a lot of built-in security features. For example, it can encrypt data on the wire and integrate with Kerberos and SASL. Often, Spark is also used in environments that provide additional security features beyond the processing engine itself. For example, it can be used with server-side encryption with Amazon’s S3. Another example is Databricks Enterprise Security, which offers end-to-end encryption and role-based access control as well as auditing. We take security incredibly seriously, especially considering the level of sensitive data that is passed through Apache Spark on a daily basis. The protocols we implement are constantly monitored to make sure they’re as effective as they possibly could be, but there’s always room to grow.

Could you tell us about MLlib (Machine Learning Library)? How practical is it to use?
Apache Spark provides a general machine-learning library – MLlib – that is designed for simplicity, scalability and easy integration with other tools. With the scalability, language compatibility and speed of Spark, data scientists can solve and iterate through their data problems faster.


We take security incredibly seriously, especially considering the level of sensitive data passed through Apache Spark

The Apache Software Foundation

Spark plays just a small part in the grand scheme of things when it comes to the Apache community. The Apache Software Foundation (ASF) provides a series of open source software projects, aimed at providing high-quality software that leads the way in its field. But it’s the community part that resonates the most, as many of these projects are purely community-driven, with many sharing the same developers and end users. The ASF currently has a high number of volunteers who help develop and steward over 350 open source projects that cover a wide range of technologies. The choice is so vast, there’s something there for everyone. How about Jena, a Java framework for building semantic web and linked data applications? Or maybe SpamAssassin, providing a quick way for administrators to filter and block potential email spam? Contributing plays a big role in the development of Apache and its myriad projects. The best way to contribute is to simply get involved with one of the project communities, or even submit your own for consideration.

From the inception of the Apache Spark project, MLlib was considered foundational for Spark’s success. The key benefit of MLlib is that it allows data scientists to focus on their data problems and models instead of solving the complexities surrounding distributed data (such as infrastructure, configurations and so on). Since its inception, some key problems have been fixed, while others are close to completion. The data engineers can focus on distributed systems engineering using Spark’s easy-to-use APIs, while the data scientists can leverage the scale and speed of Spark core. Just as importantly, Spark MLlib is a general-purpose library, providing algorithms for most use cases while at the same time allowing the community to build upon and extend it for specialised use cases. MLlib implements a large collection of distributed machine learning algorithms, as well as feature transformers. Use cases include recommendation systems (collaborative filtering), spam detection, fraud detection, churn analysis and IoT device failure prediction. According to our 2015 survey, 64 per cent of users use Spark for advanced analytics, mostly features provided by MLlib.

MLlib implements a large collection of distributed machine learning algorithms

For those particularly interested, is there scope to help contribute to the development of Spark? Are there any criteria that users need to meet? Apache Spark is the most actively developed and largest open source project in the data space, with over 1,000 contributors and on average 400-500 issues resolved each month. Spark owes its success to these contributors. It is actually really easy to become a new contributor, because there are many aspects of Spark that new users can help out with, from documentation to bug fixes to engine internals. We believe that the community is at the heart of Apache Spark and it has proved to be a massive help in the development of it. We need to continue growing our user base and community to help shape Apache Spark for years to come. The project has a pretty active mailing list that users can join at The project accepts code contributions via pull requests on GitHub: And there is a pretty detailed guide available on Spark’s wiki:





The kernel column

Jon Masters summarises the closing of the Linux 4.7 development merge window and ongoing work toward new features for future kernels

Jon Masters

is a Linux-kernel hacker who has been working on Linux for some 19 years, since he first attended university at the age of 13. Jon lives in Cambridge, Massachusetts, and works for a large enterprise Linux vendor, where he is driving the creation of standards for energy efficient ARM-powered servers


Linus Torvalds announced Linux 4.7-rc6, traditionally the penultimate candidate prior to final release of a new kernel. Not too much stood out, other than that Linus was a little concerned by the growth in size of RC6 vs prior RCs. Generally, each release candidate should get smaller as the bar to pulling in fixes grows higher and higher toward the tail end of a cycle. Disruptive changes should instead come in via the next merge window: the couple of weeks during which well-tested changes are incorporated at the beginning of a cycle. Linus asked aloud, “one-time fluke or bad pattern emerging?” We’ll find out next cycle. Perhaps the most interesting feature that will be available to readers in Linux 4.7 is that of parallel directory lookups. The various filesystems (such as xfs, btrfs, and ext4) provided by Linux use a common infrastructure known as the Virtual FileSystem (VFS). This provides an abstracted concept of a directory hierarchy that may be implemented differently by the various underlying filesystems. The directory hierarchy begins at ‘/’ (the root), and works down through directories such as ‘/usr/ bin’ and ‘/home/your_name’. Whenever an application needs to find a file, such as ‘/bin/bash’ for a user login shell program, it needs to walk through the filesystem, beginning at ‘/’. Performing a directory walk (as it is known) is a rather expensive operation, as the kernel first looks up ‘/’, then ‘bin’ within the directory known as ‘/’, and finally ‘bash’ within ‘/bin/bash’. That’s three lookups for a pretty short path (but as readers will be aware, paths are often much longer within more ‘/’s), made worse if the system is busy and IO operations to read data from any of the intermediate directories must wait (block) on other IO. For these reasons, Linux implements a directory cache (dcache), which stores information about recently used directories, so that the system can then immediately resolve the whereabouts of “/bin/bash” on disk in a single lookup operation. As a function of its frequency of use, the dcache is one of the most performance-sensitive areas of the kernel, yet until now it has suffered from a rather unfortunate bottleneck: parallel lookups of entries within the same directory would be serialised (one after the other). So (for example), looking up ‘/bin/bash’ and ‘/bin/ls’ would be a

serial operation. Multiply this out over a busy system with thousands of processes (tasks), and this can significantly undermine performance. This is one reason why certain data stored on disk by applications has been optimised to use large numbers of smaller directories (web caches, mail software, and the like). Linux 4.7 will finally introduce parallel directory lookups and remove this bottleneck. We will have a full wrap up of the Linux 4.7 release, as well as the features lined up for Linux 4.8 inclusion in the next issue.
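Incidentally, you don’t need Linux 4.7 to poke at this: if you want a rough feel for how heavily your own machine leans on the dcache today, the kernel already exposes a couple of read-only counters.

# Allocated and unused dentries (first two fields of the output)
cat /proc/sys/fs/dentry-state
# Slab caches backing dentry and inode objects
sudo grep -E '^(dentry|inode_cache)' /proc/slabinfo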

Virtually mapped kernel stacks

The Linux kernel community is a large and thriving community formed from over one thousand individual contributors. Yet much of the core work within the community is undertaken by a dramatically smaller group of “usual suspects”, mostly funded by large corporations. It is, then, always refreshing to see an influx of fresh new talent into the core development community. Over the past few years, Andy Lutomirski of AMA Capital (a hedge fund and automated foreign currency trading outfit which he cofounded) has become increasingly engaged as a driver of novel and ingenious new technology within the Linux kernel. It’s true that many of these optimisations are focused in areas that are performance critical – and thus commercially useful to both Andy, and to his company, benefitting from lower latency trading operations – but the same corporate motivation can be said of many other kernel contributions from a majority of kernel developers today. Andy’s latest such contribution is a set of proofof-concept patches for virtually mapping the kernel stack. The patches introduce a new kernel configuration option called CONFIG_VMAP_STACK, and make many associated cleanups to correct for years of assumptions made by other core kernel code and drivers about the nature of kernel stacks. To understand what all this means, let’s look at kernel stacks more closely. In a world of increasingly higher level scripting languages, many of today’s programmers don’t have to work directly with low-level architectural details, such as stacks in memory. These are, in the modern world, often automatically managed by underlying language runtimes. But they are still there, especially within the Linux kernel,

and so let us begin by way of explanation. A stack is simply a region of memory used by running functions within a larger program for their local variable storage. When you as a programmer declare a local variable in the source code of a software function, that local variable lives on the (thread) stack. The variable is automatically allocated; you as a programmer do nothing further to cause actual memory to be found for this data. Contrast this with the more general notion of a ‘heap’ – broadly speaking, the rest of memory other than that used to store program code or the content of files – which is specifically managed through calls to ‘malloc’ and ‘free’ functions in C-based user programs (or by a language runtime). Within the kernel, these functions have the counterparts ‘kmalloc’, ‘vmalloc’, ‘vfree’ and ‘kfree’ (‘v’ prefix for virtual memory allocation, ‘k’ for the low level kernel physical memory allocator). User programs don’t worry about allocating space on their local thread stacks, because the kernel (silently) does it for them. When programs try to access stack space beyond (beneath for common downward growth stacks) the current stack, the kernel allocates space during a resulting page fault – the application doesn’t even notice. Within the kernel, things are a little more complicated. The kernel can’t rely upon software beneath it magically allocating its stacks, so it does this through explicit management. One side effect is that the kernel has a fixed size stack. This means no large allocations (of data structures) are allowed on the stack within kernel code – lest the kernel exceed (or overflow) its stack. Such an event is typically catastrophic. It can be detected, and will minimally result in an oops and the destruction of whatever program is currently running, if not a full system crash. Thus great pains are taken to reduce the maximum ‘stack depth’ of the kernel: that is the maximum stack utilisation caused by certain code paths, often in complex filesystem code involving network based volumes. Even your Linux desktop could experience a stack overflow. The odds are never zero, merely extremely unlikely. Over the years, the kernel stack has been split out into one stack per running thread (this is just the stack used by the kernel itself in servicing a thread, not the thread’s user visible stack), and a separate interrupt stack for use while servicing hardware interrupts from external

devices. Just for good measure, the stack is periodically increased in size. Typically, it’s a couple of pages (8-16K on x86 and other architectures) and is traditionally allocated from the low-level kernel allocator using physically contiguous memory. Andy’s patchset changes this to using the virtual memory (vmalloc) interface for kernel stack allocation, amongst several other nice changes and benefits that result. The use of virtual memory for kernel stacks cleverly avoids the need for physically contiguous memory page allocations. This has potential to greatly reduce pressure upon the virtual memory system, especially when extremely large numbers of tasks (programs, or threads) are running on a given system. As an added benefit, Andy is able to introduce special guard pages either side of the now much more manageable kernel stack that serve as proverbial canaries, taking the impact of a stack overflow rather than whatever random memory happened to be below the stack before. Existing systems typically store a special ‘thread_info’ structure at the base of the kernel stack, with vital information about a running program (such as where all of its associated data structures are stored in memory) and so this would often be the target corruption of a kernel stack overflow. The resulting outcome was seldom good for system stability, but will be much cleaner. Virtually mapped stacks won’t likely make it into Linux for at least several kernel development cycles. Even if they can be contained to a CONFIG_VMAP_STACK configuration option that is initially only enabled on certain architectures (x86) and gradually bleeds into the rest, such a change is disruptive enough to find still yet further corner cases. But we should still look forward to this and similar exciting work over the coming months.



Multi boot your machine

MULTI BOOT YOUR MACHINE Putting multiple operating systems onto your workstation increases efficiency and gives you almost limitless choice

One operating system doesn’t always cut it, especially when a computer is used professionally. Some applications are available only for Red Hat, while other developers choose to support only Ubuntu. In addition to that, CAD, EDA and the infamous Visual Studio product families are available for Windows only – let’s not even get started on games. Installing all of these operating systems on one workstation is both cost- and space-effective: having to purchase motherboard, RAM, CPU and GPU multiple times tends to add up quite quickly. Furthermore, simply adding a few hard drives is not a problem with most workstations – this text was written on a 16GB RAM octacore AMD behemoth hosting five HDDs. However, having access to multiple hard drives is not necessary for multi-boot systems. This guide will walk you through a process called partitioning – it allows you to divide one large HDD into multiple smaller compartments dedicated to the individual operating systems.

Why go for multi boot?
As we noted above, there are many reasons why you would want to both dual boot and triple boot, and they depend entirely on how you use your computer and how often you need to use different environments for different tasks. One reason is often Windows – however we feel about it, many of us need it in our day-to-day lives. It could be something as simple as enjoying playing new games, which aren’t always supported on Linux, or it could be the case that you are a designer who needs to use the industry standard Photoshop or InDesign. You can even install OS X for a Hackintosh build if that’s more to your taste.


A key reason to further extend a dual boot setup is to preserve your main distro – the one containing the bulk, if not all, of your personal data and media. There are innumerable reasons as to why you may want or need to use different distros on a regular basis, and sometimes live-booting or virtualising just doesn’t cut it – in such cases, it is incredibly convenient to have a third partition onto which you can install the distro you temporarily need to use. Non-Linux OSs aside, it could be something like wanting to have, for example, a Pentoo partition for testing alongside your main Debian distro, with a third slot for distro-hopping. It really is down to you!

As with any tutorial of this nature…

Make sure you back up everything you need first!

Partitions

Graphical disk overview
GParted provides a diagram of the partitions as they appear on the hard disk. This is interesting, as partitions usually can only grow “backwards” – if a partition structure looks irrecoverable, starting over can be the best choice

Back up before you start
Even though repartitioning tends to be a very safe operation, it’s important to perform a backup before following any of the steps shown in this guide. Power outages, typing mistakes or the infamous lab aide tripping over a power cable can all cause problems that are almost impossible to fix without the (pricey) services of a data repair lab. The safest way to get into the fray involves the use of an external hard disk. Replicate a hard disk image, or copy a selection of important files and – this is crucial – disconnect the external HDD from your workstation. Some IT security professionals even go so far as to recommend the purchase of multiple external hard disks, one or more of which are stored off-site and which are updated in a round-robin fashion. On the disc and on FileSilo this issue you can install BackupPC to ensure you’ve backed up all data. Be aware that the usage of dual or triple boot with at least one Windows partition does not provide complete protection against ransomware. Viruses like Petya have already shown the ability to work outside of Windows – it’s only a question of time until a miscreant extends his reach to EXT file systems.
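BackupPC on the disc will handle scheduled backups, but if you just want a quick one-off copy from a terminal before repartitioning, rsync does the job; a minimal sketch, assuming the external drive is already mounted at /mnt/backup:

# -a preserves permissions and timestamps; -A and -X keep ACLs and extended attributes
sudo rsync -aAXv --delete /home/ /mnt/backup/home/
# When the copy has finished, unmount and physically disconnect the drive
sudo umount /mnt/backup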

Choose your file system wisely
Partitions can be formatted in a variety of file systems. If a partition is intended to be used by different operating systems, the primitive FAT32 tends to be the best choice
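Should you set aside such a shared data partition, formatting it as FAT32 from Linux is a one-liner; the device name and label below are placeholders, and everything on that partition will be erased:

# mkfs.vfat comes from the dosfstools package; -F 32 selects FAT32, -n sets a volume label
sudo apt-get install dosfstools
sudo mkfs.vfat -F 32 -n SHARED /dev/sdb5    # replace /dev/sdb5 with your shared partition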

How partitions work

The most basic installation – well known from Windows – consists of one partition per disk. Operating systems with a server tradition expect multi-disk systems: dedicating disks to tasks is effective as it reduces latency impact and actually provides for a greater transfer speed. Hard disks can be divided into up to four primary partitions using the traditional MBR scheme. One of them can be an extended partition that contains additional logical partitions.
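To see how a disk is currently carved up before touching anything, a couple of read-only commands give a quick overview (the device name is an assumption):

lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT   # tree view of disks, partitions and mount points
sudo parted -l                              # partition tables for every detected disk
sudo fdisk -l /dev/sda                      # detailed MBR/GPT layout of one assumed disk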

This space is empty
The partition of the example workstation is not used fully. Unallocated space is not used by anything, and should be allocated to a partition to make it accessible for file storage

How to handle partitions

Administrators can choose between a large selection of utilities for partition management: Windows comes with the Storage Snap-In described later in the feature, while Ubuntu and Debian tend to use GParted (on the disc and FileSilo) instead. All of the programs behave in a similar fashion. The most important warning is that meddling with partition structures can be dangerous – leaving an inexperienced user alone with a partition manager is a sure-fire way to provoke data loss.




Get to grips with GRUB bootloader
A boot loader controls the workstation after power-up – if an error occurs here, the operating system can do nothing!

After power-up, the BIOS of the workstation performs a quick self-test and passes control to a special bit of code called a bootloader. This is what is responsible for passing execution to one or more operating system byte streams that actually bring the station up. While Linux initially started via a bootloader called LILO, the GNU Project’s GRUB has since become the standard for most distributions. It was initially intended to boot GNU’s Hurd operating system – while that platform competes with Duke Nukem for the vapourware throne, components of it have since been reused in various other Unix operating systems.


GRUB compares favourably to LILO for a variety of reasons. First of all, the loader is not limited to residing in the traditional MBR – in fact, it completely foregoes the traditional MBR for more modern ways of system initialisation. Furthermore, GRUB differs from LILO in that it understands the concept of file systems. This means that the bootloader can analyse the content of the partition being booted – LILO simply worked with fixed offsets and failed if one of them changed.

While power users will probably want to stick to the instructions for manual customisation, average owners of GRUB installations are better served with an automatic installer. Sadly, GRUB Customizer is not part of the official Ubuntu distribution at the time of writing. Instead, it must be downloaded from a third-party PPA, which must be added to APT’s repository list. Simply feed a terminal with the following commands one by one, and follow the instructions on the screen:

sudo add-apt-repository ppa:danielrichter2007/grub-customizer
sudo apt-get update
sudo apt-get install grub-customizer

Above The GRUB bootloader

Another interesting benefit is the ability to start from the network: in theory, a Unix workstation can obtain its operating system from a server accessible via a LAN peripheral supported by GRUB. Finally, GRUB offers a significantly better user experience. The program can be skinned to display custom start-up screens and boot options. Even the basic version, however, is more flexible than LILO – the two screenshots accompanying this boxout show some of what the default installation of the program can do for you on a triple-boot workstation.

Above Use the arrow keys and Enter to select an operating system to boot in GRUB


Style GRUB
GRUB’s default start screen might be functional, but it’s not pretty. Let’s improve it

Next, open the Dash and look for the search string “GRUB Customizer”. Starting the program yields a three-tab interface – the List Configuration tab lists the various operating systems configured, while the General settings tab allows you to customise the waiting time and the default entry selected after machine power-up.

Change its looks

Most of the action takes place in the “Appearance settings” tab. Peruse its various options to adjust background colours, set wallpaper and/or change the font used to display GRUB’s menus. Finally, increase the display resolution if you feel like it – but be aware that this operation can be a little risky. When done, click the Save button in the toolbar in order to commit the settings to GRUB’s configuration. GRUB Customizer will run the process for you – simply reboot and enjoy the results.

Configure GRUB
GRUB’s settings can also be changed from the command line. Here’s the nitty gritty


What’s what?

GRUB finds its settings at runtime by parsing the file /boot/grub/grub.cfg. Editing it directly is, however, considered an antipattern – instead, the settings found in /etc/default/grub are to be transpiled into a .cfg file using the sudo update-grub command. While not adhering to these rules does not usually lead to a non-working system, be aware that GRUB and kernel updates sometimes cause a complete recompilation of the GRUB config. In that case, any changes applied by hand will be lost and cannot be recovered easily.


Get editing

The user-facing settings – think waiting times, backgrounds and the like – are stored in the file /etc/default/grub. It can be opened by entering sudo gedit /etc/default/grub into a terminal window of choice – the editor will then pop up automatically. Windows 3.x diehards will recognise the syntax immediately – it is derived from the traditional .ini files. Parameters take the form NAME=VALUE, while lines beginning with a # are considered comments. GRUB’s developers provide a bit of documentation in the file – some minor edits can be handled without getting further information from the internet.
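As a rough illustration, the user-facing part of /etc/default/grub on a stock Ubuntu install looks something like the snippet below; the exact variables and their defaults vary between GRUB releases, and the background image path is purely an assumption:

GRUB_DEFAULT=0                          # boot the first menu entry unless told otherwise
GRUB_TIMEOUT=10                         # seconds to show the menu before booting
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
#GRUB_GFXMODE=1024x768                  # uncomment to force a particular menu resolution
#GRUB_BACKGROUND="/boot/grub/wallpaper.png"   # assumed path to a custom background image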


Make your changes

The parameters and their syntax differ from GRUB release to GRUB release. Consulting the documentation of the GRUB version used on your machine is the most sensible approach – Google really is your friend here.

Apply the changes

As discussed in step 1, changes to the configuration file are not committed to storage automatically. Instead, the update-grub command must be invoked from a terminal. Running the command is interesting as it emits a list of all bootable volumes used in the currently active configuration – if you miss an important operating system, rebuilding GRUB and checking the presence of its loader in the output of the utility makes great sense.

Above GRUB can also be used via the command line




Start with Windows

Be it CAD, EDA or Visual Studio – running Windows is required at times

The following tutorial assumes that a newly commissioned single-disk workstation is to be populated with a variety of operating systems. Our first candidate is a preview version of Windows 10, which can be downloaded from the Windows Insider program after signing up and agreeing to the terms and conditions. Next, proceed to burn it onto a DVD or a USB stick. Finally, insert it into your workstation and set it as the start-up medium.


Set up language and more

Windows 10’s setup process starts out by asking you to specify the language and keyboard settings: this is important as it makes sure that the keyboard behaves correctly during the setup process. Next, click “Install Now” to start the actual deployment process. At the current stage of the proceedings, the entering of the serial number can be skipped via the “I don’t have a product key” label at the bottom of the third screen of the setup wizard.

Fix technicalities!

Microsoft bundles the Home and Pro versions of Windows 10 in one ISO file. Select the version that you want to use – consult your licensing agreement with Microsoft to find out which one you are entitled to use. In the next step, the EULA must be accepted: not agreeing to Microsoft’s terms and conditions ultimately means that you will not be allowed to install Windows on your machine.



Go custom!

Selecting the default installation process makes Windows occupy the entire partition of the workstation. People working on a multi-boot workstation must instead select the option “Custom: Install Windows only (advanced)”, after which the Windows setup process will proceed to display an overview of the hard disks currently installed in the workstation. The setup routine will display both populated and unpopulated space – for us, only those parts described as “Unallocated Space” are interesting as they can be filled with new partitions without data loss.

Set up partition structure

Windows 10 can be productively used with about 60GB worth of hard drive space; developers seeking to deploy Visual Studio should allocate at least 80GB. Sadly, Microsoft’s installation routine will grab the whole disk space by default – enter a smaller value in the input window. Furthermore, confirm the dialog box informing you about the creation of an additional 500MB partition needed for running and maintaining the NTFS file system.


Stand by for action

When the setup partition is identified, the actual setup process can be kicked off. Even though Microsoft has made significant optimisations to improve the deployment speed, installing Windows 10 can still take half an hour or more. Be aware that the operating system will download a lot of updates during the process – it should not be done when your PC is connected to a metered network. Finally, don’t worry if the machine reboots a few times during the installation process.

Complete userland settings

The final configuration of Windows is handled via a set of apps running inside the completed operating system. Each user has his own personal preferences here – some want to connect their local user account to Windows, while other users are not interested in that particular luxury. In principle, select as you see fit – most of the decisions can be changed afterwards if you find out that they are completely unpalatable to you.

Installation order

Windows, then Linux

Recent Linux distributions come fully equipped for coexistence scenarios: if the deployment process detects the presence of a Windows-based operating system, it will usually be added to the GRUB bootloader automatically. This means that no further manual intervention is needed: everything works out of the box. When a fresh workstation is being commissioned from scratch, time can be saved by instructing the Windows installer to leave part of the hard disk untouched. It can then be formatted during the Linux installation process without having to resort to partition meddling.
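If an existing Windows install ever fails to appear in the menu after a Linux deployment, it is usually enough to re-run GRUB’s detection rather than reinstall anything; a rough sketch on an Ubuntu or Debian-style system (package availability assumed):

sudo apt-get install os-prober   # the helper that update-grub uses to find other operating systems
sudo os-prober                   # should list the Windows boot partition
sudo update-grub                 # regenerate /boot/grub/grub.cfg with the new entry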





Types of distro
Get to know the variations

Ubuntu
You might not love Mark Shuttleworth’s operating system. It, however, has done more for desktop Linux than most other distros and is excellent for everyday use. It’s also great for beginners.

CentOS
Take Red Hat’s legendary reliability and remove the commercial support to end up with CentOS. If you ever felt a need to effortlessly deploy advanced enterprise software, this might save you a lot of money!



Linux, then Windows

Installing Windows last causes trouble. The first issue tends to be the shrinking of the partitions – if a second hard disk can be installed, doing so is often the better choice, as the Windows installer sometimes messes up partitions. Be careful to have a bootable USB stick or rescue DVD on hand in order to restore the bootloader. Finally, be aware that virtual machines can be a nice alternative; if you aren’t into gaming and can live without UWP emulators, VirtualBox provides more than decent performance on recent hardware.
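If the Windows installer does clobber the bootloader, GRUB can be put back from that rescue medium; the sketch below assumes a BIOS/MBR setup with the Linux root partition on /dev/sda5 – adjust the device names to match your own layout:

# From the live session: mount the installed system and chroot into it
sudo mount /dev/sda5 /mnt                              # assumed root partition
for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
sudo chroot /mnt
# Inside the chroot: reinstall GRUB to the disk's MBR and rebuild the menu
grub-install /dev/sda
update-grub
exit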


Debian
Being the distribution that brought the world the apt package manager definitely earns Debian a place in this list. It is a decent everyday Linux for all who dislike Mark Shuttleworth...

Kali Linux
Reach out and touch a computer system in true elite style. This Debian-derived distribution comes with a set of utilities useful for penetration testing and forensic analysis.





Chrome OS
The operating system powering Google’s feature-reduced micro notebooks can also run on your workstation of choice – but be aware that this distro limits you to web apps!

Slackware
Slackware can best be described as a living fossil. The oldest still-updated distribution allows you to get close to the bone – any true Linux head should take it for a spin!




Ubuntu to the rescue Running Windows as an everyday productivity OS is not recommended for a variety of reasons, one of which is cryptoviral ransomware Ubuntu is a great everyday operating system. It’s often the default distro used for introducing new users to Linux. Many development and enterprise products have started supporting Ubuntu recently, making it a great overall distro, and it can be installed with minimal effort. Linux installs are at somewhat less risk from ransomware than Windows (simply because malware authors tend to focus their efforts on large userbases running homogenous code), so performing regular tasks (browsing, social media etc) on an everyday Linux distro can help add a thin but useful extra layer of security.


Get started

Just like Windows, Ubuntu comes in the form of an ISO file, which must be burnt onto a USB stick or a DVD (it's also on this issue's disc and FileSilo). Keeping this installation medium on hand is sensible, as it can be used to enter Ubuntu's recovery mode, which is mentioned later in the tutorial. In the next step, boot your computer from the newly created installation medium. Ubuntu throws a few error messages during startup – the dialog shown in the figure accompanying this step can treat itself to up to two minutes of R&R.
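If you are preparing the stick from an existing Linux machine, a minimal sketch along the following lines is the usual approach – the ISO filename is a placeholder and /dev/sdX stands for your USB stick, so double-check the device name before letting dd loose on it:

# write the ISO straight to the stick (this destroys the stick's contents)
sudo dd if=ubuntu-16.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress && sync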


Commit to Ubuntu

Ubuntu's installation medium allows you to take the operating system for a spin without installing it. The setup process can be started by clicking the "Install Ubuntu" button, after which point the operating system will ask you to decide whether updates and third-party software should be installed. Enable both checkboxes shown in the figure – not downloading third-party software can cause significant effort in terms of accessing all kinds of proprietary hardware. Furthermore, not deploying updates makes getting support harder, as Ubuntu is unwilling to assist users with non-upgraded workstations.


Pick your poison

Ubuntu’s installation process comes with an option dedicated to the creation of a dual-boot machine containing both Windows and Ubuntu. Using it leads to a turnkey system that uses the above-mentioned GRUB to let you select the operating system during each reboot. As we also want to add CentOS for enterprise applications, instead select the last option, “Something else”. Next, click the Continue button to start the analysis of your hard drives – when done, the dialog shown in the next step will appear.


Create some partitions

The free space area shown below the NTFS partitions is waiting to be filled with the work area of your Ubuntu system. Click the Plus symbol to open the partition creation assistant, and proceed to building a new logical EXT4 partition with the mount point /. Should you feel like using hibernation (or fear a lack of RAM), you can also create an additional swap partition, which, at minimum, must be a bit bigger than the amount of RAM installed in your workstation. Finally, click Install Now to start the actual deployment process.


Perform superficial setup



After clicking the Install Now command, Ubuntu will automatically start copying files to the target partition. During this process, the installer will query you about your location, your keyboard settings and your password and user name combination. As entering these values can usually be accomplished quite quickly, the installer will then continue to treat you to a tour of various interesting aspects of Ubuntu’s ecosystem. Any updates will also be downloaded during this dead time.

From that moment onward, Ubuntu is – in principle – ready to be used. Remove the installation media when the reboot prompt is shown, as the operating system can be iffy if the computer reboots with it. A correct restarting sequence will be indicated by the appearance of the GRUB prompt shown next to this step. Select Ubuntu to get into your newly installed version of Shuttleworth OS.

Shrink partitions

Got a notebook? No extra hard disk bay? We've still got you covered!

Dedicating one hard disk to each operating system is the most efficient approach. Sadly, both workstations and notebooks can mess up your plans – most notebooks smaller than 17 inches have just one hard disk bay, while modifying a workstation can turn into an annoying bit of work. As a complete reinstallation of the operating system is also tedious, reducing the size of partitions becomes an attractive alternative. This step is the riskiest one of all outlined in this tutorial – please make sure you perform a backup at this stage.

Line me up

Moving partitions tends to wreak havoc with bootloaders. Due to this, we usually shrink them by cutting parts off their back – a process which requires that all relevant information must be at the start of the partition. Should this not be the case on your system, run a defragmentation utility of your choice. Windows 10 comes with the one shown in the figure – it can be opened by entering Defrag into the Start menu, and can be made to clean up by clicking the Optimise button. Of course, deleting (large) files before the defragmentation process is a sure-fire way to smooth things out by giving Defrag more space to work with.

Shrink it!

With that, it's time to reduce the size of the offending partition. As NTFS is a Microsoft affair, using the tools provided by Redmond is likely to be the best solution. Open the traditional Control Panel, accessible via a right-click on the Start menu, and switch it to the traditional view. Next, click Administrative Tools and proceed to the Computer Management utility. Finally, open the Storage>Disk Management snap-in and right-click the partition currently used as the Windows host. Then select Shrink Volume in order to reduce its size to something more manageable.

Above The values shown in the current status field are not relevant here: an okay partition can still have some straggler files at the end

Install CentOS

Feel like deploying enterprise software? Get Red Hat for free with CentOS


Install media

Just like with all other operating systems, the life of a CentOS installation starts out with the downloading of an ISO file. It must then be burnt onto a DVD or a USB stick (you can get the Minimal version via our disc this issue), which finally serves as bootup media for your workstation of choice. Use the up and down keys to select Install CentOS 7 when prompted – the live mode feature of the installation medium is not particularly useful for this tutorial.

Pick language and keyboard

CentOS's installation process is squarely aimed at professionals. Even though the user interface tends to update quite quickly after selecting language and keyboard layout, users are well advised to wait until the callout symbols next to Installation Source and Software Selection have disappeared. With that out of the way, a click on the Installation Destination callout is required to bring you into the disk configuration applet discussed in the next step.

Partition your disks

Start out by selecting the hard disk you wish to use in the Local Standard Disks tab – network drives do not play a role during the setup of a normal multiboot machine. Next, select the "I will configure partitioning" radio button in the Partitioning tab – forgetting to do so will lead to problems and potential data loss. Finally, click the big blue Done button on the top left side of the screen to get to the next step.

Set up the directory structure

Set the partitioning scheme to Standard Partition. Next, click the Plus button at the bottom of the list shown on the left-hand side of the screen to create new mountpoints. Create a mountpoint for / with a size of at least 50 GB – the popup dialog takes values like "50GB" and converts them to the correct byte lengths automatically. Finally, click Done twice to ignore the warning generated by the missing swap partition. Follow the instructions onscreen until CentOS is installed successfully.

Even though GParted can also handle NTFS, this is an option best avoided. All of the NTFS drivers in Unix should, as a matter of principle, always be considered experimental and should never be allowed to write on files containing important data.




Recover GRUB

Some installation processes – CentOS being a prime example – kill GRUB. Solving this problem is easy

One of the disadvantages of open operating systems is the emergence of multiple solutions for one problem: for example, CentOS and Ubuntu use widely varying bootloaders. If a Windows machine forces an update (hello, Windows 10) that you hadn't planned for, this can sometimes hide your bootloader at startup, although fortunately it doesn't delete the data itself. Happily, restoring the venerable GRUB is not particularly difficult if you have not overwritten the actual partitions.


Fire up Ubuntu

The aforementioned USB stick or this issue's disc containing Ubuntu's installation media comes in really handy at this point – ram it into your workstation and boot from it. Next, select the option "Try Ubuntu" when prompted. After a few seconds, make yourself at home in a version of the Ubuntu desktop hosted on the installation media.


Download boot-repair

Even though GRUB can also be installed by hand, a graphical utility called boot-repair tends to be a faster and safer alternative for the average user. Sadly, it is not included by default – instead, download it by entering the following commands into a terminal window:

sudo add-apt-repository ppa:yannubuntu/boot-repair
sudo apt-get update
sudo apt-get install -y boot-repair && boot-repair

When done, the utility will automatically start to scan your hard disk for all potential operating systems available.



Use recommended repair

At this point of the proceedings, simply click Use Recommended Repair. This will make boot-repair try to create a boot parameter set of its own – a process that can take some time and must not be interrupted under any circumstances.


Obey the orders!

boot-repair cannot run all commands directly. In some cases, a window similar to the one shown in the figure accompanying this step will pop up. Should this be the case, follow the on-screen instructions carefully, using a second terminal which can be opened from the Dash. Be aware that some of the commands download between 20 and 50MB worth of data from the internet.


Reboot your workstation

When done, boot-repair will open a text editor window containing a detailed log of the changes made – it is important if you ever seek help online, and should be saved onto a USB stick or to an online service like PasteBin. Next, reboot your workstation and remove the installation media – all should be just the way it used to be before that pesky installer went wild!


















Tam Hanna

has been benevolently likened to the James Bond villain Dr Zorin. Said man's handsomeness spawned a real-life control systems company – a passion shared with yours truly. Let Tam be your guide to discipline in the Bash space.

Resources Workstation with Bash shell

Bash masterclass


Take control of your Bash script's execution

Running scripts linearly is boring. Fortunately, Bash provides a set of structures that modify the execution sequence

Tutorial files available:


DOS-based batch files have resulted in a spoilt generation of programmers: their largely linear execution entrenched the perception of shell scripts as uninteresting simpletons. This could not be further from the truth. Bash provides script coders with a set of control structures that justly deserve comparison with the ones found in programming languages such as JavaScript or C. Used intelligently, they allow scripts to be customised on the fly: manual rewrites are replaced by analysis of the execution environment.

Even more advanced possibilities are opened up by Bash's input handling features. After having worked through this tutorial, your scripts will be able to poll the user for input. This allows you to make your scripts even more user-friendly – when done right, they will even be able to reject invalid input on the fly, thereby keeping clumsy or technically uninitiated users safe. In short: it always pays off to learn more about Bash scripting. So feel free to follow us down the rabbit hole of advanced Bashery – don't worry, we've got your back...

Figure 1

At the end of the last tutorial (issue 166), our script was able to ask the user for the subject of the emails to be sent. Let's recapitulate by taking a look at a small example:

#!/bin/bash
echo "Enter title!"
read emailTitle
echo $emailTitle

After the hashbang designating the file to be a Bash script, we invoke the read command in order to get data from the command line. The value is then written out once again via echo (note the $, which expands the variable). Forcing users to enter long texts is a classic dealbreaker on mobile; when working with desktop apps, it should still be avoided in the interest of usability. One effective way to improve your program's behaviour involves guiding the user to their result:

#!/bin/bash
echo "Choose the material"
echo "a...Analogue electronics tutorial!"
echo "g...Review of gadget"
echo "r...Review of book!"

read -n 1 emailShorty

read can be provided with one or more parameters which modify its behaviour. Passing in -n 1 informs the command that we are looking for just one character – the user does not need to press Return. Be aware that this can lead to awkward-looking console output – adding an empty echo " " after the read fixes the problem:

...
read -n 1 emailShorty
echo " "

Figure 2



If this, then that!

Running the code snippet above leaves you with a variable containing – ideally – a single character's worth of input. Transforming this into a usable email title requires a structure commonly known as a selection. It takes its name from its habit of changing program execution in relation to a variable. The most primitive form is the if selection, which can be deployed like this:

if [ $emailShorty == "a" ]; then
  echo "Alert, electronics head detected"
  emailTitle="Analogue electronics tutorial!"
elif [ $emailShorty == "g" ]; then
  emailTitle="Review of gadget"
elif [ $emailShorty == "r" ]; then
  emailTitle="Review of book"
else
  echo "Error"
  exit
fi

Bash requires the presence of the if clause; elif and else clauses are optional, but handy for keeping your shell script compact. The syntax of the if clause takes a little getting used to: selectors come in square brackets and are terminated with a semicolon, and the then clause starts the command list to be executed if the matcher hits.

However, there is one small niggle lurking for unsuspecting users. Be aware that the [ must have a trailing space, while the ] needs a leading one – if they are missing, errors similar to the ones shown in the figure will occur. Furthermore, spaces are also needed on the left-hand and the right-hand side of the == operator.

Shorthand expressions

As the collection of input from a specific set is an extremely common operation, Bash provides a shorthand expression for it. Its use would simplify our program as in the following:

echo "Choose the material"
select emailTitle in "a...Analogue electronics tutorial!" "g...Review of gadget" "r...Review of book"; do
  echo $emailTitle
  break
done

You should be aware that the break command is mandatory: the script will loop endlessly if you forget to include it, which isn't what you want at all. Breaking out of this is best accomplished via the Ctrl + C sequence, which will lead you to the results shown in the figure.



Bash masterclass

Figure 3

It's a Merry-Go-Round!

Sending out multiple files required us to get acquainted with the for loop: it executed its command body until the list of parameters passed in was exhausted:

for file in `ls *.mp3`
do
  echo "$emailTitle File $fileNow of $fileCount"
  echo "$file"
  fileNow=$(($fileNow+1))
done

for can also be used to work on numbers and/or numeric ranges. This is accomplished via a modification of the {} operator or the passing in of a complete for declaration – the following script outputs two sequences of numbers:

for((a=1; a<=10; a++)); do echo $a; done
for a in {1..10}; do echo $a; done

for loops are ideally suited to all cases where a shell script is to process a set of predefined commands. In practice, few programs enjoy the luxury of operating on a set of tasks known at invocation – in most cases, iterations must take place until a job is accomplished. Sadly, using hardware-based examples for loops is unattractive as reusing them is difficult. Due to that, we will stick to hybrid examples – the first one computes the sum of a group of numbers entered:

myval=22
accu=0
while [ $myval != "q" ]; do
  read myval
  if [ $myval != "q" ]; then
    let accu=accu+myval
  fi
done
echo $accu

while is an implementation of the head-controlled loop: if such a loop is to be executed at least once, the developer must set up the parameters accordingly. Tail-controlled loops provide an attractive alternative in such situations – another implementation of the program could look like this:

accu=0
until [[ $myval == "q" ]]; do
  read myval
  if [ $myval != "q" ]; then
    let accu=accu+myval
  fi
done
echo $accu

This code may look a little upsetting to developers who grew up on C or BASIC: in Bash, tail-controlled loops also start out with the loop control expression. Be aware that this is but a syntactic difference – loop behaviour would better be described by the following snippet taken from the C standard:

do {
  statement(s);
} while( condition );

Don't repeat yourself

Saying the same thing over and over again is boring: it is not without reason that many, if not most, pirate captains had a parrot on their shoulder to repeat commands to thick or drunk underlings. Jokes aside: repeated code is an antipattern of the first order. It is responsible for difficult-to-maintain programs that develop errors during maintenance. This problem is best solved via functions. Let us start out by taking a look at lineMaker – it generates a line containing some ASCII art:

#!/bin/bash

anotherF() {
  echo "- - - - - - - - - - - - - - - - - - -"
}

function drawline {
  anotherF
  anotherF
}

drawline

Bash makes handling functions interesting in that it provides two syntax variants: our snippet uses the one for anotherF, while the other one is demonstrated in drawline. Actually running the code can then be accomplished just as if drawline was part of the Bash shell's command set. Functions can, of course, be modified by the code responsible for invoking them. This is accomplished by parameters – let us demonstrate their usage by modifying the line drawer a little bit:


function drawline {
  anotherF
  anotherF
  echo $#
  echo $1
}

drawline a b c d

Figure 4

Invoking this new version of the code will lead to the following result:

tamhan@TAMHAN14:~/Desktop/bashspace$ ./
- - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - - - -
4
a

Running functions have access to a variable called # – it contains the number of parameters passed in. The actual parameters can then be accessed via variables called 1, 2, 3 and so on. Accessing their content involves the use of the $ operator discussed in the last instalment. Bash's return command is misleading in that it cannot be used to return actual values to the caller – it is, instead, limited to the handling of numeric status codes. One interesting way to return values involves the use of "parametric variables":

function myfunc() {
  local _res=$1
  local myresult='Hello World'
  eval $_res="'$myresult'"
}

myfunc result
echo $result



Bash's syntax is complex enough to give experienced programmers pause: as the language is interpreted at runtime, errors are usually found only when it is too late. Static analysis provides a workaround for this problem. ShellCheck is a conveniently packaged program which checks your shell scripts for a selection of well-known sources of trouble. Casual users can access the product via the internet: open the ShellCheck website in a browser of choice and paste your shell script into the input field. Should you feel like using the product more often, install it locally via apt-get:

tamhan@TAMHAN14:~$ sudo apt-get install shellcheck
[sudo] password for tamhan:

ShellCheck can then be deployed from the command line – very savvy developers even go so far as to invoke it automatically after each file save.
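As an illustration of that last idea – a minimal sketch, not part of the original tutorial, assuming the inotify-tools package is installed – re-running ShellCheck whenever a script in the current directory is saved could look like this:

#!/bin/bash
# watch the current directory and lint any shell script that gets written
inotifywait -m -e close_write --format '%f' . | while read -r f; do
  case "$f" in
    *.sh) shellcheck "$f" ;;
  esac
done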

The code above is quite tricky in that it uses the eval command: it instructs Bash to first parse the string passed in, and then to run the result in a second step. $_res is thus transformed into the variable name, while the string containing $myresult remains untouched. Be aware that a function must be placed above its first invocation – if that is not the case for some reason, Bash will throw an error on execution. Furthermore, keep in mind that functions can be overwritten at will – the following product will yield very odd output (see figure):

. . .
function drawline {
  anotherF
  . . .
}

function drawline {
  echo "Ouch!"
}

Finally, you should always keep in mind that function invocations are not particularly fast. While Bash does allow the creation of recursive functions, doing so is not recommended – tasks like the computation of a factorial are better handled by C or Fortran.


Conclusion and outlook

Even though Bash scripts – admittedly – look quite odd at first glance, the scripting language does nevertheless provide a complete set of loop and selection commands. This means that Bash script is a more-or-less Turing-complete language which, in theory, could be used to tackle all kinds of advanced problems. In practice, this is not recommended due to a variety of issues. First of all, the syntax is unforgiving. Second, Bash misses a variety of advanced structures like pointers and associative arrays: their absence makes solving advanced tasks needlessly complex. Fortunately, Bash treats programs written in other languages as first-class citizens. The next part of this tutorial will introduce you to a variety of ways to chain commands and applications – stay tuned, as awesome power awaits!



Compile new software

Compile software using modern protections

Hardening GNU/Linux ELF binaries can prevent memory-corruption-based exploits from taking over an entire system

Toni Castillo Girona

holds a bachelor's degree in Software Engineering and works as an ICT research support expert in a public university sited in Catalonia (Spain). He writes regularly about GNU/Linux in his blog:

Resources Smashing the Stack for fun and profit

Debian Hardening

GDB Quick Reference

Running shellcode

Relocation ReadOnly

Software security is a complex area of expertise. We've left behind the heyday when smashing the stack for fun and profit was, so to speak, a piece of cake. Nowadays GNU/Linux distros ship with modern protections against memory corruption out of the box, rendering these sorts of attack far less likely. And yet, security researchers are always finding obscure ways to exploit software flaws even with these protections on. Hardening your binaries can help you prevent most of these attacks. The GNU C library and the compiler offer some of these protections, whereas the Kernel itself implements others. These protections are: DEP/NX, ASLR, Stack Canaries, PIE and Full RELRO. Modern gcc versions also include some useful protections, like replacing any call to the strcpy family of functions with their length-limited counterparts (strncpy).
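For orientation, the tutorial below switches these protections on one at a time; a fully hardened build of a single source file ends up looking roughly like the sketch below – the filenames are placeholders, and the flag set simply mirrors the ones used later in this article:

$ gcc -m32 -fstack-protector-all -fPIE -pie -Wl,-z,relro,-z,now -o hardened hardened.c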

The NX bit

The Non-Execute bit (NX) makes sure that alien code that could have been injected into non-executable segments of the ELF binary won't be executed. Modern 64-bit processors include this protection out-of-the-box and the Linux Kernel supports it. Open a new terminal and run this command to find out if your processor supports NX:

$ cat /proc/cpuinfo | grep ^flags | head -1 | egrep --color=auto ' (pae|nx)'

This bit can be cleared at compile time by passing the -z execstack flag to the compiler. Otherwise, it is set by default. Grab your favourite text editor and write this code down:

#include <stdio.h>

char shellcode[] =
  "\x31\xc0\x31\xdb\xb0\x17\xcd\x80"
  "\xeb\x1f\x5e\x89\x76\x08\x31\xc0"
  "\x88\x46\x07\x89\x46\x0c\xb0\x0b"
  "\x89\xf3\x8d\x4e\x08\x8d\x56\x0c"
  "\xcd\x80\x31\xdb\x89\xd8\x40\xcd"
  "\x80\xe8\xdc\xff\xff\xff/bin/sh";

int main(int argc, char **argv) {
  int *ret;
  ret = (int *)&ret + 2;
  (*ret) = (int)shellcode;
}

This code is a pretty well-known technique to execute shell-code. The shell-code is stored in the address pointed to by shellcode[]. This address belongs to the .data segment. First, compile it like this:

$ gcc -m32 nx.c -o nx

If you execute the binary, you get a segmentation fault:

$ ./nx
Segmentation fault

Compile it again, but this time clearing the NX bit:

$ gcc -m32 -z execstack nx.c -o nx
$ ./nx
sh-4.2$

The shell-code gets executed and you obtain a shell. Apart from being able to clear the NX bit at compile time, you can do so on an existing ELF binary by means of the execstack utility. Install it on your distro:

# apt-get install execstack

Compile our example without clearing the NX bit and run it; you get the segmentation fault as before. But now that you have execstack, you can do some magic:

# execstack -s nx
# ./nx
sh-4.2#

Cool: you can set or clear the NX bit at will with execstack. Now let's do something more interesting; let's find out how many binaries in the /usr/bin directory have the NX bit cleared. Thanks to the execstack utility and its -q flag, you can query an ELF binary for the presence of the NX bit:

find /usr/bin/ -exec execstack -q {} ";" 2>/dev/null | egrep --color=auto '^X'
X /usr/bin/grub-mklayout
X /usr/bin/grub-mkrelpath
X /usr/bin/grub-mkimage
...


Address Space Layout Randomization (ASLR) is a protection mechanism introduced in Linux and other operating systems that randomises the process memory space so that it is extremely difficult to find the address of functions or gadgets that an attacker could use in order to exploit a binary. Shared libraries are compiled with the -fPIC flag, allowing them to make use of the ASLR protection automatically. However, binary .text segments are not compiled with this protection unless they are built as Position Independent Executables (PIE). Grab your favourite text editor once again and write this code down:

#include <stdio.h>

void* getEIP () {
  return __builtin_return_address(0)-0x5;
}

int main(int argc, char** argv){
  printf("EBP located at: %p\n",getEIP());
  return 0;
}

Compile the code and then execute it a minimum of ten times (or more). You will get the exact same memory address for EBP each time:

$ gcc -m32 aslr.c -o aslr
$ ./aslr
EBP located at: 0x80483f8
$ ./aslr
EBP located at: 0x80483f8

Now, let's give it another try, but this time setting the PIE protection on the .text segment:

$ gcc -m32 -fPIE -pie aslr.c -o aslr
$ ./aslr
EBP located at: 0xf77dd57c
$ ./aslr
EBP located at: 0xf77e357c

Of course, if you do this with an ELF64 binary, the entropy will be greater and therefore the ASLR will become more reliable. There are plenty of papers exposing a way to bypass ASLR on 32-bit architectures using brute force.

In-house development

In-house development is another important issue to focus on. When developing new software, above all network service daemons, one must always be extremely careful and build it with the protections enabled to prevent memory corruption attacks from being successful. Avoiding bad coding practices must always be paramount too!

Stack Canaries

One well-known method to inject alien code into an executable is by means of corrupting its stack. Open your favourite ASCII editor and write this code down:

#include <stdio.h>
#include <string.h>

void vuln(char *);

int main (int argc, char **argv){
  printf("Arg is: %s\n", *(argv+1));
  vuln(*(argv+1));
}

void vuln(char *arg){
  char buffer[10];
  strcpy(buffer, arg);
}

This is a classic buffer-overflow vulnerable code. Compile it the usual way and run it passing as many bytes as you need until you overflow the buffer:

$ gcc -m32 vuln.c -o vuln
$ ./vuln AAAAAAAAAAAAAAAAAA
Arg is: AAAAAAAAAAAAAAAAAA
Segmentation fault

Now, execute it inside a gdb session until you get the 0x41414141 value for the EIP register. To achieve this, just increase the total number of "A"s passed to the vulnerable program at each execution until you reach the desired value:

$ gdb ./vuln
(gdb) r AAAAAAAAAAAAAAAAAAAAAAAAAA
Program received signal SIGSEGV, Segmentation fault.
0x41414141 in ?? ()

It is obvious that you can control the return address by corrupting the stack. If you want to overwrite the EIP register with, say, 0x45444342, you can do it easily by altering the last 4 bytes passed as an argument like this:

(gdb) r "`perl -e "print 'A'x22 . 'BCDE'"`"
Program received signal SIGSEGV, Segmentation fault.
0x45444342 in ?? ()

The stack has been corrupted, that is undeniable. Now, compile the previous code with the stack guard protector to detect the corruption. The GNU C compiler offers this feature:

$ gcc -m32 -fstack-protector-all vuln.c -o vuln



Protections and performance

Some of these protections can affect the program's throughput. For example, using Full RELRO means that the linker will resolve every single shared library function the binary needs at load-time before returning control to it. That translates into more loading time before the binary is responsive. Take this into account and always run performance tests!


Try to overflow the buffer now as you did before:

$ ./vuln `perl -e "print 'A'x18"`
*** stack smashing detected ***: ./vuln terminated


Whenever running a new binary that makes use of shared libraries, the addresses of these libraries are resolved at execution time when a first call is made to any one of these libraries; this is called Lazy Binding. Because some addresses have to be figured out later on in the life of a binary loaded in memory, some ELF segments must be writeable. This can lead to a well-known attack that overwrites the Global Offset Table (GOT) so that instead of making a call to, say, printf, once the attacker has been able to overwrite the address for printf, the program will end up calling some malicious injected code.

To see the problem in action, write this small program and save it as relro.c:

#include <stdio.h>

int main(int argc, char **argv){
  printf("Hello there!\n");
  return 0;
}

Compile it this way:

$ gcc -m32 -fno-builtin-printf relro.c -o relro

Disassemble the binary by running the objdump utility:

$ objdump -d relro > relro.S

Open the resulting disassembled file and look for the main function. Instead of calling printf's address directly, there is a call to a symbol named printf@plt:

080483e4 <main>:
...
 80483f5: e8 06 ff ff ff    call   8048300 <printf@plt>
 80483fa: b8 00 00 00 00    mov    $0x0,%eax

Go to that offset, i.e. 8048300. You can clearly see that there is a jump into the GOT. The push $0x0 is going to set the actual address for printf in the first GOT entry available for shared symbols, i.e. GOT[3]:

08048300 <printf@plt>:
 8048300: ff 25 98 96 04 08    jmp    *0x8049698
 8048306: 68 00 00 00 00       push   $0x0
 804830b: e9 e0 ff ff ff       jmp    80482f0 <_init+0x34>

Run the executable inside a gdb session to analyse the GOT. Because the .got.plt table is a section inside this ELF binary, you can ask gdb for its exact address:

$ gdb ./relro
(gdb) mai i sections .got.plt
 0x804968c->0x80496a4 at 0x0000068c: .got.plt ALLOC LOAD DATA HAS_CONTENTS

Set a breakpoint right after the call to the printf@plt function and run the program:

(gdb) b *0x080483fa
Breakpoint 1 at 0x080483fa
(gdb) r
Hello there!
Breakpoint 1, 0x080483fa in main ()

Let's find out where the printf function is located:

(gdb) print printf
$3 = {<text variable, no debug info>} 0xf7e82c80 <printf>

So GOT[3] must have the address 0xf7e82c80, which is the address for printf. To make sure this is so, dump the first 4 bytes for the .got.plt table:

(gdb) x/4x 0x804965c
0x804965c <_GLOBAL_OFFSET_TABLE_>: 0x08049568 0xf7ffd908 0xf7ff3980 0xf7ea0c80

Because this is a writeable address in the ELF binary, you can put whatever address you want in GOT[3]. You can find out which address equals GOT[3] by doing pointer arithmetic:

(gdb) x/x 0x804965c+12
0x8049668 <printf@got.plt>:


So GOT[3] is located at 0x8049698! Let's toy with the GOT; your goal will be to execute the system function instead of printing the "Hello there!" message. To do this, first find out where the address of system is:

(gdb) p system
$8 = {<text variable, no debug info>} 0xf7e74c90 <system>

Set a breakpoint before calling the printf@plt symbol:

(gdb) b *0x080483f5
Breakpoint 2 at 0x80483f5


Run the program from the beginning until it breaks at 0x80483f5. Then, overwrite the GOT entry at offset address 0x8049698 with the address for the system function:

(gdb) set *0x8049698 = system
(gdb) x/x 0x8049698
0x8049698 <printf@got.plt>:


Continue the execution using the “c” command in gdb:

(gdb) c
Continuing.
sh: Hello: command not found

You have successfully overwritten the address in the GOT entry for the function printf with system's. But because system expects the command to execute from the stack, and the stack has the string "Hello there!\n", system tries to execute "Hello" and fails! Anyway, you have altered the GOT table. To prevent this, use Full RELRO:

$ gcc -m32 -fno-builtin-printf -Wl,-z,relro,-z,now relro.c -o relro

Now that all the symbols are resolved by the linker before returning control to the executable program, there is no need to have writeable sections loaded in memory. To check this, use the readelf command like this:

$ readelf -S ./relro|grep "got.plt"
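As a quick cross-check – a small sketch, not part of the original walkthrough – a binary linked with Full RELRO should also carry a GNU_RELRO program header and the BIND_NOW dynamic flag, both of which readelf can reveal:

$ readelf -l ./relro | grep GNU_RELRO
$ readelf -d ./relro | grep BIND_NOW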

Auditing for unhardened binaries

On modern GNU/Linux distros, most binaries are pre-compiled with these protections enabled. To check, you can use either the readelf tool or the checksec script.

$ wget

Using checksec, let's test the previous code's protections after compiling it using Full RELRO:

$ ./ -file relro
Full RELRO    No canary found    NX enabled    No PIE

Checksec is reporting to you that this binary has been hardened with Full RELRO and the NX bit. And yet, its .text segment is not protected by ASLR and it has no stack corruption protection. Now recompile the binary and check its protections again:

$ gcc -m32 -fstack-protector-all -fPIE -pie -fno-builtin-printf -Wl,-z,relro,-z,now relro.c -o relro
$ ./ -file relro
Full RELRO    Canary found    NX enabled    PIE enabled

Checksec also allows you to check every process that is running on your system and obtain its protections by using the --proc-all flag. To get a list of binaries that may be vulnerable to stack corruption or GOT overwriting, you can run the following:

# ./ --proc-all | egrep 'No RELRO|No canary'
init 1 No RELRO No canary found NX enabled No PIE
rpc.statd 2426 No RELRO No canary found NX enabled No PIE
rpc.idmapd 2440 No RELRO No canary found NX enabled No PIE
...

However, it's important to note that if you compile legacy code in a newer GNU/Linux distro, chances are that it will be built without protections. As an example, download the venerable xmgr plotting package:

$ wget uploads/2012/10/xmgr-4.1.2_tcg.tar.bz2

Decompress the file and start the building process as usual:

$ tar xvfj xmgr-4.1.2_tcg.tar.bz2
$ cd xmgr-4.1.2/
$ ./configure && make

Use the checksec script to make sure the only enabled protection is NX:

$ -file src/xmgr
No RELRO    No canary found    NX enabled    No PIE

Harden it by altering its Makefile so that it has the proper flags set. Modern GNU/Linux distros include an environment variable that turns on these flags automatically. Edit xmgr-4.1.2/Make.conf and set these flags in the CFLAGS0 variable:

CFLAGS0 = -fstack-protector-all -fPIE -pie -Wl,-z,relro,-z,now

Add the "-fPIC" flag to the Cephes library too:

$ vi cephes/Makefile
…
CFLAGS=$(CFLAGS0) -fPIC -I$(TOP) -I.

Finally, rebuild the program and check its protections once more:

$ make clean
$ make
$ -file src/xmgr
Full RELRO    Canary found    NX enabled    PIE enabled



Linux on Android

Run a Linux chroot on Android Android is Linux-based and uses a Linux kernel, making it ideal for a chroot

Paul O’Brien

Paul is a professional cross-platform software developer, with extensive experience of deploying and maintaining Linux systems. Android, built on top of Linux, is also one of Paul’s specialist topics.

Resources SuperSU

Root Checker

Complete Linux Installer

Linux Deploy

Ubuntu Touch Touch/Devices

Right As well as terminal mode, you can even run a full Linux desktop on your Android device!


Why would you want to run full Linux on your Android device? That’s the question we’re often asked when we mention that we have a full Debian install accessible on our phone. Actually, it’s more useful than most might think (although, as a Linux user, you know better anyway!). Being able to run up a full LAMP stack is handy, we can get quick and easy access to all our favourite Linux tools and even use the Linux build as an option for syncing content to and from the device with good old rsync. In order to be able to run Linux in a chroot environment on your Android device, there are a couple of prerequisites. First of all, you need to have root access on your device. This is no great surprise, as we’ll be integrating with the Android system at the lowest level. Second, you’ll need to be running a kernel with support for loop devices. If your kernel doesn’t have support, you have the option of compiling your own, provided your manufacturer has made source available. Finally, you’ll want plenty of storage and RAM. A microSD card will give you additional storage for your Linux chroot, but bear in mind that it is usually quite a lot slower than the internal memory, so you’ll be better served if you have a device with lots of space.
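A quick sanity check for the loop-device requirement can be run from a terminal emulator on the device itself – a rough sketch only, and note that not every kernel exposes its configuration via /proc/config.gz:

# look for loop device nodes
ls /dev/block/loop* 2>/dev/null
# or, where available, inspect the kernel configuration directly
zcat /proc/config.gz | grep CONFIG_BLK_DEV_LOOP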

your device before you can continue. The actual process varies per device, but it will typically involve unlocking the bootloader (via your device manufacturer’s website), flashing a custom recovery and finally flashing a rooting ZIP file known as SuperSU. Bear in mind that unlocking your bootloader will likely affect your warranty and having root access, just like on ‘proper’ Linux, gives you a lot more potential to do bad things to your device. Proceed with caution! After you have installed the SuperSU root binaries, install an application such as Root Checker from the Google Play Store to confirm everything is working correctly. When an application requests root access on Android, a dialog is displayed requesting confirmation – this ensures that applications on your system can’t use root access for nefarious purposes without your knowledge.



Ensure you have root access

The most important thing you need before you install a Linux chroot on your Android device is root access. Root gives you full access to your device’s system, and allows you to carry out fundamental functions needed to set up the chroot. Android devices don’t ship pre-rooted, so you’ll need to root

Download the Complete Linux Installer

The quickest and easiest way to get a Linux distribution onto your device is by using Complete Linux Installer from the Play Store. On first launch, the application will install the scripts and Busybox binary required to boot Linux. The application includes support for a range of distributions including Ubuntu,

Debian, Arch, Kali, Fedora and openSUSE. Your architecture type will be checked and the Install Guides section of the app will indicate which are compatible with your phone or tablet. Before you start the install, ensure you have USB debugging enabled and plenty of storage space available. To enable USB debugging, open the Settings>About menu, tap the build number seven times to reveal the Developer menu, then from the Developer menu, check the USB Debugging option. The Complete Linux Installer app itself is just a 2.4MB download. For efficiency, the distribution images are downloaded on demand – you can start the download from the second page of the installer guide.
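If you want to confirm that USB debugging really is active, a one-liner from a computer with the Android platform tools installed does the job – just a sketch, assuming adb is on your path:

adb devices    # the phone should show up as 'device' once debugging is enabled and authorised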


Install the Linux image and companion apps

When you start the Linux distribution image download, you will be offered a number of options. 'Large', 'Small' and 'Core' image versions are available. The Large images include the core OS and a range of additional apps such as OpenOffice; the Small images include only basic additions such as a web browser and Java; the Core, as the name suggests, is just the OS itself. While the main image itself is downloading, it's a good idea to download the suggested companion apps, too. The Terminal application is used to connect to your chroot in a shell, while the VNC Viewer will allow you to connect to a GUI. The distribution image is downloaded as a ZIP file, which needs to be extracted to the root of your internal or external SD card before Linux can be launched. When the download is complete, use an app such as Explorer or File Commander to extract the image and MD5 file (you can delete the downloaded ZIP after doing so).


Launch Linux for the first time

After you have extracted your image and installed the Terminal and optionally the VNC companion app, you are ready to launch! There are a couple of options for launching Linux – you can either use the Complete Linux Installer app (see the Launch option on the side menu), or you can add a widget to your homescreen. Note that depending on the performance of your device, it can take a couple of minutes for the image to complete the first boot, so be patient! After your distribution is booted, you'll be prompted to set the root password, and then you will be able to do as you wish with the chroot! Type 'exit' to shut down (you should try to remember to do this, both for battery life reasons and to protect the integrity of your files). The Complete Linux Installer app allows you to configure the resolution options for the VNC session – you should ensure that they match the resolution of your device.

Integrating with Tasker

As well as manually launching your chroot from the Complete Linux Installer app or using the provided widget, you can automate your Linux install using Tasker. When you launch the chroot manually, take note of the command line used and you can call this from a Tasker script. You could even then send additional commands, to perform your own specific useful tasks.

Installing additional apps in your chroot

Once you have your Linux chroot up and running, you'll probably want to install additional apps (especially if you installed a Small or Core system image). As you will be logged in via the Terminal as root, you can do this in the normal way – using your distribution's package manager. Binaries will automatically be downloaded for your current architecture (ARM or x86), so everything should work exactly as it does on a 'real' Linux machine.

What you should avoid is updating your distribution using the built-in update tools – this will almost certainly cause issues and prevent your system booting. Instead, if you want to be on the latest version, you will need to build your own custom image. If you install a lot of additional applications, you might encounter space issues. The easiest way to increase the space available is to make a new, larger, empty image and to copy the contents across from the old image (using cp -rp to preserve permissions).


Preparing your own image

Since you can’t update the Linux image that you are using on your Android device in the conventional way, if you want to be on the latest version of your chosen distribution,




you’ll need to prepare your own image. Fortunately, it’s actually pretty straightforward. In order to create your own image, you first need to create the IMG file itself, of an appropriate size (4GB is a good place to start), by dd’ing /dev/zero to a file and then using mke2fs to create the file system itself. After mounting your new image, you’re ready to copy the actual Linux files to it. You will need to obtain the RootFS (Root File System) image for your distribution, built for the architecture of your device (more than likely ARM). The Ubuntu image can be found at Copy (cp -rp) the files across to your new image and finally, create a root/ script. This is used by the Complete Linux Installer app on startup – you should copy the script from an existing image and tweak to suit your updated OS.
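Pieced together from the steps just described, the image-creation part might look roughly like the sketch below – the 4GB size, the linux.img name and the rootfs/ directory are placeholder choices, so adjust them to your own setup:

# create an empty 4GB image file
dd if=/dev/zero of=linux.img bs=1M count=4096
# put an ext4 filesystem on it (-F because the target is a regular file, not a block device)
mke2fs -t ext4 -F linux.img
# mount it via a loop device and copy the distribution's RootFS across, preserving permissions
mkdir -p /mnt/chroot
mount -o loop linux.img /mnt/chroot
cp -rp rootfs/* /mnt/chroot/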


An alternative option – Linux Deploy

Complete Linux Installer works by launching pre-made images on your device, which you need to manually download and extract in order to get the operating system up and running. An alternative app (also available in the Google Play Store) is Linux Deploy, which takes a slightly different approach. Linux Deploy, which is open source itself, creates a disk image on a microSD card, mounts it and installs an OS distribution. Installation of the distribution is carried out by downloading files from official mirrors over the internet. As with Complete Linux Installer, the application requires root access.


After completing the OS installation, the Linux Deploy app allows you to start and stop services of the new Operating System through the UI. The app can also manage SSH and VNC settings. Installing a new operating system takes about 30 minutes. The recommended minimum size of a disk image is 1024MB (with LXDE), and 512MB without a GUI – bear in mind that when you install Linux on a microSD card with the FAT32 file system, the image size must not exceed 4095MB!


Special features of Linux Deploy

Linux Deploy supports a large number of distributions on a variety of architectures. It currently enables installation of Debian, Ubuntu, Kali Linux, Arch Linux, Fedora, CentOS, Gentoo, openSUSE, Slackware or RootFS on both ARM and x86 32- and 64-bit. If you prefer a Linux GUI to terminal use of your distribution, Linux Deploy allows you to use xterm, LXDE, Xfce, GNOME or KDE. Impressive!

Running automated jobs After launching your chroot using Tasker, you can then easily issue additional commands. We’ve used our chroot to easily rsync photos on demand to a remote server – after the process completes, ‘exit’ can then be called to shut down the chroot. Using the Tasker new file trigger works well, you can also add a ‘WiFi Connected’ condition to save your mobile data allowance.
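The rsync call at the heart of such a job can be as simple as the line below – a sketch only, with the source path and the remote host as stand-ins for your own setup:

# push new photos to a remote server; run inside the chroot
rsync -av /sdcard/DCIM/Camera/ user@server:/backups/photos/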

One of our favourite tricks in Linux Deploy is the ability to install Linux into RAM. Modern mobile devices have an ever-growing amount of memory, with the OnePlus 3 recently shipping with an impressive 6GB of RAM. The downside of an install to RAM is that, as all data is stored in memory, it is lost when the device is reset. Compared to installation on a microSD or internal storage, performance on a RAM install is predictably very good. The application starts quickly and the interface responds instantly. Installation is much faster too, taking only a few minutes.

Replacing Android completely with Linux

Having Linux running alongside Android works surprisingly well, but it always feels like a bolted-on additional operating system. If you want to run truly native Linux you have two options. The first is to buy a real Ubuntu phone or tablet (such as the BQ M10). However, if you happen to own a device that is used by Ubuntu for testing its 'Touch' images (such as the Nexus 4), then you can also flash Linux to it. Details can be found on the Ubuntu website. The process involves unlocking the bootloader on your device, downloading the Ubuntu images for your device and manually flashing them using Fastboot. Of course, the process will wipe your data and if you want to return to Android in the future, you'll need to download the stock factory images for your device and flash those. Note that if you do flash to native Linux, you may lose features such as cellular connectivity; also, as a test-quality image, you should expect a lot of bugs!

A no-root Linux alternative

Throughout this guide we've talked about installing on your rooted device… but if you're not rooted, there is an option, in the form of the 'noroot' builds, downloadable from the Play Store. We've checked out 'debian noroot' by pelya and, as the name suggests, you can use it to boot a minimal Debian install on a stock Android device.

The noroot implementation is made possible by the 'proot' project, which implements chroot, mount --bind and binfmt_misc in userspace

Although it will work on a wider range of devices than the rooted option, there are some caveats. Right now, the app doesn't play nicely with devices running Android Marshmallow. The other big thing to note is that the app can't be installed to the SD card – you'll need a device with a decent amount of internal storage. Those quirks aside, though, the noroot implementation works well and, like the best Linux projects, is open source. The app is made possible by the 'proot' project, which implements chroot, mount --bind and binfmt_misc in userspace. You can read more details on the proot project's website.




Organise collaborative projects with Ganib

Discover how to install Ganib and use it to manage and simplify your collaborative projects

Nitish Tiwari

is a software developer by profession, with a huge interest in free and open source software. As well as serving as community moderator and author for leading FOSS publications, he also helps organisations adopt open source software for their business needs.

Resources Ganib

Above right The Ganib homepage depicting various features. In the middle, you have the burn down chart, followed by the billing hours


Software development is one of the fastest growing fields in IT, but it has its own problems. There are remotely distributed teams, working on the same code repository. There are changing priorities and deadlines. There are changing requirements and new technologies coming up. In addition, with its intangible outputs, software development proves very difficult to measure. So how do you manage all this while making progress? Agile development methodology has given us some hope. Agile focuses on the end product, while ignoring trivial things like lines of code written or man-hours spent, thus eliminating the need to measure effort and rather measure the completion of the final product. But then there is a dearth of good open-source, agile-focused tools; most of the tools are old, tweaked to support agile. With Ganib, this is set to change. Ganib is an open source project management tool with support for agile scrum, real-time tracking, bug tracking and various other features. It also supports integrations with third party tools like Microsoft Project, LDAP and so on to facilitate seamless working. In this tutorial, we’ll take a closer look at Ganib and its features. But before that, let’s start with the installation process. We’ve used Ubuntu 14.04 as the platform of choice for installation.


Installation preconditions

Ganib can be installed on any of the major OS platforms. As mentioned earlier, we have taken Ubuntu 14.04 as the base platform for this tutorial. To get started, you should have JRE and MySQL installed on your system. Apache Tomcat is required as well, but it is bundled along with the Ganib installer. So, let’s first see how to install MySQL. The installation is simple: update your package index, install the mysql-server package, and then run the included security and database initialisation scripts. Just open a command prompt and type the following commands.

$ sudo apt-get update
$ sudo apt-get install mysql-server
$ sudo mysql_secure_installation
$ sudo mysql_install_db

Once done, let’s check if you have JRE installed already. Type

$ java -version

If you get an error saying "the program 'Java' can be found in following packages", it means you don't have JRE installed yet. To install JRE, first update the package index

Import/Export MSP If you switched to Ganib recently and your project plan is saved in Microsoft Project, you can easily import your MS Project Plan into Ganib. To do this go to the Plan module, and hover over the MSP link. You’ll see options to Import and Export MSP. You can also share your plans with customers/partners in MSP file format by exporting it to MSP.

$ sudo apt-get update Then install JRE.

$ sudo apt-get install default-jre Once this is done, it is a good idea to install JDK as well.

$ sudo apt-get install default-jdk



Ganib installation

Now download the latest Ganib installation package from SourceForge (projects/ganib/files/latest/download/). Unzip the downloaded file to the location where you'd like to install Ganib. You can do this using the commands below:

$ cd <Target Directory>
$ unzip
$ cd Ganib-5.3_with_jre

Then, import the database script files to your MySQL installation. But first, create a database for Ganib. Here is how to do it:

$ mysql -u root -p
mysql > CREATE DATABASE ganib CHARACTER SET utf8 COLLATE utf8_general_ci;
mysql > exit;
$ mysql -h localhost -u root -p ganib < database/ganib.sql

Next, edit the Tomcat/conf/context.xml file to change the database connection properties.

<Resource name="jdbc/GanibDB"
  auth="Container"
  type="javax.sql.DataSource"
  username="root"
  password=""
  driverClassName="com.mysql.jdbc.Driver"
  url="jdbc:mysql://localhost:3306/ganib?autoReconnect=true&amp;useUnicode=true&amp;characterEncoding=utf-8"
  maxActive="125"
  maxIdle="25" />

Don't forget to change the username, password and the database name as per your setup. We're assuming you're installing Ganib on localhost, so we'll also need to change the SMTP settings. So, edit the same file as above and also update the following:

<Resource name="mail/GanibSession"
  type="javax.mail.Session"
  auth="Container"
  mail.smtp.auth="false"
  mail.smtp.user="username"
  password="password"
  mail.smtp.port="25"
  mail.transport.protocol="smtp" />

Now, to start Ganib, run:

$ chmod +x Start
$ ./Start

If everything is fine, you'll be able to access Ganib at localhost:8080 in your browser.
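A quick way to confirm that the bundled Tomcat is actually listening before you reach for a browser – just a sketch using curl:

$ curl -I http://localhost:8080/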


Getting started

Now that Ganib is successfully installed and you can access the homepage in your browser, let's get started with its basic usage. First, we will create an account. Once logged in to the Ganib application, you will be asked to provide an account name and account description on the home page. Provide the relevant information and, on submitting the required data, you will see an account dashboard page for the new account. Next we create user accounts for members of the project. Once you are inside an account, select the Resources button on the top menu bar.




Theme and colours

You can change the theme colour of your Ganib application from the footer of any webpage you're on. There are several options available – the major ones are blue, orange or green. To do this, just click Change Theme in the footer and you'll see a colour palette pop up from the right. Just select the colour of your choice and the theme will change.

You can enter first name, last name and email address for the project members here. Once done, select and invite them as member, leader or manager. Ganib allows role-specific information access. For example, data that a manager can access need not be disclosed to all project members. Multiple roles can be added to a single user. Typically, users are added at the beginning of the project but you can add them during the project too.


Ganib provides a user-friendly interface for timesheets where project members can easily update their current tasks or view tasks for any date, week or month. Monthly view allows you to keep track of what an employee has worked on and for how many hours in that month. You can easily navigate to a certain date of a month to view the task details and also select a different month or year. There are few filters available in the page to view the desired data for particular account and project. Ganib timesheet also provides you with weekly views where you can see tasks assigned to logged-in users and meetings scheduled in that week. You can easily navigate between weeks and select the date range from the filters to view data for a particular week. You can further filter out data based on name and assignment types.

Create a project

Now we have an account and project members allocated to that account. Next we create a project in Ganib. On the Account page, you’ll see the icon for creating a new project. Click on the icon and enter the project name and description to create the project. When the project is created, the next thing is to create a task plan to help manage the project easily. Ganib provides a plan module where you can easily create a task and change its dates, dependencies, deadlines and assignments to anyone for specific hours/day. As you start working on the task you can update the number of hours spent per day on the task along with any comments you want to specify.


Generating reports

Project progress can be monitored using reports. Reports are also very effective tools for managers for purposes like budgeting and estimations, while developers can view statistics based on task status, and monitor if they are behind deadline or on track. There are several detailed reports available in Ganib. To access reports click on the Reports button on the top menu bar.

When a complex task is created it is very difficult to manage it between the team due to confusion that arises when multiple people are working on it. Using Ganib you can easily manage complex tasks by turning them into simple sub-tasks, which will help each team member to concentrate on one thing at a time.


Time management

Time management is the most crucial part for effectively managing a project. Click on the Timesheet button on the top menu bar to access timesheets.


There are two major bar graphs available. First is the work-based bar graph, depicting the percentage of work completed on a particular task. It classifies the tasks into categories: percentage-of-completion, late and unassigned tasks. Second is the members-based bar graph. This bar graph gives details on all of the resources in the project. It even classifies the users according to their availability status as online, active and inactive (resources who are offline for more than eight hours). Apart from these there are also four more graphs available – Billed Hours showing the hours calculated for each user on each

date; Burn Down showing total work, work done and remaining work; Resource showing total work done, resource-wise total work done and assigned bars; and Resource Work Cumulative, representing resource-wise total and completed work of the user, showing the hours captured for each date. This gives a complete overview of project performance.


Real Time Tracker

Though time tracking is crucial for a project, it is also very time consuming to enter data manually after each task, and developers forgetting to log their working hours is a common scenario! gTrack within Ganib is a desktop application that provides automated time tracking. It can capture work hours, screenshots, count keyboard or mouse activity, and then update Ganib with work progress in real time. To install gTrack, download the jar file for your OS from the gTrack tab and execute it. Post-installation you will be asked to log in with your Ganib account credentials; it will auto-sync your tasks from Ganib. To work on a particular task, double-click it and click Work. To pause the timer, click the Relax button. If there is no interaction with gTrack, a warning is issued every five minutes. Click Done to see your progress. If there are some technical issues in the project, the responsible member can write a post in gTrack by clicking Scrum, which will be visible to all project members.


File management

During almost all the phases of a project’s life cycle, documents play a very important role. File management is often a burden for a project manager. Ganib provides options to retain the important documents in the project itself, where you can not only store the file but also optimise it, incorporate version control and hierarchy structure. To access documents, click on the Documents button on the top menu bar under Projects. Create the respective folders and upload documents to them. You can add documents in the Documents module individually or as a group in different folders. Now you can check in these documents for version control or check them out to edit. To update the file properties, select the specific file using the radio button. You can share the documents by uploading the file in the project Documents for your team’s project. To move the documents from one folder to another, select the document and click the Move button. You can also link the document to particular tasks, deliverables, post, calendar, and so on. You can

delete the documents if you want and restore them back from the trashcan.


Specifications management

The Page module lets you create wiki pages for your project. To create a new page, click on the Page button on the top menu bar and click New Page. You will get a form to enter content. Submit the form after you add all your content. You will be able to see a new page with the content you just entered. To edit a page, go to the Edit Page link. It is good practice to add a descriptive edit summary along with the update. This way, members of the team can understand the reason behind any change in the content of the wiki page. You can view and control all the changes made to the wiki pages via the Changes option. Just click on the Changes link on the left side bar. Here you can see the changes made on the page in chronological descending order along with the change description that was provided in the Edit Summary option.


Lists, blogging and more

We have covered most of the important features of Ganib, but there are a few more features that will definitely make your life easier. The first is dynamic forms called Lists. You can create a custom list and add different fields to it as you need. To create a list, first go to your project and click on Lists>Design. On the next page, click New List on the left side bar. Another interesting feature is blogging. You can create blogs on certain topics. To create a blog, click on Blog on the top menu bar. On the next page, you can add Facebook-style status updates. Ganib also provides discussion boards. This lets members related to a project join the discussion and share their views and suggestions. Discussions are segregated into various groups, with access control available for each group. Discussions also allow voting. You can create or access a discussion using the Discuss button on the top menu bar. Ganib also provides an Iterations module to specify a block of time on a project. Different software cycles use this mode differently. Iterations can have predecessor iterations, meaning until the predecessor iteration is complete, the successor iteration shouldn't start.




Set up a virtualisation host and virtual machines Take a closer look at setting up virtualisation hosts and virtual machines on Ubuntu

Swayam Prakasha

has a Masters degree in computer engineering. He has been working in information technology for several years, concentrating on areas such as operating systems, networking, network security, electronic commerce, internet services, LDAP, and web servers. Swayam has authored a number of articles for trade publications, and he presents his own papers at industry conferences. He can be reached at swayam.prakasha@

Resources Beginner Geek: How to Create and Use Virtual Machines

All about virtualisation

Setting Up a New Virtual Machine


Using the Ubuntu system as a virtualisation host, you can run multiple operating systems on a single computer. The systems you create on the host are referred to as virtual machines (VMs). A VM can be running MS Windows, Fedora, another Linux system or just about any other operating system that can run directly on the computer architecture of the host. Once a VM is installed, you can work with it in the same way as you would work with the operating systems installed directly on computer hardware. It can be noted here that with VMs, it is much easier to duplicate them, migrate them to other virtual hosts to improve performance or configure them to failover to another host when a host becomes inoperable. That is, with VMs, you can make more efficient use of your computer infrastructure. Using your Ubuntu system as a virtualisation host, you can start building a computer infrastructure that can scale up as you need more computing power. By having

multiple hosts, you will also be able to migrate your virtual machines to get better performance or shut down the hosts that you under-utilise. You can configure Ubuntu as a virtualisation host by using Kernel-based Virtual Machine (KVM). So what is the most critical requirement for running Ubuntu as a virtualisation host? You need to ensure that you have a CPU that supports virtualisation. You also need to make sure that the computer has enough resources to effectively run the virtual machines. Let’s take a look at how we can check our computer to ensure that we have the right assets available to run as a virtualisation host. The first thing is to check for CPU virtualisation support. Note that the processor on your system must support either Intel VT technology or AMD-V virtualisation support. The good thing with Ubuntu is that it has a command (namely kvm-ok) that you can use to test your CPU for virtualisation support. You can install and use this command in the following way:

$ sudo apt-get install cpu-checker
$ sudo kvm-ok

The output of the above command indicates whether or not the CPU supports the proper CPU extensions that are needed by KVM. Note that there is actually a more manual way you are able to check for virtualisation support. You can check the flags set for the CPU in the /proc/cpuinfo file. Using the popular command egrep, you can search that file for Intel-VT support (vmx) or AMD-V support (svm). This can be done in the following way:

$ egrep "(svm|vmx)" /proc/cpuinfo

In the output of the above command, you should see either the vmx or svm flag and not both. If you do not get any output from the egrep command, it means that your computer’s CPU does not support KVM virtualisation. Next, you need to enable virtualisation support in the BIOS. If you think that your computer can support

Virtual Machines in a nutshell A virtual machine can be considered as a computer program that creates a virtual computer system. This virtual computer machine runs as a process in a window on your current operating system. You can boot an operating system installer disc (or live CD) inside the virtual machine, and the operating system will be tricked into thinking it’s running on a real computer. It will install and run just as it would on a real, physical machine. Whenever you want to use the operating system, you can open the virtual machine program and use it in a window on your current desktop. It is important to note here that virtual machines add some overhead, so they won’t be as fast as if you had installed the operating system on real hardware.

The flag lm stands for long mode. If this flag is present, then you can be sure that the processor is a 64-bit processor. Check the available RAM and disk space. Since you will be running multiple operating systems on one physical computer, the amount of RAM and disk space needed is likely to be multiplied. Ensure that you have

A virtual machine can be running Windows, Fedora, another Linux system or just about any other operating system you want

virtualisation, but the appropriate flag is not set, then you may need to turn on virtualisation support in the BIOS. In order to do this, first reboot your computer and interrupt the boot process when you see the first BIOS screen. In the BIOS screen that you see, look for something like a CPU or Performance heading (it varies depending on manufacturer) and select it. Then you need to look for a virtualisation selection such as ‘Intel Virtualisation Tech’ and enable it. Once you’ve changed the virtualisation BIOS settings, save and ensure that you power down the computer so that the BIOS settings will come into effect. If possible, try to use a 64-bit computer as your virtualisation host. Some operating systems do not support KVM on 32-bit systems. One drawback of a 32-bit KVM host is that each VM is limited to 2GB of memory. In order to check whether your computer is a 32-bit or 64-bit, you can check the CPU flags.

enough memory to service the host and all the VMs you plan to run at the same time. You can check your available memory by using the free command:

$ free -m

As you evaluate how much memory is needed, you can use a command such as top to get a sense of how much memory you require. For disk space, you want to make sure that there is plenty available in the directory that will store the disk images used by the VMs. By default, the location is /var/lib/libvirt/images. You can try the df command to check the disk space available at that location. Note that the /var/lib/libvirt/images directory will not be created until the libvirt-bin package is installed. So, use the /var/lib directory if it is not installed yet, as shown in the following line:

$ sudo egrep lm /proc/cpuinfo
$ df -h /var/lib




There are three software components that you need to add so that you can perform KVM virtualisation. They are: • libvirt – This provides an interface to the virtualisation software • qemu – This emulates PC hardware for the virtual machines • bridge-utils – This offers a way to bridge your networking from the virtual machines through the host You can then run the following apt-get commands so that the basic software that is needed for KVM virtualisation will be installed:

then you will be able to add this user to the libvirtd group as shown below:

$ sudo adduser james libvirtd

You may need to reboot your system to make sure that all the necessary services are up and running and that the user account that you have added to the libvirt group is logged in and ready. The Virtual Machine Manager (virt-manager) graphical interface is a very popular tool for managing your KVM virtual machines. Before using any of the commands such as virt-install (to install a new virtual machine) or virsh (to manage

$ sudo apt-get install libvirt-bin kvm bridge-utils qemu-common qemu-kvm qemu-utils

In addition to the basic software needed for KVM virtualisation, you can also add a graphical interface for managing virtual machines. Use the following command to install the virt-manager graphical software for managing the virtual machines:

$ sudo apt-get install virt-manager

When we have the virt-manager installed, we have the choice of managing our virtual machines from a graphical interface or from the command line. In the next step, we need to ensure that the user account that we want to use to manage virtualisation is configured to do so.

virtual machines), it’s a good idea to try out virt-manager. It can lead you through the creation of your first virtual machine in an intuitive way. A few things that you need to focus on before starting with virt-manager are: ISO images – you need to download the ISO images of the operating system you either want to install or run live as a virtual machine; and starting virt-manager – you can run virt-manager from the command line as the user you added to the KVM group. You should be able to see the Virtual Machine Manager window. Another useful way to manage your virtual machines is through the commands. There are various reasons why you may need to go for commands to manage the virtual machines – the most significant reason being you may want to run commands to work with the virtual machines from a shell script. To get started, you can use the virt-install command to install a virtual machine. With virt-clone, you can clone an existing virtual image. For managing virtual machines, you can use the virsh command. The virsh command will list the information about the virtual machines as well as help in the starting, stopping and rebooting of virtual machines. All you need to do to create a virtual machine is to pass the required options to the virt-install command. Note that before you use virt-install, you need to create a storage image and this can be done by using the qemu-img command. With the qemu-img command, you can create image files that the virtual machines can utilise as their storage media. Note that to the installer, the images look like regular hard disks. Let us take a quick look at the supported image formats.
• Raw – This is the default image type for qemu-img. It is the simplest image type and typically used if we need to export the image to the other virtual environments

If the user account that you are going to use to manage KVM is not a member of the libvirtd group, then that user needs to be added to the group. Say for example, if you want the user account james to manage virtualisation,


Virtual Machine Programs

There are various virtual machine programs for you to choose from. The most popular ones are VirtualBox and VMware Player. VirtualBox is widely used as it is open source and completely free. You can also use VMware Player on Linux as a free and basic virtual machine tool. The best practice is to start out with VirtualBox and if it does not work properly, you can try VMware Player.

• Qcow2 – The interesting thing with this format is that it does not immediately consume all space that is allocated, but instead grows as space is needed
• Other formats – The other formats supported by qemu-img include vdi, vmdk, vpc etc. You can type man qemu-img at the command prompt to learn more details about these image types:

$ sudo qemu-img create -f qcow2 \
  -o preallocation=metadata /var/stuff/mine.qcow2 8G

In the above example, note that qemu-img is creating a qcow2 image at /var/stuff/mine.qcow2. We have allocated 8GB disk space for the image. The preallocation=metadata option can be used to improve the performance of the image as it grows. Once the qcow2 image is created, you can perform a consistency check on it by using qemu-img with the check option as shown below:

$ sudo qemu-img check /var/stuff/mine.qcow2

At some point you may want to check the amount of space being consumed by your virtual machine. This can be done by using the info option with qemu-img. This is shown in the following command:

$ sudo qemu-img info mine.qcow2

Virt-install is a very powerful tool for creating new virtual machines. On the command line you will be able to identify the various attributes of the virtual machine’s environment. Take a close look at the manual page of this command.

These are some of the options that you can use with the virt-install command:
• --connect – identifies the location of the virtualisation service on the hypervisor
• --name – the name of the virtual machine
• --disk – gives the details about the location of the disk image and its format
• --noautoconsole – prevents a console to the virtual machine from being opened automatically
• --keymap – sets the keyboard language (for example, to US English)
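Putting a few of these options together, a typical invocation looks something like the following sketch – the machine name, memory size, disk path and ISO location are just placeholder values:

$ sudo virt-install --connect qemu:///system \
  --name testvm --ram 1024 --vcpus 1 \
  --disk path=/var/lib/libvirt/images/testvm.qcow2,format=qcow2 \
  --cdrom /var/stuff/ubuntu-install.iso \
  --noautoconsole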

You can also see the details about the other options you can use with the virt-install command by referring to its man page. Once the virt-install command starts, you will be able to open an application from the desktop to see the progress of your installation. You can use the virt-manager and virt-viewer commands to view your virtual machine console. Once the virtual machine is successfully installed, the next step is for you to use the virsh command to manage the virtual machines. The virsh command provides a good way to manage your virtual machines. You can use the virsh command to see what virtual machines are running and to start, stop and pause the virtual machines.

$ virsh help With this, we can view the list of sub commands to virsh.

$ virsh list This shows the currently running virtual machines.

$ virsh version This gives the current version information.

$ virsh hostname This displays the hostname of the hypervisor.

$ virsh shutdown vm1 This shuts down the virtual machine vm1.

$ virsh destroy vm1

This immediately stops the specified virtual machine. Please note that there are many more options you can use with the virsh command so that you can effectively manage your virtual machines. If you want to see more details on the other options, please refer to the man page of the virsh command.
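A few more everyday virsh subcommands follow the same pattern, again using vm1 as an example machine name:

$ virsh start vm1
$ virsh suspend vm1
$ virsh resume vm1
$ virsh reboot vm1

Respectively, these boot a defined machine that is not currently running, pause it while keeping its state in memory, wake it up again and ask the guest to reboot.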



Mihalis Tsoukalos

Mihalis Tsoukalos is a UNIX administrator, a programmer (UNIX & iOS), a DBA and a mathematician. He has been using Linux since 1993. You can reach him at @mactsouk (Twitter) and his website:

Resources A text editor such as Emacs or vi The Go compiler

Go functions

Explore, create and use Go packages Learn how to develop and use Go packages

This tutorial will talk about creating and using Go packages. Packages offer a very convenient way of grouping related functions and variables and using them as a set instead of individual components. Other programming languages call packages modules or libraries. The main purpose of packages is to make the design and implementation of software easier to understand and maintain. Pay special attention to this tutorial because sooner rather than later you will need to create and distribute your own Go packages. Additionally, make sure that you thoroughly understand the part that talks about Go environment variables, because using them correctly will make your life easier. The next tutorial in the Go series will talk about File I/O in Go, so stay tuned!

About Go packages

The following code is the Go version of the classic ‘Hello World!’ program:

Tutorial files available:


package main import "fmt" func main() { fmt.Println("Hello, world!") } As you can see, its first line is package main. Put simply,

all Go code must be delivered in packages – if you find a function called main() in a package named main, then you are dealing with an executable program; otherwise, you have a shared library. In other words, what differentiates an autonomous Go program from a Go library is the presence of the main() function in the main package, which is also the entry point of an executable program just like the main() function in C. A single string that is called the import path identifies each Go package. As you will see, the import path can also be a URL that points to an internet address. It is important to remember that the specification of the Go language has nothing to do with the interpretation of import paths – this job is left to the go tool. The source code of a Go package, which can contain multiple files, can be found within a single directory that is named after the package name, with the exception of a main package which can have any name.

Standard Go packages

Go comes with a plethora of standard packages so before creating your own package, make sure that Go does not have an existing package that does all or a big part of your task. Figure 1 shows a small part of the list of standard Go packages, as found at Two very popular Go packages are fmt and io. The standard Go library also includes math, which provides

Figure 1

mathematical functions and constants; net, which supports portable TCP and UDP connections; http, which offers HTTP server and client implementations; url, which allows you to easily parse URLs; the os package, a portable interface to system operations; and syscall, an interface to low-level system calls. Last, the time package supports time and date operations, the crypto package is used for cryptographic operations, and the compress package implements common compression algorithms. In order to use the fmt package, you should include the following code in your programs:

import "fmt" Should you wish to use two or more packages, you can format your code as follows:

import ( "fmt" "io" )

Go environment variables

The go tool uses various environment variables that you should be aware of when developing non-trivial projects and modules. Figure 2 (overleaf) shows the output of go help environment, which gives you information about the available Go environment variables. If you have no Go environment variable defined and you try to execute go install, the command will fail with an error message similar to the following:

$ go install simple.go
go install: no install location for .go files listed on command line (GOBIN not set)

The most important Go environment variable is GOPATH, which specifies the location of your workspace. Usually, this

is the only environment variable that you will need to define when developing Go code. The best place to put all these kind of definitions is either .bashrc or .profile, which means that environment variables will be active every time you log in to your Linux machine. If you are not using the Bash shell, which is the default Linux shell, then you might need to use another startup file – check the documentation of your favourite UNIX shell to find out which file to use.
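For example, adding a couple of lines like these to ~/.bashrc will define the variables on every login; the workspace path shown here is only an assumption, so point it at whatever directory you actually use:

export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin

After saving the file, run source ~/.bashrc (or simply log in again) for the change to take effect.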

A simple package

This section will teach you how to create a small package. For this part of the tutorial, you will need to set the value of the GOPATH variable to your current directory:

$ export GOPATH=`pwd`
$ echo $GOPATH
/home/mtsouk/docs/article/working/goPack.LUD168/code

You will also need to create the following directories and files:

$ mkdir src
$ mkdir src/simple
$ vi src/simple/simple.go
$ mkdir bin
$ export PATH=$PATH:$GOPATH/bin
$ mkdir ./src/useSimple
$ vi src/useSimple/useSimple.go

Left You can visit for the list of standard Go packages

Go Documentation Go offers a very handy documentation system, supported by the godoc command-line utility that can assist you in finding information about available functions. In order to learn more about fmt.Println(), you should type the following command:

$ godoc fmt Println

The Go code of the simple package, saved as simple.go, is the following:

package simple import "fmt" const Version = "1.2" func Main() { fmt.Printf("This is the main function of the

Figure 6 (page 52) shows the output of the previous command as well as a small part from the output of the next command, which prints information about the entire fmt package:

$ godoc fmt



Go functions

Right Go needs and uses various environment variables that you should be aware of. The best place for help is the output of the ‘go help environment’ command

Figure 2

$ go build simple
$ go install simple
$ ls -l pkg/
total 4
drwxr-xr-x 2 mtsouk mtsouk 4096 Jun 3 00:18 linux_amd64
$ ls -l pkg/linux_amd64/
total 4
-rw-r--r-- 1 mtsouk mtsouk 3756 Jun 3 00:18 simple.a

The go tool automatically generates the pkg directory that contains the object file of the package you have just compiled. Now, you are ready to use the simple package. In order to use the aforementioned package from a Go program that resides in the GOPATH directory, you will need to write the following Go code:

import "simple" If the go tool cannot find the simple package for some reason, it will generate the following error message:

single package!") } func Foo(n int) { for i := 0; i < n; i++ { fmt.Printf("This is the foo function!") } } func Bar(x, y int) int { return x + y } A package can use and depend on other existing packages just like simple depends on fmt. Now let us see what the simple package offers. First, it offers three functions, named Main, Foo and Bar. Second, it defines one constant variable named Version. After finishing writing simple.go, you should execute the following commands:

$ go run ./src/useSimple/useSimple.go
useSimple.go:5:2: cannot find package "simple" in any of:
    /usr/lib/go/src/pkg/simple (from $GOROOT)
    /home/mtsouk/docs/article/working/goPack.LUD168/code/src/simple (from $GOPATH)

The full source code of useSimple.go is:

package main import ( "fmt" "simple" ) func main() { fmt.Printf("This is the main function of the main package!\n") simple.Main() simple.Foo(2) fmt.Printf("Calling simple.Bar: %d\n", simple. Bar(2, 3)) }

Figure 3

Right This is the Go code of githubExample.go, which illustrates how you can use an external package in your Go programs


Compiling the executable of useSimple.go inside the ./bin directory requires the following step:

Figure 4

$ go install useSimple
$ ls -l bin/useSimple
-rwxr-xr-x 1 mtsouk mtsouk 1824408 Jun 3 00:38 bin/useSimple

Left This figure illustrates the use of the ‘go get’ command that allows you to download and use an external Go package locally

Executing useSimple produces the next output:

$ useSimple
This is the main function of the main package!
This is the main function of the single package!
This is the foo function!
This is the foo function!
Calling simple.Bar: 5

If for some reason you import a package but forget to use it, the Go compiler will generate an error message and refuse to finish the compilation process:

$ go build useSimple
# useSimple
src/useSimple/useSimple.go:5: imported and not used: "simple"

Should you wish to ignore such a message, you can use the import statement as follows:

import _ "simple" Using the ‘_’ as the package identifier will make the compiler ignore the fact that the package is not being used – the only sensible reason for bypassing the compiler is when you have an init function in your unused package that you want to be executed. Keep reading to learn more about the purpose of the init function. Please make sure that you fully understand Go environment variables and the build process before continuing with the rest of the tutorial.

The init function

Every Go package can have a function named init that is automatically executed at the beginning of the execution time. The following code illustrates the use of the init function:

Various uses of the go tool The go tool can do many more things related to packages. The single most useful is the go clean command, which allows you to remove object files from package source directories. You will have to use the -i flag in order to specify which program or package you want to clean up. If you are not sure about the effects of go clean, you can use it with the -n flag that tells it to print the remove commands it will execute, without actually executing them. Finally, the -x flag tells go clean to print remove commands as it executes them. Figure 7 (overleaf) shows go clean in action. Cleaning up a package that you downloaded from the internet requires the use of its full path, as in the next example:

$ go clean -x -i

The go clean command is particularly useful when you want to transfer your project to other machines.
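For a local package such as the simple package developed in this tutorial, the short import path is enough, as the two examples below show:

$ go clean -n -i simple
$ go clean -x -i simple

The first line only prints the remove commands without running them, while the second actually deletes the installed simple.a and shows each step as it goes.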

package initFunction

import "fmt"

func init() {
    fmt.Printf("init function executed!\n")
}

func Foo() {
    fmt.Printf("Foo function executed!\n")
}

The related Go code is saved as initFunction.go; therefore,

package initFunction import "fmt" func main() { fmt.Printf("init function executed!\n") } func Foo() { fmt.Printf("Foo function executed!\n") } The related Go code is saved as initFunction.go; therefore,

import ( "fmt" _ "initFunction" ) func main() { fmt.Printf("Hello!\n") } Building and executing useInit.go requires the following steps:

$ mkdir src/initFunction



Go functions

Right The multiple.go file uses the simple.go package, which is in another directory, with the help of the GOPATH variable

Figure 5

Figure 6

Across The godoc utility can give you help about the use of the functions of a standard Go package

$ vi src/initFunction/initFunction.go
$ go build initFunction
$ go install initFunction
$ mkdir src/useInit
$ vi src/useInit/useInit.go
$ go install useInit

Executing the previous code will generate the following output, where you can see that the init function is executed without any explicit call to it:

$ useInit
init function executed!
Hello!

However, the init function is not for printing silly messages; it is mainly used for performing serious initialisation tasks such as connecting to a database server.

Using an external Go package

Sometimes packages are on the internet and you would prefer to use them by specifying their internet address. Imagine that there is a Go package named simpleGitHub stored somewhere at that contains a useful

Advantages of packages Packages make the design, implementation and maintenance of large software systems much easier and simpler. Additionally, they allow multiple programmers to work on the same project without any overlapping. A good package implemented for project A can be reused in a different project without any additional work. The good thing with reusing an existing package is that the package has been already tested; as a result, an existing package contains fewer bugs and errors than a newly developed one. Also, packages allow you to use the same function and variable names with other packages, because each package has its own scope. Put simply, the development and the use of packages can only bring advantages to your software systems without introducing any risks. Therefore, if you are developing large software systems, you should definitely divide their functionality into packages that you can also reuse in other projects.


function. The exact location of the package will be You can use it in your own Go programs as follows:

import "" After that, you will need to download the entire simpleGitHub package on your local machine, with the help of go get:

$ go get

The handy thing is that as soon as you execute the go install githubExample.go command, the go tool will automatically compile all required packages, which you can verify by looking at the contents of both pkg and src directories. When using go get to check out a new package, a new target directory is created inside src, named after the full import path. You can find the full code as githubExample.go – you can see its source code in Figure 3 (preceding page). Figure 4 shows the contents of the src and pkg directories before and after executing go install. Please note that the name of the package, which is simpleGitHub, is used in the code of githubExample.go, not its full path. It is not compulsory to use every external package you download – you can just go get a Go program and examine its code without actually using it.

Private variables and functions

This section will talk about how to define and use private variables and functions. What differentiates private variables and functions from public ones is that private ones can only be used internally. Controlling which functions and variables are public or not is also known as encapsulation.

Figure 7

The Go rule is that functions, variables, types etc that begin with an upper-case letter are public, whereas functions, variables, types etc that begin with a lower-case letter are private – this is a simple rule that illustrates the simplicity that governs Go. This rule does not affect package names. So, you should understand now why the function for printing is called fmt.Printf() instead of fmt.printf(). Please have in mind that the same rule applies to the various fields of a structure – lower-case fields are considered private; therefore they can only be used by the other members of the package. So, if you change the name of the Foo function of the simple package to foo and do the same in useSimple.go, then the go tool will generate the following error message:

$ go build useSimple
# useSimple
src/useSimple/useSimple.go:11: cannot refer to unexported name simple.foo
src/useSimple/useSimple.go:11: undefined: simple.foo

It is now easy to understand that you cannot directly call the init() function of a package because it is private!

Making a package available to all users

So far, you have seen how to make a package available to a single user. This section will show you how to install a package in a place where everyone on your Linux system can see and use it. The solution is simple and includes the use of the GOPATH environment variable. As long as you can find the path of a package in GOPATH, you’ll be able to use that package systemwide! The following commands show you a simple example using simple.go:

$ echo $GOPATH
/home/mtsouk/myGo:/home/mtsouk/docs/article/working/goPack.LUD168/code
$ go run src/multiple/multiple.go
cwd: /home/mtsouk/myGo
Calling simple.Bar: 8

If you use a different GOPATH then the go tool will not be able to find the desired package for you, which means that it will generate a familiar error message:

Figure 8

Across The ‘go clean’ command lets you clean up your Go projects by removing unnecessary executable and object files Left As Go is an open source programming language, you can find the Go code of all Go standard packages on the internet

Where are standard Go packages stored? As Go is an open source programming language, you can find the source code of all standard Go packages on the internet. It is considered good practice to read good code, and the code of standard Go packages is no exception. Even if you are not considering writing something similar, you will get many ideas from reading the code of a standard Go package. You can find the Go code online; Figure 8 shows a small part from the Go code of the fmt package. Additionally, you can find the code of the standard packages at /usr/lib/go/src/pkg.

$ echo $GOPATH
/home/mtsouk/myGo
$ go build multiple
src/multiple/multiple.go:5:2: cannot find package "simple" in any of:
    /usr/lib/go/src/pkg/simple (from $GOROOT)
    /home/mtsouk/myGo/src/simple (from $GOPATH)

The contents of the multiple.go file can be seen in Figure 5. The order in which you put the various paths in GOPATH is significant because only the first match will be used. Therefore, if you have multiple Go packages with the same name, which is not recommended, only the first match of the package will be used. Please bear in mind that this can be the root of nasty bugs! So, this way you can let the users of a Linux machine use the same Go packages while each developer is allowed to develop their own packages.
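One way to make such a shared workspace visible to every account is to export GOPATH from a file in /etc/profile.d, which login shells read automatically; the paths below are just an example layout:

$ sudo mkdir -p /usr/local/go-workspace
$ echo 'export GOPATH=/usr/local/go-workspace:$HOME/go' | sudo tee /etc/profile.d/gopath.sh

Because only the first match in GOPATH is used, put the shared directory first if its packages should win over a developer’s personal copies.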





Raspberry Pi

We are going to build your explorer robot from scratch, running through each step needed including soldering and placing all the pieces

Contents

Gesture-based TV remote control


Raspberry Pi plus Arduinos


Hack a toy with Raspberry Pi


Build packages with distcc


Build an Explorer robot: Part two



10 best boards for makers

10 BEST BOARDS FOR MAKERS
Check out the microcomputers that are giving the Raspberry Pi a run for its money

What can we say about the Raspberry Pi that hasn’t already been said? Well, not a lot. The fact that over 7 million units of the microcomputer have been sold worldwide is testament to just how amazing it really is. But with every new update, add-on and expansion that the Pi shows off, a whole new crop of competitors rear their heads. While many of them don’t really stack up against the power of the Pi, a few are certainly


catching the eye. The main reason why many of us will look for an alternative to the Raspberry Pi is to get something that offers a unique set of features. This new wave of microcomputers tends to ship with faster processors, a better selection of connectors and better GPUs. So if your next project could do with a bit of a boost, then one of these alternatives to the Raspberry Pi might just be for you.



SPECIFICATIONS Processor: 1GHz Allwinner A13 ARM chip Graphics: Mali400 GPU with OpenGLES 2.0 and OpenVG 1.1 RAM: 512MB DDR3 Storage: 4GB NAND flash storage Connectivity: Micro-USB, USB 2.0, Bluetooth, Wi-Fi, composite video, mic in, audio out, HDMI and VGA out via adaptor

Just how much does $9 get you today? Well in this case it gets you a surprisingly capable microcomputer. After a successful Kickstarter campaign in which just over $2 million was raised, CHIP includes all the basic computing functions that you could need. As well as core Wi-Fi and Bluetooth functionality, there’s built-in composite support and a cost-friendly

adaptor is available for VGA connectivity. It also provides a one-stop shop for those with little coding knowledge, with each unit preloaded with Scratch – a program that teaches basic programming via games and animations. But if you’re looking for something far more advanced, then CHIP still has the power to keep up. With a 1GHz processor and 512MB of RAM, there’s

enough scope to run full software and handle the demands of a full GUI. But of course, there’s plenty of room to expand with CHIP, as the full design schematics are free to download and use, meaning creating your own variant of CHIP is very much a possibility. The only limit with CHIP is your imagination, but those projects you’ve dreamed of creating are all plausible with this $9 computer.



10 best boards for makers

PANDA BOARD ES £150/$182

BBC: MICROBIT £13/$17 •

Out of all the boards featured here, the PandaBoard ES is one of the few that’s tailored towards developers, rather than end users. Its OMAP 4460 processor is perfect for devs wanting to develop for mobile devices, with the added benefit of OpenGL 2.0 graphics for further tinkering. The scale of options is staggering, with support for 1080p video, HDMI output and a full-sized SD card slot being just a glimpse of what this microcomputer can do.



BEST FOR developers

After delays with the production of the Micro:bit, the BBC’s pocket-sized microcomputer was finally made available in 2016. Although it was initially handed out to every Year 7 schoolchild in the UK as an educational tool, it’s now one of the more sought-after computers out there. While its design may look a little different from many of the boards featured here, it’s similar in functionality. There’s built-in Bluetooth, five sensor ports and power comes from two AAA batteries. One of the key reasons behind the Micro:bit’s popularity is its web-based editing environment, which allows for budding programmers to seamlessly switch between different programming languages – something that many leading boards lack. The 32-bit ARM processor does perhaps lack the firepower that the Pi offers, but in terms of programmable options built into the board, it goes above and beyond, even including support for hooking up the Micro:bit


to an existing Pi or Arduino board. If you can manage to get your hands on one, the Micro:bit is a more than capable all-in-one board built for novices and experts alike.

UDOO NEO £38/$49.90 •


large-scale projects CONNECTIVITY

While the UDOO NEO may look like a fairly standard microcomputer, it is in fact one of the few that really looks to do things a bit differently. Its primary unique feature is its heterogeneous processor, which combines the power of a 1GHz Cortex A9 processor with the added power of a Cortex M4 processor as well. For end users, it provides a formidable CPU for your projects, with enough power for large-scale projects to be completed with absolute ease. Based on our time with the board, it’s by far the best choice for experimenting with sensors, thanks to the integrated tracking system, three-axis accelerometer, magnetometer and digital gyroscope.

One USB 2.0 Type A port and one USB OTG (micro-AB connector), plus fast Ethernet RJ45 – 10/100Mbps, Wi-Fi 802.11 b/g/n, Direct Mode SmartConfig and Bluetooth 4.0 Low Energy


32 extended GPIOs (A9 dedicated); 22 Arduino GPIOs (M4 dedicated). Arduino-compatible through the standard Arduino pins layout; compatible with Arduino shields


Freescale™ i.MX 6SoloX applications processor with an embedded ARM Cortex-A9 core and a Cortex-M4 core

BEST FOR quick deployment

SOLIDPC Q4 £108/$139.99 •

One of the new microcomputers to hit the market, the SolidPC Q4 can boast that it’s one of the first microcomputers sporting an Intel Braswell processor. For end users, it provides all the tools for hooking up a monitor and keyboard, while developers

can experiment with the scalability of the Braswell chip. Other features include a microSD card interface, dual Ethernet ports and DisplayPort support for two displays. Stock of the SolidPC Q4 is relatively low, so to get your hands on one you’ll need to be quick.



10 best boards for makers

MINNOWBOARD MAX £108/$139.95 •


Intel’s MinnowBoard MAX has really gone all out to show off its open source credentials. The board’s complete schematics are available for download and Intel’s on-board graphics chipset is open for hackers to really push what the unit can do. With this in mind, it’s one of the few boards that uses breakout boards, called Lures, to help expand on its functionality. It uses Intel’s Atom processor to keep things running smoothly, so you’ll have no issues hooking it up to most modern Linux distributions. This is the second generation of Intel’s board, which upgrades the original to a 64-bit Intel Atom E3800 (Bay Trail-I) processor while shrinking the board’s footprint by almost half. The original MinnowBoard was somewhat bulky; this smaller form factor makes the MAX more suitable for a wide range of projects. Be aware, though, that updates to the Linux kernel have meant changes in the GPIO base address – you need to check the kernel version you’re using to ensure you assign the pins correctly.
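A quick way to check both things on a running system is to ask the kernel for its version and read the GPIO base numbers from sysfs; this sketch assumes the legacy sysfs GPIO interface is enabled in your kernel:

$ uname -r
$ cat /sys/class/gpio/gpiochip*/base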


breakout boards

Above The MinnowBoard MAX features plenty of inputs and outputs, including a GPIO header


Above The underside of the board is populated by numerous chips, along with a High Speed Expansion header

OMEGA £15/$19.99 •

Despite being a quarter of the size of the Raspberry Pi, the Omega board trumps it when it comes to developing for the Internet of Things. With support from its maker’s own cloud system, Onion Cloud, it’s simple to connect the board to the web and create IoT applications. Of course, the fact it’s open source enables you to explore other users’ projects as well. While the Atheros AR9331 processor isn’t a household name, it provides enough power to run a full Linux operating system, while also supporting a high number of key programming languages, such as Python, PHP and Ruby.


Internet of Things

JAGUARBOARD £61/$79 • The biggest difference between the Pi and the JaguarBoard is that the latter is based on x86 architecture, allowing for generally better performance. For those accustomed to x86 architecture, it also provides users with a single board with a high level of

BEST FOR x86 architecture

scalability that doesn’t require knowledge of embedded systems such as ARM. The boost in power is made more apparent with Intel’s Atom Z3735 processor, capable of running most desktop programs with ease and supporting many popular Linux distributions.



10 best boards for makers

ODROID – C2 £31/$40 • It’s all about raw power with the ODROID-C2, which is why it already has a thriving community who have managed to create some pretty staggering projects. The ARM Cortex-A53 processor, clocked at 2GHz, is partnered with 2GB of RAM, which is comparable to many smaller laptops. It also provides a few added

extras, which many boards simply don’t offer. The IR receiver and ADC capabilities are ideal for experimenting with new projects, while the 40pin GPIO header enables seamless connection between the board and other physical devices. It also bills itself as the most effective 64-bit development board out there. The clue’s in the

terminology – it can’t compete with the $9 price tag of the CHIP, but then the CHIP has half the processing power. However, the ODROID-C2’s horsepower is all on the performance side – to install a distro that isn’t the out-of-the-box Ubuntu 16.04 or Android 5.1 Lollipop, you’ll need a microSD card or eMMC module.


raw power

BEAGLEBONE BLACK £37/$55 • Simplicity is key when it comes to the BeagleBone Black, and everything from the initial setup process to attaching external devices is all made as pain-free as possible. It boasts that you can go from zero to Linux in under ten seconds and be developing on the board within five minutes. Like the MinnowBoard MAX, the BeagleBone Black is a successor to the previous model, and it has eschewed some features of the original while adding new ones – notably


HDMI video out, double the RAM and an improved processor that takes it over the all-important 1GHz bar. It’s also one of the better choices when it comes to project creation, with a total of 92 connection points on hand for your projects, which will appeal to both new and seasoned users. But if you’re stuck for ideas, there’s a thriving development community behind BeagleBone, which lists a wide number of beginner projects you can attempt.






Connectivity at a glance
• CHIP – MicroUSB, USB 2.0, Bluetooth, Wi-Fi, composite video, mic in, audio out
• PandaBoard ES – Ethernet, Wi-Fi, Bluetooth 2.1, audio in/out, HDMI, DVI-D, USB 2.0
• Micro:bit – Bluetooth LE, MicroUSB
• UDOO Neo – MicroHDMI, LVDS, NTSC, PAL, USB 2.0, MicroUSB, audio out
• SolidPC Q4 – HDMI, USB 3.0, Ethernet, Wi-Fi, Bluetooth 4.0, DisplayPort
• MinnowBoard MAX – HDMI, USB 3.0, USB 2.0, SATA2, Ethernet, digital audio out
• Omega – USB 2.0, Ethernet, Wi-Fi
• JaguarBoard – USB 2.0, Ethernet, HDMI, audio out, mic in
• ODROID-C2 – USB 2.0, Ethernet, HDMI, IR receiver
• BeagleBone Black – USB 2.0, HDMI, Ethernet, McASP0




VERDICT All of these boards have clear strengths that make each of them ideal for different kinds of maker projects. For getting kids involved in making and coding, the Micro:bit is perfect (especially considering that they’re likely to get their hands on one at school). For those moving their projects up a gear from the Raspberry Pi and the Micro:bit,

the BeagleBone Black is simple to use but packed with GPIO pins, so it can power all kinds of projects. If it’s sheer power you’re looking for, though, consider the ODROID-C2 your best bet. The JaguarBoard, UDOO Neo, SolidPC Q4, MinnowBoard MAX and PandaBoard ES are all more specialist units, ideal for the development, prototyping and

deployment of advanced projects – they’re great for those who have moved beyond the capabilities of the Raspberry Pi. But the big story here has to be the CHIP: it has great specs, great connectivity and an unbeatable price. It’s the best board for makers we’ve come across so far – but can it challenge the ever-popular Pi for market dominance?


Adding gestures

Building the circuit

Adding gestures is done through the Skywriter HAT add-on board. The device fits on top of the Raspberry Pi via the 40-pin GPIO header, with the software then being installed with one command

The crux of the circuit built into the remote control consists of both a transmitter and receiver. The transmitter is built from an IR LED, which is controlled via a transistor and a GPIO pin

Crafting the enclosure

Preparing the Pi

Two wooden panels make up the enclosure for the gesture controller. Slots have been carved into various sections of the unit for power, USB and the IR LEDs. The two pieces are then joined with four small screws, before being finished with a thorough sanding

Components list ■ Raspberry Pi A+ ■ Skywriter HAT ■ IR LED ■ Oscilloscope ■ Wooden enclosure ■ Transmitter ■ Receiver


A Raspberry Pi A+ model was used by Frederick, as it’s largely the same size as the Skywriter HAT. The Pi was then given a Raspbian Jessie image flashed onto an 8GB microSD card using ‘dd’
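Flashing an image with dd follows the usual pattern; the image filename and the /dev/sdX device below are placeholders, so double-check the device name first, because dd will overwrite whatever it is pointed at:

$ sudo dd if=raspbian-jessie.img of=/dev/sdX bs=4M
$ sync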

Right Before soldering the circuit on to the board, Frederick made a breadboard prototype and used an oscilloscope to verify the basic functionality Below Two carved wooden panels were used to build the enclosure of the remote, with four screws in place to help keep it together

My Pi project

Gesture-based TV remote control Make controlling your TV even easier with the help of Frederick’s amazing gesture-controlled remote Where did the idea for the remote control stem from? When I come across interesting parts online, I tend to buy them, even if I don’t have a specific project in mind yet. That was the case with Pimoroni’s Skywriter HAT. I knew I’d be using it in a project at some point in time, but didn’t know what at the time. Later on, my children were asking to see a certain show. My daughter, five years old, started swiping on the tablet looking for the show. It was a TV show, however, and wasn’t available on the tablet. Using the remotes was too difficult for her: you have to turn on the TV, then the decoder using a separate remote, change the channel and the volume. That’s when I had the idea of simplifying the remote into something more intuitive like the swiping gestures from the tablet. As it’s a first prototype, I kept the number of commands to a minimum: on/off, change channel and change volume.

incorporating the LIRC functionality in one of the Skywriter HAT example scripts. Once the script was finished and tested, I ensured it would be started at boot time. Finally, with the hardware and software sorted, I moved on to the creation of a wooden enclosure. I chose wood because it is not conductive and does not affect the gesture detection of the Skywriter HAT. The enclosure consists of two parts, with the components sandwiched in between.

How long have you been working on the project for? Could you give us an insight into the development process? The project really didn’t take that long once I knew what I wanted to achieve. I believe the project took about ten hours to prototype, build and program, spread across a few evenings. The first thing I typically do is break down the project into smaller, easier to manage ‘problems’. In this case, I wanted to be able to detect gestures and receive/ transmit IR signals. The second step is to find a solution for each problem identified in the previous step. Detecting gestures can be done with the Skywriter HAT, using the accompanying Python library. Receiving and sending IR signals can be done using software called LIRC and a simple circuit with IR LED and receiver. Once both worked individually, they had to be combined, so that a gesture would trigger the corresponding IR signals. This was achieved by editing and

Did you face any problems developing the remote? Problems typically arise when combining different features to have them interact with each other, even though they function properly individually. In this project, nothing has to function simultaneously, but rather sequentially, thus avoiding this type of problem. When a gesture is detected, then the IR signals are triggered. The most difficult part of the project was probably fitting everything into as small a package as possible. Using a piece of prototyping board with the IR components and soldering it directly to the underside of the Pi saved a lot of space compared to using stacking headers, for example.

from the TV’s remote using LIRC and it processes the gestures received by the Skywriter HAT. In the current application, the Pi can seem like overkill, especially for sending IR signals. The idea, however, is to extend the functionality of this project and give the user the ability to record the IR signals and map them to gestures via some kind of web-based wizard. This is to avoid forcing users to have to use the command-line interface initially to record

Wood does not affect the gesture detection of the Skywriter HAT

Some holes have been made to expose the IR LED and receiver to allow the IR signals to pass through. The device is currently powered externally, as this would have made the project more complex. It is something I would like to tackle in a future version though, by having a rechargeable battery incorporated in the enclosure.

What sort of role does the Raspberry Pi play in this project? As can be expected, the Raspberry Pi is the brain of the project. It was used to record and store the IR signals

the signals. The web server can be added to the Pi, without having to modify the hardware or enclosure. Was it difficult to implement IR signals into the remote? Not really! The fact that LIRC allows the recording of signals from the original remote(s) facilitates the entire process greatly. Once all the signals have been recorded, it is possible to send them individually or even have multiple signals combined. This is particularly useful when needing to control multiple devices at once with the same trigger. For example, to watch TV, both the TV and decoder need to be turned on. Changing the channels is done by controlling the decoder; changing the volume is done via the TV. Do you have any advice for anyone interested in recreating your project? The project is not that complex and is a fun introduction to control via gestures. Try to make it your own by giving it a twist. You don’t necessarily have to control a TV, as the proposed setup with IR transmitter could control any IR-enabled device. Or you could even swap the IR transmitter for another type of trigger and control something entirely different. The project shows one example of what can be achieved, but could be used for many different applications.
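Frederick's own script isn't reproduced here, but the general approach he describes – a Skywriter gesture handler that fires LIRC commands – can be sketched roughly as below. Treat it as an illustrative outline only: the remote name and key names are hypothetical, and it assumes Pimoroni's skywriter Python library plus a working LIRC setup with irsend on the path.

```python
import signal
import subprocess

import skywriter  # Pimoroni's Skywriter HAT library

REMOTE = 'living_room_tv'  # hypothetical remote name recorded with LIRC


def send_ir(key):
    # Ask the LIRC daemon to transmit the named key once
    subprocess.call(['irsend', 'SEND_ONCE', REMOTE, key])


@skywriter.flick()
def on_flick(start, finish):
    # Map swipe directions to IR commands
    if start == 'south' and finish == 'north':
        send_ir('KEY_VOLUMEUP')
    elif start == 'north' and finish == 'south':
        send_ir('KEY_VOLUMEDOWN')
    elif start == 'west' and finish == 'east':
        send_ir('KEY_CHANNELUP')
    elif start == 'east' and finish == 'west':
        send_ir('KEY_CHANNELDOWN')


signal.pause()  # keep the script alive, waiting for gestures
```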

Frederick Vandenbosch

is an integration engineer by day and builder of electronic things by night. He likes to build things as a way of learning about a subject and have fun while doing so.

Like it?

While nowhere near as cool as Frederick's project here, the Pi Hut has its own Raspberry Pi-based controller worth checking out. It works as the perfect companion for your FLIRC module, or perhaps even for improved XBMC/Kodi control.

Further reading

For those interested in recreating this gesture TV remote control, Frederick has uploaded a full tutorial on his website: frederickvanden On his site, you’ll also be able to take a look at his other Pi projects, including an IoT light, Amazon Echo controller and a panic button.


Python column

Raspberry Pi plus Arduinos This issue, we will look at how to get your Raspberry Pi to talk to and use your Arduino

Joey Bernard

Joey Bernard is a true Renaissance man, splitting his time between building furniture, helping researchers with scientific computing problems and writing Android apps

Why Python? It’s the official language of the Raspberry Pi. Read the docs at

The Raspberry Pi is a truly amazing single-board computer that gets used in lots of DIY projects. That has been the basis for this whole column and the previous several articles. While the Raspberry Pi has a GPIO and can communicate with sensors and actuators, you may have cases where you want to use your Raspberry Pi as the brains of your project and offload the interactions with the physical world to another system. This is usually handled by one of the many microcontroller boards that are available. In this issue, we will specifically use the Arduino board and see how to connect it to a Raspberry Pi and how to handle the communications between the two. As always, we will be using Python as the language of choice to handle all of the actual coding in the examples here. The Arduino is an open source prototyping platform defined as a specification. This means that you can get Arduino implementations from several different manufacturers, but they should all behave in a similar fashion. For this article, the assumption will be that whatever implementation you wish to use will behave properly.

The first step is to connect the two boards together. You will probably want to use a powered USB hub to connect them, since the Raspberry Pi can't provide huge amounts of power through its USB port. While they are connected over USB, the Arduino will appear as a serial port to the Raspberry Pi. This means that you can communicate with the Arduino directly over the serial connection. To be sure you have all of the relevant libraries installed, you can simply install the Arduino IDE with the command:

sudo apt-get install arduino

This will make sure that you are starting with all of the core software that you might need. When you plug in your Arduino, you need to know over which port communications will happen. The specific port name will vary based on the exact version of Raspberry Pi and Arduino that you are using. However, it should be something like '/dev/ttyUSB0' or '/dev/ttyACM0'. In the example code below, we will be assuming that the Arduino is visible on the port '/dev/ttyUSB0'. Once you have the two devices connected, you can start writing code to have them talk to each other. We will start with the most low-level protocols and build upwards from there. The first step is to open a serial connection to the Arduino. In order to do this, you will need to make sure that the Python serial module is installed. If you are using Raspbian, you can do this with:

sudo apt-get install python-serial

You then need to create a new Serial object connected to a given serial port, along with the speed you need to use.

import serial
ser = serial.Serial('/dev/ttyUSB0', 9600)


In the above example, the speed is 9600 baud (bits/sec). With this Serial object, you can read and write data to and from the Arduino. But you need code on the Arduino to handle its part of this communication. The Arduino has its own programming language, based on C, which you use to write the code that will run on the board. The way Arduinos work is that at bootup it will load a program that will run as long as it is powered up. As a simple example, the following code will listen on an input pin to see if it goes high. If so, it will then fire off a message on the serial port.

int pirPin = 7;

void setup() {
  pinMode(pirPin, INPUT);
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(pirPin) == HIGH) {
    Serial.println("High");
  }
  delay(500);
}

To load this program to your Arduino board, you will need to use the Arduino IDE that was installed at the beginning of this article. This is a graphical program, so you will need to connect your Raspberry Pi to a monitor if you want to do this step using it. Otherwise, you can do this programming of your Arduino using your regular desktop. If you are using the standard bootloader on most Arduinos, it will start up whatever program was last uploaded to it. This way you can use your desktop to upload your code and then connect it to your Raspberry Pi later on. Moving back to the Raspberry Pi, how can you read this message from the Arduino? You can simply do a read from the Serial object that you created earlier.

import time

while True:
    message = ser.readline()
    print(message)
    if message[0] == 'H':
        do_something_useful()
    time.sleep(.5)

As you can see, we imported the time module in order to be able to sleep in the loop between attempts to read from the serial port. What about sending instructions out to the Arduino? This also requires Arduino code to be uploaded ahead of time. For example, the following code will take an input number and flash an LED that number of times…

int ledPin = 13;

void setup() {
  pinMode(ledPin, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available()) {
    flash(Serial.read() - '0');
  }
  delay(1000);
}

void flash(int n) {
  for (int i = 0; i < n; i++) {
    digitalWrite(ledPin, HIGH);
    delay(100);
    digitalWrite(ledPin, LOW);
    delay(100);
  }
}

Then, you can send a count from your Python code with something like:

ser.write('5')

This will flash the LED connected to pin 13 on your Arduino five times. One missing element on the Raspberry Pi is an analogue-to-digital converter (ADC) to take a given voltage and turn it into a number that can be used in some piece of control software. This is where attaching an Arduino can be extremely helpful, as it has a 10-bit ADC converter included. The following code will read the voltage on pin 3 and then send it out over the serial connection to the Raspberry Pi.

int analogPin = 3;
int val = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  val = analogRead(analogPin);
  Serial.println(val);
}

This maps the measured voltage to an integer between 0 and 1,023. The minimum voltage is zero, while the maximum voltage can be set with the analogReference() function. By default, the maximum is the power supplied to the board (5 volts for 5V boards, or 3.3 volts for 3.3V boards). You can also use two internally supplied reference voltages, one at 1.1 volts and a second at 2.56 volts. For special cases, you can supply an external reference voltage to the AREF pin. You need to be sure that it is only between 0 volts and 5 volts. Going in the opposite direction, you can use the Arduino to supply an output voltage. This is done by actually using a PWM (pulse width modulation) output signal. The idea here is to send out a number of pulses with some duty cycle that is on for some percentage of the time and off for the remainder of the time. For example, if you have an LED connected to one of the pins, you can light it at half brightness with the following code.

int ledPin = 9;

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  analogWrite(ledPin, 127);
}

The value passed to the analogWrite() function is between 0 and 255, which defines the duty cycle from 0% (fully off) to 100% (fully on). This analogue output signal stays on at the given duty cycle until a new call to the analogWrite() function. By having your Raspberry Pi write out values over the serial connection, it can then control the output duty cycle by sending a simple integer. Hopefully, this short article will spark some ideas on how you can start combining multiple computing platforms to expand the capabilities of your own projects. There is no reason to try to find the one silver-bullet platform for your project when you can pick the sub-modules that actually do their own individual jobs best and build up the complex behaviour you need from these simpler parts.

PyFirmata can help even more

While you can write your own code to run on the Arduino, there are several projects that can be uploaded to it to make interacting a bit easier. One of these is the Firmata project, which includes a Python module to help you talk to the Arduino. The first step will be downloading the Firmata Arduino code and uploading it to your Arduino, most easily done with a desktop computer. The code is available at There are a few different versions available, but for these examples you should upload the StandardFirmata sketch with the Arduino IDE. There are client libraries available for many different programming languages, including several for Python. The one we will look at using is pyFirmata. You can install it on your Raspberry Pi with:

sudo pip install pyFirmata

You can now use Firmata to act as a sort of remote control to the Arduino port, where your Python code can get almost direct access to all of the functionality available. To get started, import the pyFirmata module and create a new Arduino object connected to the relevant serial port:

import pyfirmata
board = pyfirmata.Arduino('/dev/ttyUSB0')

You can now access digital I/O pins directly. For example, the following code would write a 1 to pin 10.

board.digital[10].write(1)

When you want to read from a pin, you have the possibility of overflowing input buffers. To deal with this issue, you can create an iterator object and start it before doing any reads, using code like that below.

it = pyfirmata.util.Iterator(board)
it.start()

You can now get selected pins for either input or output. The following code will get pin 4 for digital input and pin 12 for PWM output.

pin4 = board.get_pin('d:4:i')
pin12 = board.get_pin('d:12:p')

You can then read and write with these new pin objects with the related methods:

val = pin4.read()
pin12.write(0.5)

When you are done, don't forget to reset any output pins back to 0 volts, and then you can close down the connection with:

board.exit()
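Putting the sidebar's pieces together, a minimal end-to-end sketch might look like the following. It assumes the StandardFirmata sketch is already running on the Arduino and that the pin numbers match the examples above, so treat it as an illustration rather than a finished program.

```python
import time

import pyfirmata

# Connect to the Arduino running StandardFirmata
board = pyfirmata.Arduino('/dev/ttyUSB0')

# Start an iterator thread so the input buffer doesn't overflow
it = pyfirmata.util.Iterator(board)
it.start()

pin4 = board.get_pin('d:4:i')    # digital input
pin4.enable_reporting()
pin12 = board.get_pin('d:12:p')  # PWM output

try:
    for _ in range(10):
        val = pin4.read()        # True, False, or None before the first report arrives
        print('Pin 4:', val)
        pin12.write(0.5)         # 50 per cent duty cycle
        time.sleep(1)
finally:
    pin12.write(0)               # leave the output at 0 volts
    board.exit()                 # close the serial connection
```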




Hack a toy with the Raspberry Pi : Part 1

Learn how to master four simple hacks and embed them into a toy

Dan Aldred

Dan is a Raspberry Pi Certified Educator and a lead school teacher for CAS. He is passionate about creating projects and uses projects like this to engage the students that he teaches. He led the winning team of the Astro Pi Secondary School Contest and his students’ code is currently being run aboard the ISS. Recently he appeared in the DfE’s ‘inspiring teacher’ TV advert.

Combining an old toy and a Raspberry Pi, you can embed a selection of components to create your own augmented toy that responds to user input. For example, take a £3 R2-D2 sweet dispenser, light it up, play music and stream a live web feed to your mobile device. Part one of this two-part tutorial covers setting up four basic features: an LED for the eye, a haptic motor to simulate an electric shock, a webcam stream from a hidden camera and the Star Wars theme tune broadcast to a radio. You may choose to combine these with your own hacks or use them as standalone features in other projects. Don’t feel limited to just using a Star Wars toy either – we used the R2D2 figure because it was cheap, available and popular, but there’s no limit to the toys that you can experiment with. Action figures (provided that they can be disassembled and/ or have a stand or cavity that can be used to hold the electronics) and plushy or cuddly toys both lend themselves well to this kind of maker project, especially if they accompany a movie or TV show that has recognisable music or sound effects that you can make them broadcast. Part two next issue covers how to set up the toy and set up triggers to start features.


Set up an LED

LEDs are really easy to set up and control. They make a nice addition to your toy and can be used as eyes, flashing buttons or, in this example, R2-D2’s radar eye. Take your LED and hold the

What you’ll need ■ An old/new toy ■ Resistors ■ Small disc motor ■ LED ■ A radio ■ Small webcam ■ Female-to-female jumper jerky wire

R2D2 is © LucasFilm


longest leg; this is the positive arm and connects to the input. Wrap the resistor around the leg and attach to a female jumper jerky wire. Now take the negative arm and attach this to another jumper wire.


Attach the LED


Take the positive wire, the one with the resistor, to GPIO 17, which is physical pin number 11 (look at the far-left top pin and count down six pins). This pin will provide the current that powers the LED. Connect the other wire to any of the ground pins (GND) – 6, 9, 14, 20, 39; you may need to move this around later as you add more wires for the other components.

Light up the LED

To turn the LED on and off, use the gpiozero library, which was developed to simplify the interaction between code and a physical computer. Open the LX Terminal and type

sudo apt-get install python3-gpiozero

…to install the library (remove the '3' if you want to use it with Python 2). Once installation is completed, open a new Python window and import the LED module (line 2 of the following code). Assign the pin number of the LED (line 3) and finally turn it on and off (lines 6 and 8). Save your code and run it to make the LED flash. Change the timings to suit your own project.

R2-D2 in action

Check out the video of the completed R2-D2 toy hack and see the features in action. This may give you some ideas for your own toy hack. VnOsUaS5jSY

import time
from gpiozero import LED

led = LED(17)

while True:
    led.on()
    time.sleep(0.5)
    led.off()
    time.sleep(0.5)
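As a side note, gpiozero can also handle the timing for you: as far as we're aware, its LED class provides a blink() method that achieves the same effect without an explicit loop. A rough equivalent of the code above would be:

```python
from gpiozero import LED
from signal import pause

led = LED(17)
led.blink(on_time=0.5, off_time=0.5)  # blinks in the background
pause()  # keep the script running
```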


Add a vibration

You may want the toy to vibrate or shake when it is touched. R2-D2 is known for giving out electric shocks and a safe way to emulate this is to add haptic feedback, similar to that when you press a key on the screen of your mobile. Pimoroni stocks a suitable disc motor, which will deliver a short vibration. Take each of the wires and connect them each to a female-to-female jumper wire.


Wire up the disc motor

Take the positive wire from the motor (usually coloured red) and attach it to GPIO pin 9; this is physical pin number 21. The black wire connects to a ground pin – the nearest is physical pin 25; from pin 21, drop down two pins on the left and this is number 25. Start a new program or add the code to your existing program. Import the RPi.GPIO library (line 1) and set the board to the BCM setting. This ensures that the GPIO numbering system is used.

import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BCM)


Turn the motor on

To enable the motor, first set the output pin, number 9. This tells the program that GPIO pin 9 is an output. Next, set the GPIO pin to HIGH (line 2); this is the same as turning it on. Current flows through and your motor will turn on. Add a short pause (line 3) before setting the output to LOW, which turns the motor off. Play around with the timings to find the perfect pause for your needs.

GPIO.setup(9, GPIO.OUT)
GPIO.output(9, GPIO.HIGH)
time.sleep(5)
GPIO.output(9, GPIO.LOW)

Set up a static IP address

Each time you connect to the internet, your Pi will be given a new IP address; this is called a dynamic IP address. This can cause issues, as when it changes, other devices will no longer be able to locate your Pi. To create one that stays the same (a static IP address), load the LX Terminal, type ifconfig and make a note of the inet addr, the Broadcast and the Mask numbers. Then type route -n and note down the Gateway address. Now type sudo nano /etc/network/interfaces, find the line iface wlan0 inet dhcp and change it to:

iface wlan0 inet static
address
netmask
gateway
network
broadcast

Replace the numbers with yours (which you noted down). Press CTRL+X to save and exit the nano text editor. When you reboot your Pi, it will now have a static IP address which will not change when you reboot or reconnect.
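As an illustration only – your numbers from ifconfig and route -n will differ – a filled-in stanza might look something like this, assuming a typical home network:

```
iface wlan0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    network 192.168.1.0
    broadcast 192.168.1.255
```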


Install the web server

To set up the web server, open the LX Terminal and update your Raspberry Pi by typing sudo apt-get update, then sudo apt-get upgrade. Next, install the Motion software that will control the webcam: type sudo apt-get install motion. Once it is finished, attach your USB webcam and type lsusb and you will see that your webcam is recognised.

Hack a web camera

A small web camera can be hidden within the toy as an eye or more discreetly within the body of the toy. This can be used to take photos or stream a live feed to a laptop or mobile device. Take your webcam and carefully strip away the plastic shell so you are left with the board and lens. Depending on the size of your toy, adjust the casing so that it fits and can be hidden.

the Pi; it’s better that it automatically starts at bootup. In the LX Terminal, type sudo nano /etc/default/motion to edit the file. To set Motion to run as a service from bootup, you need to change the start_motion_daemon to Yes:

start_motion_daemon=yes Save and exit the file using CTRL+X and restart your Pi.


Configure the software – part 1

There are now seven configurations to change in order to get the most suitable output from the camera. You can experiment with these to see which produces the best results. In the LX Terminal, type:

sudo nano /etc/motion/motion.conf

…to load the configuration file. Locate the daemon line (you can find the lines by pressing CTRL+W, which loads a keyword search) and ensure that it is set to ON; this will start Motion as a service when you boot your Pi. Next, find the webcam_localhost and set this to OFF so that you can access Motion from other computers and devices.


Starting the web feed


Before viewing the feed, make a note of your IP address. In the LX Terminal, type sudo ifconfig, although you will have set this in step 8. Then start Motion by typing sudo service motion start. Wait a few seconds for it to initiate, then open a browser window on your device. Enter the IP address of your Pi, including the port number that you set in step 11. If you are using VLC player, go to File>Open Network and enter the IP address of your Pi followed by the stream_port. The port number in this example is 8080, but you set this in step 11 to 8081 or another value of your choice.

Install the Pi radio software

PiFM is a neat little library which enables your Raspberry Pi to broadcast to a radio. Note that this is only for experimentation, and should you want to make a public broadcast you must obtain a suitable licence. Getting set up is simple: load the LX Terminal and make a new directory to extract the files into:

mkdir PiFM
cd PiFM


Configure the software – part 2

Next, find the stream_port, which is set to 8080. This is the port for the video and you can change it if you are having issues viewing the feed; 8081 provides a stable feed. Then, find the control_localhost line and set it to OFF. The port that you will access the web config interface is on the control_port line and the default is 8080; again, you can change it if you have any streaming issues. The frame rate is the number of frames per second that are captured by the webcam. The higher the frame rate, the better the quality, but setting it higher than 6fps will slow the Pi’s performance and produce lag. Finally, set the post_capture and specify the number of frames to be captured after motion has been detected.
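Pulling the two configuration steps together, the relevant part of /etc/motion/motion.conf might end up looking roughly like the excerpt below. The exact option names vary between Motion versions, and the framerate and post_capture values here are just examples, so treat this as a guide rather than a definitive configuration:

```
daemon on
webcam_localhost off
stream_port 8081
control_localhost off
control_port 8080
framerate 6
post_capture 5
```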


Running Motion as a daemon

A daemon is a program that runs in the background providing a service; in this project you want to run Motion. You do not want to manually have to start it every time you load the Pi; it's better that it automatically starts at bootup. In the LX Terminal, type sudo nano /etc/default/motion to edit the file. To set Motion to run as a service from bootup, you need to change the start_motion_daemon to yes:

start_motion_daemon=yes

Save and exit the file using CTRL+X and restart your Pi.


Then download the required Python files:

sudo apt-get update
sudo apt-get upgrade
wget

Finally, extract the files using the code:

sudo tar xvzf pifm.tar.gz


Add a simple aerial and then broadcast

Setting up the hardware is really easy. In fact, there is no need to attach anything, as the Pi can transmit directly from physical pin 7 without the need to alter anything. However, you will probably want to extend the range of the broadcast by adding a wire to GPIO 4 (pin number 7). Unbelievably, this can extend the range of the broadcast to up to 100 metres! Ensure that you are in the PiFM folder and then broadcast the WAV file with the code line:

sudo ./pifm name_of_wav_file.wav 100.0


In this example, the Star Wars theme will play, but you can create and add your own sound files. The '100.0' refers to the FM frequency of the broadcast; this can be changed to anywhere in the range of 88 to 108MHz. Turn on your radio, adjust it to the appropriate frequency and you will hear your message being played.


Stop the broadcast

If you wish to end the broadcast before the song or voice has finished, then you will need to kill the transmission.

In a new terminal window, type top. This will list all the running programs. Look for PiFM somewhere near the beginning of the list and note the ID number. Return to the LX Terminal and type sudo kill 2345, replacing the 2345 with the appropriate process ID number. This will ensure that each broadcast is new and the Pi is only trying to transmit one WAV file at a time.
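If you would rather not hunt through top for the process ID, a pattern-matching kill should also work here – assuming the pifm binary is what's running:

```
sudo pkill pifm
```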


Next time…

You now have four mini hacks which you have prepared and can adapt and combine. You may want to try embedding some of your own hacks that you have created. Next issue, we will cover how to wire up and deploy them!

R2D2 is © LucasFilm


BCM number

GPIO pins are a physical interface between the Pi and the outside world. At the simplest level, you can think of them as switches that you can turn on or off. You can also program your Raspberry Pi to turn them on or off (output). The GPIO.BCM option means that you are referring to the pins by the 'Broadcom SoC channel' number. The GPIO.BOARD option specifies that you are referring to the pins by their physical location on the header.



Save time building packages with distcc

Building packages takes ages on the Pi, so speed things up by sharing the load

Christian Cawley

I am a former IT and software support engineer, and since 2010 I have been providing advice and inspiration to computer and mobile users both online and in print. I cover the Raspberry Pi and Linux at

Most of the time when you install an application in Linux, you do so via a package manager, which enables the download and installation of binary packages, built from source packages. These include the default options, general features and parameters so that a wide portion of users can install without any problems. However, if you’re developing code, you first need to create a source package, and on a Raspberry Pi this can seem to take forever. One option is to copy the code to a desktop or notebook computer running Linux, but wouldn’t it be far more convenient to just let the Raspberry Pi deal with it… but give it a helping hand? Using distcc, we can distribute the resources needed to build the package, which basically means we can draw on the hardware of other devices. In this tutorial, we show how to use a nearby Linux laptop, but you could just as easily use one or more Raspberry Pis.


Connect to the network

Using distcc is all about making use of network resources, so begin by ensuring your Raspberry Pi is connected to your local network, along with the other devices (which we’ll call “slaves”) that you intend to use. While you can use wireless networking, it’s faster with Ethernet cables. Set this up in the Terminal with:

What you’ll need ■ Wireshark ■ Another computer, laptop, or even a Raspberry Pi ■ Large SD Card or mounted HDD storage ■ Compilers from http://bit. ly/29g5hx9


sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

Add your network's SSID and passphrase (psk), then Ctrl+X to save.
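For reference, a network entry in wpa_supplicant.conf normally takes the following shape – the SSID and passphrase here are placeholders for your own details:

```
network={
    ssid="YourNetworkName"
    psk="YourPassphrase"
}
```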


Install compilers

On your desktop computer (ours is running Ubuntu 16.04 LTS) install the compilers to the /opt/cross directory. You can do this manually, from the Raspberry Pi Foundation's GitHub, but it's quicker to directly download the file, cd to the download location, and then run this RPM command:

sudo rpm -i --ignorearch raspberrypi-tools9c3d7b6-1.i386.rpm

With the compilers installed, and used with distcc, we can outsource compiling to the Ubuntu PC.


Install distcc on your slave

Installing distcc on the slave device takes you halfway through the setup, so run

sudo apt-get install distcc

When this is complete, you'll need to make some edits to the distcc configuration file.


Make alterations if necessary

It’s worth noting that the list of changes to the distcc file is not set in stone. For instance, ALLOWEDNETS and LISTENER can be adjusted to suit the needs of your network, while the JOBS parameter can be increased to ensure that the Pi will send work direct to the desktop slave. We need to move on, so restart the daemon.

sudo /etc/init.d/distcc restart


Prepare your Raspberry Pi

On your “master” Raspberry Pi, begin by updating the device.

sudo apt-get update
sudo apt-get upgrade

Wait until both of these processes have completed before installing the distcc utility on your Pi.

How many Pis to use?

Setting up a distributed cross-compiling network is, as you've seen, relatively straightforward, but how many Raspberry Pis can you connect? Well, distcc should work with the original Raspberry Pi Model B all the way through to the Raspberry Pi 3. However, you'll find that there are compatibility issues with the Raspberry Pi 3 in 64-bit mode, so avoid this.

sudo apt-get install distcc

Installation on your Raspberry Pi may take a little longer than on your desktop computer, but when done, you'll be ready to start configuration of distcc.


Edit distcc on slave

Open the distcc configuration file in nano in order to edit, and configure the file so that it can then be used as a slave device.

sudo nano /etc/default/distcc

Here, you'll need to use the arrow keys to scroll through the file, making the changes below.


STARTDISTCC="true"
ALLOWEDNETS=""
LISTENER=""
NICE="0"
JOBS=""
ZEROCONF="true"
PATH=/opt/cross/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/arm-linux-gnueabihf/bin/:\
/opt/cross/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/libexec/gcc/arm-linux-gnueabihf/4.7.2:\
${PATH}

While most of the edits will be a case of uncommenting a line (deleting the # symbol), some will require more hands-on alterations, such as adding the PATH declaration. When this is all done, you'll need to save the file, so press Ctrl+X to prompt this, and Y to commit the changes.

Configure your master device

On your master Raspberry Pi, you will need to check the distcc config file to confirm that the device is on the correct subnet.

sudo nano /etc/default/distcc

Look for STARTDISTCC, ALLOWEDNETS and LISTENER, and set as follows:

STARTDISTCC="true"
ALLOWEDNETS=""
LISTENER=""

With these changes made, the Pi will distribute jobs to computers with IP addresses within the ALLOWEDNETS list. Press Ctrl+X to exit the nano text editor, hitting Y to confirm the changes.


Add a path

Press Ctrl+X to exit the nano text editor, hitting Y to confirm the changes. Next, you should add a path to the compiler.


export PATH=/usr/lib/distcc:${PATH}

With this in place, distcc will be called whenever gcc (installed with the compilers) is invoked.

Using other distros

We've run this tutorial using Ubuntu 16.04 LTS on the slave computer, and Raspbian Jessie on the Raspberry Pi 2, but you shouldn't feel restricted by these. Arch Linux is a good choice for distcc, but if you want to go for the full cross-compiling experience, we recommend seeking out the ARM release of Gentoo Linux. With this installed, you'll even be able to compile your own system kernel!


Specify your hosts

Still on your Raspberry Pi, create and edit the distcc utility’s hosts file so it can find your slave computer.

sudo nano ~/.distcc/hosts

To this file, add the IP address and number of slots, like this (note that your IP address may differ):

 --localslots=1 --randomize

If you have multiple slaves configured, then simply add the IP address for each, one per line. Save with Ctrl+X and confirm with Y. Then you will need to check the IP address of your slave with ifconfig, and check the results, making sure to find the inet addr for the appropriate connection.
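To make that more concrete, a hosts file pointing at a single slave might look like the line below. The IP address and the /4 job limit are example values, and we're assuming distcc's usual HOST/LIMIT syntax alongside its --localslots and --randomize options:

```
192.168.1.10/4 --localslots=1 --randomize
```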



Unpack and compile

We need to unpack the downloaded tar file. Do this by running the tar command:

tar xvzf advancemame-0.106.1.tar.gz

This will take a moment, and the file will take advantage of the expanded filesystem on your Raspberry Pi. Once the extraction of the files has completed, cd to the destination, which will be in the format advancemame-[version]:

cd advancemame-0.106.1

You're then ready to begin compiling.

Prepare to compile software

To find out just how much faster compiling is when you have a faster computer slaved to your Raspberry Pi via distcc, you’re going to have to find something to compile. What better than MAME? In the Raspberry Pi Terminal, or via SSH, enter the following to download MAME to your Pi’s Home folder.

wget advancemame-0.106.1.tar.gz


Configure your software

We’re almost ready to compile, but first need to configure. This is done with a single command from within the installation directory itself:

./configure

Once this has completed, we can then continue to get on with the task of compiling...


Expand your filesystem

If you haven’t already expanded your filesystem, now is the time to do it. Open the Raspbian Configuration utility, either in the


Note that this unpackages the application with default options. In this test scenario, defaults are fine, but you will find when compiling software that you have the opportunity to take advantage of some considerable customisations. Find out more by checking the documentation of the software you’re compiling.



Compile software with distcc

We’re finally there. When you’re ready to commit to compiling the application, enter the command:

make -j8

Then sit back and enjoy a cup of coffee. In the case of MAME, a single Raspberry Pi takes around 90 minutes to compile, but with distcc in operation and the resources of a more powerful machine to rely on, we did it in 25 minutes!
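If you want to confirm that jobs really are being farmed out while make runs, distcc ships with a simple text-mode monitor; as far as we know it can be run on the master in a second terminal like this (the 2 is just a refresh interval in seconds):

```
distccmon-text 2
```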


Run your software

With the software now downloaded, unpackaged and compiled, it is ready to run – meaning you're done. As long as the process completes without errors, you'll be able to launch the software on your Raspberry Pi; if you're developing for another platform (for instance, Android) then naturally, with the software now compiled, you'll need to take steps to test it in a suitable environment.


Checking for errors


However, saying that, not every compiling task goes to plan. In situations where the compiler throws up error messages, you're going to need to be proactive to find out just what these errors mean. This might be easier said than done, but if you're new to compiling, it's a good idea to have a pen and paper handy for jotting down error codes, and a browser window open to check them in Google. Remember to cross-reference with the fact you're using distcc.

Make use of redundant hardware

With distcc we can take advantage of Linux devices that aren't doing anything, or have plenty of resources free to share. This is a pretty big deal; one that actually enables us to run miniature distributed cross-compiling networks in our own homes. Whether you use a group of Pis for this, or simply tap into the spare resources of a desktop computer, it's a great way to use the Raspberry Pi.

Can I use this daily? Quite often you’ll find that Raspberry Pi projects are pretty cool, but they don’t suit being done repeatedly on the little device. Like a proof of concept, such projects can inspire you to find out more, or offer a means to try something out (such as stop motion photography, for example) before you buy a dedicated device to use. Distributed cross compiling, however, is not one of those things. Once you have it installed you can use this method again and again to save time.



Create an explorer pHAT robot Part two: the build Build your explorer robot, soldering wires, connecting the tracks and setting up the Pi

Alex Ellis

@alexellisuk is a senior software engineer at ADP, a Docker Captain and an enthusiast for all things Linux. He is never far from a Raspberry Pi and regularly posts tutorials on his blog.

In part one in the previous issue we read Wikipedia’s definition of a robot as a machine capable of carrying out a series of actions automatically (paraphrased). We then gave an overview of all the considerations you may have for your build, from the chassis to the motors to the sensors you may want to add for autonomous tasks. In this issue we invite you to join us as we build our own model robot. If you follow along, you should have everything to get you started especially if this is new territory for you. We are going to build your explorer robot from scratch running through each step needed including soldering and putting all the pieces in place. We start with looking at the full parts list, the chassis kit and the other non-robotic parts. We will solder header pins onto our explorer pHAT from Pimoroni, flash our SD card and run the initial setup to enable the pHAT.

What you’ll need ■ Article’s Github repository ( alexellis/zumopi) ■ Raspberry Pi Zero ■ microSD Card ■ All other kit is listed in step 1



The full parts list In addition to the Pi Zero you will need:

• A Pimoroni explorer pHAT • A Polulu Zumo Chassis kit (available at Pimoroni) • 2x micro-metal gear motors (available at Pimoroni) • Cross-head screw driver • Soldering iron, flux & solder • 3A 5-6v UBEC (from HobbyKing or eBay) If you are struggling to find a Pi Zero head over to http:// for a live stock count at three major outlets.


The chassis kit

The chassis is made up of three main parts – the largest piece is where we attach the wheels and tracks, but it also holds the 4x AA batteries. It has a battery lid which, if lost or damaged, will mean the batteries may drop out, so take good care of it. There is an acrylic cut-out which mounts on the top with bolts – peel off the brown protection paper.

The axles and free-wheels

The robot has two free-running wheels and two which are directly driven by the geared motors. Insert the two metal hubs into the plastic wheels, then attach them to the chassis, tightening up the bolt on either side. You may find that a small Allen or hex key helps here, but do not over-tighten the bolt. Both wheels should spin freely, and now we can move on to the motors.


Preparing the motors

The motors currently come with two small solder pads – one for positive and one for negative. We suggest clipping the ends of two sets of white and black or red and black jumper wires leaving one male dupont (jumper-style) end on each wire. Next carefully strip 2-3mm of wire from the bare end and solder this to the pads on the motors. A little flux will really help and cut off any excess wire afterwards.


Installing the motors

Once the solder joints have cooled down, turn off the soldering iron and place the motors with the shafts in the small slots at the opposite end of the chassis. They have a friction fit, which will be made tighter when the top plate is screwed down. You can temporarily place the top plate over the motors to help with deciding where to route your wires. We used the two grooves to the left and right.


Secure the top plate

Securing the top plate can be a bit fiddly. Place the robot chassis with the battery compartment pointing up. Find the first two bolts and nuts and then carefully drop a nut into the slot by the motors and turn it until it sets in place. You can now use a small cross-head screwdriver to tighten this down.

Solder the battery contacts

Now let's start looking at the battery contacts. They come in five separate parts and when we first tried to put these together we needed a lot of trial and error. One thing you can definitely do up front is to solder the positive and negative terminals. Go ahead and solder a small piece of jumper wire through the positive and negative terminal; these are identified through a tiny loop.


Inserting the battery contacts

We found that the positive and negative terminals could move around, especially when no batteries were inserted. You may want to put a dollop of hot glue or hockeystick tape behind the positive and negative terminals so that they remain in place when going over bumps or when inserting new batteries. You will find that the wires thread through the bottom of the compartment and we now have six sets of wires in total.


Mounting the front wheels

We will now mount the front set of wheels that drive the tracks. Find the shallow side of the wheel and let it point out, then slowly but firmly push the plastic wheels down onto the shaft of the motor. Try to make sure these line up with the free-running wheels we installed in step 3. You will be able to turn the motors by hand but it is not recommended – just check that they are able to turn freely without rubbing on the plastic wheel arches.


Getting the tracks on

Our robot runs on two sets of rubber tracks which are elastic enough to stretch over the two sets of front and rear wheels. With the teeth of the track facing in, place the track on one wheel then stretch it very carefully over the other side. Before going any further find two more nuts and bolts and secure the opposite end of the chassis next to the battery terminals. You will need your cross-head screwdriver again.

ZERO-SIZED MOTOR CONTROLLERS

The explorer pHAT is one of our favourite motor controllers for the Pi Zero. Having analogue inputs and outputs along with the motor control means it can be used for many different purposes. There are other motor controller boards designed for the Zero by 4tronix, PiHut and PiBorg. PiBorg's ZeroBorg was a Kickstarter project which has now shipped to backers.

SMART SOLDERING

Good solder joints use the smallest amount of material possible to make a clean joint. A flux pen can be a good way to help solder take to two wires or metal surfaces. If you add too much solder then a 'solder sucker' or desoldering braid can be used to remove excess material. We would also highly recommend you purchase a fire retardant mat.


Connect the motors


At this point our motors can be plugged into the motor 1 and motor 2 connections on the front of the explorer pHAT. You may find that some of your wires are too long; there should be no harm in cutting them down to size and re-soldering them. We think that the best way to do this is through trial and error.

Fire it up and set up Python

Our robot will be programmed in Python, so flash Raspbian or Raspbian Lite to your SD card and boot up the Pi. Providing everything starts up correctly, configure the internet and then run Pimoroni's installation script for the explorer pHAT Python library.


Check and tighten everything

We have now installed the drive motors, both sets of wheels and tracks. We took the cover off the acrylic top plate and secured it with two sets of nuts and bolts. We have installed the battery contacts and cover. There are three pairs of wires: two sets for the left and right motors and two for the batteries. Install 4xAA batteries and touch together the battery wires with each of the motors, you should see each side turn. If not then go back and check all the solder joints.


Preparing the explorer pHAT and zero


If you have a Pi Zero without a header attached, then go ahead and solder a 40-pin male header, making sure to use as little solder and flux as possible. Next mount the 40-pin female header that came with your explorer pHAT and solder each joint – we find that a peg or a piece of Blu-Tack can help make sure the header is attached squarely without pointing off at an angle. Now solder the single-row female pin header on the opposite side.

Adding the UBEC

The UBEC is a battery elimination circuit; it will make sure that 5V is supplied to the Raspberry Pi. The component comes with a female header for the output – you can plug this straight into the +ve and -ve connectors on the explorer pHAT. The input side is two wires which will need to be attached to the battery contacts with solder, twisting or non-conductive tape.

```
$ curl -sS | bash
```

Now check that the library can control the motors by running this Python code:

```
import explorerhat
import time

explorerhat.motor.two.forwards()
time.sleep(2)
explorerhat.motor.two.stop()
```
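Building on that test, a rough sketch for exercising both channels might look like this – we're assuming here that the Explorer pHAT library exposes motor.one and motor.two with the same forwards(), backwards() and stop() methods used above:

```python
import time

import explorerhat

# Drive both motors forwards, then reverse, then stop
explorerhat.motor.one.forwards()
explorerhat.motor.two.forwards()
time.sleep(2)

explorerhat.motor.one.backwards()
explorerhat.motor.two.backwards()
time.sleep(2)

explorerhat.motor.one.stop()
explorerhat.motor.two.stop()
```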


Ready for more?

If you’re ready for more, then you should keep an eye out for part three, where we will install Bluetooth tooling and start to use a gamepad for manual control of our robot. We’ll be able to drive it around the living room or kitchen or even outdoors. If you’ve run into any issues with the Zumo chassis itself then head over to the Pimoroni or Polulu support forum to ask questions.


Special offer for readers in North America

6 issues FREE

FREE resource downloads in every issue

When you subscribe

The open source authority for professionals and developers

Order hotline +44 (0)1795 418661 Online at

*Terms and conditions This is a US subscription offer. You will actually be charged £80 sterling for an annual subscription.

This is equivalent to $120 at the time of writing – exchange rate may vary. 6 free issues refers to the USA newsstand price of $16.99 for 13 issues being $220.87, compared with $120 for a subscription. Your subscription starts from the next available issue and will run for 13 issues. This offer expires 31 October 2016.



for this exclusive offer!

Group test | Dell XPS 13 9350 | deepin 15.2 | Free software






Office suites

With the latest Microsoft Office suite costing £59.99, how do these free alternatives stack up in terms of features and overall usability?



Apache OpenOffice

It's surprising Apache OpenOffice goes back a staggering 20 years, but in that time it's managed to turn itself into one of the premier alternatives to the Microsoft Office suite. As well as the fairly standard Writer and Calc programs, it includes some impressive extras to boot.

LibreOffice

When you think of free office suites, chances are that LibreOffice will be one of the first to spring to mind. Frequent updates and bug fixes have gained it a massive user base, but with so much stiff competition, is it still the premier office suite that users can download for free?

WPS Office

WPS Office claims to have over 1.25 billion installs around the world and it's by far the best office suite for mobile users. But how does it perform on desktops? All the basic functions and programs are present and it's got one of the best behind-the-scenes communities around.

FreeOffice

FreeOffice has undergone a big transformation in the past few years, and what was once quite a buggy suite of programs now keeps up with the best of them. It's designed in a very similar way to Microsoft Office, so those moving from the MS suite will feel right at home with FreeOffice.



Office suites



Apache OpenOffice

An office suite with a bunch of extras to keep its user base hooked

■ Tools are listed around the entire Writer program, which can be a lot to take in for new users who are accustomed to Word

Document creation
The sheer volume of tools available in OpenOffice's Writer program has to be applauded, as there's really everything you could need here. A particular favourite of ours is the Wizards feature, a handy tool that takes the hassle out of creating your own templated documents such as letters and agendas. Options are crowded, but the layout can be altered if needed.

Spreadsheet tools
While Calc can also boast an impressive suite of features, it's much harder to navigate than many of the spreadsheet-based programs in this group test. For advanced users, however, the DataPilot feature is an invaluable tool for pulling in raw data from different databases. New users can find all the data entry tools they need, if they manage to navigate between the menus.

Presentation options
There's a lot to like in the Impress program, so much so that it closely rivals Microsoft PowerPoint. Creating and customising slides can be as easy or as convoluted as you like, with additional support for multiple monitors a real benefit. Without doubt its killer feature is being able to create Flash versions of your presentations, which works well in practice.

Extra features
As well as the core programs you'd expect from an office suite, Apache OpenOffice piles on the extras. Base is a workable, if not complicated, database management program, while the Math program is on hand for working out complex equations. For many, these programs will be fairly superfluous, but they could be helpful for some.

If you need an office suite that packs in the features, then OpenOffice may very well be worth downloading and checking out. But for what it packs in, its poor layout does hinder it

LibreOffice

Frequent updates helped LibreOffice become a premier office suite

■ Cut out the mundane task of creating templates and let LibreOffice do the work with the built-in Wizards tool

Document creation
Everything about LibreOffice's Writer app looks the part and in its most recent updates, there's been some noticeable enhancements in its overall layout. Navigating between menus is a breeze, with all tools easily accessible. There's a lot of help at hand if you get stuck, through the Wizards and AutoComplete features, but these are more tailored for new users.

Spreadsheet tools
There's some great behind-the-scenes algorithms in place within Libre's Calc program, which make it the best of the bunch for formula creation. It does lack in its formatting options however, which can make sifting through large portions of data a little laborious. If you can look past the design faults, this is definitely one of the better spreadsheet programs around.

Presentation options
Versatility is key when it comes to the presentation tools found within the Impress program. The choices of editing and view modes are particularly helpful when at different stages of creating your presentation, even if the switching process is a little slow. Transitions are another highlight here, with some options we'd love to see appear in PowerPoint.

Extra features
There's an extensive community behind LibreOffice and updates come in thick and fast, making it the perfect choice for those wanting the latest features. One of the biggest updates recently is the introduction of OpenGL, which has transformed the way transitions and menus work within your presentations. It's certainly worth checking out.

While many competitors can keep up with LibreOffice's suite of programs and features, it's the community that has made it a real winner. Continuous updates and community discussion has forged a fantastic product


WPS Office

Can a stellar design compensate for a minimal selection of features?

■ A unique Tabs feature is a helpful tool for managing multiple presentations simultaneously without too much fuss

Document creation
At the heart of WPS' Writer program are some great formatting features, enabling users to easily adjust the various elements on the page. However, the program as a whole is on the basic side, with it missing a number of key features that are pretty commonplace in most office suites. But it at least nails all the core features you'd expect.

Spreadsheet tools
Out of all the office suites here, WPS' Spreadsheets program is arguably the closest to Excel. It's laid out very well, making it easy to find and use the various tools provided, with help on hand for some of its more advanced inclusions. We'd also say it's one of the best programs around for chart creation, with some great editing tools on hand to help make them as precise as possible.

Presentation options
It's very much the same situation in the Presentation program as it is with Writer. While it's cleverly designed to make it easy for users to find what they need, the program itself does the absolute basics. One handy extra is being able to use the Tabs feature to manage multiple presentations simultaneously, which in practice works wonders.

Extra features
One of the key reasons that WPS Office has carved out a positive reputation is the compatibility it has with a wide range of distributions. No matter if you're using Ubuntu, Fedora, CentOS, Mint or even Knoppix, you won't find any slowdown or noticeable differences here.

We'd go as far to say that WPS Office is the best looking of the bunch, but it lacks the spread of features we've seen in the competition. Only worth considering if you just need the absolute basics from your office suite

FreeOffice

An office built by the community, for the community

■ In each of FreeOffice's programs options tend to be split well, as a helpful way to avoid overcrowding

Document creation
Options are split well between two panels, which helps stop the overcrowding of many of the key tools. While there's some helpful spacing tools provided, they're poorly listed within the TextMaker program and new users may initially struggle to find the great benefits that they can bring to their documents.

Spreadsheet tools
Similar to the TextMaker program, the PlanMaker spreadsheet tool has a plethora of options, which are cleverly spread out. Basic formatting options are on hand to help stylise spreadsheets, but there's little help when curating formulas, something that many of its competitors manage to do with success.

Presentation options
Arguably the weakest program in the FreeOffice arsenal is its presentation program, Presentations. It does all the basics well enough, but it lacks the advanced features found in both LibreOffice and OpenOffice. Animations are available to help liven up slides, but do cause some noticeable stuttering in finished presentations.

Extra features
While it may lack the extra programs that some of the competition offer, there's some great expansions available through the FreeOffice site. Extensions to the TextMaker program are particularly good, giving users the options to expand on various chart and table creation options. Extensions can be enabled and disabled through each program with great ease, which is a benefit for new users.

There's nothing that really stands out within FreeOffice's suite of programs, but it does boast an impressive expansion system that certainly adds some value to it




Office suites

In brief: compare and contrast our verdicts

OpenOffice
Document creation: A massive choice of options is marred by an overcrowded design and layout
Spreadsheet tools: The handy DataPilot tool is particularly helpful for pulling in batches of raw data
Presentation options: There's plenty here to really make a unique presentation without overcomplicating it
Extra features: Extra programs are nice to have, but will be of little use to most general users
Verdict: A suite with an abundance of options and tools, but poor design lets it down

LibreOffice
Document creation: Clearly tailored towards new users with the addition of insightful help sections
Spreadsheet tools: Creating formulas is a particular highlight, rivalling Office's Excel in many areas
Presentation options: Versatility is the key word, with a range of editing options on hand to play with
Extra features: Liaising with the community helps keep updates quick as well as relevant
Verdict: LibreOffice feels like a complete package, the best for those needing a free office suite

WPS Office
Document creation: Formatting tools are plentiful, but other areas feel overlooked and too basic
Spreadsheet tools: Creating charts is a particular highlight, catering for both new and advanced users
Presentation options: Another case of being too barebones for our liking. Editing tools are very minimal
Extra features: Support for multiple distributions has helped it stay popular in the community
Verdict: While Calc is a highlight, other areas feel too barebones for our liking

FreeOffice
Document creation: Options are split between two panels, a helpful way of avoiding overcrowding
Spreadsheet tools: Data entry is good, but there's little for dealing with formulas, which is a big omission
Presentation options: Far too basic in its options and left behind the strong competition to some degree
Extra features: A unique extensions suite enables users to customise the core programs to their liking
Verdict: Some big omissions with certain features leave FreeOffice lagging behind the rest



When it comes to choosing an office suite, many will instantly turn to the Microsoft suite. But unless you’ve got a spare £50 handy, chances are you won’t want to part with your hard-earned cash. If we can prove anything with this group test, it’s that there are some fantastic alternatives that won’t cost you a penny to download on to your desktop. Out of the four we tested here, LibreOffice proved to be the complete package. In many of its key areas, it closely matches the Microsoft suite stride for stride, and in a couple of areas, it even surpassed it. While it doesn’t do anything out of the world with its feature set, it does have tools that can both cater for new and seasoned users alike. New users in particular will feel right at home here, with an array of help on hand through the programs and community whenever they’re stuck. But while the community is great for a spot of tech support when needed, they’re also the driving force behind the continued development of LibreOffice. Updates are frequent, with the developers constantly liaising with the community to find out what they want included and fixed in the next update – if only more programs would follow suit.


■ Chart creation boasts some impressive editing features to help style them how you see fit

If you go ahead and download LibreOffice and can't seem to get to grips with it, then no worries, as OpenOffice and WPS Office are both viable alternatives. Each does things a little differently to LibreOffice, but be prepared to sacrifice certain features. WPS Office in particular can be customised in certain areas, so it's a suitable choice for those who are particularly fussy. But without doubt, LibreOffice is the number one choice for Linux users wanting to try out a new office suite. Oliver Hill

Classified Advertising 01202 586442




of Hosting Come Celebrate with us and scan the QR Code to grab

your birthday treat!

0800 808 5450

Domains : Hosting - Cloud - Servers







IQaudIO Audiophile accessories for the Raspberry Pi

• Raspberry Pi HAT, no soldering required • Full-HD Audio (up to 24bit/192MHz) • Texas Instruments PCM5122 • Variable output to 2.1v RMS • Headphone Amplifier / 3.5mm socket • Out-of-the-box Raspbian support • Integrated hardware volume control • Access to Raspberry Pi GPIO • Connect to your own Hi-Fi's line-in/aux • Industry standard Phono (RCA) sockets • Supports the Pi-AMP+


• Pi-DAC+ accessory, no soldering required • Full-HD Audio (up to 24bit/192MHz) • Texas Instruments TPA3118 • Up to 2x35w of stereo amplification • Provides power to the Raspberry Pi • Software mute on GPIO22 • Auto-Mute when using Pi-DAC+ headphones • Input voltage 12-19v • Supports speakers from 4-8ohm


• Raspberry Pi HAT, no soldering required • Full-HD Audio (up to 24bit/192MHz) • Texas Instruments TAS5756M • Up to 2x35w of stereo amplification • Out-of-the-box Raspbian support • Integrated hardware volume control • Provides power to the Raspberry Pi • Software mute on GPIO22 • I/O (i2c, 3v, 5v, 0v, GPIO22/23/24/25) • Just add speakers for a complete Hi-Fi • Input voltage 12-19v • Supports speakers from 4-8ohm


Twitter: @IQ_audio Email:


IQaudio Limited, Swindon, Wiltshire. Company No.: 9461908


Dell XPS 13 9350 Developer Edition


Dell XPS 13 9350 Developer Edition Operating System Ubuntu Linux 14.04 SP1

Processor Intel Core i7 6560U processor (up to 3.2GHz)

Specs 16GB LPDDR3 1866MHz; 512GB PCIe SSD; 13.3-inch QHD+ (3200 x 1800) InfinityEdge touch display; Intel Iris Graphics; Intel 8260 2x2 802.11ac 2.4/5GHz + Bluetooth 4.1; USB 3.0 (x2, 1 w/PowerShare); 3-in-1 card reader (SD, SDHC, SDXC); headset jack; Noble lock slot; Thunderbolt 3 (USB-C); 56WHr battery; 304mm x 200mm x 9-15mm; 1.29kg

Price £858-£1228 + VAT



Dell’s finest hardware, tailor-made to run Ubuntu out of the box. A dream come true? As a Linux user, the process of buying a laptop generally goes like this. “1: Choose hardware you like. 2: Check it will work okay with Linux. 3: Order laptop. 4: When laptop arrives, wipe what it comes with, install Linux and try to get everything working.” It’s not ideal. Even though we enjoy tweaking our devices, wouldn’t it be nice to be able to buy something and have it ‘just work’? Enter Dell, with the XPS 13 Developer Edition, which offers top-end consumer hardware, pre-loaded with Ubuntu.

The XPS 13 is a truly stunning piece of hardware. A number of configurations are available, but we’re checking out the top-of-the-range model, with 16GB RAM, 512GB storage and a gorgeous QHD touch-sensitive screen. Regardless of which specification you buy, you’ll get a 13.3-inch device that is barely bigger than a 12-inch MacBook, itself a benchmark for compact machines. The size is achieved thanks to the InfinityEdge screen, which has far smaller bezels than other laptops. It’s a design and engineering masterpiece, except perhaps for the fact that, due to the thin bezels, the 720p webcam sits below the screen. The 3200x1800 QHD screen is simply stunning. With its Gorilla Glass coating and touch layer it is rather susceptible to glare, but if we had to pick one problem with the display it would be that it is likely to ruin all other computer displays for you – even full HD just can’t cut it after using the XPS. The premium qualities of the laptop extend to the top and bottom of the device, which are machined from aluminium; the main deck of the laptop is soft-touch carbon fibre and the backlit keys feel excellent, providing a far better typing experience than the MacBook. A glass precision touchpad delivers where Windows machines often fall short.

Below The Thunderbolt 3 port can be used to enable VGA, HDMI, Ethernet and USB-A via a Dell Adapter

Below Be aware that the RAM is soldered to the motherboard on the XPS and therefore the laptop’s memory is not upgradeable!

Pros A fully working Ubuntu build together with very powerful hardware make the XPS 13 Developer Edition a developer’s dream

Cons If you prefer screens larger than 13.3”, your options are limited. The XPS 15 9550 doesn’t have official Dell support for Linux

As an Ultrabook, the XPS does dispense with some ports – HDMI isn’t on board, nor Ethernet, and only two USB-A ports are available. You do however get a Thunderbolt 3 (aka USB-C) port, which can be used with an adaptor to offer HDMI and network connectivity. The preloaded Linux distribution on the XPS is Ubuntu 14.04 LTS. As you’d expect, everything works perfectly. Even the touchscreen is surprisingly well supported in applications. The installation is very clean, with minimal additional Dell repositories delivering just enough tuning of the build for everything to work as it should. If you’ve ever set a Linux laptop up from scratch, you’ll know that this really is a breath of fresh air.

Since the release of the machine, Canonical has updated the LTS Ubuntu release to 16.04 Xenial Xerus. In the interest of research, we updated the laptop. At the time of writing, which admittedly is very early in the release’s life cycle, there are a few teething troubles. Chromium glitches occasionally, the mouse pointer can freeze and Wi-Fi can fail to turn on when resuming. These aren’t uncommon Linux issues and will likely be resolved pretty quickly as usage of the new release increases. With that said, we have no official line from Dell on Xenial support at this time, so if you pick up a Developer Edition, you’re likely best sticking with 14.04 for now. The XPS 13 includes a 56WHr battery, which has typically given us around six hours of battery life. This is less than the quoted eight hours when the machine is running Windows, but isn’t terrible (and we suspect that with a bit of tuning this could be improved a little too). Note that if you opt for the Full HD, non-touch version, you should expect at least a couple of additional hours of use. Paul O’Brien
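For readers who want to repeat the experiment, the review doesn’t spell out the exact upgrade route we took; on a stock install the standard Ubuntu release-upgrade path looks roughly like the sketch below. Note that LTS-to-LTS upgrades are normally only offered automatically once 16.04.1 arrives, hence the -d flag this early in the cycle – treat this as a hypothetical session rather than a Dell-sanctioned procedure:

# bring 14.04 fully up to date first
sudo apt-get update && sudo apt-get dist-upgrade
# then move to the new LTS; -d allows the jump before the 16.04.1 point release
sudo do-release-upgrade -d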

Summary The best thing about the XPS 13 Developer Edition is that you can actually show your new laptop to friends and colleagues without feeling like a second-class citizen. The hardware is positively lustworthy, the software is stable and compared to equivalent systems from Apple, the pricing is very competitive. Thank you Dell… but let’s lose the Windows logo on the Super key please!




deepin 15.2


deepin 15.2

The Debian-based distribution’s latest update promises big things, but can it back them up?

RAM 512MB

Storage 2GB

Specs Debian-based 64-bit and 32-bit versions


When it comes to standing the test of time, deepin can claim to be one of the few distributions that’s managed to stay current and continuously grow a user base. While others have come and gone, deepin has grown into one of the most popular Debian-based distributions out there, with a thriving community that’s consistently looking to better the product. In its latest update, 15.2, a whole host of new options have been included, making it an even more attractive prospect for potential users to check out.

While installation hasn’t changed, it’s still one of the more user-friendly processes around. From start to finish, the whole process only takes a few minutes and there’s nothing overly complicated that will stump new users. Both the 32-bit and 64-bit versions are now available for free from the official deepin site, depending on your desktop requirements. Of course, it’s when the installation of deepin has finished that the real fun can actually begin.

The deepin desktop is gorgeous throughout and has some nice nuances from both traditional Windows and Mac desktops, including docks, folder management and search systems. One of the biggest additions in the 15.2 update is the launcher interface, a fantastic way to segregate apps into different categories without clogging your primary desktop screen. For end users there are also some noticeable speed improvements, and while deepin has never really been touted as a CPU-intensive distribution, the number of small bug fixes and optimisations has really helped here. It makes deepin a pleasure to use; no task feels like too much for it to handle, and the only real limitations are your own hardware.

Dig a little deeper into the desktop and you’ll find numerous changes to the Control Center, which has now been opened up to a series of potential customisations, including the implementation of different timezones and support for multi-monitor setups. While deepin does claim that its Control Center should be used as a hub for all actions users undertake on their desktop, it’s yet to really offer enough variety for users to get their teeth into properly. It is, however, something that’s already been noted by the community, so expect to see further improvements in a later update.

One of the more pleasing things about deepin has been the emergence of its own curated applications. The likes of Deepin Store, Deepin Music and Deepin Movie have all been tailored with the deepin design in mind, and as a package, it’s second to none. While the apps aren’t as powerful as some of their third-party counterparts, they’re heavily optimised for the deepin desktop, so performance is very good. Loading times are kept to a minimum and there’s a certain amount of cross-sync throughout, allowing users to connect the apps and share certain features. A new addition to the app selection in 15.2 comes in the form of Remote Assistance, a handy tool that lets you invite others to connect to your desktop. It provides the tools to diagnose problems with your desktop and the connection process takes just a few seconds to establish. Away from that, the app dock has been further optimised with new animations and bug fixes. Similar to other areas of deepin, there are plenty of extra options regarding the dock that can help make it your own, but if you’re not too bothered about fancy animations and sounds, then there’s little here that will really pique your interest.

Debian-based distributions are plentiful, but where many are marred by complicated setups and performance issues, deepin is in a league of its own. The latest 15.2 update has helped to eradicate many of the smaller issues users were finding with it, and it has also introduced a series of intuitive new menus. If you’re more accustomed to Ubuntu, then this is arguably the best Debian-based alternative you’re going to find. And with an ever-increasing user base, deepin will continue to grow. Oliver Hill

Pros Every part of this latest update works flawlessly, and with plenty of customisation available you can really make it your own

Cons Core apps are optimised well, but perhaps lack the features found in many third-party alternatives

Summary If you’ve consciously stayed away from entering the world of Debian desktops, deepin could very well be the distribution to help you to finally make the transition. In terms of looks and functionality, there’s a lot of familiar ties to Ubuntu, but there’s also enough here to please new and seasoned users alike. Truly a great distribution that certainly demands your attention now




Free software


Psensor 1.2.0 Alerts, graphs and charts of your PC’s hot spots

While the Raspberry Pi 3 may herald a future where our PCs use low-power ARM processors, for now most of us use much more power-hungry devices – and where there is substantial power consumption, there is as much heat as light. Add in powerful GPUs, and a processor-intensive task such as transcoding video, and the temperature rises. Linux provides ready access to the many sensors on the I²C bus through the lm-sensors monitoring and reporting tool – and many general monitoring tools make use of this data. Psensor presents you with not just the figures, but also a very useful graph covering a selection of those monitored temperatures. Disk drive temperature, fan speed and free memory also feature in Psensor’s chart. As well as choosing which temperature monitors to view in the graph, config options range from measurement interval through alarm threshold (alerts can be delivered through desktop notifications) to naming the sensors. ATI/AMD GPU monitoring needs a recompile if you’re using the proprietary Catalyst drivers; otherwise these and nVidia GPUs work out of the box. The separate Psensor server provides a JSON web service for monitoring remote servers via Psensor or your web browser.
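Psensor reads its figures from the same lm-sensors stack you can query by hand, so it’s worth setting that up first. A minimal sketch for a Debian/Ubuntu-style system (package names assumed; other distros will differ):

# install the sensor stack, disk temperature helper and the GUI
sudo apt-get install lm-sensors hddtemp psensor
# probe the motherboard for available I²C/SMBus sensor chips
sudo sensors-detect
# print the current readings in the terminal to confirm they work
sensors

Once sensors shows sensible values, Psensor should pick up the same chips automatically.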

Above With selectable graph options for each sensor, Psensor gives you a quick visual check on hot spots

Pros Handy interface to all the temperature monitors on your system. Simple and configurable graph view.

Cons It won’t tell you where the I²C sensors are, you’ll have to trawl through motherboard docs for that!

Great for…

Monitoring during compilation or video processing


PlayShell 0.2.3

A command line media player using the shell’s own language This is a bit of fun: a media player framework, written in Bash, that will use various players as a back-end. It also has a serious use in embedded systems, where resources are scarce, but for those who spend most of their time using a virtual terminal, the appeal will be a player with some useful features, yet written in that most humble of scripting languages, Bash. PlayShell, when first started, will search your system for media players. It works with VLC, MPlayer, SoX and FFPlay, and is capable of using some more minimal players like amp, aplay, gst123, madplay, mpg321, mpg123 and splay. As a Bash script it can be run directly where it’s unpacked, but it also comes with an install script, which gives you configuration files and a man page. Documentation on the website, or the man page, will get you started, but all that you really need to do once you’ve typed in playshell is hit E to add a folder of media files to the library, and then A, which adds a file to the playlist and then starts it playing. Unless you’ve started playshell with the --editor option, or edited the sample playshell.conf.example file, you are likely to find that E drops you into vi, which won’t be to everyone’s taste. As well as general media player features, terminal-friendly features include video disabling, to save opening unnecessary windows while listening to audio from video files.
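Condensed from the description above, a typical first session (assuming you’ve run the bundled install script so playshell is on your PATH) looks something like this:

# launch the player; it scans for back-ends such as VLC or MPlayer on startup
playshell
# inside the player: press E to add a directory of media files to the library,
# then press A to add a file to the playlist and start playback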

Pros Written in Bash, so naturally it’s fairly lightweight – yet surprisingly feature-packed.

Cons Defaults to vi for some operations; still at a relatively early stage of its development.

Great for…

Low resource computers, and for all command line fans


etcd 3.0

An improved API for the distributed, consistent key-value store etcd is a “distributed, consistent key-value store for shared configuration and service discovery,” whose stored data is automatically distributed and replicated. The Raft consensus algorithm, which is very efficient for small clusters, provides automated master election and consensus establishment. The new release’s API brings improved scalability, thanks to a new storage engine, and supports Multi-Version Concurrency Control (MVCC). Installation for most will be a matter of unpacking the pre-built binary and running it – or following the Docker instructions. etcd is written in Go – the language that seems, ironically, to be designed to be difficult to Google for – and is rigorously tested, as well as proven in some very large-scale deployments. The included etcdctl is a command line interface for exploring your cluster or running scripts, but most of your interaction will be through the API, now served over gRPC and HTTP/2. Originating in the CoreOS distributed operating system, whose parent company received a funding boost from Google last year, etcd has been used with Google’s own Kubernetes orchestration system for cluster management and with many other applications in a number of interesting companies, many of which are listed on the project’s website. Trialling the development versions in the run-up to the latest release, we were impressed with the speed and efficiency of 3.0. If your project is moving towards disposable clusters and microservice architectures, take a look at etcd.
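To give a flavour of the v3 API (our own quick illustration, not taken from the project’s documentation), a single-node smoke test with the bundled etcdctl looks roughly like this – on the 3.0 tooling the ETCDCTL_API variable selects the v3 interface:

# start a single local node from the unpacked release (client port 2379 by default)
./etcd &
# write and read back a key through the v3 API
ETCDCTL_API=3 ./etcdctl put greeting "hello world"
ETCDCTL_API=3 ./etcdctl get greeting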

Pros Extremely efficient. Asynchronous DRAM (ADR) gives stable storage without the bottlenecks.

Cons While it works out of the box for well established use cases, others will need some work.

Great for…

Managing clusters, or apps running across small clusters


wxEphe 1.4

It’s easy to find the planets with this handy tool Major Tim’s months on the ISS have seen an increase in interest in all things spacerelated. If you’ve got that old telescope or binoculars out, dusted them down, and started searching the heavens, you’ll know the real difficulty is finding anything, and knowing when to look. An ephemeris – from the Greek word έφημερίς, a diary – gives the position of heavenly objects (natural and artificial) at a given time, and given the calculations involved to project the positions of astronomical objects, it’s no surprise that they’ve been computerised from the very earliest days. Nowadays there are online services, but having an ephemeris on your laptop frees you from data connectivity worries when out with your telescope. wxEphe is a wxWidgets-based application – easy to install given a few basic packages – displaying ephemerides for the Sun, the Moon and the planets, given a date and the observer’s location. The default location is Lyon, and changing this currently means editing a config file, armed with your co-ordinates. Although only the planets (except Pluto, you’re not going to see that through your telescope) and the Moon are covered, the data given is extensive, from coordinates to phase.

Above Got a telescope and a basic guide to use? Now all you need to view the planets is to run wxEphe

Pros All the data you need to find and observe the major bodies of our solar system, all very neatly presented.

Cons It is not quite finished yet – adding your location piecemeal to a config file is not ideal in a GUI application.

Great for…

Anyone who needs a little help finding the planets



Get your listing in our directory To advertise here, contact Luke | +44 (0)1202586431


Hosting listings Featured host:

About BHost

BHost specialises in doing one thing and doing it right: Linux Virtual Private Servers (VPS). BHost doesn’t sell extras like domain names or SSL certificates – they are simply dedicated to providing customers with the highest quality VPS service with exceptional uptime and competitive pricing. BHost’s customer focus means that their team always goes above and beyond to help.

What BHost offers

• Affordable VPS – deploy your own Linux Virtual Private Server, starting at just £5 per month • Unlimited bandwidth allowance – you will never have to worry about data transfer quotas with BHost!

Their platform successfully hosts a whole variety of services, including game servers, office file storage servers, email and DNS servers, plus many thousands of websites. BHost is confident that you will love their service. However, if for any reason you’re not completely satisfied, they will provide you with a full refund within 14 days of ordering, guaranteed! Why not drop them an email?

They are simply dedicated to providing customers with the highest quality VPS service with exceptional uptime and competitive pricing

• International reach – you have a choice of data centre locations, including the UK, Netherlands and USA • Excellent service – 100% satisfaction guaranteed, or BHost will provide you with a full refund!

5 Tips from the pros


Run your choice of Linux distro A good VPS host will let you work the way you want with your preferred Linux distribution. BHost offers Ubuntu, Debian, CentOS and Fedora as standard, although if you need another distro then they will happily make it available upon request.


Bandwidth allowances and cost If your server gets a surge of traffic, don’t end up with a surprise bandwidth bill – look out for hosts like BHost that offer unlimited or unmetered bandwidth.


Port speeds – one gigabit or ten? Your VPS will have a virtual network adaptor – check that your host allows this to run at 1-Gbit speeds, so that data transfer to and from your machine is fast.


If you’re planning on hosting an application needing plenty of bandwidth, BHost can also offer 10-Gbit ports upon request.


Build a dev machine – or ten! Deploying a new VPS on a BHost platform takes only 30 seconds! Many of BHost’s customers have several machines for development, permitting thorough testing and debugging of applications and other code before moving them to a production environment.


Be IPv6-ready IPv4 addresses are running out. To ensure your services are future-proofed, find a host that fully supports the new standard: IPv6. Every BHost customer gets an IPv6-ready service as standard.


Russ Murch, “I was recommended BHost as a reliable source for our hosting requirements and they have not let Function28 down. On the odd occasion when there are technical questions, BHost are very quick to respond.” Dave Ashton, “We have been customers of BHost for over 5 years. The facilities they offer are ideally priced for us, and BHost’s service and support are absolutely second to none.” Flamur Mavraj, @oxodesign “After years with shared hosting, I was looking for a VPS to move my clients to. I was looking for three things: speed, reliability and low cost, and BHost have it all.”

Supreme hosting

SSD Web hosting 0800 1 777 000 0843 289 2681

CWCS Managed Hosting is the UK’s leading hosting specialist. They offer a fully comprehensive range of hosting products, services and support. Their highly trained staff are not only hosting experts, they’re also committed to delivering a great customer experience and passionate about what they do.

Since 2001 Bargain Host have campaigned to offer the lowest possible priced hosting in the UK. They have achieved this goal successfully and built up a large client database which includes many repeat customers. They have also won several awards for providing an outstanding hosting service.

• Colocation hosting • VPS • 100% Network uptime

Value hosting 02071 838250

UK-based hosting: | 0845 5279 345 Cyber Host Pro are committed to providing the best cloud server hosting in the UK – they are obsessed with automation. They’ve grown year on year and love their solid, growing customer base who trust them to keep their business’ cloud online! If you’re looking for a hosting provider who will provide you with the quality you need to help your business grow, then look no further than Cyber Host Pro. • Cloud VPS Servers • Reseller hosting • Dedicated Servers

• Shared hosting • Cloud servers • Domain names

Value Linux hosting 01642 424 237

ElasticHosts offers simple, flexible and cost-effective cloud services with high performance, availability and scalability for businesses worldwide. Their team of engineers provide excellent support around the clock over the phone, email and ticketing system.

Linux hosting is a great solution for home users, business users and web designers looking for cost-effective and powerful hosting. Whether you are building a single-page portfolio, or you are running a database-driven ecommerce website, there is a Linux hosting solution for you.

• Cloud servers on any OS • Linux OS containers • World-class 24/7 support

• Student hosting deals • Site designer • Domain names

Small business host

Fast, reliable hosting 0800 051 7126 HostPapa is an award-winning web hosting service and a leader in green hosting. They offer one of the most fully featured hosting packages on the market, along with 24/7 customer support, learning resources, as well as outstanding reliability. • Website builder • Budget prices • Unlimited databases

Enterprise hosting: 01904 890 890 | 0800 808 5450 Formed in 1996, Netcetera is one of Europe’s leading web hosting service providers, with customers in over 75 countries worldwide. As the premier provider of data centre colocation, cloud hosting, dedicated servers and managed web hosting services in the UK, Netcetera offers an array of services to effectively manage IT infrastructures. A state-of-the-art data centre enables Netcetera to offer your business enterprise-level solutions.

Founded in 2002, Bytemark are “the UK experts in cloud & dedicated hosting”. Their manifesto includes in-house expertise, transparent pricing, free software support, keeping promises made by support staff and top-quality hosting hardware at fair prices.

• Managed and cloud hosting • Data centre colocation • Dedicated servers

• Managed hosting • UK cloud hosting • Linux hosting



Contact us…

Your source of Linux news & views


Your letters

Questions and opinions about the mag, Linux, and open source

Turing test

Hello LU&D, Every so often, when following tutorials in your magazine, I come across the phrase ‘Turing-complete’. Correct me if I’m wrong but Alan Turing was around a long time before Linux was invented, so what does he have to do with computers today? Ray Gerrard That’s a very good question Ray! As most people know, WWII codebreaker and scientist Alan Turing is often called ‘the father of computing’, and the reason for this, and for the phrase ‘Turing-complete’, is that the theories he posited still govern the mathematics under the hood of all computers and programming languages. While Turing did design and build early computers, a Turing machine is a thought experiment he came up with – an idea of a computer that has an infinite tape (the I/O and storage method used at the time) that has symbols written on it. In the thought experiment, the Turing machine reads a symbol on the tape and then follows a set of predefined rules to decide whether it writes a symbol, moves left or right, follows another instruction, or stops. It sounds simple, but at its heart it’s the mathematical model for how computers behave. Something that’s Turing-complete, then, is something that can behave exactly as a Turing machine would. Any function that can be computed by an algorithm can be computed by a Turing machine – as this is how today’s computers work, they can be said to be equivalent to a Turing machine, or Turing-complete. A programming language that’s Turing-complete can make use of branches – instructions in a program that tell the computer to do something else rather than continue to execute the program it’s been given in order. This means that the computer can be told to do something only if the right conditions have been met. ‘If’ statements are a primary example of this – if this, then do that. Not all programming languages in use today are Turing-complete, but the majority are. So when one of our tutorials explains that something is Turing-complete, it means that either it uses a programming language that’s Turing-complete, or that the code instructions used could be run by a Turing machine.
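To make that branching point concrete, here’s a trivial shell snippet (our example, not part of Ray’s question) where the computer only takes an action if a condition holds:

# a simple branch: do one thing if the file exists, something else otherwise
if [ -f /etc/os-release ]; then
    echo "This looks like a modern Linux system"
else
    echo "No /etc/os-release found here"
fi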

Above A model of a Turing machine seen at the Harvard Collection of Historical Scientific Instruments


Getting drivers

Hi guys, I’ve just bought a spanky new graphics card for my dual-boot machine. The Windows drivers have installed fine from the disc in the box, but there are no Linux drivers on there – how can I get it up and running properly on my Linux partition? Dave Nairn The good news, Dave, is that your graphics card will more than likely just work natively under Linux, especially as you’ve already confirmed the hardware works under Windows. To find out, start by opening the Terminal and typing: lspci | grep VGA – Linux should tell you the name and model of your graphics card. Now check the version number of the drivers in use with: glxinfo | grep OpenGL





Above The Mesa 3D Graphics Library is a good source for graphics card drivers for Linux

This should give you a result that says ‘OpenGL version string’ and then specifies a number, which will be the version number of the graphics card drivers Linux is using for your card. If, however, this returns a result that says ‘OpenGL renderer string: Mesa X11’ or ‘Software Rasterizer’ then there are no hardware drivers installed for your graphics card under Linux. But don’t despair! We’re guessing that your ‘spanky new’ graphics card is likely to be an AMD or nVIDIA card, in which case there will also be closed-source, proprietary drivers from the manufacturers that you can enable (unless you’re running Fedora, which sulks about them). On an Ubuntu-based distro, open the Dash, search for ‘Additional Drivers’ and launch it – it will detect your hardware and look for drivers you can use. Still no luck? A good source for open source drivers is the Mesa 3D Graphics Library, but you will need to follow the instructions for each driver and for your particular distro as installation methods vary depending on the hardware and software you’re using. Another thing that’s worth doing is updating your distro, especially if you’re on an older version of Linux and you’re using a newer card. Some kind-hearted dev may well have created drivers for your card in the latest version of your distro.
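If you prefer the terminal to the Dash, recent Ubuntu-based distros also ship a command line equivalent of the Additional Drivers tool (a hedged sketch, assuming the ubuntu-drivers-common package is present, rather than the only route):

# list detected hardware and the packaged drivers available for it
ubuntu-drivers devices
# install the recommended proprietary driver for your card
sudo ubuntu-drivers autoinstall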

Nice NAS

Dear Linux User and Developer, Thank you so much for your article on how to build a NAS (issue 167, p36 – Ed). I followed your guide and put together a box running FreeNAS to store photos, media files, and backups from the family’s computers. I actually made mine out of an old PC as I didn’t fancy shelling out £150 for the HP Microserver that was mentioned, but it works just as well and has come in very useful. For example, my eldest son uses iMovie on his MacBook a lot to make gaming videos and so he rapidly runs out of hard drive space – now he can back up his files to the NAS (with a bit of help from Dad) and take the older working files off his laptop, so he’s not moaning about running out of room all the time. We can also serve up music or video around the house, which is great when the wife and I are sick of the football/ tennis/cricket (we have two teenage boys) as we can now play our entire digital music collection in the dining room rather than relying on the PC in the living room. Alan Phelan We’re glad you enjoyed the article and that it’s been useful to you Alan! We’re big fans of network-attached storage as it makes it easy to back up or access big files that would otherwise

Above FreeNAS is a great open source resource for building your own network-attached storage

take up a lot of room on individual devices. One thing we will say, though, is to make sure that you back up the data on your NAS regularly, especially if it’s photos in lossy formats like JPEG, which lose a little quality each time they’re edited and re-saved. Also, accidents can happen to a NAS box just like any other device (just ask our cat), so our expert’s advice about cloud backup, whether you pay for a plan or buddy up with a friend to store each other’s data, is very definitely worth following.
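A simple way to act on that advice – assuming the NAS share is mounted on the machine doing the copying, and with purely illustrative paths – is a periodic rsync to a second disk or a remote box:

# mirror the photo share to an external drive; --delete keeps the copy an exact mirror
rsync -av --delete /mnt/nas/photos/ /media/backup/photos/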


From the makers of

The Python Book

Discover this exciting and versatile programming language with The Python Book. You’ll find a complete guide for new programmers, great projects designed to build your knowledge, and tips on how to use Python with the Raspberry Pi – everything you need to master Python.

Also available…

A world of content at your fingertips Whether you love gaming, history, animals, photography, Photoshop, sci-fi or anything in between, every magazine and bookazine from Imagine Publishing is packed with expert advice and fascinating facts.


Print edition available at Digital edition available at

Free with your magazine

Instant access to these incredible free gifts…

The best distros and FOSS Essential software for your Linux PC

Professional video tutorials

The Linux Foundation shares its skills

Tutorial project files

All the assets you’ll need to follow our tutorials

Plus, all of this is yours too… • All-new tutorial files to help you master this issue’s Go tutorial

• 20 hours of expert video tutorials from The Linux Foundation • The essential distros so you can multi boot your machine • Must-have backup software

• The best partitioning software

• Program code for our Linux and Raspberry Pi tutorials

Log in to Register to get instant access to this pack of must-have Linux distros and software, how-to videos and tutorial assets

Free for digital readers too!

Read on your tablet, download on your computer

The home of great downloads – exclusive to your favourite magazines from Imagine Publishing Secure and safe online access, from anywhere Free access for every reader, print and digital

An incredible gift for subscribers

Download only the files you want, when you want All your gifts, from all your issues, in one place

Get started Everything you need to know about accessing your FileSilo account


Follow the instructions on screen to create an account with our secure FileSilo system. Log in and unlock the issue by answering a simple question about the magazine.

Unlock every issue

Subscribe today & unlock the free gifts from more than 40 issues

Access our entire library of resources with a money saving subscription to the magazine – that’s hundreds of free resources


You can access FileSilo on any computer, tablet or smartphone device using any popular browser. However, we recommend that you use a computer to download content, as you may not be able to download files to other devices.


If you have any problems with accessing content on FileSilo take a look at the FAQs online or email our team at the address below

Over 20 hours of video guides

Essential advice from the Linux Foundation

The best Linux distros Specialist Linux operating systems

Free Open Source Software Must-have programs for your Linux PC

Head to page 26 to subscribe now Already a print subscriber? Here’s how to unlock FileSilo today… Unlock the entire LU&D FileSilo library with your unique Web ID – the eight-digit alphanumeric code that is printed above your address details on the mailing label of your subscription copies. It can also be found on any renewal letters.

More than 400 reasons to subscribe

More added every issue

Linux Server Hosting from UK Specialists

24/7 UK Support • ISO 27001 Certified • Free Migrations

Managed Hosting • Cloud Hosting • Dedicated Servers

Supreme Hosting. Supreme Support.
