


Pages of tutorials and features Ubuntu bare METAL server LibreBoot your laptop Build a file sharing box Coding Academy: Take Rust further and build a REST app

Get into Linux today!

NEW MINT! The most refreshing distro of 2016 is here Your complete guide to Mint 18 Enable key kernel updates Get inside Cinnamon 3.0 Discover the new X-Apps …try it all today!

Puppet on a string

They’re hungry, like really ready to eat, and there’s nothing for them to put their food on!

Kara Sowles on building successful events

Roundup: Screen casting
Broadcast to the world with the best open source tools

Turtl notes
Dump Evernote for this open source star – it’s easy!

Shared server

Build a Pi Linux drone! Create a fully functioning drone powered by a Pi Zero

Welcome Get into Linux today!

What we do

We support the open source community by providing a resource of information, and a forum for debate. We help all readers get more from Linux with our tutorials section – we’ve something for everyone! We license all the source code we print in our tutorials section under the GNU GPL v3. We give you the most accurate, unbiased and up-to-date information on all things Linux.

Who we are

This issue we asked our experts: What do you like or not like about Mint? Or how did you first come to use it?

Jonni Bidwell Mint has been great for ‘fixing’ various old XP machines. Auntie Ethel loves it – she’s currently learning ROP exploits in order to exact revenge following some poor adjudication at the bridge club. As well as being accessible, it has the most wonderful squelchy effect on the context menu. It’s reason enough to get a new video card.

Neil Bothwick It has to be the colour. While many people switched to Mint because of the old-school desktop environments, for me it was the chance to have a desktop that was anything but a murky brown or purple. Please don’t tell me defaults can be changed when all it takes is a switch of distro.

Matt Hanson My favourite feature of Mint is actually the Cinnamon desktop environment. It’s helped me convince a number of friends and family members that Linux isn’t this scary alien operating system, but is just as easy to use – and runs far better – than their ageing Windows XP.

Nick Peers The best thing about Mint is that it’s basically Ubuntu for Windows switchers. Its Cinnamon desktop makes it that bit easier to wean yourself off your Microsoft dependency, giving new users the confidence to explore Linux before coming to the right conclusion: who needs Windows anyway?

Les Pounder I first used Mint 17 after hearing how good it was from Tony Hughes at my local LUG. Tony was right, Mint is the ideal distro for those moving from Windows machines. It just works really well no matter the hardware – you just get a really good experience for any level of user.

Information wars

2016 is turning out to be an exciting year for GNU/Linux distro releases. First we had the release of the class-leading Ubuntu 16.04 LTS, and from that a host of spin-off distros have been slowly appearing. The next most popular release is Linux Mint 18. Based on that LTS Ubuntu release, this next big Mint release brings with it a host of huge changes to the popular distro. To celebrate Linux Mint 18 being released, we’re running a comprehensive feature on what’s new and exciting in this release of Mint, from its all-new X-Apps to the new improved Cinnamon desktop that so many know and love. We run you through the install process, look at how Mint is built, examine where it went wrong with its security in the past, and show how it has fixed the situation and where Mint could still improve itself. We’re sure you’re going to love Mint 18, so we’ve got both the 64- and 32-bit releases on the disc.

As we know, Linux is expanding in all directions, and one of these is into more and more embedded applications. This issue we explain how you can build your own Pi-powered, Linux-run drone. Importantly, it won’t break the bank and you don’t need to be Elon Musk to build it. It’s just another example of the expansion of Linux into real-time hardware applications.

We’re also continuing to look at the issue of open hardware, this issue by examining how useful LibreBoot – the completely open BIOS replacement – actually is. Would you want to try it? Read our feature to find out.

There’s also much, much more this issue: with tutorials covering everything from basic terminal tasks to the utterly involved ggplot2, finishing our look at Rust, plus the absorbing Play and Swagger, we’re continually blown away by how varied and interesting living in the FLOSS world is. Enjoy!

Neil Mohr Editor

Subscribe & save!

On digital and print, see p30

Summer 2016 LXF214 3


“All of humanity’s problems stem from man’s inability to sit quietly in a room alone.” – Pascal

Reviews Dell Chromebook 13......... 17 Dell continues its rollout of top-flight Chromebook offerings, this time with a model that feels more like a business-class ultra-laptop than a cheap budget system.

Dell is back and this time it has brought a new Chromebook line.

Asus Chromebook C202..18
Can another rugged Chromebook rule the classroom? Asus thinks so and has built something as solid as a tank. We’re not so sure, as it runs about as fast as a tank…

Get started with Mint 18
One of the most loved and easy-to-use distros gets its biggest upgrade. We dive deep into what’s new. Get going on page 32

Roundup: Screencasting tools p24

LulzBot Taz 6 .....................19 Reflecting its open hardware origins, the Taz 6 refines all that has come before. Alastair Jennings checks out the latest features of this hotly anticipated 3D printer.

Fedora 24........................... 20 Another Goliath in the distro world has been released. Jonni Bidwell takes an extended look at this latest release and all the cutting-edge technology it offers.

For those who love living on the edge Fedora delivers a vast amount.

AMD Radeon RX 480....... 22 We put the first Polaris GPU from AMD, which uses its open source-first AMDGPU driver, through its paces and we love what we find.

S.USV Pi Advanced ......... 59 Even a Raspberry Pi deserves UPS protection... Les Pounder tests a solution that can keep your projects running.


Interview Something’s going to go wrong, I don’t know what, but I’m excited to face it. Puppet Labs’ Kara Sowles on event planning p40

On your FREE DVD Mint 18 Cinnamon, Mint 18 Mate Edition, Peppermint 7. 64-bit



Subscribe & save! p30

Only the best distros every month PLUS: Hotpicks, Roundup & more!

Raspberry Pi User

In-depth... Build a Linux drone.............. 44
With Linux drones selling for up to £2,000, we head out to build our own for a tenth of the cost. Alastair Jennings is your engineering friend.

Pi news................................... 58
Scratch games celebrate the 2016 Olympics in Rio, and the Pi gets killer AI.

S.USV Pi Advanced .............. 59
What happens when your Pi project loses all its power? Nothing, cos you have a UPS attached!

Pi camera effects ................. 60
Les Pounder shows us how to use the new Raspberry Pi Zero and official Pi Camera to create a device to instantly capture the moment.

Build a Pirate Box ................ 62
Nate Drake shares a treasure map to turn your Raspberry Pi into a secure device for offline chat and sharing your media.

What will you do with your drone?

Coding Academy

Play and Swagger ................ 83
Bernard Jason explains how to expose your APIs using REST, then with a little Swagger how to make it clear to use.

Rust: Multi-threading .......... 88
Our series finally rusts to nothing with Mihalis Tsoukalos... This month we explain what you need to start using threads in Rust.

Tutorials

Terminal basics: X11 remote access ..........66
Nick Peers has had enough of the terminal and fires up a remote SSH connection capable of running X11.

Ubuntu server: Hitting the METAL ..........68
Mayank Sharma explains how Ubuntu Server offers an easy way to quickly fire up multiple servers on real hardware.

Regulars at a glance

News............................. 6
32-bit, your days are numbered! So say a host of distros. Linux users are now the 2 percent. And Vulkan starts to power open source gaming.

Mailserver.................. 10
From RAID to Linux AV we’ve got a bulging post bag and just the one angry Englishman this time.

User groups................14
Les Pounder heads to Liverpool as Makefest takes over the library.

Roundup ....................24
Mayank Sharma wants to teach the world to sing in perfect harmony, so he’s screen casting his singing. Awful.

Subscriptions ...........30
Subscriptions are really important, so we’re told. So go, run, subscribe and make the Masters ever so happy!

Sysadmin...................48
Mr. Brown is having a mid-life crisis, takes a stab at machine learning and has a play with Amazon’s Lambda services without infrastructure.

HotPicks .................... 52
Alexander Tolstoy hasn’t been sacked from the Baltic fleet, he’s too busy navigating the temperate open source waters for: Eko, StyleProject, Qt Virt Manager, SquashFS, Synapse, qBittorrent, Trojitá, FlightGear, Raging Gardens, PDFGrep and Shotwell.

Overseas subs ..........82
We ship Linux Format all around the globe, subscribe and save money!

Next month ...............98
We celebrate the Linux kernel turning 25 years old by looking back at how it developed, the key distro releases and how you can relive those days.

Notes server: Using Turtl ....................... 72
We start using the latest open source system that promises pervasive note taking and more – it’s awesome!

Plotting: ggplot ................................ 74
Mihalis Tsoukalos isn’t doing Game of Thrones but graphical plotting.

Boot it: Libreboot the X200 ......... 78
Neil Mohr takes his perfectly functioning laptop and decides to wipe its brain.

Metal as a service.

Our subscription team is waiting for your call.

This issue: 32-bit support

Linux desktops at 2%

Linux gaming


Distro news

More distros drop 32-bit

But you won’t need to upgrade just yet...


The European Union, David Cameron, almost all of the Labour Shadow Cabinet… all are going out of fashion or disappearing completely, but will 32-bit distributions (distros) follow suit? Painfully forced UK politics comparisons aside, the fate of Linux distros that support 32-bit hardware is once more up for debate, with Ubuntu’s Dimitri John Ledkov suggesting on a mailing list in June that Canonical should consider dropping support for 32-bit distros in the near future. He did so pointing out that 32-bit hardware is becoming ever more uncommon, with third parties dropping support, and that “Building i386 images is not ‘for free’, it comes at the cost of utilizing our build farm, QA and validation time… As well as [taking] up mirror space and bandwidth.” Canonical has already confirmed that there won’t be a 32-bit image of Ubuntu 16.10, though you will be able to install the 32-bit version from installers. Before Ubuntu users with 32-bit hardware panic, this is in no way an official stance of Canonical, and even Ledkov himself suggests a long time


frame for phasing out 32-bit support, with April 2021 seeing the end of i386 as host/base OS architecture (coinciding with the end of support for Ubuntu 16.04 LTS) and April 2023 the end of running legacy i386 applications with security support.

Bit long in the tooth

Ledkov states that “between now and 2018, it would be logical to limit the amount of new installations of i386, because cross-grading between i386>amd64 is not something we can reliably ship. We must continue [to] provide the i386 port, to support multiarch and third party legacy application that are only available as i386 binaries.” With decreasing downloads of 32-bit versions, many distros and projects are evaluating the time and energy spent on creating these versions, with distros such as Fedora and OpenSUSE dropping 32-bit images. It was also announced in May this year that Debian would be dropping support for i586 and hybrid i586/i686 processors in Debian 9, with the changes implemented to the Linux kernel 4.3 packages that have been uploaded to Debian’s Unstable repositories (repos). Again, owners of 32-bit hardware won’t have to upgrade their components just yet, with the 32-bit supporting Debian 8 “Jessie” scheduled to enter the LTS stage in May 2018, and support stretching to 2020. Of course, you should consider upgrading your hardware in the next few years to ensure you continue to get mainstream support. But we’re sure there will continue to be specialist distros that will support 32-bit hardware.

Good night, sweet prince—support for 32-bit processors such as the Intel i386DX may be dropped by many mainstream distros in the near future.

It’s also worth noting that dropping support for 32-bit processors, such as i386, i586 and i586/i686 hybrids, does not mean that support for all 32-bit hardware is going to be dropped. There was a flurry of speculation a few years ago when the Linux kernel dropped support for 386 processors. Many people thought this signalled the end of 32-bit support when, in fact, it simply meant the removal of 386-specific code. So the end of 32-bit support in Linux is not quite nigh just yet... but it might not hurt to begin thinking of switching to 64-bit in the near future to make your life easier.

Newsdesk

Desktop news

We are the 2%!

The latest figures show Linux use has now risen above 2% of desktop market share, but Windows is still a little way ahead.


Linux now has 2% of the desktop market, rising from the 1.63% it had in August 2015. In a graph published by Net Market Share, there was quite a jump in June 2016. At 2.02% market share, Linux is still behind Mac OS (8.19%), and Windows still has a huge lead with 89.79% of the market as of June 2016. By the time you read this, the numbers may even be slightly higher for Linux. The way Net Market Share collects its data is from browsers that visit certain websites that are

included in the network of HitsLink Analytics and SharePost clients, which is made up of 40,000 websites and brings data from around 160 million unique visits per month. So while this gives us a very good idea of OS usage for internet users, it doesn’t tell the whole story. So what could be the cause of this surge in Linux use? It could be down to the growing popularity of Chromebooks: as we reported last issue, Chromebooks have now outsold Macs for the first time in the US, with Dell, HP and Lenovo combined shipping almost 2 million of the devices.

Newsbytes

After six years it looks like Sony is finally going to pay PlayStation 3 owners, after it removed the ability to install Linux in March 2010, which annoyed many people who’d already bought the console only to find a much trumpeted feature had been taken away from them. While the deal is still being thrashed out, it looks like Sony will pay $55 to any PS3 owner who had a console manufactured between November 2006 and September 2009 and can prove they lost the use of Linux. If you don’t have proof you may still be able to claim $9. You should get an email from Sony (if you signed up for the PlayStation Network) detailing how to claim.

Did you use Linux on the PS3 before the feature was dropped? Sony might owe you some cash…

This graph from Net Market Share shows the rise in Linux use – now above 2%!

Linux gaming news

Big names continue to get behind Linux

Vulkan, Dell Steam Machines and AMD support.


Another month and another round of great news for Linux gamers. At this year’s E3 gaming convention Dell announced new Steam Machines in its gaming enthusiast Alienware range. These console-like PCs can be plugged into TVs in the front room and run the Debian-based SteamOS operating system. With the latest Intel Skylake CPUs and Nvidia GTX 960 graphics cards, these Steam Machines will play the latest blockbuster games with ease,

The Alienware Steam Machine looks like a console, and brings Linux games to your lounge.

and an increasing number of these games are being ported over to Linux thanks to Valve. AMD also took to the stage to announce that its new Radeon RX 470 and Radeon RX 460 graphics cards, which are based on its eagerly anticipated Polaris architecture, will ship with both open and closed source Linux drivers. Nvidia has also offered Linux drivers for its latest GTX 1080 and 1070 GPUs, further strengthening Linux’s gaming credentials. Support for the Vulkan API is also growing, which makes it even easier for graphically intensive games to be ported from the Windows 10-only DirectX 12 API, with big titles – such as the recent reboot of Doom – embracing Vulkan, and game engines, such as Unity, looking to implement Vulkan support in the future. This could finally put a stop to Microsoft using DirectX to force gamers to use its Windows operating system.

Security researcher Dmytro “Cr4sh” Oleksiuk has found another critical vulnerability in Lenovo PCs affecting the UEFI, which disables firmware write protection. This zero-day exploit is dubbed ThinkPwn and Oleksiuk observed that “Running of arbitrary System Management Mode code allows [an] attacker to disable flash write protection and infect platform firmware, disable Secure Boot, bypass Virtual Secure Mode (Credential Guard, etc) on Windows 10 Enterprise and do other evil things.”

Has nano, the popular text editor, opted out of GNU with version 2.6.0? After a post saying “with this release, we take leave of the herd” on the nano news page many thought so, perhaps due to plans to move nano to GitHub, which is incompatible with the GNU licence. But it may be simply a fork – one of the GNU maintainers team posted on Hacker News that GNU Nano continues: “The current maintainer of GNU Nano, Chris Allegretta, was hoping to add Benno Schulenberg as a co-maintainer, [but] Benno refused to accept GNU’s maintainership agreement… It seems Benno decided to fork the project. But he updated the official GNU Nano website rather than creating a website for the fork.” The plot thickens...


Comment

Kill all package managers! Alex Campbell

The first-boot Linux experience is one of the best things about the modern Linux desktop distro. First of all, many Linux desktop distros come pre-installed with the likes of Firefox and LibreOffice, but you still need to get the other things. Doing this usually takes just one or two entries on the command line. You then relax as the system does its magic. It’s awesome, but there’s a problem with the Linux package manager ecosystem, and it’s the fact that there’s an ecosystem of package managers at all.

Packed field

There’s a slew of package systems to choose from: Debian, Ubuntu and its derivatives (such as Linux Mint, see our Feature, p32) use APT; Fedora and Red Hat use Yum; OpenSUSE uses ZYpp, which is compatible with Yum; Slackware has Pkgtools; Arch uses Pacman; and Gentoo uses Portage. Yes, that is quite a list. This is a big problem, because when developers want to release software for Linux, they have to choose how to package the product. It’s not the developers’ fault. Developers spend most of their time writing software. That takes time and energy – they don’t want to exhaust that effort making pretty packages for every single distro under the sun.

While we’re here, let’s give a big hand to package maintainers. Seriously. These folks do a lot of grunt work for distributions with very little recognition. These poor souls donate their precious time to package the latest versions of their software project of choice for the distro that they use. It’s a thankless job that only exists because of package system fragmentation.

Package managers are one of the core elements that differentiate distros from one another. They really are some of the best and worst parts about Linux.

Alex is the associate editor and resident Linux geek on Maximum PC (@MaximumPC).
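To make that fragmentation concrete, here is how installing one and the same package looks under the managers listed above. This is an illustrative sketch of our own, not from the column: we’ve picked htop arbitrarily, exact package names can vary between distros, and every line needs root.

```shell
# Same package, six different front ends -- run only the line
# that matches your distro.
sudo apt-get install htop      # Debian, Ubuntu, Linux Mint (APT)
sudo yum install htop          # Red Hat, older Fedora (Yum)
sudo zypper install htop       # openSUSE (ZYpp)
sudo slackpkg install htop     # Slackware (slackpkg atop Pkgtools)
sudo pacman -S htop            # Arch (Pacman)
sudo emerge sys-process/htop   # Gentoo (Portage)
```

Six incompatible spellings of the same request is exactly the duplication of effort the column is complaining about.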


Distro watch

What’s behind the free software sofa?

KDE neon 5.7


Fresh off the heels of the newly-released KDE Plasma Desktop 5.7 comes KDE neon 5.7, a Kubuntu-based distro that specialises in fast development, allowing it to take advantage of a cutting-edge desktop stack which includes the latest version of Plasma. As well as KDE Plasma 5.7, it features Qt 5.7, improved Wayland integration, a new system tray and a new task manager. To find out more, see the release announcement.

Zenwalk Linux 8.0


The latest major version of the Slackware-based distro, Zenwalk Linux 8.0, has been released after what it calls a “long development blackout”. It comes with updates to essential programs (such as LibreOffice 5.1.3, Chromium 51, MPlayer 1.3 and FFmpeg 3.0.1), as well as the Linux kernel 4.4.14 and the Xfce 4.12.1 desktop environment. The ISO weighs in under 1GB, though support for 32-bit hardware has been dropped. The release statement notes that “As it is hard to find 32-bits CPUs nowadays, I believe that the old 32-bit architecture is for small specialised systems only”.



Write to us at Linux Format, Future Publishing, Quay House, The Ambury, Bath BA1 1UA.


Mailserver

You know, you’ve been posting some darned useful articles recently. Back in LXF206 there was one on RAID [Tutorials, p76]. I remember messing with that back with Ubuntu 10.04, with the objective of building a high-reliability PC. I even got to scripting the RAID build, to make it easier and faster. Then in LXF207 you had a review of Fedora Security Lab [p20], and put it on the cover-disk of LXF208. So far, so brilliant. Next step? How about an article on how to install Linux on a RAID set? Back with Ubuntu 10.04, Canonical produced the ‘Alternate-build’ CD, with a simple text-based installer to make room for the RAID additions, but not any more. An article on how to augment and process a standard release would be ooh-so-helpful. And an excellent follow-up to the Security Lab/Suite would be an introduction to the basics of pentesting, such as some of the most useful commands and what they discover. Dare I look forward? Peter Overfield, Shefford

Neil says: Thank you for your kind words. Often Jonni sidesteps the issue, saying just use RAID for your main storage and run Ubuntu on a small SSD – it’s almost like he’s avoiding the issue… Last issue we did outline how it was pretty straightforward to use something like ZFS to store your boot OS alongside. Being more helpful, Ubuntu does offer a software RAID install guide (SoftwareRAID), but be aware that if you’re planning to dual-boot Windows you’ll likely need to use the FakeRAID path (http://bit.ly/FakeRaidHowto) unless you have a true hardware RAID controller. It’s certainly something we’re interested in and will hopefully get to look at, right after we get to Slackware…

We love the smell of RAID in the morning, and the burning sensation.

No English

I have read the letter from Harvey Rothenberg, USA and I’m absolutely astonished at his lack of knowledge regarding the distribution of magazines. First, LXF is not an American-published magazine, so why would it target that area? Second, USA manufacturers and suppliers of electronic equipment seem to think that the only language in the world is American English, and nobody actually speaks or writes the English language. There is no American English language – the Americans have bastardised standard English and claimed it as their own. Last, Harvey Rothenberg of USA should stick with an American version of Linux (if such a thing exists) and if he doesn’t like EU versions of anything then he should realise that there are places in the world other than the USA! Robin Strachan, UK.

Neil says: As we’ve just chosen to Brexit the EU, we’re not going to take any high grounds here. But, really, what’s in a name? I often see the English (US) option, so that seems close enough – and don’t get Effy started on Spanish, the Spanish are always telling him how to talk.

We don’t want to start an argument, but look—it’s in Wikipedia.

RUST FTW!

I agree with Jim Blandy about C. It is as worthless as Basic and Fortran; the only advantage is that nobody will ever steal your source, as it is completely incomprehensible. Around 1978, I ‘ordered’ my computer group to switch to Pascal. One person refused: a good programmer does not make mistakes, only the amateurs need something like Pascal. Half a year later he lost most of his brother’s experimental data; a mistake caused by a slight confusion between subroutine a10 and a11, which is a three-line program that doesn’t exist in Pascal, but the following one is close:

program arraytest;
var
  a1: array[1..3] of integer;
  a2: array of integer;
begin
  a1[1] := 1;
  writeln(a1[1]);

  a1[5] := 5;
  a2[7] := 7;
  writeln(a1[5]);
end.

Compiling and executing results in:

Free Pascal Compiler version 2.6.4 [2015/03/25] for x86_64
Compiling arraytest.pas
arraytest.pas(9,6) Warning: range check error while evaluating constants
arraytest.pas(10,3) Warning: Variable “a2” does not seem to be initialized
arraytest.pas(11,14) Warning: range check error while evaluating constants
[johan@localhost test]$ ./arraytest
1
Runtime error 216 at $000000000040EEB9

Why learn another programming language, again? Johan Herbschleb, Portugal

Jim Blandy highlighted the perils of undefined behaviour.

Neil says: I think Jim Blandy said it all in our interview with him in LXF209 [p40]: “The jury is in, the experiment has been run, humans can’t write that code, they can’t be trusted.”
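Johan’s Pascal example translates neatly to Rust, the language our Coding Academy has been covering. This sketch is ours, not from the letter: Rust bounds-checks every array access, so the same out-of-bounds mistake is either rejected at compile time or stopped with a clean panic, never a silent write into a neighbouring variable.

```rust
fn main() {
    let a1 = [1, 2, 3];

    // In-bounds access works as expected.
    println!("{}", a1[0]); // prints 1

    // Writing a1[5] with a literal index does not even compile
    // (the out-of-bounds access is detected statically), so we
    // use a runtime index to show the checked path instead.
    let i = 5;

    // Checked access returns an Option rather than corrupting memory.
    match a1.get(i) {
        Some(v) => println!("a1[{}] = {}", i, v),
        None => println!("index {} is out of bounds", i),
    }

    // Direct indexing with a runtime index would panic safely:
    // let x = a1[i]; // panic: index out of bounds
}
```

Unlike the Free Pascal build above, no warning flags are needed: the bounds check is always on.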

Open letter

Every month I buy Linux Format in order to find something that really helps me. Almost without exception, I’m disappointed because I belong to that large group of people who try to get some financial benefit by using a Linux machine. Using Windows is expensive for me, but I don’t have a choice as Linux doesn’t work as well as Windows. On the other hand, I don’t understand why the Linux operating system exists. It’s not a very good reason that boys have something to play with. I’m sure that they defend themselves by saying that it’s their right in the free world. Of course, it is a free world, but being so, it is not a fair thing to write that Linux is as user-friendly an OS as Windows. I can assure you that all my Windows applications don’t work via Wine on Linux, my printer doesn’t work at all and my MIDI files are dumb even though I have installed all the needed code screws and bolts. All problems on Linux are due to its so-called freedom. Instead of inventing something really new, programmers invent the wheel all over again. From the users’ point of view it would be much better if there were one Linux distro which works properly than an uncountable amount of systems that work almost all right. You don’t know how happy I am now when I can count on you, Linux Format Editorial Staff. Next month, my MIDI player and editor on Mint Linux will praise you loud and clear. Linux Format is a decent magazine but too Linux introspective. For example, Android is often criticised in vain. I haven’t had any setbacks with my Android laptop but with Linux a lot. Yelling Rosa, A Poor Poet

Neil says: An excellent poem [although we edited it a lot – Ed], and a lovely lighthearted way to look at Linux. As I’ve said before, Linux isn’t Windows. Blaming Linux for not running Windows programs, printer companies not supporting Linux drivers and using the proprietary MIDI format isn’t a failing of Linux. OK, perhaps the last one might be: MIDI isn’t a free standard, so you’ll need to install TiMidity to gain MIDI playback, but it’s a question already answered by the community. We’d question whether Android could replace a desktop OS. Even Google doesn’t think it can, offering ChromeOS as the desktop alternative. You’re right in terms of the fractured nature of the Linux ecosystem, but we’d see that as a strength rather than a weakness. Open source enables anyone to create the next ‘Ubuntu’ or ‘Debian’ if they’re so driven. The core Linux distros do have strong developer communities behind them and produce highly polished, Windows-level slickness – and let’s not forget a huge volume of people have endless problems with Windows. Endless problems. Would you like to upgrade to Windows 10?

Distro signs As a home user I feel that it would be most helpful if articles had a header that indicated their target. All too often I start to

Android can be used as a desktop OS, but we’re not sure you’d want to…

Write to us Do you have a burning Linuxrelated issue you want to discuss?  Want to let us know that Android is  your preferred OS or tell us about  a DIY drone you’ve crashed or  maybe just suggest future  content? Write to us at Linux  Format, Future Publishing, Quay  House, The Ambury, Bath, BA1 1UA  or

Summer 2016 LXF214    11

Mailserver read something only to find that it doesn’t apply to my situation, perhaps because it looked interesting initially, but turns out to apply to only web-servers or something else. Also I feel sure a little more consideration could be given to us poor home users. I have a simple website which may benefit from the use of PHP, but everything that I have read seems to be aimed at admins of their own systems, or generally bogs me down in a big pile of jargon, eg how to implement PHP when the website is on a web hosting site. I also have two other computers connected to the router, I have read so many articles on networking, on the web and the magazine. Not one explains in plain English just what needs to be done to set up a simple network up without all the waffle on protocols and TCP etc. Just a list of instructions is all that is required. I am sure you are aware that many people like myself only want to get things working and aren’t concerned with how it works. Last, there’s something that occurs all too often in the magazine, eg ‘In Ubuntu you … bla-bla-etc’ At that point I move

on, and I’m sure that there are many others that do the same. Is it not possible to be more generic? If you have got this far in my letter, thanks, I hope it provokes some thought, and maybe a very small but useful improvement for home users like myself. Dave P, email. Neil says: I fully appreciate your frustration with the tutorials. The disappointing answer is I don’t see an easy answer, or in fact, an answer at all to this particular problem. We could flag up the target system at the start but by and large we default to Ubuntu/Unity, just as it’s the most widely used distro, and it’s based on one of the most common bases which is Debian. It’s a side-effect of the FOSS world that Linux has ended up with multiple distros, desktops and package managers, but it would be impractical to attempt to cover off all of them in every article. The fact is if you can aptget something you can also yum / pacman / portage / dpkg it in every other distro. I’d love to cover

Linux drones

ARNING W May contain Ubuntu. more distros on a regular basis but it’d mean losing the coding section or HotPicks or some other section that I’m sure people love…

AV Linux A new release of AV Linux is now available. AV Linux has a long history as a very good Linux distro particularly for audio and video programs. I have been a user for many years and I think you should do a review of it. Even more amazing is it’s a distro that’s more or less a oneman production. See http://bit. ly/AVLinux2016. Bo-Erik Sandholm, Stockholm. Neil says: OK, we will!

Linux drones
I enjoyed your drone article [Features, p32, LXF209]. The 3DR Solo drone looks fantastic! I note that you say drones are able to make deliveries in China but that due to UK restrictions this would never happen here. In fact, the rules in Chinese airspace are militarily governed and far stricter than ours; their drones can only operate safely because they do not allow GA (general aviation). UAV (unmanned aerial vehicle) rule-making is still very young here and open to interpretation. This is a dream for Linux programmers, and the core sensor should be GPS. The idea is very simple: if you are about to break any of the following rules – flying above 500 feet, no longer in the operator’s line of sight, entering restricted airspace, or getting too close to people – then automated corrective control must override your control. The Linux program that makes the perfect Robocop of the sky will win that law enforcement contract. Tony, via email.
Neil says: Funny you should mention this, Tony – check out our build your own drone feature on page 44. LXF
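Tony’s override logic boils down to a handful of threshold checks. Here’s a toy sketch in shell – the function name, limits and messages are invented for illustration (a real flight controller would do this continuously in the autopilot firmware, fed by GPS):

```shell
#!/bin/sh
# check_flight ALTITUDE_FT DISTANCE_FT -- toy rule check that prints
# either an override reason or OK. Numbers are illustrative, not law.
check_flight() {
  alt_ft=$1    # current altitude in feet
  dist_ft=$2   # distance from the operator in feet
  if [ "$alt_ft" -gt 500 ]; then
    echo "OVERRIDE: above 500ft ceiling"
  elif [ "$dist_ft" -gt 1640 ]; then  # roughly 500m 'line of sight'
    echo "OVERRIDE: beyond line of sight"
  else
    echo "OK"
  fi
}

check_flight 600 200   # prints: OVERRIDE: above 500ft ceiling
check_flight 300 200   # prints: OK
```

A real implementation would also need geofence polygons for restricted airspace – which is where the interesting programming lives.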

Letter of the month


Single board

Since you started doing them, I’ve been closely following the Raspberry Pi articles, but I never really had the time and/or incentive to actually get one and do something with it, although I did make note of potential projects I may get around to at some point. Now I do have the time and incentive. The more I read up on it, the less sure I am what to use at the heart of the project. I had originally thought it would be a toss-up between the Pi and Arduino, but now I’m reading about the BeagleBone, the ODROID-C2 and the Banana Pi. Unless it was in one of the rare issues I don’t get due to being out of the country, I don’t think you have done a detailed roundup of these

12     LXF214 Summer 2016

and listed the strengths, weaknesses and oddities of each, with the best/worst type of projects to use them for (and power consumption comparisons etc). How about doing one? I’ve been following Linux Format since the start, and Amiga Format before that, and you are still the best on the market. Mex, via email.
Neil says: The problem I have is that a) we’re UK based and the Pi is part of the UK education system, so there’s a big reason to cover it so much. And b) out of the other boards you mention, the Pi is the only one to offer full support for its own Linux distro and provide massive education and community support. Until open hardware projects actually appear (hopefully in 2017) the Broadcom GPU and SoC is the most open of all. In fact, any AllWinner-based board is an awful choice due to poor kernel, GPU and general open source support. We’ve mentioned Arduino before and that’s an excellent option if you don’t need a full computer and don’t mind coding the real-time controller. Pi competitors come, Pi competitors go, the Pi continues to grow.

“What film features the fictional tech company Blue Book?”


Linux user groups

United Linux!

The intrepid Les Pounder brings you the latest community and LUG news.

Find and join a LUG
Blackpool Makerspace: 64 Tyldesley Road, 10am every Saturday.
Bristol and Bath LUG: Meet on the fourth Saturday of each month at the Knights Templar, Bristol BS1 6DG (near Temple Meads) from 12:30pm until 4pm.
Coding Evening: Help teachers learn computing and have a beer. Various locations.
Egham Raspberry Jam: Meet at Gartner UK HQ every quarter.
Lincoln LUG: Meet on the third Wednesday of the month at 7pm, Lincoln Bowl, LN4 1EF.
Liverpool LUG: Meet on the first Wednesday of the month from 7pm onwards at DoES Liverpool, Gostins Building, Hanover Street, Liverpool.
Manchester Hackspace: Open night every Wednesday at Wellington House, Pollard St E, M40 7FS.
Surrey & Hampshire Hackspace: Meet weekly each Thursday from 6:30pm at Games Galaxy in Farnborough.
Tyneside LUG: Meet from 12pm, first Saturday of the month at the Discovery Museum, Newcastle.

North West Makefest buzzing
Liverpool Makefest 2016 doubles attendance.


Liverpool Makefest was the surprise hit of 2015 and generated quite a buzz around the North West of England. In 2015 it saw 1,800 people in attendance, so could it beat that in 2016? The answer was a resounding “Yes”, with attendance doubling to about 4,000 people at the free one-day event in Liverpool’s Central Library. For 2016 the number of stalls doubled, offering plenty of interesting project ideas for makers of all ages. At this year’s event we saw makers large and small. Local astronomy groups demonstrated how they are using the Raspberry Pi and 3D printers to augment their hobby. Children demonstrated their Raspberry Pi and Arduino projects and offered hands-on sessions for interested parties. We also saw the low-cost Arduino alternative, The Shrimp, a breadboard-based Arduino-compatible board. The Shrimp team ran a free workshop demonstrating how to build and program your own microcontroller device. The UK R2-D2 Builders Club, who build robots from the Star Wars films, displayed their fantastic replica robots, and we saw makeup and special

effects used in the Doctor Who reboot. The day was filled with interesting projects for all abilities and pockets, which is the lifeblood of the maker movement – cheap and easy access to equipment and knowledge. The number of children attending was reflected in the great work undertaken by Code Club, who ran a stall offering fun and games created with Scratch and Raspberry Pi. They also spoke to parents and teachers about integrating Code Club into more local schools, which would be a fantastic legacy of this fine event. LXF

Liverpool Makefest took over the Central Library in Liverpool for a fun day full of projects and imagination.

Community events news

Oggcamp 2016
Sadly, time has beaten the new organising team for Oggcamp, which will return in 2017. This will be the first “break” in the popular Oggcamp event since it started in 2009. The team are working to find suitable locations across the UK, with a number of venues on offer. Stay tuned!

PHP North West 2016
PHP has been powering the web for many years and is the backbone of the WordPress platform. But in an ever-changing world, with PHP itself ever evolving, developers need to keep their skills sharp, and there is no better place than PHPNW. Taking place in Manchester from September 30 until October 2, this event offers talks, workshops and a hackathon – plenty of opportunities to keep your PHP skills sharp and develop your personal skills. More details can be found on their website. uk/phpnw16/

Cambridge Raspberry Jam
Cambridge is the home of the Raspberry Pi, and on September 17 we see the return of the popular Cambridge Raspberry Jam, the largest Jam in the UK. Organised by Michael Horne and Tim Richardson, this large event draws in quite the crowd, thanks to a diverse selection of talks and workshops all based on the ever-popular computer. There will also be a marketplace where you will be able to purchase new components and kits for your latest project. Admission is £3 for adults but children under 16 are free. So take the kids and enjoy a fun day out!

Take the smart movie quiz...





All the latest software and hardware reviewed and rated by our experts

Dell Chromebook 13
Seeing a Chromebook that means business, Steven Wong rubs his eyes in disbelief at its battery life.

Specs
CPU: 1.7GHz Intel Celeron 3215U (dual-core, 2MB cache)
Graphics: Intel HD Graphics GT1
RAM: 4GB DDR3L
Screen: 13.3-inch 1,920 x 1,080 matte FHD LCD
Storage: 16GB M.2 NGFF solid state drive, microSD slot
Optical drive: None
Ports: 1x USB 3.0, 1x USB 2.0, HDMI, headphone and microphone combo jack
Connectivity: Intel Dual Band Wireless-AC 7260 802.11ac, Bluetooth 4.0
Camera: Built-in 720p HD video camera
Weight: 2.17kg (4.8lb)
Size: 382 x 252.5 x 20mm (15.04 x 9.94 x 0.78in) W x D x H

All models in the range sport 13.3-inch FHD displays.


Perhaps high-end Chromebooks were inevitable, but it looks like the Chromebook Pixel has started a trend. This new Dell model features an FHD screen, more memory and more processing power than one might expect – or arguably need – from an average Chromebook. Perhaps it is overpowered, but the growing popularity of Chromebooks in recent years begs for some kind of upgrade, if for no other reason than to stand out from the previous generation. The Dell Chromebook 13 does just this. Although it shares the same minimalistic focus on light productivity and web browsing as other Chromebooks, the high-resolution screen and spiffy internals indicate that Chromebooks are moving toward a higher class of computing. If it weren’t for the colour logo on the lid, one might not be able to tell it’s a Chromebook, and that might be the point. The black carbon fibre cover and magnesium alloy chassis suggest there are high-end components inside. It’s a very attractive notebook that fits in perfectly with Dell’s other business-class machines. We tested the base model with a 1.7GHz Intel Celeron processor, 4GB of memory and a 16GB solid state drive; the premium frame and 1080p screen basically double the price over the average Chromebook. That’s still enough power to

run Chrome OS efficiently, and the internal storage is supplemented with a microSD slot and two USB ports (one of which is USB 3.0).

Business class
The Dell Chromebook 13’s matte FHD 13.3-inch screen has excellent viewing angles and works reasonably well in sunny conditions. It also sports a backlit keyboard, which makes it convenient to work with in darkened rooms. Web pages and cloud-based applications load quickly, but text often appears very small on the 13.3-inch screen, which made working with documents a pain, unless you’re keen on squinting a lot. That said, streaming video from YouTube and Netflix looks sharp at Full HD, although the picture can be a bit dark at half brightness. In fact, the brightness has to be turned up all the way in sunlit spaces. Slim speaker grilles are located on the bottom of the tapered sides, so the sound comes out clear and loud, especially on solid surfaces. The Dell Chromebook 13 comes in four varieties, with one that includes a touchscreen display but no 2-in-1 functionality. The most expensive configuration has a 2.9GHz Intel Core i5 processor with 8GB of memory and a 32GB SSD. All models sport 1,920 x 1,080 FHD displays. The Chromebook’s Celeron processor delivers impressive performance: it scored well in our suite of benchmark tests, with 13,795 in Octane and 2,139.6ms in Mozilla Kraken. In our HD film battery test it managed a very usable 14 hours and 30 minutes. The battery

The new Dell looks the business.

is rated for 12 hours, and it means it. In fact, our movie test ran for 12 and a half hours at 50% brightness, and the battery still had 19% left in it; at maximum brightness it managed 10 hours with about 16% battery left. The Dell Chromebook 13 is a greatlooking little notebook that, at first glance, is almost indistinguishable from other brands of business class systems. It offers strong performance and is ready for both work and play. LXF

Verdict
Dell Chromebook 13
Developer: Dell
Web:
Price: £580

Features: 9/10
Performance: 9/10
Ease of use: 9/10
Value: 6/10

Looks great and has the performance to support the needs of both work and play, all at a reasonable price.

Rating: 8/10


Reviews Chromebook

Asus C202
Built for spills, not for thrills. Jacob Grana is left cold by the latest rugged Chromebook from Asus.

Specs
CPU: 1.6GHz Intel Celeron N3060 (dual-core, 2MB cache, up to 2.48GHz with Turbo Boost)
GPU: Intel HD Graphics 400
RAM: 4GB LPDDR3
Screen: 11.6-inch, 1,366 x 768
SSD: 16GB eMMC
Ports: 2x USB 3.0, HDMI, headphone/microphone combo jack, SD card reader
Comms: Intel 7265 dual-band 802.11ac 2x2 Wi-Fi; Bluetooth 4.2 supporting WiDi
Cam: 720p HD webcam
Weight: 1.2kg
Size: 292 x 200 x 22mm (W x D x H)


The Asus Chromebook C202, with its giant keyboard font and rubber bumpers, may look like an alphabet-singing ‘My First Laptop’, but this Chromebook is no fragile toy. Designed with the student in mind, the Asus C202 is built to weather all types of day-to-day adversity, be it backpack G-forces, clumsy adolescent hands or careless tosses in the classroom. Like the tortoise, the C202’s main line of defence is a rigid shell: dimpled plastic covers the laptop’s lid and base. Additional shielding – thick, midnight blue rubber bumpers – runs along the C202’s edges. All this armour doesn’t make for an elegant profile, but it does provide a good degree of ding, dent and drop resistance. According to Asus’s tests, the C202 can withstand a 4-foot drop landing flat, and a 2.5-foot fall landing on its side. This defence-first philosophy continues with the C202’s screen, hinge and bezel. The hinge allows the screen to tilt 180 degrees, a flexibility intended to safeguard the lid and hinge against sudden pulls or tugs. The large bezel provides plenty of grabbing room for lid-lifters too impatient to pick up the laptop by its base. Despite all these wise reinforcements, the C202 is far from heavy. The C202 also features repair-budget-friendly modular components. Thanks to the C202’s modular design, a broken trackpad means only the trackpad will have to be replaced, not

Features at a glance

Touch pad

As responsive as its keyboard. Multi-touch gestures are fluid and the ‘click’ is strain-free.


Radio One

The laptop also pumps out audio like a transistor radio which has been dropped into a fish tank.

the entire input structure. The keyboard repels up to 60ml (about two fluid ounces) of liquid, and any that leaks into the interior can be drained by merely flipping the laptop over. With two millimetres of travel, the C202’s chiclet keys descend so deep the Marianas Trench is jealous. And all that travel isn’t undone by sponginess either: every key, from the top of the keyboard to the bottom, quickly bounces back after you press it. The display’s 1,366 x 768 resolution is average, but ‘average’ is the theme here. There’s nothing exciting about the C202’s screen. Its colours are demure. Its viewing angles are frustratingly narrow. Its anti-glare coating is missing in action.

We fear change
The thin speakers are located near the base of the laptop, underneath the rubber bumpers, making audible, unmuffled sound an impossibility. It’s worth mentioning that the C202’s Wi-Fi antenna is particularly potent: in spots that were Wi-Fi dead zones for other devices, the C202 still ramps up to three bars. Here’s how the Asus C202 performed in our suite of Chromebook benchmark tests: Octane: 8,303; Mozilla Kraken: 3,913.8ms. As the benchmarks show, the C202’s JavaScript performance is mediocre. The Acer C740, for comparison, scored nearly 60% better on the Octane and Kraken tests. In day-to-day use, the C202 handles multiple browser windows and applications well enough, but open too many and the device slows down considerably. Websites with pop-up and auto-play video often fail to load completely unless refreshed. Under battery testing, where an HD movie is continuously looped at 50% brightness and 50% volume, the C202 actually fares quite well: it lasts a school-day-ready 8 hours and 42 minutes. On the same test, the Dell Chromebook 11 musters 8 hours of playback. But the budget Chromebook battery crown belongs to the Acer C740: at 9 hours and 35 minutes, it handily edges out the C202. Asus’s latest Chrome OS laptop is simply underpowered, especially when compared to similarly priced Chromebooks. More so than its poor visuals and audio, the C202’s shoddy multitasking is what holds it back. LXF

It might look better than average; the annoying truth is that it’s not.

Verdict
Asus Chromebook C202
Developer: Asus
Web:
Price: £209 (4GB model)

Features: 7/10
Performance: 6/10
Ease of use: 8/10
Value: 7/10

A low-end spec and sub-average performance Chromebook, which at least is solid and easily fixed.

Rating: 7/10

3D printer Reviews

LulzBot Taz 6
Reflecting its open hardware origins, the Taz 6 refines all that came before it. Alastair Jennings tests out the latest features of this hotly anticipated printer.

In brief...
Aleph Objects, the company behind the Taz 6, is a keen supporter of the open source community and as such offers free blueprints of the Taz 6, alongside its entire line of printers, online. The production model is finely finished using only high-quality parts and is hard to match in terms of quality and reliability.


Visually, there isn’t a great deal of difference between the Taz 6 and its predecessor, the Taz 5 – a few design refinements and the self-levelling print base. However, scratch beneath the surface and the number of refinements across the entire machine is significant. The self-levelling base adds to the printer’s ease of use and adopts the same principles as the excellent LulzBot Mini. The build volume sees a change to 280 x 280 x 250mm, compared to the Taz 5’s 290 x 275 x 250mm. Layer thickness with the 0.5mm nozzle is 50 microns – and as with other LulzBot printers, the nozzle can be swapped for other sizes if needed. The print base is PEI type, which enables good print adhesion and extraction. There’s also a handy toolkit included, with a knife for extracting prints, Allen keys and replacement nozzle-cleaning pads. Tool head temperatures reach a maximum of 300°C, and the print bed 120°C, which enables support for a huge variety of materials, though Aleph Objects favours 2.75/3mm filament for use with its printers. As with the Taz 5, the tool head can be quickly removed and replaced with another, such as the FlexyStruder or dual extruder, which can be purchased separately. The small LCD screen and control panel has also seen an update, with exceptionally easy navigation. Unfortunately, there’s no Wi-Fi option;

Features at a glance

Self-levelling base

On starting a job, the nozzle touches each corner of the print base to ensure that everything is level.

Quick release head

A wide variety of materials can be used – with a quick release tool head, it’s easy to install replacements.

however, the open hardware design means that Wi-Fi can be added easily enough. Prints are prepared in a customised version of the Cura print software, which is freely downloadable for use with the Taz 6. The software is exceptionally easy to use and perfect for both beginners and experts. A new ease-of-use option enables users to tailor the software to their level of experience. Selecting Beginner and using the ColorFabb nGen filament supplied, Cura offers three basic options for print quality: standard, high speed and high detail. All work incredibly well and produce prints that are in line with the setting’s description. If you want to break out and select your own print settings, then switching to the software’s expert mode enables complete control.

Solid performance
Printing at standard quality shows just what the printer is capable of as an everyday solution. The 0.5mm nozzle enables quick printing at a consistent quality. Overhangs are well layered, and the slightly thicker extrusion of material works well for bridging gaps successfully – great for prototypes. High detail mode slows the print process down considerably and is a good option for modellers, but the thinner layers struggle with successful gap bridging, which is only to be expected and quite normal. Build volume for the printer is huge and should be more than enough for the majority of home users, and the PEI print surface not only enables prints to stick firmly during the print process, but once cooled also makes it easy to remove the print without too much trouble. The bottom line is: the Taz 6 is an exceptional printer. The open design

The Taz 6’s new self-levelling base and refined design make it an ideal choice for anyone needing reliable large-scale printing.

enables you to upgrade and tweak the build to your own needs. Out of the box, setup can be achieved easily within ten minutes, and the auto-levelling print base and self-cleaning function cut out the need for regular maintenance and adjustment. Print quality is excellent and at the highest setting is almost identical to the Ultimaker 2 series, with only the faintest sign of layering. What really makes this printer stand out is its reliability and consistency. If you need a large-scale workhorse of a printer, there is nothing to match the Taz 6. LXF

Verdict
LulzBot Taz 6
Developer: Aleph Objects
Web:
Price: £2,099

Features: 9/10
Performance: 9/10
Ease of use: 9/10
Value: 8/10

Auto levelling, huge material support and reliability make this an exceptional all-round printer for large-scale printing.

Rating: 9/10


Reviews Linux distribution

Fedora 24
Jonni Bidwell thinks people might take him more seriously if he had a Fedora. But he can’t afford expensive hats, so he settles for the free OS instead.

In brief...
The distro for those who want both stability and features has another outing. With Gnome 3.20, support for Flatpak images and improving Wayland support, there’s a lot to explore.


The Mint 18 release stole the fanfare a bit last month and we all but missed Fedora slipping out its latest offering. But better late than never – and better late than having to fight with early release bugs – so we hold our breath and prepare for a deep dive. Fedora is reasonably fluid with its release schedule: it’s been roughly six-monthly for the last three releases, but Fedora 21 didn’t appear until almost a year after its predecessor. Fedora’s oft-stated goal, to lead and not follow, means that it is often the first fixed-release distribution (distro) to adopt exciting new features. Historically, it has led the way with PulseAudio, Systemd, Gnome 3, SELinux, LVM and more, all enabled out of the box. Support for Wayland by default was tabled for inclusion in this release, but in the end it was postponed. If you want to be an early adopter, though, the bits are all there and all you need to do is select the Wayland session from GDM. More on that later. For now, the take-home is: those wanting up-to-date versions of everything, but not desirous of rolling-release distros, unofficial PPAs or manual compilation, should probably look at Fedora before anything else. As with the previous release, Fedora 24 uses the Anaconda installer. It continues to work very well; it had no trouble at all manipulating and installing into an already populated LVM volume group. It did, however, seem to insist on making its own UEFI entry the default.

The weather application looks good, even on an overcast day in the southwest. Maps is currently broken, sadly.

However, we went back and realised that this can be worked around: you can elect not to install a bootloader or UEFI image at all, which some users in this situation might prefer. Those users would then proceed by manually adding an entry to the existing Grub menu. It’s easy to resize partitions and logical volumes straight from the installer too, so long as you remember to push the Update Settings button every time you change something; otherwise the change won’t be registered, and the next thing you attempt may not be possible. If you just accept the default install options Fedora will set up its own

Features at a glance

LVM group, which will turn out to be most handy if you come to install further distros later. As we mentioned in our Fedora 23 review, there is a slight non-linearity to the installer, in that it starts copying files across before any accounts are set up. The idea is that you can set usernames and passwords up in parallel, in theory saving yourself a bit of time. It’s a nice idea, but it’s still potentially confusing: the install starts and you have two red boxes telling you something isn’t set up. To many, that will not seem right. However, it won’t take long to realise there’s no need to panic. If you don’t like defaults, and use an alternative installation source, such as the net install image, then you can customise the resulting install from the software selection screen. If you just use the standard Workstation image, as most people will, you’ll get Fedora’s beloved Gnome desktop. There’s also a text installer in case anything goes wrong, but unless you have a particularly badly behaved graphics card that’s unlikely.
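If you do skip the bootloader and add Fedora to an existing Grub menu by hand, a custom entry typically goes in /etc/grub.d/40_custom on the distro that owns Grub, followed by a grub-mkconfig run. The details below are illustrative only – the XXXX-XXXX UUID placeholder and the EFI path will differ on your system:

```
menuentry "Fedora 24 (manual entry)" {
    insmod part_gpt
    insmod fat
    # Replace XXXX-XXXX with the UUID of your EFI system partition
    search --no-floppy --fs-uuid --set=root XXXX-XXXX
    chainloader /EFI/fedora/shimx64.efi
}
```

Regenerate the menu afterwards (grub2-mkconfig -o /boot/grub2/grub.cfg on Fedora-style systems, update-grub on Debian-style ones) and the new entry appears at boot.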

Great choice of spins

There are myriad other flavours if Gnome isn’t your type of thing. Some of these are desktopcentric (eg MATE-Compiz) and some aren’t (eg the Design Suite), but there’s lots to choose from.



Both Qt4 and Qt5 applications can now be better styled to fit in with the rest of the desktop environment. Gone are the days of messing with the QtCurve theme.

New Gnome
For purveyors of fine desktop accoutrements, the biggest feature of this release will be Gnome 3.20. As usual, Fedora is the first distro (on a fixed cycle at least) to include this as


standard. Of course, it’s been available in Arch, Gentoo and third-party repositories (repos) since release, but with Fedora the result is slightly more polished. Anyone who tried out and abandoned Gnome 3 in the early days would be well advised to go back and have another look. Things have progressed immensely and hardly a rough edge remains. The annoying notifications area that used to be in the bottom right of the task selection screen now lives next to the calendar. You are alerted to pending notifications by a discreet but visible dot next to the clock. Clicking this reveals the calendar, notifications and a panel for controlling any application currently playing music.

Unsettled apps
Gnome Software continues to improve and can now install regular system packages too. It will even offer suggestions from the Activities screen, which is handy for those times when you erroneously think something’s installed already. There have also been improvements to other new Gnome apps, Maps and Weather. Correction: Weather has been improved (it still doesn’t know about Bath, though); Maps was looking good, but has been entirely crippled as of July 11, thanks to MapQuest revoking direct access to tile data. It’s not clear whether Gnome will be able to negotiate a deal with, or use

Ubuntu Software Centre it ain’t, but if you don’t like it the excellent dnf package manager is awesome.

some other provider, or if it has the resources to manage its own tile server, but for now those seeking direction should seek it without Maps. This release also sees support for Flatpak, the new distro-agnostic packaging format. Flatpak applications are isolated from one another and the rest of the system, so they and their dependencies are bundled into a

“Supports Flatpak, the new distro-agnostic packaging format.”

sandbox. One consequence of this is that Flatpak images can be quite large. Mercifully, there are ancillary runtimes for common desktop libraries, so the Flatpak need not contain the entire dependency tree. LibreOffice is currently leading the Flatpak charge, and its website contains instructions. It’s a straightforward process: install the flatpak command, add a key and a repo, install the Gnome 3.20 platform runtime and install the (100MB) LibreOffice.flatpak file. When used in combination with Wayland, Flatpak provides total separation (X11 programs can spy on each other and inject keystrokes with impunity). Thus our Flatpak’d LibreOffice didn’t interfere with the one installed with Fedora. There are downsides though: it can’t

talk to other applications, so it can’t launch a browser when you click a link, and it can’t launch Java when required by some LibreOffice widgets. So it’s a work in progress, but one with which early adopters will enjoy playing. Fedora and other major distros have historically been accused of bloating their desktop offerings, but this is certainly no longer the case. We find LibreOffice, Rhythmbox, Shotwell, Cheese and Boxes (the virtualisation-cum-remote-desktop tool). Most people will use most of these at some point, and the whole install clocks in at under 5GB, so accusations of bloat aren’t really valid here. There’s a lot of new stuff under the hood too: kernel 4.5, GCC 6.1, Systemd 229, but let’s not get bogged down in numbers. LXF
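In practice, the LibreOffice steps looked roughly like this at the time of writing. Treat the repo URL and exact flag spellings as illustrative – early flatpak releases changed them frequently, and the instructions on the LibreOffice site are authoritative:

```
# Add the Gnome runtime repo and its signing key
wget https://sdk.gnome.org/keys/gnome-sdk.gpg
flatpak remote-add --gpg-import=gnome-sdk.gpg gnome https://sdk.gnome.org/repo/
# Install the Gnome 3.20 runtime the bundle depends on
flatpak install gnome org.gnome.Platform 3.20
# Install the downloaded bundle, then run it
flatpak install --bundle LibreOffice.flatpak
flatpak run org.libreoffice.LibreOffice
```

Because the bundle is installed side by side with the distro packages, running it this way leaves the RPM-packaged LibreOffice untouched.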

Verdict
Fedora 24
Developer: The Fedora Project
Web:
Licence: Various free licences

Features: 8/10
Performance: 8/10
Ease of use: 7/10
Documentation: 8/10

Another excellent release paving the way for more great things in future. A great distro for going deeper.

Rating: 8/10


Reviews Graphics card

AMD Radeon RX 480
AMD smashes the budget market once again, and blows Zak Storey away...

Specs
GPU: Polaris
Process: 14nm FinFET
Transistors: 5.7 billion
Compute units: 36
Texture units: 144
ROPs: 32
Core clock: 1,120MHz
Memory: 8GB GDDR5 @ 8GHz
Bus: 256-bit
TDP: 150W
Ports: HDMI 2.0b, 3x DisplayPort 1.4 HDR


Surely this is the year of the graphics card? Think about it: we have the advent of VR, high refresh rate 1440p gaming, and not one but two die shrinks, all occurring within the last 12 months. But let’s face facts: it’s all well and good having £400+ graphics cards, but what really matters is making our platform more accessible. Welcome, then, AMD’s first 14nm Polaris offering, the RX 480. Aimed squarely at the mid-range market, this £220 graphics card is designed to take the title of value king from Nvidia’s GeForce GTX 970, and boy does it. With 36 compute units and 8GB of GDDR5 on a 256-bit memory bus, and drawing a meagre 150W of power from the wall, you can expect the RX 480 to average around 5 TFLOPS of performance at stock. This places it quite nicely between the GTX 960 and the GTX 980 in terms of performance, and that’s exactly what we saw in testing. Possibly more important than performance, the AMD RX 480 is the first AMD card that ships with out-of-the-box open source driver support. From kernel 4.7 (in Ubuntu 16.10 and Fedora 25) onwards, the AMDGPU open source driver is ready to support modern AMD GPUs, and offers near closed-source driver performance with OpenGL 4.4 support. A closed-source AMDGPU-PRO driver can also be used, offering support for OpenGL 4.5, Vulkan 1.0 and OpenCL 2.0. In Unigine Heaven v4.0 the RX 480 scored 61fps, 10% higher than the GTX

Features at a glance

Polaris chip

The 480 is the first Polaris card. An RX 470 and RX 460 are expected next.


PCIe power

AMD had an issue with excessive power draw on some cards, now fixed.

Simple, elegant, overly compensating cooler... What’s not to love?

980 and 50% more than the AMD R9 285. Tested with real games, the card gave impressive performances: with Insurgency at 2160p it returned 134fps, firmly between the cheaper Nvidia GTX 960 and more expensive Nvidia GTX 980.

Electric elephant
So let’s talk about that power draw – there’s been a smidgen of contention over this launch, particularly that the reference cards draw perhaps too much power from the PCIe slot. It’s worth noting that this card has been approved by the PCI-SIG organisation. The only scenario in which you might encounter problems from overdraw is when utilising multiple cards. So who does this affect? Overclockers and bitcoin miners. AMD has already released an updated driver to combat this concern, providing options to better balance the power between the PCIe slot and the 6-pin connector, or simply to reduce the power draw. For the concerned, we ran this card in Unigine’s Heaven benchmark over a whole weekend, only to check in on Monday to see that the card was still functioning absolutely fine.

So the question is, should you buy this card? Well, as far as the price-to-performance measure goes, the AMD Radeon RX 480 absolutely kills it. OK, it's not the processing powerhouse of a GTX 1080 (£600) or the GTX 1070 (£400), but for value for money this card is second to none. It's cool, quiet, efficient and well worth it if you're looking to build yourself a 1080p frame-maxing goodie. LXF

Verdict
AMD Radeon RX 480
Developer: AMD
Web:
Price: £220 (8GB) / £180 (4GB)

Features 9/10
Performance 9/10
Ease of use 8/10
Value 9/10

Fantastic value and aggressively priced, with strong 1080p performance, VR-ready credentials and acceptable 1440p results; it outpaces the Nvidia GTX 970 and 980 where it counts.

Rating 9/10






Roundup Screencasting applications

Every month we compare tons of stuff so you don’t have to!

Screencasting apps
Mayank Sharma rounds up a bunch of screencasting applications for monologuing your adventures in the terminal.

How we tested...
Support for audio recording and output formats are two key factors. Most screencasting applications allow you to record audio alongside the video, but we'll rate them based on the recording options on offer. Most applications give you the option to select an output format for the captured video, while some will make the choice for you and default to patent-free formats. This is good for storing screencasts for personal use but might not be supported by popular video-sharing websites. The applications tested don't all follow the same methodology for capturing screen activity. Those that enable you to define an area for recording will be rated higher than those that indiscriminately record the entire screen. Note: we haven't paid much attention to performance as none of them stressed our Core i3 [Luxury! - Ed] test machine.


Our selection: DemoRecorder, Kazam, ScreenStudio, SimpleScreenRecorder, VokoScreen

Besides cute kitten videos, YouTube and other video-streaming websites also host loads of video-based tutorials on all kinds of software. In fact, an increasing number of software developers are choosing to create screencasts instead of authoring plain text-based tutorials and documentation for any new projects. Screencasts are an excellent tool for developers as they provide a chance to show off new features and their intended functionality, instead of forcing a development team and its technical writer to rely on pure wordsmithing skills and a user's ability to visualise their use for a feature. For pretty much the same reasons, screencasting software has become an integral part of the computer-based training industry and is the preferred means of dispensing lessons over the internet. Moving away from thick screenshot-laden manuals, the digital trainers now use the desktop as their stage and the mouse as their pointer. You too can create screencasts to demo or review your favourite piece of software with little effort. In this Roundup, we'll look at some of the best applications available, which offer varying degrees of dexterity. We'll examine their strengths and weaknesses to help you pick one that suits your needs and requirements.

“You too can create screencasts to demo or review your favourite piece of software with little effort.”


Recording What settings do they offer?


While all the applications can capture the action on the desktop with ease, some offer more aids than others to enrich the process. Eg, while all the applications can record the full desktop, they also enable you to limit the scope of the recording by defining a limited area on the desktop. Kazam, SimpleScreenRecorder and VokoScreen also allow you to select and record particular windows only. ScreenStudio and VokoScreen can also record video from the webcam during the screencast, which is a definite plus. During our testing, all the applications made flawless screencasts of the Nexuiz first-person shooter in both windowed and full-screen mode. All the apps offer options to record audio from both the microphone and speakers. When recording videos, some applications can be controlled via their icons in the Notification area. DemoRecorder, in contrast, displays small controls at the top of the recording area to pause and stop the recording. Then there's SimpleScreenRecorder, which has a recording window that remains visible during recording and displays live information about the current recording, including the size of the file. The recording window enables you to define a recording hotkey, and you can start, pause and stop recording from this window as well. It's also possible to define the framerate for the recording and you can optionally scale the video to a resolution other than the native one. By default, the application will record audio using either the PulseAudio, ALSA or JACK backends. One of SSR's more unusual features is its ability to follow the cursor, recording the area around it as it moves. Kazam too has a mouse-related option: by default, it captures the movements of the mouse, but you can optionally disable this behaviour. ScreenStudio is one of the most feature-rich screencasting applications. The main screen offers pull-down menus to select which screen you want to record, if there are multiple ones, and at how many frames per second. You can also choose the desired resolution for recording video from the webcam. VokoScreen also has a couple of interesting options. The Magnify option zooms the area under the mouse, which is useful for displaying text in high-resolution screencasts. There's also the Showkey option, which displays all keystrokes across a big strip in the screencast to make it easier for the viewer to follow the screencast of a keyboard-driven application.

All the applications, besides DemoRecorder, enable you to make screencasts across multiple desktops.

Verdict
ScreenStudio and VokoScreen can both record from the webcam.

Streaming support Beam ‘em live!


hile we’re rating these applications primarily for their screencasting abilities, some also offer the option to stream them live to video-sharing websites or include preset settings to make screencasts ready for specific sharing platforms. If you’re looking to stream

your screencasts, stay away from Kazam and VokoScreen as they both don’t offer this option. DemoRecorder has a bunch of export scripts with their parameters optimised for creating videos for YouTube. The application offers different scripts for creating screencasts in low-

You can also use ScreenStudio to screencast live over UDP.

resolution, 720p HD and Full HD resolutions. Besides these, there’s also an option to make high-quality lossless videos for web publishing using Flash which uses a combination of HTML, SWF and FLV. The other two screencasters, SSR and ScreenStudio, both enable screencast streams directly to online streaming services. SSR can stream to a Real Time Messaging Protocol (RTMP) server on the local network or to streaming services, such as and If you select the YouTube option, SSR will load the preset container and codecs optimised for streaming videos to YouTube. Similarly, ScreenStudio has the option to stream to, Ustream, or YouTube Live directly. To make this work, you’ll need to configure the server, profile, and share your stream key with the application so that the server can associate the stream with your cast.

Verdict
Both SSR and ScreenStudio can live stream recordings over the web.



Usability Don’t let the user get lost in a sea of menus.


A screencast falls between a simple screenshot and a fully fledged video. The same can be said about a screencast's controls and capabilities—you get more tweakable options and settings than with a screenshot-capturing utility, but not as much post-editing control as video-editing software offers. That said, there are enough parameters in the average screencasting application to present a usability challenge to a developer. While most applications we tested have a simple layout, we're on the lookout for the one that's the most intuitive and exposes the greatest number of controls without cluttering the interface and confusing the user. There are several steps to recording a screencast and a good option should guide users through them.

DemoRecorder

For this Roundup, we've used the full 30-day trial version of the application, which is easy to install following the instructions on its website. It uses a console-based, text-driven menu, which is why the application appears different from the others, but it doesn't throw any unexpected surprises. The 'Record' button will capture the full screen by default and you can choose an area to record with the 'with option' button. You can also specify additional arguments. While it's recording, the app displays buttons to pause/resume and stop the recording. Once you're done, you can play back the recording from within the app itself before saving it. You'll first have to save it in the app's custom lossless format. From here on you can use the export option to save the recording into any of the common file formats.

Kazam

The application is available in the repos of most distributions (distros). It has a very simple and intuitive interface that displays toggleable buttons for the different areas it can capture. If you select the window or area option, Kazam will ask you to either select the window or define the area to capture. It then initiates the recording process by displaying a countdown, which is set to five seconds by default but can be easily edited from the Preferences window. During the recording, the application's icon in the Notification area offers the option to either pause/resume or stop the recording. The screencaster, by default, asks you to name the recording and point it to a directory for saving the file in the defined container and extension. However, if you've enabled the option to save the file automatically, Kazam will do so as soon as you select the option to stop recording in the Notification area icon.

Support and documentation

Looking for some handholding?


Kazam, VokoScreen and ScreenStudio offer no in-application documentation. Kazam also doesn't offer any documentation on its website, but there's a forum for support. VokoScreen's website lists a handful of tips and hints and a solitary demo video in German. While there are no forums, the application enables you to contact the developer via email to resolve any issues. ScreenStudio also only has basic install instructions on its website, which gives an overview of the application's features and basic usage. The developer is an avid YouTuber and the official channel is full of screencasts of various Linux applications and games that show off the app's capabilities. All the options in DemoRecorder's interface include a Help button, which opens the man page for the CLI utility that powers that feature. On its website, you'll find procedures for common tasks explained in the form of a FAQ. There are also example videos from other users as well as from the developer. The application doesn't have any public forums, but the licence for DemoRecorder starts from £77 and includes support, updates and bug fixes for two years. SSR includes tool tips, and its website has lots of detailed and illustrated articles for tasks such as recording videos for YouTube, recording Steam games and live streaming. There's also a troubleshooting page that lists solutions to common issues, and users comment here to seek support.









The tool tips in SSR alone are detailed enough to be really helpful to users.

ScreenStudio

This Java-based application uses a tabbed interface and you step through each tab to customise the recording. If you wish to stream the video to an online service, you'll have to select it from under the Targets tab. Depending on the service, the app will also ask you for additional details. The Sources tab includes several pull-down menus for the devices it can capture from, including the screen, webcam, microphone and speakers. One of ScreenStudio's most powerful features is its overlay panel, which can be completely customised. You can change its size, orientation and content by using the built-in HTML editor. Once it's done recording, the application will automatically save the screencast under the Capture directory inside its installation location.

SimpleScreenRecorder

The application uses a wizard-like interface and each step of the process has several options. Despite its name, the interface is a little overwhelming. However, all the options have tool tips that do a wonderful job of explaining their purpose. In addition to selecting the dimensions of the screen recording, you can also scale the video and alter its FPS. Similarly, the next screen offers several options for selecting the container, and the audio and video codecs for the recording, as well as a few associated settings. You can also preview the recording area before you start capturing it. While it's recording, the application also enables you to keep an eye on various recording parameters, such as the size of the captured video.

VokoScreen

This screencaster has tabs at the top and buttons at the bottom to Start, Stop, Pause and Play the recordings. VokoScreen's tabbed interface is similar in function to ScreenStudio's, but with far fewer options. The first tab lets you define the area for the recording and also houses the Magnification and Showkey options, which are two of VokoScreen's unique features. The audio recording options are listed in the next tab, while the third tab enables you to customise the frames per second and the codecs for the recordings. You can toggle the option to record the webcam from the fifth tab. The webcam video is displayed in a floating window that you can move around the screen. When you're done recording, the video is saved using the settings defined in the fourth tab.

Extra features

Wait! There’s more?


While the primary task of these applications is to create screencasts, many of them offer additional features and functionality. The visible functions are also assisted by back-end functionality that kicks in only under special circumstances. Eg, the lightweight Kazam works wonderfully if it's installed on older computers or inside virtual machines. In addition to screencasts, Kazam can also capture screenshots. You can also use SSR on older machines, as the application can automatically reduce the framerate depending on the available processing power. You can also use the app to preview the recording area, which can be a great time saver for certain people. ScreenStudio's biggest add-on is its ability to stream to other websites and create a customised interface for the screencast. VokoScreen includes the ability to play back recorded screencasts within the application itself. You can also use the application to share recorded screencasts with friends by asking VokoScreen to attach them to emails using the default mail client. When it's done recording, DemoRecorder gives you the option to preview the recorded video before saving it. The interface for playing back videos offers plenty of options, eg you can process the captures to reduce the frame rates and scale the video. There's also a bonus feature for recording screencasts from a virtual desktop created by the app from your physical one.









DemoRecorder’s nested desktop is a wonderful feature—if you can get it to work.



Playback aids What formats do they support?


While you can use a dedicated transcoder, such as HandBrake, to transform the screencasts into any format, it's more convenient to have the screencasting app support a wide variety of containers and codecs, to save screencasts in our preferred format. Kazam, VokoScreen and ScreenStudio don't offer much flexibility in this regard. VokoScreen only enables you to record in either the MKV or the AVI format with the MPEG4 and libx264 codecs. Similarly, Kazam only supports the VP8/WebM, H264/MP4 and AVI video formats. ScreenStudio does a little better and enables you to record to a file in FLV, MOV or MP4 format. You can also choose between different profiles that change various parameters for size and quality, ranging from 240p all the way up to 1080p. If you're using SSR you can choose from a wide variety of container formats supported by FFmpeg and libav, including MKV, MP4, WebM and OGG, as well as a host of others such as 3GP, AVI and MOV. Furthermore, you can choose codecs for the audio and video streams separately and tweak other related settings as well. The developer of DemoRecorder argues that none of the existing formats were cutting it for him, so he decided to cook up his own lossless format that manages to capture high framerates without gobbling up all your CPU cycles; it also doesn't have a huge footprint on the disk. Once you've saved the screencast in this format you can use the plethora of export scripts that will export your screencast not only to common formats, such as FLV, OGG and AVI, but also to NTSC, PAL, DVD, VOB etc.

You'll have to use command-line switches to alter settings, such as the FPS, while exporting screencasts in DemoRecorder.
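If your chosen screencaster has saved in a format a video-sharing site won't take, the kind of conversion HandBrake does can also be sketched with FFmpeg from the terminal. The filenames below are placeholders, and CRF 23 is just a commonly used quality default, not a value the apps themselves pick:

```shell
# Transcode a screencast into an H.264 MP4 that most sharing sites accept;
# yuv420p maximises player compatibility (filenames are placeholders)
ffmpeg -i screencast.mkv -c:v libx264 -crf 23 -pix_fmt yuv420p screencast.mp4
```

Lower CRF values mean higher quality and bigger files; 18-28 is the usual range to experiment in.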

Verdict
DemoRecorder and SSR support the widest variety of formats and codecs.

Configurable parameters Tweak ‘em to your liking.


Kazam includes a simple three-tab Preferences window that offers very little flexibility. It enables you to select the speaker and microphone capture devices and adjust their volume levels. You can also toggle the countdown splash that appears before the app begins recording. From under the Screencast tab you can change the framerate for the capture, which defaults to 15FPS. There's also a pull-down menu that allows you to choose between the default H264 (MP4) and the VP8 (WebM) containers. You can also ask the application to save recordings automatically by just selecting a directory and specifying a filename prefix. Similarly, all settings in VokoScreen are housed within the Settings tabs. Here you can choose the default location for storing videos. You can also choose the default video player, which is used for previewing screencasts. One setting that you will probably want to enable is the option to automatically minimise VokoScreen to the notification area while recording. Other configurable parameters are housed within the app's different tabs, eg, you can choose how large the magnification window is (200x200, 400x200 or 600x200) when enabling that option. SSR doesn't offer a dedicated window or screen for tweaking options. However, despite its name, the app offers plenty of them. You can, eg, choose containers and codecs for recording the audio and video for the screencast. The interface also enables you to pass additional custom options, if the codec supports them, via CLI parameters. To quickly apply these custom settings to future recordings you can save them as custom profiles. Similarly, DemoRecorder offers loads of configurable options that are accessible via the CLI but can be passed from the graphical interface as well, eg, when recording a screencast, you can disable sound or restrict the CPU used, which by default is set to 50%. You can also specify audio fade-in and fade-out durations. One of ScreenStudio's more interesting options is its customisable interface: you can tweak all aspects of it, and edit and create custom overlays in various formats (HTML is preferred).

You can ask SSR to make separate files every time you resume recording.








Kazam and VokoScreen don’t offer much encoding control of screencasts.

Screencasting apps

The verdict

A lot has changed since the last time we rated screencasting applications back in July 2009 [Roundup, p30 LXF120]. All the options sans one are now defunct, and the sole remaining option, DemoRecorder, which aced the competition in 2009, brings up the rear in this one. DemoRecorder has an expansive set of features, but the competition has caught up. Its case isn't helped by the fact that it's proprietary, fairly expensive and doesn't have the most appealing of user interfaces. Then there's Kazam, which is lightweight and has one of the simplest and most intuitive interfaces. However, the application can't record from the webcam, which we think is a fairly important feature. Even when you look beyond this, Kazam offers fewer controls over the screencast compared to some of the others. Similarly, SimpleScreenRecorder loses out because it too can't record from the webcam. However, the application offers more options and control than Kazam and uses a wizard-based interface to keep things simple. Together with its experimental ability to stream screencasts, the app does enough to put itself on the third spot of the podium. Note that while we penalise apps for not including support for adding video from the webcam in the screencast, we don't give the same level of importance to live streaming. It's a nice, useful add-on, but it requires loads of bandwidth and computational resources and has limited appeal. VokoScreen takes advantage of this fact and slips into the runners-up spot. The application has a fairly simple tabbed interface and you can use it to record your screencast with video from the webcam and audio from the microphone and speakers as well. The two features that'll appeal to a whole lot of users are the magnifier and showkey. But VokoScreen supports a limited number of containers and codecs and doesn't offer the same level of flexibility as our winner, ScreenStudio. The only real criticism we can offer of the Java-based ScreenStudio is that its user interface doesn't look as appealing as some of the others. However, it makes up for its average looks with a bag full of interesting features. It can record the webcam along with audio from the microphone and speakers, and also live stream to lots of services. But its biggest draw for us is its ability to easily create a customised screencast interface.

1st ScreenStudio
Web: Licence: GNU GPL v3+ Version: 2.2.4
The app has all the important screencasting features you'll need.

2nd VokoScreen
Web: Licence: GNU GPL Version: 2.4.0
Limited format support, but has a few unique recording features.

3rd SSR
Web: Licence: GNU GPL v3 Version: 0.3.6
No webcam recording, but good screen recording customisation.

4th Kazam
Web: Licence: GNU GPL v3 Version: 1.4.5
One of the lightest and simplest apps, but doesn't have as many options.

5th DemoRecorder
Web: Licence: Proprietary Version:
A proprietary solution that works well but is just too expensive.

You should take time out to visit ScreenStudio's official YouTube channel to get accustomed to the app.

Over to you...
Do you think screencasts will soon become the online medium for delivering tutorials? Let us know at

Also consider... There aren’t many Linux screencasting apps that are still actively developed besides the ones that are covered in this Roundup. RecordItNow and RecordMyDesktop are good examples of apps that haven’t been updated in over half a decade but still work well. If you don’t want the fancy features and conveniences that come with the dedicated

screencasters we've covered, you can also use VLC and the console-based FFmpeg to make screencasts. In VLC, head to Media > Open Capture Device and select Desktop from the pull-down menu. To use FFmpeg to record your screen, fire up the terminal and enter:
$ ffmpeg -f x11grab -video_size 1920x1080 -framerate 25 -i :0.0 screencast.mp4
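The FFmpeg one-liner above only grabs video. As a sketch of capturing microphone audio alongside it, assuming a PulseAudio setup where `default` names your current input source (that device name is an assumption about your sound configuration):

```shell
# Capture the X display at :0.0 plus microphone audio from PulseAudio;
# 'ultrafast' keeps encoding light enough to run live during the capture
ffmpeg -f x11grab -video_size 1920x1080 -framerate 25 -i :0.0 \
       -f pulse -i default \
       -c:v libx264 -preset ultrafast -c:a aac screencast.mp4
```

Run `pactl list short sources` first if you need to pick a specific microphone instead of the default.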

You can also use Open Broadcaster Software (OBS), which is designed for live streaming the desktop. However, you can also save the screencast to a local file instead of streaming it over the internet. If you like VokoScreen's showkey feature, install the Screenkey utility to display all keystrokes onscreen, which can then be recorded by a screencasting app. LXF


Subscribe to Linux Format
Choose your package:

Print £18
Every 3 months
Every issue comes with a 4GB DVD packed full of the hottest distros, apps, games and a lot more.



Digital £11.25
Every 3 months
The cheapest way to get Linux Format. Instant access on your iPad, iPhone and Android device.

On iOS & Android!

SAVE 36%

Bundle £24
Every 3 months
Includes a DVD packed with the best new distros. Exclusive access to the Linux Format subscribers-only area – with 1,000s of DRM-free tutorials, features and reviews. Every new issue in print and on your iOS or Android device. Never miss an issue.


Get all the best in FOSS Every issue packed with features, tutorials and a dedicated Pi section.

Subscribe online today… Or Call: 0344 848 2852 Prices and savings quoted are compared to buying full-priced UK print and digital issues. You will receive 13 issues in a year. You can write to  us or call us to cancel your subscription within 14 days of purchase. Your subscription is for the minimum term specified and will expire at  the end of the current term. Payment is non-refundable after the 14 day cancellation period unless exceptional circumstances apply. Your  statutory rights are not affected. Prices correct at time of print and subject to change. UK calls will cost the same as other standard fixed  line numbers (starting 01 or 02) and are included as part of any inclusive or free minutes allowances (if offered by your phone tariff). For full  terms and conditions please visit: Offer ends 30/08/2016


Linux Mint 18

Fresh Mint Hot on the heels of Ubuntu’s latest distro, Clement Lefebvre and his team have concocted their latest powerful and refreshing blend. Jonni Bidwell takes a sip of Mint 18.


int’s motto, ‘From freedom came elegance’, speaks to an entirely different class of distribution (distro): one that isn’t shackled by commercial interest and one that actually wants to be pleasurable for desktop users. Linux Mint was born 10 years ago out of  lead developer Clement Lefebvre’s desire to  build a distro that was both powerful and easy 

32     LXF214 Summer 2016

to use. Now it has risen through the rankings  to become one of the most popular Linux  distros out there.

features its own desktop, core applications  and support channels. Along the way there  have been hiccups and detours, but the  project continues to innovate  and show that it can stand up  well alongside the major  players that have deeper  pockets. Using Ubuntu 16.04  as a base, the latest iteration  in the Mint family, Sarah, will be supported  until 2021. And who knows what desktop  Linux will look like by then.

“Mint has risen through the rankings to become one of the most popular Linux distros.” Linux Mint has definitely become  something much bigger than its first  nickname ‘Ubuntu with codecs’. It now

Cinnamon 3 desktop One of the most anticipated features of Mint 18 is Cinnamon 3.0. Let’s see  what it means to be a modern traditional desktop.


innamon is Mint’s unique and admired desktop environment and rebels against hyper-modern desktops, such as Unity and Gnome 3. It doesn’t have an inexplicable moral objection towards tray icons, like the latter, and it doesn’t insist that your taskbar/ pager/launcher be super-glued to the left, like the former. Instead it provides what users have been used to: a taskbar at the bottom, a menu to its left and a system tray to its right. This metaphor has been refined and modernised over Cinnamon’s five-year lifespan, where it has borrowed the best of other desktops and introduced its own innovations. As is de rigeuer these days, you can search for programs by typing a few characters while the menu is activated. This has been the case for ages, but it’s extra-useful in this case given the new places where settings can be altered. There’s controls for desktop effects, extensions, window behaviour, power management and notifications etc. It would be overwhelming, but the default set up is perfectly usable so you’re unlikely to find yourself clamouring for that one setting. A lot of what’s new is just gentle refinements. Cinnamon 2.8 was already a well regarded desktop, and this release sees it further tweaked. The window snapping feature (dragging windows to the edge of the desktop will resize them to that half of the screen) seems more fluid now. You also get helpful window previews by hovering over applications in the taskbar. These are updated in real time, so you can use the feature to check on whether a terminal command has completed. Similarly, the volume control is aware of any music playing applications and will display cover art and playback controls; it even works with Spotify (which someone really needs to sort out the Linux client for). The file manager, Nemo, has seen some improvements and is a little more robust in this outing. 
Nemo was forked from Nautilus (now just called Files) after it was trimmed down (ie features that people liked were removed) and restyled so that it would better fit with the minimal Gnome Shell look. One of our favourite new things is the ‘squidgy’ context menu effect. It’s hard to explain why this is so satisfying, but

As in previous releases, there are some quite spectacular backgrounds that have been contributed by the community.

we challenge you to right click on the desktop and then not immediately right-click somewhere else just to make it happen again. If you look in the Effects settings, you’ll find a handful of other menu effects, but this one (called Cinnamon) is by far the best. There’s all kinds of other customisation in here, some people will want to turn off overlay scroll bars, and others will want to tweak their window animations.

Fantastic Mr. Desktop

While not part of Cinnamon, Mint's Driver Manager continues to make life much easier for beginners. It won't provide the very latest Nvidia drivers, but it reduces the process to a single click. Likewise with the update policy displayed when the system is first installed: rather than just baffling jargon about kernels, the user is given a slider to choose the strictness of the update policy. It ranges from a soothing 'Don't break my computer' to the merciless 'Always update everything'. One major criticism of Windows is its relentless updates—some users will welcome the potential quiet of Mint. Ultimately Mint is a fantastic desktop for new users or anyone wanting to get stuff done without a desktop getting in their way. Developers might use Mint because they're more interested in developing stuff than fighting with the Gnome Tweak tool. Having access to the plethora of Ubuntu packages without being Ubuntu has its advantages.

Mint-Y For a long time, the default Cinnamon theme has been Mint-X. And with good reason, it offers a simple, clean look that subtly guides the user to where they want to be. In Mint 18, the default is still Mint-X, but poke around in the Theme settings and you’ll find a new one called Mint-Y. In Mint, different themes can be applied to different desktop components: window borders, icons, controls, mouse pointers and desktop etc. If you set all of

these to Mint-Y you’ll see the future of Linux Mint. Apparently, it’s not ready for the masses yet and will be tweaked further according to user feedback, eg it’s only available in one colour. That said, it already looks pretty good to us. It’s got some flattish elements without being entirely two-dimensional, and it has minimalism in the sense of being free of clutter, but not in the sense of there not being anything there.

Gaze at the future of Mint and Cinnamon, and we’re talking distros and desktops, not herbs and spices.

Summer 2016 LXF214    33

Linux Mint 18

Getting started Mint can be enjoyed by beginners and experts alike. Read on to  see how simple it is to install it on your system.


You’ll find Mint 18 on the LXFDVD, just follow the quick guide below to get it installed. For the benefit of digital readers, those without optical drives or those that want to install a different edition (or indeed those whose discs have mysteriously gone walkabout), we’ll tell you how to create your own installation media here. You can do this from any operating system. The first step is to choose and download an ISO image from the Linux Mint website: follow the Download link at the top and choose your version. At the time of writing, Cinnamon and Mate editions of Mint 18 are available, but hopefully Xfce and KDE editions will have been released by the time you read this. Also available is LMDE 2, a semi-rolling release based on Debian Stable (more suited to advanced users). Choose the version that interests you and the appropriate architecture (32- or 64-bit) for your hardware. There’s really no reason to choose 32-bit on newer hardware, despite the rubbish you may read in some places. The most efficient way to

A number of extensions are available to enhance the functionality of Cinnamon, or just to further prettify it.

get the ISO is via BitTorrent, using a client such as Transmission, but there are plenty of mirrors if you just want to download from a single source. In light of unfortunate events earlier this year, it’s wise to verify your download afterwards (especially if not using BitTorrent); you can find instructions for doing just this on the Mint download pages. Once you’re happy, the image can then be burned to a DVD (or written to a USB stick, which we’ll cover in a second). On Windows 10, DVDs can be burned directly from Explorer, or using a third-party program such as ImgBurn. Linux users can use Brasero, K3b or (from the terminal) wodim to burn a DVD. While programs such as UNetbootin allow you to write out images to USB drives, they often don’t do a particularly good job. Our experience has always been best using good ol’ fashioned dd from the command line. The important thing is to make sure you don’t accidentally dump the image to your OS drive—that would be most inconvenient. So plug in your USB stick and use the command lsblk to list all block devices. Assuming the medium is /dev/sdx and that you have downloaded the image to the current directory, the next command will make a bootable USB stick. Change the image and device names to match your own: $ sudo dd if=linuxmint-18-cinnamon-64bit.iso of=/dev/sdx bs=1M $ sync The second command is a good idea because the first may finish while writes are still buffered. Wait until the command prompt returns on the next line before rebooting from the USB stick.
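Verification boils down to computing a checksum and comparing it against the published one. Here’s a minimal sketch using a stand-in file so the commands can be run end-to-end; in practice your downloaded ISO and the checksum list from the Mint download page take these roles:

```shell
# Stand-in file for demonstration; in real use, the ISO you downloaded
# and the published sha256sum.txt take the place of these two files.
echo "pretend ISO contents" > linuxmint-18-demo.iso
sha256sum linuxmint-18-demo.iso > sha256sum.txt  # stands in for the published list
sha256sum -c sha256sum.txt                       # reports OK on a match
```

A matching checksum guards against a corrupt download; to guard against tampering as well, also check the GPG signature on the checksum file itself, as Mint’s download pages describe.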

Installing Mint 18 1. Boot the LXFDVD Ideally, starting your computer with the DVD in the drive should bring up the stylish menu pictured below. Unfortunately there’s a lot that can go wrong here: you’ll need to make sure Secure Boot is switched off and also that your machine will boot from the optical drive before other devices that have operating systems on them. There are some pointers at https://


2. Choose your poison We’ve provided the two most popular versions,  Cinnamon and Mate on the disc. Mate is more  suited to older hardware (it’s also your only  choice if you have a 32-bit CPU), but it will work  just fine on newer machines. It’s ideal if you  yearn for the simpler days of Gnome 2. Choose  your version and press ‘Enter’. Start it from the  next menu, if you run into problems try again in  compatibility mode.

3. The Live Environment You can get a feel for Mint from the live  environment. This is a complete edition of Mint  that looks and acts exactly like a proper install.  It will be much more responsive when it’s installed  on your machine though, since at present  everything is running from the DVD and  potentially without graphical acceleration. So  once you’ve familiarised yourself with the layout,  hit the install icon on the desktop.

Advanced installs Want to dual boot with Windows? Or not sure what you want but  want to keep your options open? Then read on.


A standard install will create two partitions (three on UEFI machines): one for the OS and a swap partition. This is a perfectly reasonable setup, but some people prefer to tweak things a little, eg it’s common to have a separate /home partition so that in the event that something terrible happens the OS partition can be wiped and reinstalled without touching any user files or settings. Be aware that if you install a different distro with this trick, then the settings saved in the /home partition may not be valid and could cause problems. A separate home partition is straightforward to set up from the installer, just choose the ‘Something else’ option. The current disk layout will be displayed. Assuming we are nixing everything on the drive, we can delete the root partition (the one with mount point /) at this point; resizing it would be a waste of time. Now recreate a replacement root partition by clicking the ‘+’ button at the bottom, choosing a filesystem (ext4 is a good choice, Btrfs if you like shiny new things) and setting the mount point. Mint requires about 6GB to install, but it’s a good idea to allow significantly more. Unless you know better, we’d recommend at least 30GB here. Repeat the process and create a /home partition. If you (or other users of this machine) are planning on storing large files here, it’s a good idea to make this as large as possible. It’s entirely possible to have separate partitions for all the main system directories (/boot, /usr, /var). Anyone who’s ever decided to add another distro to their Linux armoury has probably run into partitioning difficulties before. Even if you nominally have the space to add another

4. Language and codecs You’ll first be prompted to choose a language,  one reason for Mint’s popularity is that  localisations and language packs for so many  different locales exist. You may wish to read the  release notes too. Next you’ll be prompted to  install proprietary drivers, codecs and the like  which you should certainly do if you plan on  gaming with a modern graphics card or your  wireless card needs extra firmware to work.

partition, it can be a time-consuming operation to move all the data so that the partition can be resized. Even if you are patient, you might run into the four primary partition limit for MS-DOS style partition tables. Sometimes it would be nice to be able to ‘extend’ a partition onto another drive, but it’s just not possible with traditional partitioning. Enter the Logical Volume Manager (LVM). This abstracts away partitioning so that all of the operations we’ve mentioned can be carried out with ease. [See Tutorials, p72, LXF205 for more on LVM]. The short version is that conventional partitions have Logical Volumes (LVs) as their LVM analogue. LVs in turn live inside Volume Groups (VGs), which are collections of Physical Volumes (the drives themselves). LVM is easily set up by checking the box in the installer, which will create a Volume Group called mint-vg with logical volumes for root and swap. You can’t tweak this setup from the installer, but you can do so afterwards. There are some caveats though: Windows doesn’t support LVM, so if you put, eg, an NTFS filesystem on an LV it still won’t be visible to Windows. Also, if you plan on dual-booting, make sure that Windows is installed first. It’s very good at breaking Grub (and sometimes its own bootloader).
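As a sketch of what tweaking it afterwards can look like, suppose you later add a second drive and want to grow the root volume into it. The device and volume names below (/dev/sdb, mint-vg, root) are assumptions matching the installer defaults described above; check yours with pvs, vgs and lvs before running anything:

```shell
# Hedged sketch, not a recipe: names are the assumed installer defaults.
# Inspect first:  sudo pvs && sudo vgs && sudo lvs
sudo pvcreate /dev/sdb                        # initialise the new drive as a Physical Volume
sudo vgextend mint-vg /dev/sdb                # add it to the existing Volume Group
sudo lvextend -l +100%FREE /dev/mint-vg/root  # grow the root LV into the new free space
sudo resize2fs /dev/mint-vg/root              # grow the ext4 filesystem to match
```

This is exactly the kind of operation that would require repartitioning and data shuffling with traditional partitions, but under LVM it takes seconds.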

5. Installation Type The first option here is the simplest, but also  the most destructive: Mint will take over the  whole target drive. Make sure there’s nothing  you care about on here before choosing this  option. Those wishing to dual-boot with  Windows should install that OS first (it’s very  rude about overwriting boot sectors) and  choose ‘Something else’ here. If you want to  use LVM, see the main feature (above).

6. Hit the button Click the ‘Install’ button and you’ll be informed of any changes the Ubiquity installer will make. Once you’re sure everything is as it should be (and do make sure—it will be difficult, if not outright impossible, to recover data from a device after it has been overwritten) click the ‘Continue’ button. Depending on the speed of your machine there may be time for a well-earned Linux Format Cup of Tea™ at this point.



Using X-Apps Some people don’t get excited about text editors and image  viewers, even when they have a cool sounding X in their names.


Lead developer Clement Lefebvre explains that the X in X-Apps is a reference to the underlying X server, ‘ex’ as in the previous way of doing things, and also the algebraic x, hinting at generality and universality. Any distribution aiming for ease of use needs its garrison of core applications. The discerning user expects a text editor, an image viewer, a media player and a document reader. Enter xed, xviewer, xplayer and xreader, respectively. The ‘X-Apps’ initiative has as its goal the production of quality desktop- and distribution-agnostic bread and butter applications. Back in the halcyon Gnome 2 days, this need was satisfied by Gnome’s own offerings (gedit, Eye of Gnome etc). Beyond the GTK2 widget library (which is probably installed on your computer no matter what your desktop) these didn’t rely on any particular desktop libraries and so were reasonably portable. That is, it didn’t matter if you used KDE or Fluxbox or Ratpoison, you could still open a PDF or play a movie etc. Sure, maybe you needed to apply a GTK theme manually or fiddle with font settings to make it look as nice as it did in Gnome, but you definitely didn’t need to

install masses of Gnome libraries just to use them. As Gnome 3 enters its middle age, its native applications have become increasingly tied to it. They integrate with Gnome-specific services and have been designed around its stylings. The situation is particularly dire for Mint since it relies on the Ubuntu packages for GTK and other desktop libraries. These have already been heavily patched to work with the Unity desktop (which has some pretty funky ideas about window menus). So the Mint team is tasked with undoing two

The current cohort of xapps may not look like much, but these utilities are essential for any desktop worth its salt.


The venerable Gedit received a makeover in 3.14; it harmonises delightfully with Gnome, but without a toolbar it just looks silly and feels hobbled on other desktops.

layers of desktop entanglement and then possibly adding any Cinnamon- or Mate-specific oddities. Developing a custom fork of GTK3 wouldn’t make sense, since deviating from the Ubuntu base would break compatibility with all the GTK apps in the Ubuntu repos. Mint 17 relied on version 3.10 of GTK, where this Gnome-intertwining situation was not so bad. Any issues that arose could be solved with patching or downgrading of the offending applications. Mint 18 sees GTK bumped to 3.18, and the team decided that the effort spent fighting ever-increasing desktop coupling would be better employed developing a universal application suite. Note that this doesn’t apply to KDE versions of Mint; the KDE Applications suite already provides perfectly good utilities.

Anywhere apps The Mint developers are primarily concerned with addressing the desktops offered by their distro. Since these have little in common beyond the more rudimentary GTK3 features, a side-effect is that the xapps should largely work anywhere, not just within the confines of Cinnamon, Mate and Xfce. There is some potential duplication of effort here since Mate required its own applications to be crafted from the relics of Gnome 2.32. The Mate team are in the process of porting these to GTK3 though, and so it makes sense for the Mint team to fork at least some of the Mate applications. This they duly did, Pluma became xed, Atril became xreader etc, and thus the first xapps were begat. The current tally of four forks will be expanded in future and users of traditional desktops can once again enjoy a consistent application experience that cares neither for desktop nor distro.

Mate and other editions We find the Mint and Cinnamon combination to be quite delectable, but it’s  not for everybody so here we study the alternatives.


The Cinnamon desktop grew out of the Mint Gnome Shell Extensions (MGSE) and while it retains much of what many perceive as the traditional desktop metaphor, it also uses a number of shiny new features and desktop effects. There are all sorts of fancy things, such as the squidgy menus, that rely on modern OpenGL extensions to work. Many of these can be disabled, but users of older hardware (particularly graphics hardware) will likely want to install something less demanding. The natural choices here are Mate, KDE and Xfce, since there are official Mint releases featuring these desktops. Xfce, which first appeared back in 1996, has long positioned itself as a Ralph Nader-type candidate, an alternative to Gnome and KDE. It aims to be lightweight, but not barren of features. Back in the day, this made it quite unusual, but now there are many other projects (LXDE, LXQt, not to mention Cinnamon and Mate) with similar goals, which has caused some to say that the project has lost its way. Regardless, Xfce is a great desktop: it’s highly configurable and if you had to place it on a desktop map it would sit somewhere between Gnome2-ville and KDE4-town. It has its own low-fat window manager (Xfwm), but this can be replaced with Compiz on hardware that supports it. In the same way as Sicily is seen as the largest ‘small island’, Xfce is probably the heaviest lightweight desktop. So it won’t be ideal for old hardware, although some hardware is really beyond any desktop. Sure, you could install just X and a lightweight window manager, such as Openbox, but you won’t be able to run a modern browser or LibreOffice. Such machines can still be used with terminal-based applications.

If none of the other editions appeal to you, it’s easy to install a different desktop, such as LXQt.

Like Mint’s Xfce edition the KDE edition hasn’t been released at the time of writing, but our admiration for the Plasma desktop is well-documented [see Features, p59, LXF206]. This will be Mint’s first official foray into Plasma 5 world (modern Plasma releases have long been available via PPA) and we’re confident the Mint developers have done some great things to it. This one won’t be for users of old hardware, but if you have a modern system with more than 4GB of RAM and a graphics card manufactured within the last five years you’ll be fine.

Mate is named after the mateine-based South American beverage, so you should feel bad for pronouncing it wrong all this time.

The friendly edition So that leaves the Mate edition. This project was born five years ago when an Arch Linux user took it upon himself to fork Gnome 2. This was quite an undertaking, as many others had written off that code as deprecated, buggy and broken. But perseverance paid off, and the desktop has been part of the official Mint lineup since Mint 12. The development team has grown and now features none other than Clement Lefebvre as well as open-source titan, Martin Wimpress. Mate stays true to the stylings of Gnome 2. Many of its core applications are forked from the original Gnome 2 ones, eg the Caja file manager still looks a lot like Gnome’s Nautilus and the Engrampa archiver looks like good ol’ File Roller (or Archive Manager as it later became known). But Mate looks forward as well as back—it’s entirely compatible with GTK3 and its core applications are all in the process of being ported there.

Installing other desktops Of course, Linux is all about choice and Mint makes it easy to install other desktops. Some people use a different desktop depending on how they feel when their computer boots up. If we were to make a desktop it might be called Hopeless Indifference. But anyway, suppose you wanted to try out LXQt on Mint. The procedure

is exactly the same as in Ubuntu, and since LXQt is now in the Ubuntu repos, you don’t even need to add a PPA: $ sudo apt-get install lxqt This will get you the latest 0.10 release. Not all desktops are in the repos, and even when they are you might be able to get a newer version

from a PPA, eg to get the latest stable version (there’s a daily builds PPA too if you want to live on the edge) of Enlightenment do: $ sudo add-apt-repository ppa:niko2040/e19 $ sudo apt-get update $ sudo apt-get install enlightenment terminology



How it’s made Like several hundred other distros Mint is based on Ubuntu,  but where does the former stop and the latter begin?


Building a desktop is no mean feat and maintaining one is even harder, especially when you’re chasing a target moved by the evolution of upstream libraries. There are still a bunch of Gnome 3.18 libraries installed, but many are only there in an ancillary capacity, whereas with Unity they are inextricably connected. On top of these are libraries specific to Cinnamon, some of which are patched versions of their Ubuntu counterparts. There are a few older libraries forked from Gnome 3 that need to be maintained too—the newer versions being inseparable from Gnome. And then there are all the applications. There’s a lot that’s easy to overlook too: the login screen that you don’t think about (MDM), the window manager you don’t see (Muffin), the update manager that helpfully finds the fastest mirror for you. All of these were carefully crafted based on user feedback. There are lots of little things that are genuinely useful too, eg the Nvidia Prime applet, the simple backup tool and the USB image writer. We also only just found out about the handy upload manager for sending files via (S)FTP, but it’s been around for ages. Mint was inaugurated in 2006 with the Kubuntu-based Ada, which was only ever released as a beta. It gained popularity/notoriety with the Barbara release, which featured a Gnome 2 desktop with the (hitherto unseen) convenience of proprietary and patent-encumbered multimedia codecs out of the box. Not only that, but also the luxury of NTFS write

More than 2,000 packages out of the 2,239 that made up this Linux Mint installation came straight from Ubuntu repos.

support. By 2.2 (Bianca) things started to look more Minty—the top panel was gone and the customised menu, with its multi-column layout and search bar, bore more than a passing resemblance to its modern Cinnamon counterpart. From Mint 5 (Elyssa), releases were synchronised with Ubuntu’s biannual schedule, in this case Ubuntu 8.04. By this point Mint’s main quintessences were the menu and the software and update managers. Cosmetically it had come a long way too, featuring then state-of-the-art Compiz Fusion effects as well as some great themes and artwork. Mint 5 was the first to feature the Mint logo which prevails to this day. Mint 8 introduced an LXDE edition and the Shiki GTK theme, which was replaced by Mint-X in Mint 10. Successive releases built on these staples and things might have continued nicely were it not for the great desktop meltdown of 2011.

I’m eighteen With Ubuntu 11.04, Gnome 2 was replaced with Unity, a strange new desktop from Canonical. Gnome 3 was deemed unsuitable, so Mint stayed loyal to Gnome 2, but for its twelfth outing the team was stuck between a rock and a hard place: Gnome 2 was already abandoned when Lisa was released, and to stick with it would accumulate massive technical debt. But neither Unity nor Gnome Shell (both plugins for their respective window managers, which run atop the Gnome 3 stack) seemed like the right fit. The solution was pretty ingenious—build extensions to Gnome 3 that recreated the elements to which Mint users had become accustomed. And thus the imaginatively titled Mint Gnome Shell Extensions (MGSE) were born. These evolved into the Cinnamon desktop in the next release. MGSE required 3D acceleration, so in order not to alienate users of older hardware (or just those seeking a lightweight experience) the Mate edition was inaugurated. Mint 16 featured Cinnamon 2.0, which marked a significant decoupling from the Gnome libraries. By this stage, Cinnamon was no longer a fork of Gnome Shell but its own complete desktop environment, making it much more portable to other Linux distros. Since 2014 and the release of Mint 17, the distro has been tied to the LTS releases of Ubuntu, with point releases coming shortly after Ubuntu’s, so that users need not be stuck with a two-year-old kernel towards the end of the cycle.

Linux Mint Debian Edition Not content with a fixed-release product, in 2010 the Mint development team decided to launch a semi-rolling release edition (periodically supplemented with update packs) based on Debian Testing. This release was aimed at more hardcore users, since it required a more hands-on approach to


package management and had a higher propensity for breakage. Linux Mint Debian Edition (LMDE) garnered reasonable interest, but last year a decision was made to make a second release, LMDE 2 “Betsy”, based on Debian Stable (Jessie). Debian Stable is known for having older packages than

the more adventurous distros, but the latest updates to Cinnamon and other Mint applications are backported straight away, so you will see newer versions of these applications before they land in the next Mint release. Also included is the deb-multimedia repository since the Debian repos are free of proprietary bits.

Mint criticisms Being the top distribution on Distrowatch puts you squarely in the  crosshairs of many a detractor.


Like Ubuntu back in the day, Mint has faced criticism from various factions. This criticism had a resurgence in February when the official website was hacked so that users attempting to download the Mint ISO were redirected to a malicious ISO of Mint containing the Tsunami backdoor. None of this has any bearing on Mint the distro (the hacker exploited weaknesses in the forum software and Wordpress to do the dirty deed) but it precipitated a wave of criticism nonetheless. Some of this was irrelevant, eg questions like why was Mint using MD5 to hash its images, when the attacker could just as easily have used a more secure hashing algorithm and changed the target hash on the Wordpress site? Some of it less so, but all of it answerable. Other charges have included being a FrankenDebian (a distro containing a heterogeneous package selection), which is true, but it’s also a FrankenUbuntu, and the things it puts on top of Ubuntu seem far less invasive than what Ubuntu does to Debian. There is a genuine issue looking forwards, since GTK’s roadmap sees it becoming less and less useful outside the Gnome environment. In a few years when, say, GTK5 comes out then something will need to be done. Perhaps there will be a fork or perhaps someone will have invented a new toolkit by then. Even package names have come under fire: Mint’s display manager (mdm) shares its name with a Debian package containing some scripting utilities, so that package cannot be installed in Mint without renaming it. One wonders how many people would use the mdm package with Mint, though. Namespace pollution such as this was also threatened by the xed text editor, which was originally to be called xedit, the name of X.Org’s reference text editor that most people haven’t heard of.

Security quibbles The most serious criticism concerns the update policy. By default, Mint only offers up to Level 3 updates, which doesn’t include the kernel or graphics stack. Thus, said the critics, Mint users were missing potential security fixes as bugs were discovered here. One could say that a home user is far less likely to be struck by, say, a kernel exploit than an Adobe Flash exploit, which is reasonable, but not watertight. Ultimately it boils down to a question of security versus stability;

Clem was quick to provide a thorough post mortem following the attack, but it didn’t stop all the illconceived commentary on Reddit et al.

occasionally kernel updates will break things, and if users understand and are happy with that then they can tick a box and receive them. Mint 18 even has a new kernel manager which might help if things go wrong. On a related note, Mint’s lack of security advisories has been criticised. Other distros keep updated lists of which package versions are affected by which bugs, and how users should work around these (usually by upgrading a package); see Ubuntu’s security notices, for example. Such advisories are useful for

sysadmins, but, again, one wonders how many home users will go out of their way to act on such things. What marks out Mint from other distros is the community, and the engagement between it and the developers. Mint wants to be as friendly as possible. To achieve this it has done something quite rare—listened to users. Hence creating a whole new desktop for those used to the Windows 7 way of doing things. Hence the inclusion of Flash, Java, Wi-Fi firmware and media codecs. Hence not forcing risky (or too many) updates upon the user. But this is the Bazaar, there will always be critics, and there will always be other choices. LXF

Rising from the ashes Following the Mint hack, a number of companies and projects were quick to offer their services and aid the recovery. This was a great example of the community and industry pulling together to help fix a distro they cared about: Czech antivirus outfit Avast helped to analyse and identify the malware, and even blocked access (for users of its software) to the site where the dodgy ISO was being hosted. Security firm Sucuri provided incident response, hardware and general expertise. The website was back up and running a few days later, now with HTTPS. Mint also provides more visible instructions on the download pages about how to verify checksums and (perhaps more relevantly) GPG signatures. Sucuri later became a major Mint sponsor, has donated a firewall appliance and provides monitoring for Mint’s servers. Mint relies on donations and sponsorship to stay alive, so gestures like these are invaluable. If you’re a regular Mint user with some money burning a hole in your pocket, consider donating it to them.


Kara Sowles

Community relations Jonni Bidwell meets Kara Sowles, Puppet Labs’ Community Manager, to learn about the perils of organising tech events.

Kara Sowles is Puppet Labs’ Community Manager, who by lucky coincidence happens to also enjoy making stop-motion movies using actual puppets. We met her at OSCON 2015 to talk about planning tech events, cultural differences, hungry sysadmins and the special importance of community in the tech sector.


Linux Format: Puppet Labs is famed for all kinds of complicated provisioning and cloud-conjuring tools, but there’s a lot more to the organisation than making and maintaining software. Tell us about what you do there.
Kara Sowles: I’m Community Manager, which means I help build and support programmes that support the community that we have. And when I say community I mean the whole thing – enterprise users, the thousands of open source users, contributors, people at the company... it’s a really broad definition. In our Community department we work on a lot of different programmes. For example I built a user group programme which we’ve grown from around five user groups to around 50 or 60 right now. These user groups are run by community

members and users, but we help support them with resources. We also plan events for contributors and things like that, as well as travelling to and helping run the one-day PuppetCamp events that we do; I also do content selection for those. I help a little bit with our main conference too. And then I think one of the most important aspects of being Community Manager is also working with other teams in the company so that they can better understand the community and they can then tailor their work to be more valuable to that community. LXF: So we’re at OSCON, which is quite an amazing tech event. A huge amount of work

has gone on to get this set up and a huge amount of work is still going on behind the scenes. What’s your take on this event?
KS: First of all I want to say that OSCON is a really great event. I’m really impressed with it every year. There’s a huge amount of planning that goes into it and people doing really amazing things, but I want to call out their Community Manager, Josh Simmons, who does a great job of making OSCON welcoming, keeping that feeling of community running the whole year round. Josh is really exemplary in that. A great example of the big impact a good Community Manager has, on the ground as well as online.

LXF: Tell me about your OSCON talk – I missed it because I was playing at the vintage arcade... er, I mean interviewing someone else.
KS: I did a tutorial on Monday with Francesca Krihely from MongoDB. She and I did a tutorial about just the basics of running tech events, kind of an introduction – nothing OSCON-sized, you need professional event planners for those sorts of things. One of the things I like about tech events is that anyone can work on them. Anyone can learn what to do to host an event; anywhere from 20 people to a couple of hundred people I think is pretty accessible because there’s always been a strong community around tech. We talked a lot about setting expectations for attendees, that’s a really important part of the planning process – that they know that they want to attend and they know what to expect. A lot of times when people are unhappy with an event or unhappy with a talk, it’s not always because the talk or the event was of low quality, often it’s that it didn’t match their expectations. They were an advanced user and they went to a beginner session, for example, and they didn’t enjoy it because it wasn’t suitable for them. It may have been an absolutely great beginner session, but that’s not very useful if there are no beginners there. Another thing we talked about is, when you set out tech events, being really clear on what the mission of the event is: knowing what the purpose is for the planners, knowing what the purpose is for the attendees, and then setting goals that are in line with that. Because I think it can get quite exciting saying, “Hey, we wanna do an event for these kind of people,” but not actually sitting down and saying what they are actually going to get out of it or what we’re going to provide to them. So really defining all that kind of stuff beforehand I think is an essential part of event planning. And then I would say that the third thing we talked about was logistically working from one document – having a single source of truth that you’re planning out of. That sounds really simple, but for smaller events a lot of us forget that.

LXF: Yes. Even in our office people try to do things via horrible multi-email chains and then someone gets left off the list and everyone gets out of sync. Then comes the shouting and wailing and gnashing of teeth.
KS: Exactly. The last thing you want to be doing is running through your inbox or chat logs trying to figure out details. It’s not scalable, and you can’t add other collaborators. If you suddenly got offered a six-month vacation to Aruba, you can’t pass the event on to someone else – you’d have to choose between killing the event or taking the vacation. I like to make sure that I always have that option, in case that magically happens to me some day.

LXF: It’s great to see more women in tech. Have you had positive experiences being one of them?
KS: Yeah, I mean I formed a lot of connections with other women in the industry pretty early on. Making friends with them and spending time with them has been a really important part of working in tech and feeling excited to be here every day. The industry has such a disproportionately large number of men in it, I sometimes forget that women are half of the population. I feel it’s important to spend time and seek that out for myself – it’s rejuvenating. Honestly, I think the industry in general can still be kind of hostile, often in very subtle ways that most of the men around me aren’t aware of, and I see a lot of the women I know leave the industry. It can be hard to work somewhere where you watch a lot of your peers leave one by one. But I also think that there are a lot of women in organisations who are creating programmes to make it better, and while ultimately these changes need to start at the top, with big companies making it a priority, I’m really impressed with all the work folks have started to do.

LXF: What advice would you give companies wanting to support diversity in general?

KS: It definitely has to be proactive and not reactive. Just because you don’t see something happening doesn’t mean it’s not happening. I think when companies or individuals are really committed to changing things, they don’t sit back and rest on their laurels. It’s not like “We did one thing, so we’re fine” or “We don’t see it, so we’re fine”, it’s continually listening and making improvements – because we’re not going to solve it overnight.

LXF: Puppet seems to have something special to it community-wise. What would you say makes it unique?

KS: I certainly think it’s special, but I’m a little biased, having the pleasure of spending so much time with them every day. What I love about the Puppet community is actually that it’s a very friendly and very welcoming community, and that’s really important to us. What folks are finding is that the tech industry is really growing and there are more options for meet-ups or for software projects or different technology than we’ve ever had before. In that context people are excited about how we’re bringing new people into tech – we’re getting more people online, we’re getting more people coding. But are we thinking about what that means for our projects? Are we thinking about what that means for different software? Because if we are, we should be thinking about resources that welcome people into those spaces. As we see more and more folks joining tech, we need to have... well, community managers, and projects to welcome people, but [some] who are also dedicated to building resources that help folks learn how to use the software, that help folks start to get involved, that help folks start to contribute. That’s something I want to see more of in general. We have a very welcoming community, but I think more projects need to focus on that welcoming aspect, and also on welcoming materials.
There’s a lot of room for growth there and I think that the projects that end up being

Summer 2016 LXF214     41

successful at welcoming and nurturing newcomers are going to be the ones that really survive.

LXF: We see similar things with Linux. It’s hard to get into it, hard to find human-readable documentation (although LXF tries to help!). So lots of people initially approach it with enthusiasm and excitement, but then get stuck and for one reason or another don’t find the right answers and get angry and rage-quit back to Windows. Being supportive is good. But anyway, going back to tech events, I daresay there’s been meltdowns behind the scenes here at OSCON, even though it all seems to be going swimmingly. What sorts of things can go wrong when you’re planning an event?

KS: When I go into an event now, every time I like to tell myself that something’s going to go wrong and I know it’s going to happen and I don’t know what it’s going to be, but I’m excited to face it. Because, honestly, something does go wrong at every event. I used to get so nervous, I would stay up the night before lying in the hotel bed sweating and afraid, but now I’m genuinely excited about it, about tackling it: “What’s it going to be? Come at me! I did my best planning and we’re gonna solve it on the ground, this adrenaline’s going to get me through.” I think that’s a really good way to approach events – do everything you can beforehand and know that something’s going to go wrong and you can’t do anything about it, it’s an event. I’ve seen so many things go wrong. Many of them were my fault, many of them were someone else’s – it doesn’t matter. At the end of the day you’re on the ground, you solve


it, you try and make sure it doesn’t happen again. You learn something new. Here’s a fun example. We did a full-day event for a couple of hundred Puppet users down in Raleigh. At the last minute we had a lot of additional signups that we didn’t expect, and we were really excited about that. So we called the catering company, advised that we had a ton of extra people join and asked them to up our food order. No problem, they said, and the food arrives and it’s exactly what we need – actually it was a really nice Mediterranean spread, a beautiful buffet. Splendid. Except that they upped the food order but they didn’t up the number of plates.

LXF: D’oh.

KS: So there’s this huge line of system administrators who’ve been listening to stuff all morning and they’re hungry, like really ready to eat, and there’s nothing for them to put their food on. So I had a coworker who took the plastic tops that were on top of the food and was cutting them in half and handing these ginormous plastic half-plates to people. It was so embarrassing, but people were fine – I’d offer actual plates as they were cleaned but they were like “No thanks, I’ve got my special edition plate, they’re not giving these out any more!” They just rolled right with it. Things like that, you never know. It’s like a demo. In tech you see tonnes of people get on stage and do a demo and usually something goes wrong. It’s usually something small and people get through their demos and do a great job. They don’t focus on this tiny bit of code not working or missing a slide or whatever.

LXF: I guess live coding is all about the stuff that goes wrong, in a way. It’d just be coding if everything worked. You mentioned user groups earlier. These have been a major part of the Linux community since the early days, and we do our best to keep up with what all the UK LUGs are up to. What in your opinion makes for a good user group?

KS: It varies by region and community – each community obviously has different needs. What I like about user groups is that they’re often very informal and are able to adapt to meet a particular community’s needs. So for some areas, users will want to meet every month really regularly and have a quality presenter that they can get really good information from. That keeps them excited, and keeps them going with the technology that they’re interested in. For some folks it might be that they meet up every couple of months and they just have a small discussion, because it’s a smaller community and there’s not as many there, but they’re able to form those personal ties with other folks who are using something similar. In some cases user groups might have a regular presence, but it enables them to come together once a year and throw a larger event. That’s where they’re able to bring a tonne of folks together and form those ties with the wider community. I think that’s one of the things I really like about user groups, that flexibility to build what people need to get out of them. A successful user group would be meeting those needs; an unsuccessful one... well, you know them when you see them.

LXF: In Linux the demographic tends to be a bunch of socially awkward middle-aged guys – I guess I just described myself there – which I imagine can be quite challenging to work with. How do you get people to be more active and involved?

KS: There are some interesting formats that people play around with. There’s the fishbowl format, where a couple of people will start a discussion and then people will tag in. That requires people who are interested in tagging in. There are user groups where a lot of people are excited and want to ask a lot of questions, but for groups where that isn’t the case, where people don’t want to put their hand up and say something, that’s where I think it’s nice to have presentations. Especially a series of shorter presentations from different users. It’s a lot easier to get someone to come up on stage for ten minutes and talk about what they’re doing than asking them to put together a full presentation. So where you might not be able to get a conversation going, if you can line up some people in the group to give short talks about what they’re doing, that gives them an in to talk about with other people. People have some idea of what to talk to them about, so the conversation can be opened.

LXF: There’s a bit of a cultural divide between US people being quite open and willing to talk about things and the British, who are quite socially reticent. I really enjoy the US for this reason – it’s refreshing. But have you seen the other side of this, people who really find it hard to talk and share?

KS: Definitely. I travel all over the world putting on events, and every audience is different. Britain definitely tends to have a quieter audience. But one thing that can help, and I absolutely dread when this isn’t the case, is name tags. I know they can be awkward, but if I’m sitting in a room full of people I don’t know and I have to remember their names as well… It’s probably not going to happen. LXF

“Something’s going to go wrong, I don’t know what, but I’m excited to face it.”


Build your own drone


This project is only suitable for those aged 18 or over. Piloting any drone is a dangerous and skilled task; please seek suitable guidance and training before attempting to do so. Be aware that there are serious legal consequences for failing to follow UK Civil Aviation Authority guidelines.


At the heart of commercial drones is Linux, and Alastair Jennings shows you how to build your own.

WHAT YOU NEED

Raspberry Pi Zero
Erle Robotics PXFMini
Erle Robotics PXFMini power module
HobbyKing Spec FPV250
100mm male-to-male servo cable
FlySky FS-i6 controller
Edimax AC EW-7811UAC
RC XT-60 connectors


Quadcopters (drones) used to take hours of practice to master, as even the simplest manoeuvres, such as takeoff and landing, could prove difficult. Learning to fly one took time and ultimately determination, and before you even took to the skies there was the small matter of building one. Now that there’s a good selection of pre-built and programmed drones on the market, you can go into major high street retailers and buy one directly off the shelf. Drones such as the 3DR SOLO, Parrot Bebop and DJI Phantom have revolutionised the market, and slowly there are drones appearing with advanced flight features that make flying and controlling a drone much easier. The big turning point in drone design was when they got intelligent through small

processors being placed onboard that were able to stabilise flight and apply advanced features, such as auto-braking, takeoff and landing. These enabled the pilot to get on with having fun rather than worrying about the mechanics and programming. As the availability of commercial drones has increased in popularity, so have the open hardware and software communities. The latest open source drones are challenging their more expensive rivals with advanced features, such as object avoidance and GPS navigation. This challenge to the commercial models is no real surprise, and you don’t have to look too far into the DIY drone community to find out that many of the main manufacturers are extremely active in the open source world and regularly contribute to and support those wishing to build their own; eg companies such as 3D Robotics sell autopilot systems that can be programmed through software applications such as Mission Planner. 3D Robotics’ involvement in the community is apparent when you take a closer look at one of its drones. Exploring under the bonnet of the 3DR SOLO, you’ll see that it’s Linux based. The company also runs a huge education programme and a full SDK is available for the SOLO. We’re going to take a look at building a basic drone from a recent open source community project with the latest Raspberry Pi Zero and Erle Robotics PXFMini.

“The latest open source drones are challenging more expensive rivals with advanced features.”



The first step is to prepare the Pi Zero and PXFMini. Fitting the two together is relatively simple once the 40-pin GPIO connector has been soldered onto the Pi Zero. The connector is simply a set of two lines of pins that slot into the top of the Zero and the corresponding socket on the PXFMini. The cost of the Pi Zero’s basic board is low at just £4, but for this project you really need to get the starter kit that includes the unpopulated 40-pin GPIO connector, mini USB and HDMI cables. A USB hub is also a good idea so that you can connect a keyboard, mouse and Wi-Fi dongle. You’ll actually end up with two 40-pin connectors if you buy this, as there’s one included with the PXFMini, but it’s worth paying £8 for the other connectors along with the Pi Zero.

Soldering in the pins can be a bit of a challenge due to the small size of the board

To prepare the board, place the GPIO pins into the Pi Zero and turn the board over so you can see the pins coming through the board. Put it on the table so that it’s at about a 45-degree angle and use a bit of Blu-Tack to secure it. Now push the pins so that there’s only a small portion – a maximum of 1mm – appearing through the board, and use Blu-Tack to secure the position of the pins at one end of the board. The Blu-Tack needs to be positioned on pins that we’re not soldering at first; then, once one end of the pins is soldered and secured, we can remove the Blu-Tack and finish the job. If the Blu-Tack gets hot it will burn and become difficult to remove. You’ll need to solder all the pins and make sure that you avoid any dry joints. A fine-tipped soldering iron with a new tip will make your life easier if you’re not used to soldering.

“A look at building a basic drone from a recent open source community project.”

You must also make sure that the pins have enough length above the board to interface with the PXFMini. The easiest way to do this is to use some Blu-Tack. We also found that sanding the pins with a bit of Wet and Dry or fine emery cloth just helps the solder to stick.

Drone building

Once you have the two boards connected you can make a start on the construction of the drone. If you’re doing this using one of the

The motors might seem small but they’re powerful. It’s best to leave the propellers off until the last moment.

The small size of the Pi Zero makes soldering tricky so make sure you have plenty of light and rub the pins with some Wet and Dry to help the solder stick.

many basic kits out there, the process should be pretty straightforward. Most of the small kits include the basics: a simple bolted-together frame, four small electronic speed controllers (ESCs) and brushless motors, along with a battery. The main components and electronics all need to be taped and zip-tied to the frame, with only the motors requiring screws. At this point, it’s well worth leaving the propellers off until the drone is correctly configured. As you put the drone together there are a couple of key points. The PXFMini should be mounted with its connectors facing towards the front of the craft; these pins are used to connect the ESCs and the receiver if you’re using an RC unit. Between the Pi Zero and frame, it’s essential that you add a section of foam to provide slight insulation from the vibration of the motors and to ensure that the PXFMini is level. A few additional extras that you’ll need during this part of the build are one male-to-male servo extension and four male and female RC XT60 LiPo connectors. At the moment the two boards are empty of

Each motor is controlled by a separate electronic speed controller (ESC).



“The OS for the drone has been pre-compiled and is based on Debian.”

Four speed controllers adjust the lift and direction of the drone.

Check that all wires are secure and that all motors react to commands from the transmitter before attaching the propellers.

commands, and before they can be used to control the drone they need to be flashed with an OS. This OS, as with all Raspberry Pi boards, is held on an SD card – albeit a MicroSD card in this case – that can be quickly installed into the back of the Pi Zero. The OS needed for the drone and for use with the Pi Zero and PXFMini autopilot system has been pre-compiled by Erle Robotics and is based on Debian. As long as you purchased

The PXFmini is a low cost and open autopilot shield for the Raspberry Pi.

the board directly from them, the company will email you a link to the latest version of the OS. If not, it’s possible to compile it yourself, although this will be a more advanced task for some of you.

Flashing the Pi Zero

The Pi Zero’s bootloader can only read the FAT32 file system, so before you continue make sure you have formatted your card correctly and not used exFAT. With the OS downloaded and the MicroSD card ready, type in df -h to check the disks attached to your computer. In order to flash the MicroSD card with the latest OS image you’ll need to unmount the card. If your card is 32GB or over you’ll see that the disk has two partitions, so make sure you unmount both. Type in umount /dev/disk2s1 . Now, to flash the OS to the card, make sure that the file

you have downloaded hasn’t been uncompressed and type in the following: sudo zcat /Path/to/image/PXFmini.img.gz | sudo dd of=/dev/disk bs=8M The flashing process can take some time as the uncompressed file is well over 7GB. Once complete, eject the card from your computer and install it into the Pi Zero. The Pi must now be connected to a monitor, keyboard and mouse to finish the installation process. After the Pi boots you will see the basic Erle Robotics splash screen with a bar of icons showing the different vehicle projects that the board can be used with. Click the Erle Copter picture and the screen will disappear and the board will reboot. Leave the Pi Zero to run through the boot sequence and this time, rather than loading a graphical user interface, the board
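If you’d rather script the zcat-and-dd step above than retype it, the same job can be sketched in a few lines of Python. This is an illustrative sketch of our own, not part of the Erle Robotics instructions; the image filename and /dev/sdX device path are placeholders, so double-check the real device name with df -h first, because writing to the wrong disk is unrecoverable.

```python
import gzip
import shutil

def flash_image(image_gz, device, chunk=8 * 1024 * 1024):
    """Stream a gzip-compressed OS image onto a raw block device.

    Roughly equivalent to: zcat image.gz | dd of=device bs=8M
    """
    # gzip.open decompresses on the fly; copyfileobj moves 8MB at a time,
    # mirroring dd's bs=8M block size.
    with gzip.open(image_gz, "rb") as src, open(device, "wb") as dst:
        shutil.copyfileobj(src, dst, chunk)

# Example (run as root, with every partition on the card unmounted):
# flash_image("PXFmini.img.gz", "/dev/sdX")
```

As with dd, there is no safety net here: the destination is opened for writing and overwritten from the first byte.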

Get the autopilot level

Due to the small size of the drone it’s quite difficult to keep things neat as you tuck in wires and wrap insulation tape and zip ties around the frame in order to hold everything together. Whilst it might seem fiddly and time-consuming to make sure that the PXFMini and Raspberry Pi Zero combo is absolutely level, it’s one of the most important aspects of the build! From the outset of building the frame you need to continually check that everything about your build is as accurate as possible. The frame kit that we’ve used in this project does the job well despite being cheap, but there are some parts, including the frame legs, that take a bit of effort to fit correctly and have a habit of dropping out. The only way around this particular issue is to glue them in place, as if one


leg falls out during takeoff or landing, the effect, as the drone tries to correct itself, can be quite dramatic. Legs are an easy fix, but there are several other issues with the frame, as it hasn’t been designed for the shape or size of the autopilot we’re using. Not only this, but the space for the autopilot is too small. This means that without filling the hole it’s extremely difficult to get the autopilot to sit flat. If it’s not flat then when it takes off it will try to level itself, and if you do get it into the air the flight will be erratic. In order to get around this, and also to help cut out vibrations from the motors, we added a large section of foam inside the slot. This helps to bulk out the section as well as create a rough damper for the autopilot to sit on.

The autopilot controls much of the drone’s flight settings, so ensuring the PXFMini is correctly placed is important.


will boot to a command line showing that the installation has completed and the board can be unplugged. At this point the drone should be ready for its first flight test. To get it to fly you’ll need to connect it to some type of control device, such as radio control, Bluetooth or Wi-Fi with ROS (Robot Operating System). You do need to take into consideration that this is an open hardware project and the components that we’ll be using will be different to those used by Erle Robotics. For complete ease, we’ve opted for the traditional radio-controlled option and used a FlySky FS-i6 with the new FS-iA6B receiver, which we located at Maplin for £50. The important factor here is that the receiver is of the PPM type. PPM enables several servos – or in this case several ESCs – to be connected to one port and controlled individually. This works in just the same way as traditional servos would, with an individual port in the receiver for each. This cuts down on the amount of wires needed, but more importantly it’s the hardware required by the PXFMini in order to interface with the controller. Once you’ve connected everything it should work as described here, although you may find that some fine tuning of the transmitter controls is needed.

The Pi Zero doesn’t feature the ability to network over the USB ports, so a Wi-Fi dongle is required in order to connect to your machine. We’ve used a dongle bought directly from Erle Robotics at 55 euros. You’ll need to make sure that your machine has a 5GHz-compatible Wi-Fi device in order to get things to work properly. Once connected you can download and install APM Planner and run through the calibration and setup process in order to get your drone to work correctly.

The first flight

Before you start, double check your drone and make sure that everything is in place and all wires are secure and taped to the frame. As this will be your first attempt at flying the drone, remove the propellers so that if anything does go wrong the drone will at least stay exactly where it is rather than bouncing round the room poking eyes out and smashing ornaments. Start by switching on the controller, making sure that all switches are in the up position and the throttle is in the lower position. Now connect the battery to the power module and you should hear a beep as the board starts loading, which can take up to a minute to complete. Another beep will highlight the end of the boot sequence, and then another 5 to 10 seconds is needed before the drone is ready for testing. On the transmitter, move the left stick to the bottom-right position for five seconds and, holding it in place, flick down switch D (SWD), which on the FlySky FS-i6 is the switch on the top right of the handset. The motors should now start; flick the switch back up to stop the motors. Now you should be able to use the throttle to start and increase the speed of the motors. If you decide to leave the drone for a short time then the startup process will need to be repeated. If all is in order and all four motors spin and react correctly with the transmitter, then you’re ready to start your first flight once the propellers are attached. If only three motors start, or you can visibly see that one motor is spinning at a much lower speed than the others, then you’ll need to load APM Planner and configure your transmitter with the drone. LXF

Using electrical tape and zip ties might seem makeshift, but it’s the best way to hold everything in place.


Mr Brown’s Administeria

Jolyon Brown


When not consulting on Linux/DevOps, Jolyon spends his time bootstrapping a startup. His biggest ambition is to find a reason to use Emacs.

Esoteric system administration goodness from the impenetrable bowels of the server room.

An open source mid-career crisis



Google unveils ‘AI’ chip

TPU custom chip ‘supercharges’ machine learning at a pace which leaps several years into the future.


At Google’s I/O conference in May, the search giant unveiled a new component that has quietly been introduced into its data centres and which powered AlphaGo, the system that defeated Go champion Lee Sedol in the highly publicised set of matches earlier this year. TPU (or Tensor Processing Unit) is a custom ASIC (Application-Specific Integrated Circuit) for running machine learning applications, specifically tailored for TensorFlow, an open source library for ‘machine intelligence’. Google has been using these chips to improve the relevancy of search results (‘RankBrain’) and to increase the accuracy and quality of maps in its Street View application.

(image credit: Google Inc)

This week I was reading an article that mentioned the Cathedral and the Bazaar. The essay (and later book, one that’s often included on the Linux Format cover disk) is now 19 years old – it famously influenced Netscape into releasing the code for its web browser. I’m sadly all too aware of my own ageing, especially after a visit to the gym, but the reminder that such a significant period of time has passed outside my internal monologue gave me pause for thought… or what might possibly be termed a ‘mid-career crisis’. Put simply, I don’t feel as though I’ve done enough when it comes to open source, in terms of both producing projects of my own and contributing to others. I mean, not everyone can be a Torvalds or a Stallman, but I definitely should have done more (and a track record of doing so is almost de rigueur for some jobs now). Perhaps I need to bite the bullet and pick a long-term project of my own? If the typical male mid-life crisis clichés involve buying a sports car or having a steamy affair [I’m too tired! – Ed], what are the equivalents for a DevOps engineer (née old-school systems administrator)? Building their own souped-up Linux distribution? Switching to Windows 10 in a fit of self-destruction (that’s not going to happen, although I’ll admit to flirting with OS X)? I have this nagging idea in the back of my head about a ‘Linux From Scratch’ type setup, using containers for everything, that distributes encrypted files out to various cloud providers and that can be used anywhere via a tiling interface. I’m thinking of a kind of ‘Plan 9 with Docker’. Something weird and experimental for my own (and hopefully others’) enjoyment. We’ll see. Do you think you have contributed enough to open source? What projects have you never gotten round to building yourself? I’d love to hear about it.

Google’s Tensor Processing Unit board – a custom ASIC built specifically for running machine learning applications.

Google claims that it has found TPUs deliver an order of magnitude better optimised performance per watt for machine learning, giving a three-generation leap of Moore’s law. Google also mentioned that going from tested silicon to running applications in its data centres “at speed” took only 22 days. Further details on TPU have been hard to come by. Google has remained tight-lipped on specifics, but it did reveal to journalists that TPUs can be connected together as part of a larger system and have an instruction set, but speculation remains that TPU will run algorithms trained on another system rather than being able to train itself (this is known as ‘inferencing’). Nvidia, the company best known for its graphics chips but which has also ploughed investment into artificial intelligence, doesn’t see TPU as a threat to its business due to the low number of companies that will be able to build their own chips. The company recently launched its Tesla P100 GPU accelerator based on its Pascal microarchitecture. CEO Jen-Hsun Huang was quoted (during the recent Computex trade show in Taiwan) as saying that training is “more complicated than inferencing”. Nvidia clearly sees Pascal as the perfect solution for companies needing to do both. Nvidia sees AI as a huge market opportunity: “The last 10 years were the age of the mobile cloud,” said Huang, “…and we’re now in the era of artificial intelligence.”


AWS Lambda

A growing movement aims to do away with servers altogether. Does this consign sysadmins to the scrapheap?


Just when I think I’ve got a handle on what elements make up a modern Linux-based infrastructure, a bunch of new ideas come along or start to gain traction. Being effectively freelance, I like to at least cast an eye over them – they might prove beneficial to my existing clients, or an opportunity to use them might arise – eg someone could request a proof of concept demo to see how a new technique fits in with their existing infrastructure. Of course, the benefit to you, dear reader, is that you get to share the fruits of my labour. So over the next couple of issues I’m going to take a high-level look at a couple of concepts (some might unkindly call them ‘buzzwords’) which have gained a bit more prominence of late, such as serverless computing and unikernels. What are they, what can they be used for and should anyone care about them? Recently, while idly procrastinating [that explains a lot – Ed] and browsing my Twitter feed, I started to see some indignant responses to slides coming out of a conference held in New York. It seemed as though a

revolution was being announced which did away with servers and – by association – those pesky operations staff who were required to look after them. Footage emerged of servers being smashed with a baseball bat (I mean, we’ve all wanted to do that now and again) and photos of slides that screamed ‘no servers!’, and it seemed clear something was going on, but what exactly? To my chagrin, I really hadn’t paid much attention to the serverless market offerings, such as AWS Lambda when it appeared, or to Firebase (a competitor, which was swallowed up by Google back in 2014), or to any of the accompanying frameworks or the evolution of the whole idea. Similarly, in Slack channels I frequent there was some confusion about what this all meant. How does it differ from PaaS (Platform as a Service)? Isn’t this a step backwards from DevOps? Isn’t it just a buzzword? The term ‘serverless’ here doesn’t mean, of course, that programs run by magic. The essence of the idea is that, rather than bother with the management and maintenance of

Amazon’s Lambda provides a platform for creating serverless applications, but does it live up to the hype?

The twelve-factor app

The ‘Baader-Meinhof phenomenon’ is the name given to the situation where a person comes across a piece of information for the first time and then suddenly begins to see it again elsewhere repeatedly. This is certainly the case for me and the ‘twelve-factor app’, which has cropped up all over the place since I heard about it just this month (while doing the research for this article). It’s the name of a methodology for building software-as-a-service applications which meet a certain set of criteria. This isn’t new (I’m just late to the party); it was developed by one of the co-founders of the popular platform-as-a-service company Heroku, and the influence of the company’s offering will be clear to anyone who’s used it. Despite this, the areas it covers will be familiar to anyone who has worked in a DevOps/Agile environment. It talks about making applications portable between execution environments (having a clean contract with any underlying OS) and making them suitable for deployment to cloud environments (specifically, obviating the need for servers and systems administration). Minimising differences between development and production in order to promote agility via continuous deployment is another familiar theme, as is ensuring the application can scale. While the link to the article here is the step around executing the app as one or more stateless processes, it’s all very fascinating and I recommend taking a look at the entire approach.
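To make one of those factors concrete: twelve-factor apps store configuration in the environment rather than in code, so the same build runs unchanged in every environment. A minimal sketch in Python (the DATABASE_URL variable name and the SQLite fallback are invented for illustration, not taken from the methodology itself):

```python
import os

# Twelve-factor style config: read settings from the environment so the
# same code runs unchanged in development, staging and production.
def get_database_url():
    # A local fallback keeps developer laptops working with no setup;
    # production simply exports its own DATABASE_URL.
    return os.environ.get("DATABASE_URL", "sqlite:///dev.db")
```

Deployment then becomes a matter of exporting the right variables per environment, with no per-environment code branches or config files baked into the build.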

Summer 2016 LXF214     49

Mr Brown’s Administeria

There are many example functions provided which handle events from other parts of the AWS platform.

servers (either physical or virtual), users of a serverless architecture merely provide code which then runs on compute resources supplied by a cloud provider when needed. There’s no server provisioning, patching or scaling needed; the provider takes care of all of that. Indeed, the servers can’t be logged into or customised in any way. That means, for example, no bespoke package installations – indeed not even SSH is available. These restrictions are balanced out by the service taking care of all other resources, dealing with health monitoring of the instances the code runs on, patching, deploying code, adding capacity and logging etc. In addition, ‘when needed’ means exactly that. Code deployed into the service runs on demand and the provider charges only for the time that it runs. No buying a minute, hour, day or month of server time for a virtual machine. The code is spun up (typically within 100ms of a request) and runs. If the code takes only a few more milliseconds to complete its task then that is that. Service providers typically charge per request and for the duration that the code executes. A typically sized free tier on AWS Lambda gives 1 million free requests per month, with duration billed according to the amount of memory the code uses—the first 400,000 GB-seconds are also free, so a smaller memory footprint means a larger free tier. On the face of it, this seems almost too good to be true for development teams. No servers to worry about, which increases the speed at which projects can be delivered and deployed—in theory there are no build processes to run through, no waiting around for another team to complete their tasks. Making the systems which run the code ephemeral removes so many barriers that there has been talk that it enables a ‘No Ops’ movement. As you might expect, this phrase in particular makes me somewhat dubious. I might be accused of bias, but things are

never so perfect in the real world; there are still issues to deal with around performance, monitoring of the service the code provides, security and so on. I did read one opinion piece which suggested that adopting a serverless approach enables sysadmins to be outsourced (don’t forget that where there are servers, sysadmins will still remain—even if the ratio of admins to machines is enormous in a cloud provider environment). This could be construed as a good thing—by using cloud services a team can effectively hire the expertise of an Amazon, Microsoft or Google sysadmin team for a fraction of what they would cost to employ directly, if they could be hired at all (given the skills shortages faced by our industry). But these days the skill sets of developers and operations staff are much less delineated than they were in the past (thanks in part to DevOps culture). A good team will have some members who have an infrastructure background and/or a specialism in what might be called ‘platform engineering’. They might even be termed ‘full stack engineers’ if their development skills are sufficient. I believe I’ve mentioned in the past that team members with ‘T’-shaped skill sets – broad general knowledge combined with a deep understanding of a specific area – are very valuable indeed. I guess what I’m trying to say is that I believe any team will benefit from having someone who understands infrastructure to what might traditionally have been thought of as the sysadmin level.
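The pay-per-use pricing described earlier is easy to sanity-check with a little arithmetic. The sketch below uses only the free-tier figures quoted above (1 million requests, 400,000 GB-seconds per month); any per-unit prices beyond those are deliberately left out, as Amazon’s actual rates vary:

```python
# Back-of-envelope Lambda usage sketch, using the free-tier figures from
# the text: 1 million free requests and 400,000 GB-seconds per month.
FREE_REQUESTS = 1_000_000
FREE_GB_SECONDS = 400_000

def gb_seconds(invocations, duration_s, memory_mb):
    """Compute consumed, in GB-seconds (memory is billed in GB)."""
    return invocations * duration_s * (memory_mb / 1024)

def billable(invocations, duration_s, memory_mb):
    """Return (billable requests, billable GB-seconds) after the free tier."""
    used = gb_seconds(invocations, duration_s, memory_mb)
    return (max(0, invocations - FREE_REQUESTS),
            max(0, used - FREE_GB_SECONDS))

# A 128MB function running for 200ms, invoked 3 million times a month:
# only 75,000 GB-seconds are consumed, so compute stays inside the free
# tier even though 2 million of the requests are billable.
print(billable(3_000_000, 0.2, 128))
```

Note how the small memory footprint keeps the compute charge at zero, which is exactly the ‘smaller footprint means a larger free tier’ point made above.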

A quick tour of Lambda I’ll concentrate on Amazon’s Lambda here as it’s the best known of the platforms out there and the one anyone reading this will most likely come across. This isn’t a recommendation of it above any of the others—competition is good and having a single entity dominate the cloud computing landscape would be a very, very bad thing. For better or worse though, the AWS behemoth was the one I turned to when researching this idea of serverless computing. Amazon suggests Lambda is a good fit for event-driven applications that run functions when something changes, such as a file arriving in S3 (Amazon’s storage service). Imagine, for example, a function that converts the data into different filetypes and drops these copies back onto S3. Lambda can also sit behind a more traditional (where ‘traditional’ means something that’s become mainstream in the last couple of years) microservice-type infrastructure, where it is fronted by Amazon’s API gateway, which can process HTTP requests and make function calls accordingly. Out of the box, so to speak, Lambda supports Node.js, Java and Python as languages for writing functions. Offering the Java Virtual Machine (JVM) as a target means it also supports some of the newer languages, such as Clojure, which run on the JVM.
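A Lambda function itself is nothing exotic: it’s a handler with a fixed signature that receives an event and a context. A minimal Python sketch of the S3 scenario above might look like the following; the event layout follows Amazon’s S3 notification format, while `convert()` is a hypothetical stand-in for whatever real processing you’d do:

```python
# Minimal sketch of a Lambda handler reacting to files landing in S3.
# convert() is a hypothetical placeholder -- in reality you would fetch
# the object, transcode it and write the copies back to S3.

def convert(bucket, key):
    return f"converted s3://{bucket}/{key}"

def handler(event, context):
    results = []
    # An S3 notification event carries one or more Records entries.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append(convert(bucket, key))
    return {"processed": results}

# Exercised locally with a fake event -- no AWS account required:
fake = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                            "object": {"key": "report.csv"}}}]}
print(handler(fake, None))  # → {'processed': ['converted s3://uploads/report.csv']}
```

Being able to drive the handler with a hand-rolled event like this is also a handy way to test functions before they go anywhere near the AWS console.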

Serverless FAQ So how is this in any way different to Platform as a Service (PaaS) then? The comparison can be confusing—after all, I would simply deploy my application to a PaaS provider and not worry about servers there either. Serverless architectures, though, differ in that they are expected to bring the whole application up and take it back down for every request if necessary. A PaaS infrastructure would never do this and sits as a halfway house between serverless and Infrastructure as a Service architectural patterns. Scaling is seamless in the serverless world too (at least, in the utopia it hints at) whereas a PaaS provider will charge for adding additional scale (which is usually very costly).

How can I avoid the overhead of starting up a JVM instance when I write my serverless functions in Java? Compared to Node and Python, the JVM is very slow out of the blocks. One way to avoid dealing with this invocation penalty is to keep the instance hanging around. Typically this is handled by using another function to ping the slow-starting one periodically (to avoid it timing out). In Lambda this can be done via a scheduled function, which is pretty much as it sounds—a function whose frequency is controlled by a Cron-like interface. I’ll admit here that I haven’t tried running a JVM on Lambda myself or tested this to see how reliable it is! The best idea is to test it for yourself.
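The keep-warm trick described above also needs a little cooperation from the function itself: the scheduled ping should short-circuit before any real work happens. A sketch (the `keep_warm` field is a convention invented here, not part of Lambda—any agreed-upon marker in the scheduled event’s payload works):

```python
# Sketch of a handler that cooperates with a keep-warm scheduled function.
# The "keep_warm" key is our own invented convention: the scheduled ping
# includes it, real requests don't, so warming invocations stay cheap.

def handler(event, context):
    if event.get("keep_warm"):
        return {"warmed": True}   # scheduled ping: skip the real work
    # ... real request handling would go here ...
    return {"result": f"processed {event.get('payload')}"}

print(handler({"keep_warm": True}, None))    # → {'warmed': True}
print(handler({"payload": "job-42"}, None))  # → {'result': 'processed job-42'}
```

Since Lambda bills per millisecond-scale duration, the ping invocations cost next to nothing while keeping the instance (and its JVM, in the Java case) alive.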

I decided to run through a simple initial function set-up as a test. After setting up credentials in IAM (the identity and access management module for AWS) I selected the suggested hello-world-python function from the list of over 40 provided examples. This simple function merely echoes back data supplied to it, but the exercise shows how a function is set up and how the interface works. Now – bearing in mind I was looking for a high-level overview of how everything hangs together – my first impression of the web-based interface is that there is a lot of clicking around to be done before getting anywhere with it. There is an AWS Lambda API available which can be used in conjunction with the AWS command-line interface, but even so, my understanding is that this is a commonly reported bugbear while getting used to Lambda. As ever, enterprising projects are rushing to fill the gaps by providing tools to handle the management. These are definitely worth checking out as they aim to ease some of the pain and complications anyone starting from scratch will encounter. Serverless, and Terraform (from Hashicorp, developers of Vagrant and Consul, among others) are two I saw mentioned several times during my Lambda research. Zappa (Miserlou/Zappa) is a Python-specific project which looks interesting, and Apex has the added attraction of allowing Go to be used as well as the other standard Lambda languages. Running through the examples did show how the billing is recorded and, just as importantly, how the logging works. Lambda uses Amazon CloudWatch to gather output and also statistics (some graphs are available via the ‘Monitoring’ tab in the web interface). CloudWatch can be configured to handle events as well, posting alerts to, say, Slack (and other systems are available). Functions are easily tested via the interface and can then be linked to either an event or an API (HTTPS) call.
I found that there’s a fair amount of background knowledge to soak up to get more than the basics running. Using functions with the API gateway would likely be a common scenario for many deployments and applications. The gateway adds a layer between the end users and the Lambda functions, giving the ability to throttle individual users or requests, protect against DDoS attacks and provide a caching layer for responses from the functions themselves. There is a bit of setup needed to get to this point. Online, the API gateway seemed to be a common focus for complaints and issues, although the test examples available seemed easy enough to configure. That’s the difference between testing and running a production service, I guess! The gateway can apparently import and export API definitions (using Swagger, a common format for this kind of thing) for ease of migration. The myriad clicks aside, everything seemed pretty slick, I have to say.
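When a function sits behind the API gateway in this way, it is expected to hand back an HTTP-shaped response rather than an arbitrary value. A minimal sketch of that contract (based on the gateway’s proxy-style integration: a dict with a status code, headers and a string body; treat the exact field names as assumptions to check against Amazon’s docs):

```python
import json

# Sketch of a Lambda handler fronted by the API gateway. With proxy-style
# integration the gateway passes query parameters in the event and expects
# back a dict with statusCode, headers and a string body.

def handler(event, context):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local test with a gateway-style event:
resp = handler({"queryStringParameters": {"name": "LXF"}}, None)
print(resp["statusCode"], resp["body"])
```

Everything HTTP-specific (status, headers, serialised body) lives in the return value, which is what lets the gateway throttle and cache in front of the function without the function knowing.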

Under the hood Amazon has given a few details away about how Lambda works. Each function runs in its own isolated environment, with its own resources and filesystem view. These environments have a lot in common with EC2 in terms of security and separation. The code itself is stored within S3 and is encrypted at rest. There are vague noises about ‘additional integrity checks’, although I haven’t been able to find out exactly what they are. A small amount of scratch space (500MB) is available for functions if they need it. Quite importantly though, Lambda requires that functions be stateless—they “cannot assume any affinity to the underlying

compute infrastructure” (to use Amazon’s phrasing). Once a request has been dealt with, the function might well disappear completely, so this has to be the case. Applications that need state have to take care of it themselves, using something like DynamoDB (another AWS service) or an alternative. This statelessness (in conjunction with Amazon’s infrastructure management software) allows Lambda to scale up easily from the point of view of the application developer. It’s also a requirement in development methodologies which lend themselves nicely to serverless architectures (as well as web apps in general – see The Twelve-Factor App box on p49). Application deployment seems a little limiting to my mind—code can be pasted in using the web interface or dropped onto S3 as a ZIP file, either through the web interface or the command line. Having gotten used to versioned Docker configurations and configuration management tools, this seems a bit retrograde. I’m sure handling this automatically would be top of any development team’s to-do list when starting a new project (and is undoubtedly handled by the Lambda-related projects I mentioned earlier).
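The statelessness requirement shapes how you write the functions: anything that must survive between invocations goes through an external store. A sketch of the pattern, with the store passed in so it can run locally (here a plain dict stands in for a DynamoDB table; in production you would swap in real DynamoDB calls):

```python
# Because a Lambda function may vanish after each request, per-user state
# has to live outside the function. Injecting the store makes the pattern
# testable locally: this dict is a stand-in for a DynamoDB table.

def visit_counter(event, store):
    user = event["user"]
    count = store.get(user, 0) + 1
    store[user] = count          # in production: a DynamoDB update
    return {"user": user, "visits": count}

store = {}                       # pretend this is DynamoDB
print(visit_counter({"user": "alice"}, store))  # → {'user': 'alice', 'visits': 1}
print(visit_counter({"user": "alice"}, store))  # → {'user': 'alice', 'visits': 2}
```

Because no state hides inside the function, two invocations landing on two different instances behave identically, which is precisely what lets Lambda scale out behind your back.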

Testing and creating functions is pretty easy through the web interface but involves a myriad of mouse clicks.

Magic bullet or marketing hype? Overall, from my digging around the web (expect to do a lot of this if you start a Lambda project) and testing some basic things out myself, I think Lambda, at the moment, is a system that would reward an investment of the time taken to get to grips with it. It’s not a magic bullet and the amount of time needed to get an operational workflow that works well will be nontrivial. Researching some of the third-party tools would be highly recommended. For the right project, though, it could be an ideal fit for the development side of the house. Operationally, there will be challenges, not least around the integration of monitoring—a common complaint with Lambda, for example, seems to be that error handling is poor. There is a sense that Lambda isn’t quite production ready, depending on what your system looks like in production (low-latency 24x7 applications should probably be housed elsewhere). However, Amazon has a history of improving services at a decent clip. I suspect this sounds like I’m sitting on the fence, so I’d say for the right application it’s definitely worth checking out if you have time to do so. I hope this was a useful column and at least gave some of you something to think about. Next month I’m going to be looking at unikernels: Are they an even more lightweight alternative to containers or an unmanageable mess best avoided? See you then. LXF


The best new open source software on the planet Alexander Tolstoy kong vaults and wall runs his way over the cable-choked alleys of the internet to courier the best open source and free software to you each month.

Eko StyleProject Qt VirtManager SquashFS Synapse qBittorrent Trojitá FlightGear Raging Gardens Pdfgrep Shotwell Sound editor

Eko Version: 5.2 Web:


Eko is a sound wave editor that recently hit version 5.20, yet very few people have ever heard of this small but very useful application. We think that Eko has received very little attention considering what it manages to achieve as a one-man project, so it’s worth taking a closer look at it. Eko may look minimalistic and sometimes resemble simple wave editors, such as KWave, but it has many features, with some extras that you will not find even in Audacity. It’s

best to start exploring Eko from its Functions menu, which includes silence and noise generators, fade in/fade out feature, wave normaliser and offset editor etc. By default, Eko opens the extra Mixer window, where you can apply versatile effects on top of your recording or loaded file. Those who are

Eko is a fully fledged audio editor and mixer.

“Those who are keen on home recording will appreciate the guitar overdrive pedal effect.”

Exploring the Eko’s interface... Large toolbar

The waveform

The icons are intentionally larger so that you can click them more easily, and they animate when you hover over them.

Here you see your audio track in graphic form. Scroll bars enable you to reach any place if the track is too long.

Control the overdrive

View modes


Play with gain, tone and driver amount to get the best sound out of your input.

The tab placement is unusual, but it makes navigating the Eko interface a lot easier once you get used to it.

A very nice addition to a wave editor. Add extra effects to your track with a few clicks!


keen on home recording will also appreciate the guitar overdrive pedal and retro vinyl effects. Each effect has its own Settings panel where you can adjust its amount, tone, level or resonance, depending on what the effect does. You can mix several effects and sort them to your liking. Eko has a convenient user interface that fits a lot of information into limited space. You can open more than one file at a time and switch between files as you do with tabs in a web browser. There’s also a handy vertical bar to the right of the main wave area. The tabs there enable you to go to Eko’s settings (Tune), browse the filesystem and save/rename files (Manage) or consult the built-in help manual (Learn). Eko doesn’t have any problems with capturing sound either, because it follows your desktop-wide PulseAudio settings, but you can still set custom input/output devices at Tune > Sound Devices. Surprisingly, the project website only offers a Windows build, but there are Eko builds available in many distributions’ (distros) extra repositories (repos), such as Arch Linux and OpenSUSE. If you’re missing your copy, feel free to build Eko from source; it’s as simple as $ qmake && make && sudo make install .

LXFHotPicks Qt style engine

StyleProject Version: Git Web:


You might have noticed that trends have changed in recent years, if you like desktop eye candy as much as we do. There was a burst of customisation around the shiny new KDE 4 between 2009 and 2013, but now most of the attention is on smooth GTK3 themes. It would be unfair for the next-gen Plasma 5 desktop to have only a modest set of styles beyond the default Breeze; we need more than just a bunch of QtCurve configs and a few historic styles in KDE, and that’s why StyleProject is a breath of fresh air. The idea is very similar to Bespin in KDE 4, but the StyleProject developers decided to focus on mimicking OS X to recreate its original, smooth user experience. The screenshot (right) demonstrates it better than words, and most people wouldn’t know that it’s actually the KDE desktop.

StyleProject is more than just another style engine for KDE as it offers noticeably more features than other styles (eg QtCurve). Apart from decorating widgets and controls, StyleProject implements client-side decorations (CSD), content-aware toolbars, user-defined gradients, shadows for buttons (Raised, Sunken, Carved) and other eye candy, from semi-translucent and brushed windows to a global menu bar. All these features are achieved through numerous hacks and by playing with KDE settings. If you combine StyleProject with Dfilemanager

Tired of the QtCurve config? Try StyleProject.

“StyleProject implements client-side decorations (CSD) on Plasma 5.” you’ll get a greater Mac-like feel for your desktop. However, the whole project is highly experimental and there are almost no binaries to try StyleProject right away. Instead, you must build it yourself: $ git clone git:// styleproject/code styleproject-code $ mkdir qt5build; cd qt5build; cmake ../ -DCMAKE_INSTALL_PREFIX=/usr -DQT5BUILD=ON && make && sudo make install This way you’ll compile the StyleProject version for Plasma 5.

Virtual hosts manager

Qt VirtManager Version: 0.27.50 Web:


Not everyone will want to deal with system administration or be up to the task, but those who do need handy and well-designed tools for the job. Managing virtual machines can be tricky, especially when several different virtualisation technologies are in use and the guest machines are located on remote hosts. You might know that Fedora and some other Linux distros ship with a built-in virtualisation stack, which includes the KVM back-end (kernel-based virtual machine), the libvirt library and a desktop application to manage guest machines. Qt VirtManager is an alternative GUI that reveals the power of libvirt and adds a lot of useful features on top of it. Qt VirtManager enables you to add connections to remote machines, including real workstations, virtual guest sessions or application containers. The list of supported variants is solid: Xen, VirtualBox, KVM,

Qemu and VMware etc. More than that, the application is tailored for quick access to application containers, such as LXC and OpenVZ. The idea behind Qt VirtManager is to bring more comfort to libvirt, which, in turn, allows you to access hypervisors running on remote machines through authenticated and encrypted connections. To start, press the ‘gear’ icon on the application’s toolbar and select ‘New Connection’. Fill in the credentials of your remote machine, such as the driver (machine type), transport (TLS, SSH, TCP etc) and, of course, the full path to the target host. Working with remote instances requires you not only to create connections, but also to open them. Once you do that, press the ‘Dock Control’ button to add extra views

Shuffle and play as many remote hosts as you like with this nice management program.

for your host, and you’ll notice new tabs appear along the bottom of the window. Switch between tabs to control the remote host’s network settings, storage pools and running state. To view the remote host’s interface, press the ‘Play’ button and this will open a graphical VNC session with full access to the host. Qt VirtManager can be compiled against either Qt4 or Qt5; it looks modern and brings more convenience to sysadmins who prefer KDE Plasma.

“The application is tailored for quick access to application containers.”


Filesystem

SquashFS Version: 4.3 Web:


We’re a little skewed towards system administration in this HotPicks, but there’s nothing a desktop user should be afraid of. SquashFS is a peculiar technology that will remind most Linux users of watching live distros boot from CD/DVD or USB media. That’s right, SquashFS is the name of a compressed filesystem which can be used in many cases, including on block devices. SquashFS is a read-only filesystem, which means you should use it primarily for unchanged or rarely changed data. Common use cases are file servers (FTP), storage pools with lots of small files or compartmentalised content. Virtually anything can be squashed into a filesystem with a small amount of effort. To start, you’ll need the squashfs-tools or similarly named package installed on your system. Then you can turn a directory into a filesystem image with this single

command: $ mksquashfs /path/to/dir image_name . SquashFS will perform its job using several threads (depending on how many cores your CPU has) and will auto-remove duplicate files. The default compression algorithm is GZIP, but you can change it to LZMA, LZO, LZ4 or XZ by adding the respective option to your command (see $ mksquashfs --help for details). Notice how many cool things you can do with the SquashFS packager: change file ownership on the fly, exclude certain files from the image or adjust the block size to suit your target device. By the way, you can make SquashFS write an image directly to the target device, like this:

Many popular Linux distros use SquashFS, the read-only filesystem, for rolling out live sessions.

$ mksquashfs /path/to/dir /dev/sdX Mounting is done in just the same way as for other filesystems, once you specify the FS type explicitly: $ mount /dev/sdX /path/to/other/dir -t squashfs Finally, a mounted image is read-only, so to change its contents you need to ‘unsquash’ it back using the $ unsquashfs command. If you used custom commands to tweak the contents of the image when squashing, then the unsquash process is the right way to restore your content to its initial state.

“There are many cool things you can do with the SquashFS packager!”

Launcher and search tool

Synapse Version: 0.2.99 Web:


We clearly remember how much fuss there was when instant search appeared on the desktop in the mid-2000s. Apple was boasting about its Spotlight search field that provided near-instant search results for everything on your system. This was achieved by indexing your whole home directory ‘in the background’, which everyone would notice due to constant hard drive rattling. The Linux world responded with the Beagle tool, rolled out by Novell, but, generally, instant desktop search hasn’t received that much attention since then. Nowadays there are even distros that have no search feature at all (eg elementaryOS) and most users feel alright about it. Synapse is a launcher and a search field provider with a rich set of features. It offers semantic search by using the Zeitgeist engine, and it is


written in the Vala programming language. Using Synapse you can fire up applications quickly, find and access documents, and locate files and folders easily; it offers quick access to exactly what you’re looking for. You can launch Synapse using the Ctrl+Space shortcut (and easily change it from the preferences dialog). Using the Left/Right arrow keys you can navigate between different categories to refine your search, or you can extend your search using the Down arrow key. The key strength of Synapse is its universal approach to different entities from one point. You may be looking for a file or an app, a command in Bash

You can customise the greeter to look the way you like.

“Synapse offers quick access to exactly what you’re looking for.”

history, a contact, a photo/video etc. There are so many things that can be searched for. If you wonder how Synapse can do all that, go ahead and look through its plug-in list in Preferences > Plugins. Here you’ll find that Synapse has a lot more connectors, including Launchpad bugs, Chromium bookmarks and Jabber chat history. If you install Synapse for the first time, it will return limited search results because very few events have occurred and been logged by Zeitgeist. Once you’ve been using the system for longer, Synapse’s search results will become much better.

Torrent client

qBittorrent Version: 3.3.5 Web:


What makes qBittorrent special is the legal notice that pops up when you launch the program for the first time, asking that you don’t misbehave and share what you shouldn’t! The qBittorrent interface is very clean, with status, categories and tracker details on the left and the main area next to them with current torrents. At first glance, it doesn’t look like qBittorrent has many options but, in fact, it’s very feature-rich once you start exploring it. The application comes with an integrated search engine, web interface, sequential download support, bandwidth scheduler, advanced RSS support with download filters, torrent creation tool, IP filtering and other useful features. Many users will notice that qBittorrent is a close match for μTorrent, which is considered one of the best torrent clients out there. With qBittorrent you can not only download and

share torrents, but increase/decrease torrent priority flawlessly, visualise speed statistics, label your torrents and sort them by labels. You can also view lots of extra details about your torrents, explore trackers and view your peers etc. Additionally, the application enables you to seamlessly create new torrents yourself by going to Tools > Torrent Creator and filling in the concise form. The latest version, 3.3.5, brings some new features, such as a torrent management mode and a new cookie management dialog, as well as other improvements and bug fixes. You can go to Options > Downloads to set up saving management, and if you set

After a few hours of using qBittorrent, we discovered lots of goodness and zero drawbacks.

Torrent management mode to ‘automatic’, qBittorrent will keep an eye on your *.torrent files and automatically relocate them when you change a category or a saving path for a torrent. Another cool feature is the Web UI, which is something that’s common in Wi-Fi routers (though vendors tend to prefer Transmission). With a few mouse clicks you can launch a fully fledged torrent web server and access all the qBittorrent goodness from a remote host. Very nice!

“With a few mouse clicks you can launch a fully fledged torrent web server.”

Mail client

Trojitá Version: 0.7 Web:


We featured Nylas N1, a super-modern email client that works on cloud-based principles, back in LXF206 [p66]. It was built using the Electron framework, which is, essentially, a tweaked Chromium web engine wrapped for desktop applications. Many people like modern Electron-based apps, but the downside is their size. Perhaps nearly 250MB is too weighty for an email client? If you agree, you may want to consider Trojitá. This is a light Qt5-based email client with a focus on high performance, particularly when it comes to working with several mailboxes at once, each loaded with thousands of emails. Trojitá is specifically designed to work with the IMAP protocol, which is widely used nowadays. This is just a remark, because if you need to work

with a legacy POP3 mail server, you’ll need other software. When you start Trojitá for the first time, a pop-up dialog will ask you to fill in the required credentials. You can provide your personal details and also specify connection details for incoming email (IMAP server) and outgoing email (SMTP server). You can set up Trojitá to use any sort of IMAP-based email service, be it an in-office corporate mailbox or a public service (like Gmail). Due to the way IMAP works, you can customise the offline mode in Trojitá and tell it, for instance, how long it should keep downloaded messages in its cache. The application also provides three modes

Sorting out an endless inbox is a breeze with Trojitá.

“A light Qt5-based email software with the focus on high performance.”

for connectivity and respects your budget if you pay for each megabyte—just select IMAP > Network Access > Expensive Connection and Trojitá will not download attachments and heavy content from your emails. The latest Trojitá 0.7 brings various GUI refinements as well as many bug fixes to the application’s internals. The most visible new feature in this release is perhaps the support for OpenPGP/GPG/S-MIME/CMS/X.509 signatures. Trojitá can now verify and decrypt such emails seamlessly. The ability to sign new emails with such signatures is a proposed future feature.


HotGames Entertainment apps Flight simulator

FlightGear Version: 2016.2.1 Web:


There’s a reason why we postponed our FlightGear review. Although the game is very impressive and receives frequent updates, unless you’re a real pilot there’s little chance that you will do anything useful in-game. This game simulates the plane’s cockpit very precisely and carefully retains all the switches, levers, buttons, handles, lights and all sorts of other controls. The default plane is a Cessna 172P, while the Linux game bundle contains 88 other models. You can find more planes at the FlightGear website and overall there are more than 400 planes to satisfy any fastidious pilot. So the game starts and you want to take off in your shiny cool Cessna… But, first, you need to walk

through the essential training that can be found in the Help > Tutorials menu. Start with the preflight checks and proceed to starting the engine. To do this, set the parking brake on, prime the engine a few times, check that the carburettor heat is cold, set the fuel mixture to full rich, turn the master switch on, do the same with the beacon, turn on both magnetos and – yes – start the engine (and don’t forget to adjust the throttle). FlightGear keeps on training you with a taxiing tutorial and only after that will you know the routines necessary to take off! The game isn’t for the

Throttle up to taxi. No effect? Turn off the brake.

“Not for the impatient but nothing compares to the joy of the first flight!”

impatient, but nothing compares to the joy of the first flight! We strongly recommend running through at least the first eight tutorials, so that you know how to start, taxi, take off and securely land a plane. There are many airports to choose from and the game respects each airport’s time zone, so you can find yourself dropping in to land somewhere in the middle of the night.

Arcade game

Raging Gardens Version: Git Web:


This is a great game for taking a break from work and spending half an hour playing something completely mad and addictive. Raging Gardens has a very low barrier to entry and it even offers a playable online version in the browser. Regardless of whether you play in the browser or install it, the game uses very few system resources. The plot of the game is simple: you control a rabbit on a 2D field and catch carrots that randomly appear in different spots. The playing field is bounded by a fence but it also has various obstacles, such as trees, barrels, tortoises and haystacks. There are many AI-controlled rabbits hanging around eating the carrots, and it’s your job to outrun them. Eating a carrot takes time and


while your rabbit is having his meal (by holding down z), you’ll be willing a small progress bar to fill up so you can dash about some more. While hopping about the field your heroic rabbit can do two things to deter other rabbits. The first trick is farting (by pressing q), which stuns other rabbits for a moment (although it doesn’t slow them down for long), and the other is placing a carrot on a fork as a decoy, which is a more effective way to fool the AI bunnies for a little while. You need to balance your carrot hunting and chomping with careful decoy placement because the latter

Looks like Snake, but you’ll need better reactions.

makes you spend two carrots from your score and you need to collect more than you spend. Each game lasts for three minutes and if you play the online browser version, you’ll be asked to publish your results on Gamejolt for others to ridicule (as usual there are lots of players who have far too much time on their hands). It’s easy to collect about 60 carrots during the match, but you’ll need to be skilful to collect more!

“You need to balance your carrot hunting and chomping with careful decoy placement.”

LXFHotPicks Search tool

Pdfgrep Version: 1.4.1 Web:


The command-line tool grep is impressive and enables you to filter out the information you’ve been searching for from almost anything. But that ‘almost’ means that some limitations exist. Not everything in the Linux terminal is reachable as plain text (we’re looking at you, journalctl), and more than that, some sorts of data are just not designed to be handled in the command line at all. The portable document format (PDF) has been used for decades as a reliable interchange format for publishing houses that needed to print neatly designed layouts on paper. Since PDF was made an official open ISO standard for publishing, its adoption has increased everywhere. We don’t print so much as we used to [speak for yourself – Ed], but PDF is ideal for delivering electronic magazines and books and

there are no signs it will fade in the near future. Pdfgrep is a grep implementation for searching inside PDF files. It uses the libpoppler library for parsing PDF and works in nearly the same way as the regular grep does. For example, let’s find the first 10 occurrences of the word ‘linux’ in our sample issue: $ pdfgrep -nHm 10 linux lxftest.pdf Pdfgrep highlights found patterns by default, and supports many familiar grep options (eg -r , -i , -n or -c ). Of course, there’s a lot more you can do with this application, such as use simple wildcards ( $ pdfgrep ".*" test.pdf ) or even use Perl-compatible
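Because pdfgrep mirrors GNU grep’s options, you can rehearse the same flags on plain text first – the sample file and patterns below are our own illustration, not taken from the magazine’s test PDF:

```shell
# Create a small sample file to experiment on
printf 'Linux rules\nlinux rocks\nBSD too\n' > sample.txt

# -i: ignore case, -n: line numbers, -c: count matching lines,
# -m N: stop after N matches -- all of these work in pdfgrep too
grep -in linux sample.txt
grep -ic linux sample.txt     # prints 2
grep -im 1 linux sample.txt   # stops after the first matching line

# The pdfgrep equivalent of the article's example:
# pdfgrep -nHm 10 linux lxftest.pdf
```

Once the flags feel familiar on text files, swapping grep for pdfgrep against your PDF library is a one-word change.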

The project’s website is concise but it has enough materials to learn how to use the application.

regular expressions. Pdfgrep inspires you to learn the power of the Linux command line and offers a very decent set of options. If you have a massive library of magazines in PDF or a document archive, Pdfgrep will enable you to surf through your whole collection without using the desktop or GUI. It also means that you can store PDFs on a web server and analyse them remotely from the CLI.

“Handles PDFs and works nearly the same way as the regular GNU Grep.”

Photo manager

Shotwell Version: 0.12.3 Web:


We suspect that few still hold all their vacation photos on smartphones or even DSLR cameras anymore. Modern devices can apply clever automatic retouching and enhancements so that most photos are at their best possible quality and you don’t need to retouch or fix them manually yourself. However, sorting and managing a large photo library is a tricky and time-consuming job if done manually. Shotwell is a leading photo organiser and the default photo manager in many Linux distros with GTK-based desktops (eg Gnome, Unity etc). The application sports an easy-to-use interface, with Library items, Events and a Folder tree on the left and image thumbnails on the right. On first startup, Shotwell will ask if you want to import your ~/Pictures directory and will process the lot once you confirm it. One of Shotwell’s most prominent features is tagging. Select one or several photos, right-click the selection

and choose ‘Add tags’. You can enter some keywords that relate to the photos and you can provide many tags at once, simply by separating them with commas. When you have at least one tag, the Tags section will appear on the left panel and clicking a tag will make Shotwell display all images with that tag. Tagging is a wonderful feature that enables you to quickly filter out desired photos by certain criteria, instead of surfing through the entire photo library stream. Of course, tagging takes time and sometimes you’ll need to motivate yourself to perform this routine, but it will save you more time in the future, especially when your photo archive begins to grow. Shotwell also has a useful bottom bar, which has essential

Shotwell makes your photo library even more useful.

image-handling features. Press the ‘Enhance’ button to apply automatic enhancements, and don’t forget about the ‘Rotate’ and ‘Publish’ buttons. Again, all of the above can be applied to more than one image, so consider these features as handy bulk photo-processing tools. The latest version of Shotwell, 0.12.3, is more of a maintenance release and includes bugfixes, rewritten code (eg the legacy gphoto-2.4 code was removed) and minor updates, such as a working Facebook popup window. LXF

“Tagging is a wonderful feature that enables you to quickly filter out desired photos.”


Reviews

Pi user

Giving you your fill of delicious Raspberry Pi news, reviews and tutorials

Joshua Lowe is a 12-year-old coder and the creator of EduPython.



Since the Raspberry Pi was released in 2012 it has opened up a whole new world of opportunities for children to learn about computer science. At last computing is affordable for people of all ages, and children stand a much better chance of learning computer science and programming. However, the mention of computer science can be daunting for some teachers – they don’t feel confident enough with their own coding skills, never mind having to teach it. Some children also feel it’s too hard, but in my opinion we are all on this journey together, and that got me thinking about how to make it less of a barrier. So I have developed a new Python module to help with this problem. EduPython is a companion for the CamJam EduKits and is designed as an easy introduction to electronics and coding for children aged 7 and above. The purpose of EduPython is to create a module that children can use in a classroom environment to aid experiential learning and develop problem-solving skills. Teachers who have used it have loved how easy and simple it was, and the children were amazed at what they could create in such a short space of time. The EduPython project has also branched out, with a SEN (Special Educational Needs) range with flash cards and also a Graphical User Interface to help learn electronics and computer science. I couldn’t have developed my own skills without support from the people at my local Makerspaces and Raspberry Jams and the extensive Raspberry Pi community. You can read more about my project at


The Scratch Olympics Cool Scratch games celebrate the 2016 Olympic Games in Rio.


The recent merger of the Raspberry Pi Foundation and Code Club has seen the first fruits of their newly combined labour. A series of mini Scratch games has been devised to celebrate the coming 2016 Olympic Games over in Rio de Janeiro, Brazil. Taking inspiration from classic games such as Geoff Capes Strongman and Daley Thompson’s Decathlon [See my seminal 1984 Ceefax review – Ed], the team decided to bestow these fun keyboard-bashing games upon a new generation, and the Scratch Olympics were born. Using awesome pixel art from Sam Alder and

Alex Carter, the projects cover weight-lifting, hurdling, synchronised swimming with Scratch the cat performing loops and other moves, and additional games like archery and running. All are available now at the Raspberry Pi resource page (search “Olympics” if need be).

Psibernetix Raspberry Jams

Killer Pi AI 


It seems the Pi has become a killer. ALPHA is an AI system that runs on the Pi, and was developed by Nick Ernest at the University of Cincinnati. When pitted against Col. Gene Lee of the US Air Force, the AI totally outgunned him on a flight simulator.

Pi parties near you! 


Discover a Raspberry Pi Jam taking place near you by visiting the website at

Potton, Sat 20 August The Rising Sun Free to register

Hull, Sat 10 September Malet Lambert School Free to register

Southend Jam, Sat 20 August Hive Enterprise Centre Free to register www.southendtech.

Cambridge, Sat 17 September Adults £3, children free

Power supply Review

S.USV Pi Advanced Les Pounder has always loved Star Trek and often pretends to be Captain Kirk, but can Scotty provide more power? Maybe with this board... In brief... A flexible UPS (Uninterruptible Power Supply) that provides multiple power sources, such as batteries and solar power, for critical Raspberry Pi projects. The board attaches to the GPIO and comes with an extensive suite of software tools that enable configuration and management of even the most minute details of the board and the attached sources of power.


The Raspberry Pi has been used to power many different projects, including some where it is a mission-critical component. For these dedicated projects it is prudent to have a secondary means of powering the Pi, should the mains supply suffer a sudden interruption or shutdown. An Uninterruptible Power Supply (UPS) provides a temporary solution for powering your Pi. The S.USV HAT is compatible with all versions of the Raspberry Pi that come with the 40-pin GPIO and features a pass-through connector enabling the use of other Raspberry Pi add-on boards. The S.USV, being a HAT-compliant board, comes with an EEPROM connected to pins 27 and 28 of the GPIO, which communicates with the Raspberry Pi and handles initial configuration. The rest of the configuration and installation is handled via the terminal, and while it is quite involved and requires a number of changes to the default Raspberry Pi configuration, installation is straightforward and well covered in the supporting documentation. (Sadly the same cannot be said for all of the documentation – more on this shortly.) The S.USV board connects and communicates with the Raspberry Pi using the I2C protocol, requiring only two GPIO pins for data, and an additional 5V and GND connection to power the board. We tested the Advanced version, which comes with a 300mAh (milliamp-hour) battery. A noticeable feature present only on the

Features at a glance

Multiple power sources

The Advanced version has inputs for two power sources – a LiPo battery plus other sources you can connect as backups.

GPIO access

Some add-on boards prevent access to the GPIO but the handy passthrough enables access to it for direct connection or the use of other boards.

advanced model is the inclusion of an alternate power source input, which can be used with any power supply rated between 7V and 24V. The software that handles the functionality of the board is controlled by a background process, a daemon, with options to start, stop and restart it. Installing the daemon also enables the use of two buttons: S1 (used to turn off or reboot your Pi) and S2 (which can be used to power up your Pi). The software is installed to /opt/susvd and comprises the aforementioned daemon and a client tool, which can provide the status and configuration of the board and the attached power source via a handy terminal interface. The client tool has options to read and set the charging current for the battery, and we used it to ensure that the supplied 3.7V 300mAh battery was charged and ready for use. Also present are options to create shutdown timers and timed boots, using the clock embedded in the board, and the facility to upgrade the firmware to ensure that your board is running the latest version. The documentation is generally great and it covers every aspect of installing and configuring the board; it is just let down by a few errors that cause “hiccups” for the user. On the plus side, this board provides full access to all of the GPIO pins – and since it uses I2C you can easily add more boards to your Raspberry Pi projects with little fear of conflicts. There are other UPS boards on the market and when considering a purchase it would be prudent to look around for a board that provides exactly the functionality you require. LXF

The S.USV board fits very neatly on top of all models of 40-pin Raspberry Pi.

Power user

Sadly the documentation for this board has a few issues, chiefly found when trying to enable the real-time clock (RTC). We found a few faults with the configuration necessary to enable the RTC, and this meant that we spent rather too much time enabling such a simple, yet key, piece of functionality. Hopefully this can be fixed in future revisions of the documentation. This board promises so much, and it does deliver on that promise – it just needs a little refinement to make it a better product.

Verdict S.USV Pi Advanced Developer: SEProtronic GmbH Web: Price: €54.99 (about £46)

Features 8/10
Performance 8/10
Ease of use 6/10
Value 6/10

A powerful UPS board that offers so much flexibility. A few documentation issues but well worth considering.

Rating 7/10


Raspberry Pi Lifecam Tutorial

Pi Zero: Lifecam

Les Pounder shows us how to use the new Raspberry Pi Zero v1.3 and official Pi Camera to create a device to instantly capture the moment.


Our expert Les Pounder

works with the Raspberry Pi Foundation as one of its Picademy trainers. He hacks all manner of projects to life and blogs about them


Pi Zero only

The new Raspberry Pi camera and the latest version of the Pi Zero with camera interface offer a compact HD recording platform, and with it we can easily build our own ‘lifecam’ that can be worn slung about our person while out and about—and that’s precisely what we’re going to build in this month’s tutorial! For this project you will need a Raspberry Pi Zero 1.3, a Pi Camera, the latest release of Raspbian, one LED, one 220 ohm resistor (RED-RED-BROWN-GOLD) and one momentary switch/pushbutton. All of the code for this project can be downloaded from With the Raspberry Pi turned off, we install the ribbon cable for our camera into the Raspberry Pi. For Pi Zero users this is a new port on version 1.3 of the board and requires the purchase of an adaptor cable. Other users should insert the camera ribbon cable into the Camera port. On every Raspberry Pi, ensure that you carefully pull the tabs to unlock the port. For Pi Zero users, ensure the metal tips of the cable are facing the Raspberry Pi board; for other Pi users, ensure that the metal tips are facing the Ethernet/USB ports. Press the tabs to gently hold on to the cable. For your button you can use any type of momentary switch; commonly these are four-pin square buttons, but you can purchase others from retailers. We used a small-capped button and soldered wires to connect it to the GPIO pins. For the LED, we used a multi-colour LED and soldered a 220 ohm resistor to the long leg, the anode. We then soldered wires to each leg to connect to the GPIO. To power the unit we used a USB battery pack that we picked up from eBay.

Setting up the camera

Quick tip Choosing the right container for your project is a tricky task. Project boxes are the semi-professional way to get good results. But if you have access to a 3D printer or laser cutter you can make your own. Low-cost solutions are slide boxes and plastic trays.

When we first boot our Raspberry Pi we will need to enable our camera. From the main menu navigate to Preferences and then to the Raspberry Pi Configuration application. In the application look for the Interfaces tab and click on it. In the Interfaces section click to enable the camera. Now click ‘OK’ and reboot your Raspberry Pi. Once your Raspberry Pi has rebooted, navigate to the main menu, go to Programming and then open Python 3. Once the editor has opened, click on File > New to create a new blank file. Immediately save this blank document as We start the code for this project with a configuration step that forms part of making our Python code executable outside of the Python 3 editor. It instructs Raspbian to use the Python 3 environment installed as standard: #!/usr/bin/env python3 Next, we import a series of modules that will enable our code to use extra functionality. import picamera from gpiozero import Button, LED import time import datetime from signal import pause Our first import is the picamera library, necessary to work


Our finished project is encased in a project box that we purchased online. We used a Dremel to drill holes in the case. Please be careful if you try this.

with the camera. Our second import brings in two classes from the gpiozero module (a module that enables easy use of the General Purpose Input Output). The first class is Button , for our input device, and the second is LED , for our output device. Another required import is time , a module that we use to control the pace of our project. We use the datetime module to create a time stamp for our video filenames. Last, we import the pause function from the signal module. Next, we include two variables which will contain the location of our button and LED. button = Button(17) led = LED(27) We imported the Button class from GPIO Zero and so now we create a variable called button that will store the GPIO pin used for our button, in this case 17 . The Button class automatically pulls pin 17 high, so it’s active. When we press the button we momentarily pull pin 17 to Ground (GND), which is the other pin attached to our button. This change in state is what triggers our code to start. The led variable records the GPIO pin used ( 27 ) for our multicolour LED. We now create a function, which is used to group code into one place that we can execute by calling its name. Our function is called video . Note: code inside our function is shown indented, to identify that it’s part of the function: def video(): print("RECORDING") led.on()

Powering the project For this project we wanted to create a portable device that could be easily worn with little discomfort, so we elected to use the Raspberry Pi Zero with the new Pi camera. We enclosed the project in a project box, which can be bought from various electronics retailers. But our biggest issue was powering the project. Yes, we could use a standard USB powerbank, but that would be too large. Luckily, we found a credit-card sized battery pack on eBay. This Lithium Polymer (LiPo) battery provides plenty of power and is relatively light,

but we had to be careful, as LiPo batteries are sensitive and can be dangerous if punctured or damaged. We carefully removed the battery and its protection driver board from the case. We soldered wires from the output of the driver board to connect to the 5V and GND pins of our Pi, powering the Pi directly from the GPIO. We knew that the LiPo was providing 5V at around 1 Amp, and that was within the tolerances of the Pi. We protected the LiPo battery and driver board using hot glue and card to form a barrier between the Pi Zero and any loose components.

If we used a normal GPIO header with our Pi Zero, it wouldn’t have fit in the case. So we purchased a rightangled connector to give us enough space.

Our first line in the function prints to the Python shell that we are recording. This line is optional but it is a handy debug step. Our second line of code turns the LED on ( led.on() ) for the duration of the video recording. Still inside our function, we now use with to reference the PiCamera module as camera : with picamera.PiCamera() as camera: The code (below) is indented so that it’s still inside the with statement: timestamp = str( timestamp = timestamp[0:19] We create a variable called timestamp that will contain the current date and time, taken from the function. The datetime is also converted to a string using the str() helper function. On the following line we update the contents of the timestamp variable, using string slicing to keep only the date and time information that we require. Typically this is positions 0 to 19 ( [0:19] ) and contains the date and time in hours, minutes and seconds.

We now start the recording of our video: camera.start_recording(timestamp + ".h264") camera.wait_recording(30) We use the timestamp variable that we created earlier as the filename, and using concatenation we join on the ".h264" file extension to identify that it’s a video. We then instruct the code to record for 30 seconds to capture the action. We now stop the recording and stop the preview window. Remember the preview window is an optional step and can be removed: camera.stop_recording() camera.stop_preview() With the video recording over, we now need to call the LED class and turn off the LED attached to pin 27 of the GPIO. This ends all of the code for our video function. We now move on to code that uses no loops; rather, we use a function from the GPIO Zero Button class that will wait for the user to press the button on pin 17. button.when_pressed = video pause() When pressed, it will call our video function and run the code. The last line of code simply calls the pause function, which will prevent our code from exiting. With the code done, save your work and click on Run > Run Module to test it. Remember to press the button to start the recording. To view your videos, power down the Pi, insert the micro SD card into a reader on your PC and use a viewer such as VLC. LXF

Recording footage We now set the resolution of our camera; in this case we are recording at 1,280 by 720 pixels, commonly referred to as 720p. You can change the resolution up to 1080p or down to 640 by 480 pixels: camera.resolution = (1280, 720) Next, we call the preview window so we can see whether the camera is working correctly. This line can be removed from the final project, but it’s handy for debugging purposes: camera.start_preview()

The circuit diagram for this project is very simple, we have included a high resolution version of this in the download for this project.

Get print and digital subs See


Raspberry Pi Piratebox

Pi-ratebox: Offline shared storage

Nate Drake shares a treasure map to turn your Raspberry Pi into a secure device for offline chat, forums and sharing your media.

Our expert Nate Drake

enjoys writing about security and retro tech, when not fixing machines for Apple. [What!? – Ed] His girlfriend has strictly forbidden him to put her Pi up a tree.




Quick tip The Piratebox software can also run on any router running OpenWRT or a rooted Android device. See http://piratebox. cc for detailed instructions.

Back when the Information Superhighway was the nosy old lady who owned the corner shop, Secret Agents would often exchange messages by placing them at a prearranged location under a rock, to be collected later by another shadowy figure in a trench coat and dark glasses. With the revelations about mass surveillance in recent years, privacy has never been more important, and as conventional wisdom seems to be that the only way to secure data is to keep it offline (and under a rock), it was only a matter of time before technology provided us with a way to keep your data safe in a secure offline vault. Aram Bartholl has already given us one such project, which involves a ‘sneakernet’ of USB sticks stuck into brick walls and cement floors with useful information. Now there’s a wireless, high-tech equivalent.

What’s the PirateBox? The PirateBox is described on its website as a device which creates offline networks for anonymous file sharing, chatting, message boarding and media streaming. The reason this piqued our interest is that some of the team are keen Geocachers. For those who don’t know, this rather anorak-like hobby involves obtaining clues online about the GPS coordinates of caches of hidden goodies and rambling the countryside to retrieve them. Combined with a battery pack or solar panel, the PirateBox could easily be placed somewhere discreet, like up a tree, allowing fellow cachers to connect and download any digital goodies you see fit to bestow, such as a congratulatory video or virtual medal. The PirateBox also has more mundane uses, such as allowing people to work together on sensitive projects, supplying material to conference-goers etc. Common sense dictates that you need a dedicated Raspberry Pi for this purpose to keep your files safe. The PirateBox turns your Pi into a wireless Access Point. Technically, you could manually install other files and programs onto your SD card, but this would defeat the purpose of a PirateBox. Now that you have been suitably warned and are hopefully already planning who you’re going to share your PirateBox with, you’ll naturally need a Raspberry Pi of your own. The website suggests that versions A/B, B+, Zero and Version 2 are compatible, but this worked on our Raspberry Pi 3 perfectly, too. As we are engaging in skullduggery (pun intended but instantly regretted), it would make sense to invest in a


An example of the image board you can access on a Piratebox as endorsed by Penny the Wonder Poodle.

new Raspberry Pi. The Pi Zero is a good choice, as it’s the least expensive. You’ll also need the requisite SD card, a USB Wi-Fi adaptor, a 16GB USB flash drive formatted to FAT32 and a 5V power adaptor. If you are thinking of making the device portable then you could power it from a 5V battery pack. You can buy all of the above by investing in a Raspberry Pi Zero and Essentials kit from the good people at Needless to say, you’re also going to need access to a computer with a working Ethernet port and a BitTorrent client, such as Transmission. The first step is to use your computer to download the files for the SD card from the PirateBox website and install them onto the SD card as outlined in the set up section. While it’s technically possible to do this manually from the Pi, and there are instructions for this on the PirateBox website, the image is much simpler, so stick with it unless you have an extremely good reason to tinker with the default set up. Once this has been done, simply connect the FAT32-formatted USB drive and the Wi-Fi adaptor to your Raspberry Pi. Also connect your power source of choice (if you are Geocaching this may be the point at which you break out your solar panel) and insert the SD card. Next connect the Pi via Ethernet cable to your router to begin to access it via

Changing Default options

The PirateBox set up is ridiculously simple but there may be a few tweaks you wish to make. Chief among these is that you may wish to change the name of your access point from ‘Pirate Box : Share Freely’ to something less conspicuous. You can do this fairly easily by accessing the box via SSH, then entering the command vi /etc/config/wireless which will allow you to change both the wireless channel and the network name. Be sure to type :wq when you are done, to save any changes and quit. By default the PirateBox’s network also has no encryption; adding it is a must if you’re going to be accessing sensitive data without anyone snooping on your connection. Enter the command vi /opt/piratebox/conf/hostapd.conf and change the following options as needed to enable WPA-PSK:
interface=wlan0
driver=yourdrivernamewillshowhere
ssid=yournetworknamehere
hw_mode=g
channel=1
ieee80211n=1
wmm_enabled=0
wpa=1
auth_algs=1
wpa_passphrase=writeyourpasswordhere
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP
macaddr_acl=0
Simply type /etc/init.d/network reload to apply the above changes. Also don’t forget to supply your shipmates with the WPA password. If English isn’t your first language, or skulls and crossbones simply aren’t your thing, you can also modify the text of the default welcome page by entering the command vi /opt/piratebox/www/index.html and changing the text/image links as you see fit. (Note you don’t have to use the vi editor.) By default, the PirateBox also clears the chat history each time the device is restarted. If you want to change this, simply enter the command vi /opt/piratebox/conf/piratebox.conf and change RESET_CHAT="yes" to RESET_CHAT="no" .
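If you’d rather script that last tweak than open vi, sed can make the same change. A sketch, run here on a throwaway copy – on the box itself, point it at /opt/piratebox/conf/piratebox.conf:

```shell
# Work on a local copy for illustration; use the real path on the PirateBox
printf 'RESET_CHAT="yes"\n' > piratebox.conf

# Flip the option so the chat history survives a restart
sed -i 's/^RESET_CHAT="yes"$/RESET_CHAT="no"/' piratebox.conf

grep RESET_CHAT piratebox.conf   # prints RESET_CHAT="no"
```

The anchored pattern only touches the exact line, so the rest of the config is left alone.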

SSH. Once the software is set up, you can connect to the PirateBox Access Point from any device. The network will appear under the SSID ‘Pirate Box : Share Freely’, though it is possible to change this. Once connected, open your browser of choice and try to navigate to any website address. You’ll be taken to the PirateBox portal. If you’ve ever used 4Chan or one of its many clones, Kareha will pose no difficulties. Having tested the waters and had a highly entertaining conversation with yourself on the discussion boards, you may decide to move on to streaming video. As the PirateBox is kept offline, it’s not possible to download video or music directly from the internet. However, if you have content already on your computer, you can upload this via a web browser. It can take a long time, however, to transfer large files over wireless, so the recommended method is to dismount the USB stick from the Raspberry Pi, connect it to your computer, then copy and paste your media files into the Shared folder on the USB stick. Insert the USB stick into the Raspberry Pi to make the content available to the PirateBox’s network. This has the added advantage that you can stay connected to the internet on your computer while the transfer is taking place. To play the content, any UPnP client will do, although the PirateBox website recommends Kodi, the open source home theatre software (formerly XBMC), which has the advantage of being available for Linux as well as Android devices.

For fellow Francophiles, a French version of the welcome page.

Modding the box

With any luck by now you’ll have applied your Pi-Patch, boarded your box via SSH and have a welcome page linking to Captain Kidd’s treasure map or something similar on a secure connection. Having achieved your ideal of a secure offline vault for your files and invited your friends/colleagues to come and chat with you over the PirateBox, you may be at a loss how to take this further. The chief advantage of the Piratebox is that you can securely exchange files with people in close proximity to you. This could be used, eg, by a band playing a gig who only wish to share music with audience members. It also can be used at your next CryptoParty (what do you mean you’ve never been to one?) to securely exchange keys. Since founder David Darts open-sourced the whole project, others have leapt onto the idea, eager to take it further and apply the PirateBox in new ways. Chief amongst these is the LibraryBox project, a fork that concentrates specifically on providing books to those living in areas without internet access, such as African villages. If a trip to Africa is not currently on the cards, you may prefer to host an outdoor vault like the Keepalive Project in Niedersachsen, Germany. Launched last year, the project involves a PirateBox set inside a large stone boulder, which also contains a thermoelectric generator capable of converting fire to electricity. Passersby are then encouraged to light a small fire on a plate next to the boulder to power up the Piratebox and retrieve a number of PDF guides on survival skills, presumably including how to set a fire. The Keepalive project had a rocky start but it’s impossible to slate this igneous idea. Sorry. For those less down to earth, a much easier and more

Quick tip You can view tips for optimising your video/music playback as well as setup instructions for specific players at: https:// streaming_media.

Many wide and varied projects have been developed, including this one in Germany where a Piratebox inside a boulder can be charged up by lighting a fire.

Love your Pi more: Subscribe and save at


sensible application may be to see if you can buy a reinforced case for your Raspberry Pi. Labels with the PirateBox logo, if you want to promote the project, can be printed from the PirateBox website or bought for a few pieces of eight from

Man Overboard

Quick tip According to the PirateBox forum, you can use an external hard drive rather than a USB stick if you think you'll need more space. Just bear in mind a hard drive will draw more power if you're using a portable battery pack.

Due to the fact that there's an image for the SD card and the website outlines instructions clearly, there's very little that can go wrong with your initial setup. However, from browsing the annals of the interweb, it's clear that most people who encounter issues do so because their Wi-Fi adaptor isn't compatible with PirateBox's software. If you are happy to buy a new one, you can head over to compatibility to see a list of all known compatible adaptors. Alternatively, you can try to do a manual installation of the necessary driver software onto your Raspberry Pi; the steps for doing this are available on the website. If you make any irrevocable changes, it's also possible to simply install the PirateBox software again by following the setup steps. If you wish to use the vault as part of some elaborate digital scavenger hunt, bear in mind that the battery pack will

The Raspberry Pi Zero and essentials kit. Remember to purchase a USB stick separately.

This is what we're building—the crossbones logo is entirely optional, as is the talking pirate.

only last a few hours. Also make sure you have permission from the land or property owner before you place a box there. In the current climate, a suspicious box found hidden in a public place is liable to waste a lot of valuable police time! You may wish to take your box to a public place to see who logs on. (Note: we do not recommend using it at an airport.) When you do this, it might be an idea to change the name of your network to something a little more friendly, as the word 'pirate' is also likely to generate some unwanted attention. On the other hand, if you take the device to your local Starbucks and find that people are logging on and posting images like crazy, you may want to enable the admin feature on the forum so you can moderate them. You can do this by modifying the config file for the image board, located in /opt/piratebox/www/board/ , with your favourite text editor. Simply set an admin username and password by filling in the fields there. As your USB drive is formatted to FAT32, do make sure you tell your shipmates not to try to place any files on your PirateBox that are larger than 4GB. Finally, if you do get your PirateBox up and running, don't hesitate to post, tweet and chat about it, as there are exciting plans afoot to provide a feature that will turn individual PirateBoxes into a 'mesh network'. If you think having your own data vault is a cool idea, then think how much cooler it would be to have your own private internet! LXF

Boarding via SSH To use SSH for accessing your PirateBox, open your terminal on your computer and type the following command: ssh alarm@alarmpi . The system will ask you for the password. The default password is alarm. You will certainly want to change this via the passwd command to keep your PirateBox safe. If the ssh command times out, the most likely reason is that you need to give your Pi a few more minutes to boot up. If time fails to heal this particular wound, then your Wi-Fi adaptor may not be supported. Feel free to visit the PirateBox website for a list of mods or to ask for further help on the forum. Assuming all went to plan, if you are planning on hosting a large amount of images etc, this would be a good time to tell the PirateBox to use

the USB stick and not the SD card for storing files. Once you've accessed the PirateBox, you'll need to create a permanent mount point for the USB stick:
sudo sh -c 'echo "/dev/sda1 /mnt/usbdrive vfat uid=nobody,gid=nogroup,umask=0,noatime,rw,user 0 0" >> /etc/fstab'
sudo mkdir -p /mnt/usbdrive
sudo mount /mnt/usbdrive
Next, move the folder for shared files from the SD card to the USB stick:
sudo mv /opt/piratebox/share/Shared /mnt/usbdrive
sudo ln -s /mnt/usbdrive/Shared /opt/piratebox/share
You can also move the folders for the Kareha image boards:
sudo mv /opt/piratebox/share/board/src /mnt/usbdrive/kareha_uploads
sudo ln -s /mnt/usbdrive/kareha_uploads /opt/piratebox/share/board/src
If you see a permissions error at this stage, ignore it. Finally, restart the PirateBox by running the command sudo systemctl restart piratebox . From now on, all your PirateBox files will be stored in two dedicated folders on the USB stick named kareha_uploads and Shared.
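Once the folders have been moved, it's worth confirming that the symlinks really do resolve onto the USB stick before inviting anyone aboard. A small hedged check, demonstrated on a throwaway directory (on a real PirateBox you would pass the paths used above, such as /opt/piratebox/share/Shared and /mnt/usbdrive):

```shell
# Hedged sketch: report whether a shared folder is a symlink that
# resolves onto the USB mount point.
check_on_usb() {
  link=$1; usb=$2
  target=$(readlink -f "$link")
  case "$target" in
    "$usb"/*) echo "OK: $link -> $target" ;;
    *)        echo "WARNING: $link is not on $usb" ;;
  esac
}

# Demo with temporary directories standing in for the real paths:
demo=$(mktemp -d)
mkdir -p "$demo/usbdrive/Shared" "$demo/share"
ln -s "$demo/usbdrive/Shared" "$demo/share/Shared"
check_on_usb "$demo/share/Shared" "$demo/usbdrive"
rm -rf "$demo"
```

On the box itself, also run `mount | grep usbdrive` to confirm the stick is mounted at all.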

If you can find a Pirate-themed USB stick, so much the better.


How to set up PirateBox


Download the PirateBox software


To get started you'll need to use your favourite BitTorrent client (if you don't have one, use Transmission) and download a copy of the PirateBox software. You will need piratebox_rpi_1.0.7-1.img.zip if you're running a Raspberry Pi 1 A, B, B+ or Zero; the Raspberry Pi 2 and 3 need a separate image. Magnet links are available at:
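Large disk images can corrupt in transit, so before writing anything it's worth verifying the download. A minimal sketch, assuming the download page publishes a SHA-256 checksum for the image (the filename below is a placeholder, so adjust it to whatever you actually downloaded):

```shell
# Hedged sketch: check a downloaded image's SHA-256 hash against the
# value published on the download page. IMAGE is a placeholder path.
IMAGE=~/Downloads/piratebox_rpi_1.0.7-1.img.zip
if [ -f "$IMAGE" ]; then
  sha256sum "$IMAGE"
  # If a matching .sha256 file is supplied, verification is one command:
  # sha256sum -c "$IMAGE.sha256"
else
  echo "Image not downloaded yet: $IMAGE"
fi
```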


Bring your PirateBox to life


Once the image has been written to the SD card (this takes around 5-10 minutes), eject it from your computer and insert it into the Pi. Make sure your power cable, Wi-Fi adaptor and USB stick are connected too. Connect the Pi to your router and switch it on. The Pi should start automatically. At this stage you can connect to it via SSH and update the password (see Boarding via SSH, p64).


Set up and activate the media server

Activate the Media Server by copying over the necessary config files:  sudo cp /etc/minidlna.conf /etc/minidlna.conf.bkp sudo cp /opt/piratebox/src/linux.example.minidlna.conf /etc/minidlna.conf

Next, start the media server by running these commands: sudo systemctl start minidlna sudo systemctl enable minidlna

Extract and write your image

Right-click your file of choice in the Downloads folder and choose 'extract here'. At this point we need to place the image on your SD card. Open a Terminal and use df -h to work out the device name of your SD card, then use cd to navigate to the folder where the PirateBox image is stored and run dd to copy it onto the card. Write to the whole device, not a partition, eg sudo dd bs=4M if=piratebox_rpi_1.0.7-1.img of=/dev/sdb .
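Because dd will happily overwrite whatever device it is pointed at, double-check the target first and flush the write cache before pulling the card out. The device name below is a placeholder; the scratch-file demo simply shows the same dd mechanics without touching real hardware:

```shell
# Double-check the target before writing: 'lsblk -o NAME,SIZE,TYPE'
# lists every block device. /dev/sdX below is a placeholder; write to
# the whole device, not a partition, and sync before ejecting:
#   sudo dd bs=4M if=piratebox_rpi_1.0.7-1.img of=/dev/sdX
#   sudo sync

# The same mechanics demonstrated on a scratch file:
img=$(mktemp); copy=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=4 2>/dev/null
dd if="$img" of="$copy" bs=1024 2>/dev/null
cmp -s "$img" "$copy" && echo "byte-for-byte copy"
rm -f "$img" "$copy"
```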

Set up an image board

You can activate the Kareha Image Board and Discussion forum using the autoconf tool over SSH. Just run the command sudo /opt/piratebox/bin/ to do so. You should also activate the 'timesave' functionality with the following command sudo /opt/piratebox/bin/ /opt/piratebox/conf/piratebox.conf install followed by sudo systemctl enable timesave .


Connect to your PirateBox

Your PirateBox should now be ready to go. You'll need to disconnect the Ethernet cable from the Raspberry Pi and connect to its wireless network on your chosen device (by default this is called 'PirateBox - Share Freely'). When you try to visit any website, you'll be taken automatically to the main page. At this stage you may want to change some of the default options.


Tutorial Terminal: Learn how to access X Window apps remotely using SSH

Terminal: Remote access Nick Peers uncovers how to run another computer's X Window applications through your own desktop with SSH.


Our expert Nick Peers

has been using  SSH to maintain  his Pi-powered  Plex media server.  Now the thought  of installing a few  X Window apps  for remote  access purposes  takes on a whole  new appeal.

One of the great things about the Terminal is that it allows you to access and control another PC remotely using SSH. This is particularly useful if you have a PC set up as a dedicated server, one that's running headless (so no attached monitor or input devices), as it enables you to tuck it away somewhere while retaining easy access to it from another computer. Systems running Ubuntu typically use OpenSSH to manage command-line connections—this basically gives you access from the Terminal to the command line of your target PC, but what if you need to run an application that requires a graphical interface? If your target PC has a desktop environment in place, such as Unity, then you could investigate VNC as an option for connecting the two. Most dedicated servers, however, don't ship with a desktop in place to cut resources and improve performance. Thankfully, you can still access the GUI of an application through the X Window system with SSH, using a process called X Forwarding. This is done using the X11 network protocol. First, you need to set up both PCs for SSH—if you're running Ubuntu,

then the OpenSSH client is already installed, but you may need to install OpenSSH Server on your server or target PC: $ sudo apt-get update $ sudo apt-get install openssh-server Once installed, switch to your client PC while on the same network and try $ ssh username@hostname . Replace username with your server PC’s username, and hostname with its computer name or IP address, eg nick@ubuntu . You should see a message warning you that the host’s authenticity can’t be established. Type ‘yes’ to continue connecting, and you’ll see that the server has been permanently added to the list of known hosts, so future attempts to connect won’t throw up this message. You’ll now be prompted to enter the password of the target machine’s user you’re logging on as. Once accepted, you’ll see the command prefix changes to point to the username and hostname of your server PC (the Terminal window title also changes to reflect this). This helps you identify this window as your SSH connection should you open another Terminal window to enter commands to your own PC. When you’re done with the connection, type exit and hit

Disable password authentication SSH servers are particularly vulnerable to password brute-force attack. Even if you have what you consider a reasonably strong password in place, it’s worth considering disabling password authentication in favour of SSH keys. Start by generating the required public and private SSH keys on your client: $ ssh-keygen -t rsa -b 4096 Hit Enter to accept the default location for the file. When prompted, a passphrase gives you greater security, but you can skip this by simply hitting Enter. Once created, you need to transfer this key to your host: $ ssh-copy-id username@hostname (Note, if you’ve changed the port number


for TCP communications you'll need to specify this, eg ssh-copy-id nick@ubuntuvm -p 100 ) Once done, type the following to log in: $ ssh 'user@hostname' Specify your passphrase if you set it for a more secure connection. You can then disable insecure connections on the host PC by editing the sshd_config file to replace the line #PasswordAuthentication yes with: PasswordAuthentication no Once done, you'll no longer be prompted for your user password when logging in. If you subsequently need access from another trusted computer, copy the key file (~/.ssh/id_rsa) from your client to the same location on that computer using a USB stick.
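The PasswordAuthentication edit described above can also be scripted rather than done by hand in an editor. A hedged sketch using sed, demonstrated on a scratch copy since sshd_config contents vary between installs; on the real server you would point conf at /etc/ssh/sshd_config and run the sed under sudo:

```shell
# Flip PasswordAuthentication to 'no', handling both the stock
# commented-out line and an existing explicit 'yes'. Demoed on a
# scratch file standing in for /etc/ssh/sshd_config.
conf=$(mktemp)
printf '#PasswordAuthentication yes\nX11Forwarding yes\n' > "$conf"
sed -i 's/^#\{0,1\}PasswordAuthentication .*/PasswordAuthentication no/' "$conf"
grep '^PasswordAuthentication' "$conf"
rm -f "$conf"
# On the real server, apply the change afterwards with:
# sudo service ssh restart
```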

Generate SSH public and private keys to ensure a more secure connection.

Set up restricted access You can grant different levels of access to the server to different users if you wish, too. If necessary, you can create a limited user on the server – the simplest way to do this is through the User Accounts system settings tool, where you then enable the account and set a password – and then log into that account once to set it up. Once done, log off and back into your main

account, then add the following line to the end of your sshd_config file: Match User username Beneath this line, add any specific rules you wish to employ for that user: for example, you could grant a limited user access using a password rather than an SSH key with PasswordAuthentication yes

Enter to close the connection. You’ve now gained access to the remote server through your own PC. How do you go about running a desktop application on the server through your own PC’s GUI? You need to enable X11 forwarding. This should be enabled by default on the server, which you can verify by examining OpenSSH’s configuration file: $ sudo nano /etc/ssh/sshd_config Look for a line marked X11Forwarding yes and make sure it’s not commented out (it isn’t by default). Assuming this is the case, close the file and then verify the existence of xauth on the server with $ which xauth . You should find it’s located under /usr/bin/xauth. Now switch back to your client PC and connect using the -X switch: $ ssh -X username@hostname . Test the connection is working by launching a GUI from the command line, eg $ firefox & . The Firefox window should appear, indicating success. Note the & character, which runs the application in background mode, allowing you to continue issuing commands to the server through Terminal. You can now open other remote apps in the same way, eg nautilus & . It’s also possible to SSH in precisely for the purposes of running a single application, using the following syntax: $ ssh -f -T -X username@hostname appname When you exit the application, the connection is automatically terminated too.
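Typing full username@hostname strings for every connection soon gets old. OpenSSH reads per-host shortcuts from ~/.ssh/config on the client; a minimal sketch, with the alias, address and user as illustrative placeholders:

```shell
# Append a host alias to the SSH client config. The alias, address
# and user below are illustrative placeholders; substitute your own.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host myserver
    HostName 192.168.1.50
    User nick
    ForwardX11 yes
EOF
```

With that in place, `ssh myserver` behaves like `ssh -X nick@192.168.1.50`.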

Remote graphical access If you're solely interested in accessing your server over your local network, you're pretty much set, but SSH is also set up by default to allow tunnelled network connections, giving you access to your server over the internet. Before going down this route, it pays to take a few precautions. The first is to switch from password to SSH key authentication (see Disable Password Authentication, bottom, p66). Second, consider signing up for a Dynamic DNS address from a DDNS provider—this looks like a regular web address, but is designed to provide an outside connection to your network (typically over the internet). There are two benefits to doing this: first, it's easier to remember a name than a numeric IP address, and second, the service is designed to spot when your internet IP address changes, as it usually does, ensuring the DDNS continues to point to your network. SSH uses port 22 for its connections – this resolves automatically within your local network, but when you come to connect over the internet you'll probably find your router stands stubbornly in the way, rejecting any attempts to connect. To resolve this, you'll need to open your router's administration page and find the forwarding section. From here, set up a rule that forwards any connections that use

If your server has multiple users set up, but you wish to only give remote access to a select few, limit access by adding the following lines (make sure these go above any Match User lines you define): AllowUsers name1 name2 name3 Alternatively, you could restrict by group: AllowGroups group1 group2 The sshd_config file allows you to fine-tune who has access to your server over SSH.
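Put together, these per-user access rules form a short sshd_config fragment. The user names below are purely illustrative, and any Match blocks must sit at the end of the file since everything after a Match line applies only to matching connections:

```
# sshd_config -- example access-control fragment (names illustrative)
AllowUsers nick guest

# Global policy: SSH keys only.
PasswordAuthentication no

# Match blocks come last; this one lets 'guest' use a password instead.
Match User guest
    PasswordAuthentication yes
```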

port 22 to your server PC using its local IP address (192.168.x.y). Once set up, connecting over the internet should be as simple as $ ssh -X . If you'd like to fine-tune your server settings further, reopen the sshd_config settings file using nano and focus on these areas. First, consider changing the port SSH uses from 22 to something less obvious, such as 222. Remember to update your port-forwarding settings on your router if applicable, and connect using the following syntax: ssh -X user@host -p 222 . If you plan to use SSH key authentication (see bottom, p66), set that up before changing the port due to a bug. If you'd like to restrict access to your server to the local network only, the most effective way to do this is by removing the port forwarding rule from your router and disabling UPnP. You can also restrict SSH to only listen to specific IP addresses—add separate ListenAddress entries for each individual IP address or define a range: ListenAddress ListenAddress There are two specific settings relating to X Window access: if you want to disable it for any reason or disable it for specific users (see the 'Set up Restricted Access' box, above), then set X11Forwarding to no . The X11DisplayOffset value of 10 should be fine in most cases—if you get a Can't open display: :0.0 error, then some other file or setting is interfering with the DISPLAY value. In the vast majority of cases, however, this won't happen. After saving the sshd_config file with your settings, remember to restart the SSH service to enable your changes: $ sudo service ssh restart . One final thing—if you want to use X forwarding without having to type the -X flag every time, you need to edit a configuration file on your client PC: $ nano ~/.ssh/config This will create an empty file—add the line: ForwardX11 yes . Save the file and exit to be able to log on using SSH and launch graphical applications. LXF



Tutorial Ubuntu Server: Create a MAAS setup and add Juju for deploying software

Ubuntu Server: Bare metal Does Ubuntu Server’s Metal as a Service carry any weight?


Our expert Mayank Sharma

has configured so many of his devices for anonymous use lately that he's confused about who he is anymore. It's all gone a bit A Scanner Darkly.

Distributing software and infrastructure as services has established itself in the mainstream with solutions such as Google Docs and Amazon Web Services. There's also Canonical's Metal as a Service (MAAS), which is designed to simplify the provisioning of individual server nodes in a cluster. In many ways, MAAS is similar to Infrastructure as a Service (IaaS) in that it's a mechanism for provisioning a new machine. However, the key difference between the two is that while IaaS usually deals with virtual machines, MAAS is designed to provision bare metal. When we say provisioning bare metal, we really mean that MAAS is designed to bring a server with no operating system installed to a completely working server ready for the user to deploy services on. You can also use MAAS to configure hardware and make sure the deployed machines are recognised by existing network management software, such as networking monitors. Canonical's MAAS relies on PXE to control the other servers in its realm, pretty much like other provisioning software. It employs a web-based administrative interface to help you manage the nodes. As soon as it detects a new node, MAAS steps in, registers it and then provides it with a server image for installation. Primarily, MAAS supports deploying Ubuntu images. These are fully supported by Canonical. Additionally, you can also use MAAS to deploy custom images that can be modified Ubuntu images or images of CentOS, RHEL, OpenSUSE and even Windows Server. MAAS supports multiple architectures, including x86, x64 and ARM A8 and A9, and can deploy both physical as well as virtual machines. As you've probably guessed by now, MAAS is intended for deploying physical servers. It primarily targets environments that require many physical servers. If you

One of the first orders of business on a freshly installed MAAS server is to grab the images that will be used for commissioning the nodes.

only have one server, MAAS might be overkill for you. A MAAS setup is made up of multiple components. On a small network with limited nodes, MAAS installs the Region and Cluster controllers on the same server. Larger setups are better managed with multiple Cluster controllers, each managing a different set of nodes.

Gain some MAAS Follow the walkthrough to use the Ubuntu Server  installation disc to install a MAAS controller. If you’ve  already installed Ubuntu server, you can easily convert   it into a MAAS controller as well.

Provision servers Switch to the Nodes tab in the administration interface and use the 'Add Hardware' button to add a new machine. The form for defining a new machine is fairly straightforward. You'll be asked to choose a name for the machine as well as the domain, architecture and its MAC address. The most important information on this page is selecting the right power type. A setup where you'd want to use MAAS will typically have an IP-controlled power distribution unit


(PDU) with which you can remotely power on the connected machines. The MAAS server supports a large number of PDUs that you can select using the Power type pull-down menu. Depending on the PDU, you'll be asked for more information to enable the MAAS Server to communicate with it. Once you've added a machine, it'll be marked for commissioning. The MAAS server will automatically switch on the machine, take stock of its available hardware, register the

machines and then shut them down. When the  machines have been commissioned they’ll be  listed as such in the MAAS administration  interface. Now select as many machines as you  want from under the Nodes tab. As soon as you  select the machines, MAAS will change the Add  Hardware pull-down menu to the Take Action  pull-down menu with several options. Use the  Deploy option to automatically provision the  machine with the Ubuntu server image you’ve  defined for the individual nodes.

Install the Zentyal Business server In addition to using it as a MAAS provisioning server, you can set up an Ubuntu Server install as a traditional server as well. However, deploying and configuring a server is an involved process. The Ubuntu Server-based Zentyal enables you to build complex server installations using a point-and-click interface. You can use a Zentyal installation as a file sharing server as well as a Domain Controller. It can also filter email, scan for viruses, manage printers, deploy VPNs and other core infrastructure services, such as DNS and DHCP. Additionally, you can issue and manage your secure certificates.

The distro is available on an installation disc of its own. But you can also install it on top of an existing Ubuntu Server installation. First, add the Zentyal repository with: sudo add-apt-repository "deb http://archive. 4.2 main extra" Next, import its public key with sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 10E239FF followed by wget -q -O- | sudo apt-key add - Now use: sudo apt-get update

There are several packages that make up a MAAS installation. These include maas-region-controller, which regulates the network and provides the web-based interface as well. Then there's the maas-cluster-controller, which is useful for managing a cluster of nodes and for the DHCP and boot images. Finally, there are maas-dns and maas-dhcp components that provide custom DNS and DHCP services to enable MAAS to enrol nodes and hand out IP addresses. The command sudo apt-get install maas maas-dhcp maas-dns will install all the components required for a MAAS controller installation. Once the components have been installed, you'll be shown a dialog box with the IP address of the MAAS server. Now launch a browser and head to that address to bring up the MAAS interface. Before you log in to the server for the first time, you will be asked to create a superuser account with the following command: sudo maas-region-admin createsuperuser When you run this command, you'll be prompted for the login credentials for the admin user. You can,

This refreshes the package list. Then install Zentyal with: sudo apt-get install zentyal During installation you'll be prompted to choose Zentyal's HTTPS port (8443). Once it's installed, fire up a browser and bring up Zentyal's web interface using the IP address of the box that you've installed Zentyal on. Log in with the credentials of the administrator on the Ubuntu server. Use the 'Skip Install' button during the setup wizard and instead follow the walkthrough for installing the components.

optionally, run this command again to create multiple administrator accounts. Once you've created the account, head back to the MAAS administration interface and log in with the credentials you've just created.

Work some magic As soon as you log in, the interface will ask you to import   a boot image. Click on the link in the warning to add an  image or switch to the images tab. Here you’ll find a list of  images supported by MAAS along with the architecture.  Toggle the checkbox next to the Ubuntu release you wish  to deploy along with the architecture. Click on the button  labelled Apply Changes and sit back while the MAAS server  downloads the images from the Internet. The process  could take a while depending on the number of images  you’ve selected and the speed of your Internet connection.  Now it’s time to add some machines to our MAAS server’s  realm. Follow the instructions in the box to use the  administration interface to provision nodes. MAAS is an impressive tool to provision bare metal  servers. But at this stage the servers are just plain 

Quick tip Since it's difficult for many users to assemble the components for a MAAS server, Canonical has published instructions for the virtual machine manager at https://insights.ubuntu.com/2013/11/15/

Head to the Settings tab to configure various default parameters for the MAAS, such as the default Ubuntu release that will be used for commissioning and the URL of the Main Ubuntu archive for fetching packages.



If started using the default mode, the Ubuntu Server installer offers to install several predefined collections of server software.

installations of an Ubuntu Server release. You can make them useful by deploying a service on top of them. MAAS was originally a means to complement Juju, which is Canonical's service orchestration framework. Juju allows you to easily deploy services with its Charms architecture. In that sense, Juju works a little bit like a package management system that you can use to automatically deploy and configure various server software stacks. Together with MAAS, Juju allows you to deploy services and software to the nodes in a MAAS cluster. You can use MAAS to provision nodes and then use Juju to populate those nodes with complete software configurations. In essence, using MAAS and Juju together simplifies the process of bringing up an Ubuntu-based private cloud. Using Juju, you can deploy everything from individual components and servers, such as MongoDB and PostgreSQL, to complete services like MediaWiki and Apache Hive. Ubuntu MAAS and Juju are popularly used for rolling out the OpenStack platform. To install Juju in conjunction with MAAS, first you'll have to download its dependency with sudo apt-get install python-software-properties

Install a MAAS server


Install Region Controller


Boot the computer with a normal Ubuntu Server installation disc. The boot menu will display several boot options. Instead of the default option, select the 'Install MAAS Region Controller' option. Then run through the usual process of selecting a language, keyboard layout, and other region and time zone settings.


Partition disks


The rest of the setup is the same as any Ubuntu Server installation.  You’ll be prompted for various details. However, the most important  of these is the partitioning step. You can select an LVM based layout  instead of plain partitions as long as you make sure the installation  takes over the entire disk.


Confirm installation

After going through the usual rigmarole as it sets up the various components, the installer will once again prompt you to confirm if you'd like to press ahead with the installation of MAAS. It'll also list the components it'll install for you to use this installation as a MAAS server. Select 'Yes' and proceed further.

MAAS Dashboard

It then updates the packages database and asks you to take part in  an anonymous package usage survey. The next screen displays the  list of software it can fetch from the mirrors. You can select the SSH  server and standard utilities for a minimal and secure installation that  you can flesh out manually as per your requirements later.


Now add the repository with sudo add-apt-repository ppa:juju/stable; sudo apt-get update and finally install juju with sudo apt-get install juju-core To configure Juju to work with MAAS, you'll first have to generate an SSH key with ssh-keygen -t rsa -b 2048 You'll also need an API key from MAAS so that the Juju client can access the server. In the administration interface, click on the username and select 'Account', which will list the generated keys. While you're here, scroll down the page and use the Add SSH key button to add the public SSH key. You can now generate a Juju boilerplate configuration file by typing juju init which will be written to ~/.juju/environments.yaml. The file contains configuration for all supported environments. Since we're only interested in MAAS at the moment, use juju switch maas to switch to this environment. Now modify ~/.juju/environments.yaml with the following content:

environments:
  maas:
    type: maas
    maas-server: ''
    maas-oauth: 'MAAS-API-KEY'
    admin-secret: secure-password

You will need to substitute the API key from earlier into the MAAS-API-KEY slot. The admin password will be automatically generated when you bootstrap the Juju instance but you can manually specify it in the configuration file here. Finally, you'll need to prepare the environment with juju bootstrap You're now all set to use juju to deploy charms. A simple juju deploy mediawiki is all you need to install the MediaWiki charms bundle. Once the charms bundle has been downloaded, you can make it publicly accessible by using the following juju expose mediawiki It takes some time for the service to come up and you can check its status with: juju status mediawiki which will also point to its public address. LXF

Quick tip If you run into unexpected errors, make sure you’re not running another DHCP server on the same network as the MAAS. Also if MAAS fails to boot the nodes make sure you’ve set them up to boot via the network.

Flesh out your server with Zentyal




To install a service using Zentyal, fire up a browser and bring up its dashboard. Now click the Software Management tab and select Zentyal Components, which displays a list of available components. Click the 'View basic mode' link to view the components grouped neatly into different server roles that are easier to grasp.


Configure components


Zentyal will prompt you for any essential information required, which will be listed in the navigation bar on the left. Different components will have different numbers of configuration options. Every time you make a change, Zentyal will ask you to click the 'Save Changes' button in the top-right corner of the interface before these can be enabled.

Install components

When you select a component, Zentyal will show you a list of additional dependencies that need to be installed. It'll then fetch, install and configure them. Once the packages have been assimilated, Zentyal will warn you that the administration interface will become unresponsive for a few seconds as it updates the installation.

Enable components

After you’ve configured a component, head to the Module Status tab.  The components with a corresponding empty checkbox are disabled.  As soon as you select the checkbox, Zentyal will display a full  summary of changes it’s going to make in order to enable the  component. Click ‘Accept’ to activate the component.


Tutorial Turtl: Set up a Turtl server and use an open source alternative to Evernote

Turtl: Online shared notes Nick Peers carefully threads his way through the minefield of setting up and  running your own Evernote alternative in the form of a Turtl server.


Our expert Nick Peers

has been playing  around with  computers for  over 30 years,  and has been  dabbling with  Linux for the best  part of a decade.

Quick tip Once Turtl is running, explore the options for running it behind a proxy server, such as Apache or Nginx; some useful info can be found at TurtlDocker.

Note-taking tools such as Evernote can be incredibly useful, but if you’re looking for an open source alternative then Turtl is shaping up to be a good rival. Turtl ( is a little trickier to set up, but has all the core functionality you need—it can take notes, bookmark websites and store photos and other documents. You can share notes with others, organise notes into boards (and make them easier to find with tagging and filtering) and access it on a range of devices, from computers (Windows and Mac as well as Linux) to Android phones (a standalone APK file is available for those who’d rather not use Google Play). Your notes are stored securely in Turtl’s cloud – premium plans are in the pipeline – and because the encryption key is stored locally, your notes’ security is in your own hands. But here’s where Turtl goes one step further—you can run it as a server, enabling you to keep all your data stored locally, avoiding any potential data limits and restricting access to just your local network, as well as making it available over the net. The instructions for doing this are a little bare, but this is where we come in. We’ll show you how to set up your PC to run Turtl as a server, then reveal how to connect to it from other computers and devices.
Turtl’s server component is written in Common Lisp, so we’ve gone with installing Clozure CL. The simplest way to install this on Ubuntu 16.04 LTS is via Subversion; open the Terminal and type:
sudo apt update
sudo apt install subversion
sudo svn co release/1.11/linuxx86/ccl
(Other builds are also available, including ones for Linux ARM, FreeBSD and Solaris.) With Clozure CL installed, you next need to download quicklisp.lisp (

beta) to your Downloads folder, then type the following:
./ccl/lx86cl64 --load ~/Downloads/quicklisp.lisp
When prompted, type the following (including the enclosing brackets, but obviously not this bit):
(quicklisp-quickstart:install)
(ql:add-to-init-file)
The second line ensures Quicklisp loads whenever Lisp runs. Now type quit to exit CCL. Next, we need to edit the scripts that launch CCL from the command line:
sudo nano ~/ccl/scripts/ccl
Change the CCL_DEFAULT_DIRECTORY line to the following:
CCL_DEFAULT_DIRECTORY=~/ccl
Save the ccl file, close nano and then repeat for ccl64 ( sudo nano ~/ccl/scripts/ccl64 ). Once done, copy both files to /usr/local/bin with:
sudo cp ~/ccl/scripts/ccl* /usr/local/bin
You can now launch CCL with the ccl64 command.
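If you’d rather not edit the wrapper scripts by hand, the change can be scripted with sed. This sketch demonstrates the edit on a stand-in copy of the script (the file contents here are illustrative, not the full wrapper shipped with CCL):

```shell
# Demonstrate the wrapper edit on a stand-in copy of the script
cat > /tmp/ccl_demo <<'EOF'
#!/bin/sh
CCL_DEFAULT_DIRECTORY=/usr/local/src/ccl
EOF
# Point CCL_DEFAULT_DIRECTORY at ~/ccl, as the tutorial instructs
sed -i 's|^CCL_DEFAULT_DIRECTORY=.*|CCL_DEFAULT_DIRECTORY=~/ccl|' /tmp/ccl_demo
grep CCL_DEFAULT_DIRECTORY /tmp/ccl_demo
```

For the real thing, run the same sed line against ~/ccl/scripts/ccl and ~/ccl/scripts/ccl64 before copying them to /usr/local/bin.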

Install database
Next, you need to install RethinkDB ( In the case of Ubuntu, you’ll be prompted to copy and paste the lines required to add the official RethinkDB repository before installing it. RethinkDB requires no configuration – Turtl handles all of that – so it’s time to add one last prerequisite. The Turtl server is event-driven, and this is handled by libuv, so add it by downloading the latest version (v1.9.1 at the time of writing), then extract the tar.gz file’s contents before opening a Terminal and entering the following commands (note: you’ll need libtool and automake installed):
cd Downloads/libuv-v1.9.1
sudo ./

Turtl limitations It’s fair to say Turtl is still at a relatively early stage of its development, particularly as far as the server is concerned. It’s annoying having to start everything up manually, leaving a Terminal window open to run the Turtl API—you then need to manually hit Ctrl+C to stop Turtl. There are also various bugs to contend with – the TIME-FORMATTER


error was worked around by removing the offending line from the start.lisp file under common-lisp. We also ran into problems logging into our accounts on different platforms—the server threw up a CL-HASH-UTIL:HGET error, which has been reported at Bounty Source. It’s worth persevering with, though—we were able to access Turtl remotely over the internet and store file

attachments on the server, which means it’s doing the basics right, even if it’s a frustrating experience at times. You can also make things a bit more elegant by having RethinkDB launch automatically at startup (see RethinkDBStartup), and examine options for running ccl64 non-interactively using something like detachtty (

Turtl Tutorial

Configure the server
Before you launch Turtl as a server for the first time, you need to configure it using the config.lisp file. You’ll see a series of options—these are the ones worth noting. Leave defvar *site-url* set to Note that when you log on remotely, you should point the Turtl client at your server’s local IP address if you’re connected to the same network, or at its URL if you’re connecting over the internet. The latter approach works best when you have a Dynamic DNS service configured (eg see If you choose the latter, configure your router to forward port 8181 to your server’s local IP address and check out the quick tip (p74) for some security advice. Elsewhere, there are parameters for setting user storage limits and a section dedicated to file storage. This is crucial, as it defines where your server stores the notes your users make. You can use Amazon S3 storage (enter your details), but most people will want to use local storage, which is defined under local-upload . If you set this, make sure you change local-upload-url to point to
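As a rough sketch, the local-storage entries in config.lisp might end up looking something like this (the address and paths here are purely illustrative—check the comments in config.default.lisp for the exact variable names and defaults):

```lisp
;; Illustrative sketch only - consult config.default.lisp for the real entries
(defvar *site-url* "")      ; hypothetical local server address
(defvar *local-upload* "/home/user/turtl-files")  ; where note attachments are stored
(defvar *local-upload-url* "")  ; should match the address clients use
```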

sudo ./configure
sudo make
sudo make check
sudo make install
Install, configure and launch With everything in place, it’s time to install the Turtl server itself (you’ll need git installed):
cd ~
git clone
Once downloaded, a new api folder will reside in your home folder – this needs to be renamed to common-lisp:
mv ~/api ~/common-lisp
Next, we need to rename and edit the main config file:
nano ~/common-lisp/config/config.default.lisp
Hit Ctrl+o and save the file as config.lisp. You now need to edit this file to configure your server (see top, p75)—check the boxout for some useful hints and tips, such as selecting local storage over an S3 server. Once done, you need to launch RethinkDB. To do this without tying up the Terminal, use:
rethinkdb --daemon
Verify it’s running by opening your browser and going to localhost:8080, where you should see the RethinkDB dashboard appear. Close your browser. Next, it’s time to launch Turtl—use this command each time you start the server:
ccl64 --load ~/common-lisp/start.lisp

Use the Turtl client to verify your connection to the server is working correctly.

Set up the server by editing the config.lisp file before you run Turtl.

The first time this launches, you’ll see various packages being grabbed and installed, ending with the turtl package. You may then see things halt with an error about there being no external symbol named *TIME-FORMATTER*. For now, type ‘Go’ and hit Enter to skip this and create the database schema. A message telling you Wookie has started will appear, and your Turtl server is now up and running.

Test server
Now you need to install the Turtl application—either on the same PC or another device. Clients exist for Linux, Windows and Mac as well as Android. On Linux, save the TAR.BZ2 file to your Downloads folder, then open it in Nautilus and extract the folder within. Next, it’s time to install Turtl; the following assumes you’re installing the 64-bit version from the Downloads folder, and wish to install it into an apps folder inside your own home folder:
cd ~/Downloads/turtl-linux64
sudo ./
Once complete, you’ll find a shortcut to Turtl on the Launcher, but you’re more likely to have success launching it directly from the command line:
sudo /opt/turtl/turtl
When the app launches, start by clicking ‘Advanced settings’ and entering the IP address or URL of your server – for example, – then click ‘Login’. Ignore the error (clicking this basically confirms the new server). Click ‘Create an account’ and follow the prompts to create your Turtl account—as you do so, you should see messages appear in the Terminal window running ccl64 on your server, indicating the connection has been made. From here, test your connection by creating a new photo or file attachment note. When it’s done, keep an eye on the Turtl server window – a PUT line should appear, indicating the attachment has been uploaded. If you check the folder you’ve designated as your storage, you should see an encrypted copy of the file appear, indicating your server is up and running successfully. LXF

Improve your Linux skills Subscribe now at


Tutorial ggplot2 Create stunning graphics using ggplot2

R’s powerful plot creation package

ggplot2: Make stylish plots Mihalis Tsoukalos demonstrates how to make beautiful plots with  your data, such as bar charts, histograms, density plots and more.

Our expert Mihalis Tsoukalos

(@mactsouk) has  an M.Sc. in IT from  UCL and a B.Sc. in  Mathematics. He’s  a DB-admining,  software-coding,  Unix-using,  mathematical  machine. You can  reach him at www.

This is the output of the LXF.R script and illustrates how the ggpairs() command works as well as how to construct autonomous R scripts.

Quick tip R is a statistics-specific language and environment that can be extended with the help of packages. You can find more information about it at

We’ll cover ggplot2 in this tutorial—a powerful R package written by Hadley Wickham that works, thinks and plots in layers, and allows you to generate impressive plots. You will learn how to create histograms, bar charts, box plots and density plots, and how to draw points and lines—but most importantly you’ll learn how to customise the output. All the examples will be developed as R scripts, which means they can be executed without the need for a graphical environment. This also allows you to run them as cron jobs. Warning: plotting data is one thing, but understanding and explaining a plot is a completely different thing—and it’s what separates a good data analyst from a mediocre one! As ggplot2 isn’t installed by default, you’ll need to check whether it’s already installed by running:
> library(ggplot2)
Error in library(ggplot2) : there is no package called ‘ggplot2’
If your output looks like the above, then ggplot2 isn’t installed and you should execute the following commands to install it:
> update.packages()
...


The downloaded source packages are in '/tmp/RtmpVu9lzc/downloaded_packages'
Updating HTML index of packages in '.Library'
Making 'packages.html' ... done
> install.packages("ggplot2")
Installing package into '/usr/local/lib/R/site-library'
(as 'lib' is unspecified)
also installing the dependencies 'reshape2', 'scales'
...
** testing if installed package can be loaded
* DONE (ggplot2)
Running update.packages() isn’t strictly necessary, but it’s considered good practice to update existing packages before installing new ones, because not having the most recent version of a package might create incompatibilities. You might need to run R as root before installing ggplot2 in order to make it available to all users on your Linux system. Should you wish to find out the version of ggplot2 you’re using, execute the following commands:
> library(ggplot2)
> sessionInfo()
...
other attached packages:
[1] ggplot2_2.1.0
...
The first command loads ggplot2, whereas the second returns information about it, including its version—in this case, 2.1.0.

ggplot2 Tutorial

Executing R commands
As this tutorial uses R scripts, this part will tell you more about them. Imagine that you put some R commands into a file so that they can be executed all together. If the file is named LXF.R you can execute it as follows:
$ R CMD BATCH ./LXF.R
$ ls -l LXF.Rout
-rw-rw-r-- 1 mtsouk mtsouk 2206 Jun 3 21:22 LXF.Rout
This technique is very convenient because it automatically writes any output to a file named after the R script—in this case, LXF.Rout. Alternatively, you can start R and use the source() command: > source("./LXF.R") . However, there’s another, more practical way that allows you to execute an R script as if it were a Unix program. This version of LXF.R is the following:
#!/usr/bin/env Rscript
require(ggplot2)
require(GGally)
require(CCA)
data <- read.table("./data", header=TRUE)
outputfile <- paste("LXF", ".png", sep="")
png(filename=outputfile, width=1600, height=1200)
ggpairs(data)
After giving LXF.R the execute permission, you can run it as follows:
$ chmod 755 LXF.R
$ ./LXF.R
...
$ ls -l LXF.png
-rw-r--r-- 1 mtsouk staff 142503 Jun 3 20:45 LXF.png
$ file LXF.png
LXF.png: PNG image data, 1600 x 1200, 8-bit colormap, non-interlaced
The LXF.R script demonstrates the ggpairs() function, which gives you a quick overview of your data. If you have ggplot2, GGally and CCA installed, you can execute LXF.R to make sure that everything works fine with your installation. The output of the script is called LXF.png (and can be seen on p76).
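The chmod-and-run pattern above is the same for any interpreted script. Here it is as a stand-in shell sketch (for the real thing, the shebang would be #!/usr/bin/env Rscript and the body the R commands above):

```shell
# Create a tiny stand-in script - LXF.R works exactly the same way
cat > /tmp/demo_script <<'EOF'
#!/bin/sh
echo "script ran"
EOF
chmod 755 /tmp/demo_script   # give it execute permission
/tmp/demo_script             # now it runs like any Unix program
```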

Using ggplot2
The ggplot2 package offers two main functions for drawing: quickplot() and ggplot(). This tutorial will only deal with the ggplot() function, because it’s the one that shows off the full capabilities of ggplot2. The quickplot() function is similar to plot(), the default R function used for plotting. Before doing any actual plotting, it would be good to explain how ggplot2 works. A ggplot2 graphic has various distinct parts: data, aesthetics, geometric objects, statistics, scales, coordinates and facets. The data part is where you define the data set you want to use; the aesthetics part enables you to select the desired parts of the input data set; the geometric objects part is where you define what will be drawn on screen (points, boxes etc); the statistics part enables you to summarise your data in various ways; the scales part helps you map values between the data space and the aesthetic space (eg bigger values can have a darker colour); the coordinates part deals with the coordinate system; and the facets part allows you to define rules that break the data set into subsets that you can draw individually.

The grammar of graphics The main difference between ggplot2 and other R packages with similar functionality is that ggplot2 is based on the principles of grammar. The underlying grammar of ggplot2 is based on the book The Grammar of Graphics written by Leland Wilkinson along with statisticians, computer scientists, geographers, researchers and others interested in visualising data in mind. According to the book, a graphic is a physical representation of a graph that has various aesthetic attributes, such as colour and size. What the grammar does is tell you that a plot is a mapping from data to aesthetic properties of

geometric objects. Therefore, it contains two main kinds of rules: mathematical and aesthetic. The mathematical rules are used for generating graphs and the aesthetic rules are used when drawing the graphs. Although it’s difficult to understand the advantages of the grammar of graphics, the look of the ggplot2 output will surely make you appreciate the grammar! Feeling comfortable with the grammar will help you design better plots but it is not required for following this tutorial. Talking more about the grammar of graphics is beyond the scope of this tutorial.

Enough with the theory—it’s time to do some plotting! You can create a bar chart as follows:
> ggplot(data, aes(x=Fixed)) + geom_bar()
This graph has three parts: data, aesthetics and geometric objects. The first parameter ( data ) is the name of the data set used. The aes() function, with the help of x, tells R that you want to use the Fixed column from the data set. The geom_bar() function tells ggplot2 that you want to create a bar chart. The generated bar chart has two columns, because the value of the Fixed column can be either YES or NO, and this counts the number of YES and NO values in the data set. By default, the bars are painted black, but you can make them blue as follows:
> ggplot(data, aes(x = Fixed)) + geom_bar(fill="blue")
Each bar of the following bar chart displays the total number of machines that have a given number of disks, as defined in the Disks column:
> ggplot(data, aes(x = factor(Disks))) + geom_bar(fill="blue", stat="count")
The factor() function converts the Disks column to a factor R variable, which is a categorical variable that can be either a numeric value or a string. The stat parameter can take three values: count, identity and bin. You can change the stroke colour of each bar to black as follows:

Quick tip The most common error in R scripts is forgetting to load the necessary libraries before using them. You can load any library using the require() command.

This is the output of simple.R that uses some of the capabilities of ggplot2 to create a bar chart with blue bars!



> ggplot(data, aes(x = factor(Disks))) + geom_bar(fill="blue", stat="count", color="black")
Last, you can change the colour of each bar in relation to the value of the Uptime column as follows:
> ggplot(data, aes(x=Fixed, y=Disks, fill=Uptime)) + geom_bar(stat="identity")
However, an output with too much colour isn’t always appealing to look at. The contents of the simple.R script are:
#!/usr/bin/env Rscript
require(ggplot2)

Quick tip If you don’t know the kind of information a certain plot can give you, you cannot fully appreciate the ggplot2 output. Eg box plots are good for easily finding outliers, which are values at an abnormal distance from the other values in the sample.

data <- read.table("./data", header=TRUE)
outputfile <- paste("simple", ".png", sep="")
png(filename=outputfile, width=1600, height=1200)
ggplot(data, aes(x = factor(Disks))) + geom_bar(fill="blue", color="black", stat="count")
We’ve shown the output of the simple.R script (see bottom, p77). As you can see, we have 12 machines with 10 hard disks each, which can also be verified with a little help from AWK:
$ awk '{print $4}' data | grep 10 | wc -l
12
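Here’s a self-contained version of that AWK check with made-up sample rows (our real data file has 55 machines; this sketch uses just four), following the column layout described in the data boxout:

```shell
# Sample rows in the data file's layout: name RAM Fixed Disks SSD Uptime
cat > /tmp/lxf_data <<'EOF'
RAM Fixed Disks SSD Uptime
L1 8 YES 9 128 100
L2 4 YES 10 256 102
L3 16 NO 10 512 80
L4 32 YES 3 1024 300
EOF
# Disks is field 4 of each data row; count machines with exactly 10 disks
awk 'NR > 1 && $4 == 10' /tmp/lxf_data | wc -l
```

Matching on the field with $4 == 10 rather than piping through grep avoids accidentally matching values such as 100 elsewhere in the line.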

Beautifying your output
The previous output looks pretty simplistic and naïve, because it lacks various things, including a proper title and labels for both axes. This section will help you improve the quality of the output by adding various elements—just make sure that you don’t put too many things on the plot, because this might make it look amateurish. You can add a main title with the help of the labs() function, and labels for the x and y axes using the xlab() and ylab() functions, respectively. You can further change the appearance of titles and labels by modifying their size, font and colour according to your needs. For instance, the theme() function combined with plot.title can help you alter the colour and the size of the main title. Other identifiers can help you

This is the output of the histogram.R script we’ve created. As you can see, the produced graph contains both a histogram and a density plot!

change the background colour of the plot, the colour of the grid lines, etc. Combining all the previous commands and adding some more gives the following script, which will generate the bar chart (pictured below):
#!/usr/bin/env Rscript
require(ggplot2)
data <- read.table("./data", header=TRUE)
outputfile <- paste("beautify", ".png", sep="")
png(filename=outputfile, width=1600, height=1200)
p <- ggplot(data, aes(x = factor(Disks))) + geom_bar(fill="orange", color="black", stat="count")
p <- p + labs(title="Counting Machines")
p <- p + xlab("Number of Disks") + ylab("Number of Machines")
p <- p + theme(plot.title = element_text(size = rel(3), colour = "Darkgreen"))
p <- p + theme(axis.title.y = element_text(size = rel(1.5), face = "bold"))
p <- p + theme(axis.title.x = element_text(size = rel(1.5), face = "bold"))
p <- p + theme(panel.background = element_rect(fill = "grey"))
p <- p + theme(panel.grid.major = element_line(colour = "black"))
p <- p + theme(panel.grid.minor = element_line(colour = "brown"))
print(p)
The previous script is saved as beautify.R. It constructs the final graph step by step with the help of a variable; when the graph is ready, it calls the print() function to actually create it.

Drawing lines and points

This figure shows the output of beautify.R, which improves simple.R by adding titles and colours, etc. However, the actual information remains the same.

You can tell ggplot2 to draw points and lines using the geom_point() and geom_line() functions, respectively. The following code tells ggplot2 to draw points:
> ggplot(data, aes(Uptime, Disks)) + geom_point()
Similarly, the following command connects consecutive points using lines:
> ggplot(data, aes(Uptime, Disks)) + geom_line()
The geom_smooth() function adds a smooth layer to an


ggplot2 Tutorial

About the data used The most important part of the visualisation process is the data that’s used! For our examples, the data set takes the following form:
$ head data
   RAM Fixed Disks  SSD Uptime
L1   8   YES     9  128    100
L2   4   YES     5  256    102
L3  16    NO     3  512     80
L4  32   YES    10 1024    300

The data file contains data for 55 server machines. The first column is the machine name, the second is the amount of RAM in gigabytes, the third says whether a machine has been sent for service or not, the fourth is the total number of physical disks the machine has, the fifth holds the size of the machine’s SSD in gigabytes and the last column is the uptime of the machine in days—that’s right, total days of uptime! Loading the

existing graph using linear regression, which allows you to predict future data:
> ggplot(data, aes(Uptime, Disks)) + geom_point() + geom_smooth(method='lm')
But please bear in mind that it’s not always easy to predict future data!

Other types of graphs
Now let’s talk about histograms and box plots. A histogram is a method for displaying the shape of a distribution that’s particularly useful when you have a large amount of data. The range of the elements of a data set is broken into intervals, also called bins; each bar of a histogram shows the number of values that fall into that bin. Therefore, the single most important property of a histogram is the width of its bins. Please note that there’s no ideal number of bins, because different bin sizes can reveal different data characteristics. Although histograms can reveal the distribution of a variable, density plots offer a better way to view it. Another type of plot, the box plot, can give you information regarding the shape, variability and median of a data set quickly and efficiently. For example, the following command draws a histogram of the Disks column with the binwidth parameter set to 5 :
> ggplot(data, aes(Disks)) + geom_histogram(binwidth = 5)
The following script, saved as histogram.R, creates a histogram and a density plot on the same plot:
#!/usr/bin/env Rscript
require(ggplot2)
data <- read.table("./data", header=TRUE)
outputfile <- paste("histogram", ".png", sep="")
png(filename=outputfile, width=1600, height=1200)
p <- ggplot(data, aes(SSD)) + geom_histogram(aes(y = ..density..), binwidth = 128, alpha = 0.7, fill = "#333300")
p <- p + geom_density(fill = "#44004d", alpha = 0.4)
p <- p + theme(panel.background = element_rect(fill = '#ffffff'))
p <- p + ggtitle("Histogram with Density")
p <- p + theme(plot.title = element_text(size = rel(3), colour = "black"))
p <- p + theme(panel.grid.major = element_line(colour = "black"))
p <- p + theme(panel.grid.minor = element_line(colour = "brown"))
print(p)
As you can see, a plot can have more than one call to a geometric function. (See p78 for the output of histogram.R.) The ggtitle() function can also be used to put a title on a graph.

data file into R can be done easily with the help of the following command, which is used in every script in this tutorial:
> data <- read.table("./data", header=TRUE)
The fact that each column of the data set, apart from the first one, has a title allows you to refer to each column by name. Therefore, the third column can be accessed and used as Fixed, which shows it pays to label things nicely.

Bear in mind that while histograms are used to show distributions of variables, bar charts are more appropriate for comparing variables. In other words, a histogram label represents a quantitative variable, whereas a bar chart label represents a categorical variable. One other minor difference between them is that a histogram has no spaces between its bars. The following script, saved as boxPlot.R, creates a box plot using geom_boxplot() (pictured below):
#!/usr/bin/env Rscript
require(ggplot2)
data <- read.table("./data", header=TRUE)
outputfile <- paste("boxplot", ".png", sep="")
png(filename=outputfile, width=1600, height=1200)
p <- ggplot(data, aes(Fixed, Disks)) + geom_point() + geom_boxplot(colour = "brown")
p <- p + labs(title="A Box Plot")
print(p)
You should by now have a better idea of the capabilities of the ggplot2 package and the high-quality output it can produce. A single tutorial cannot cover every little detail of such a powerful package. If you’re going to remember one thing from this tutorial, it’s that ggplot2 works in layers. If you really want to learn more about ggplot2, the best resource is the second edition of the book ggplot2: Elegant Graphics for Data Analysis, written by Hadley Wickham ( LXF

Next issue: Tracing with LTTng

This is the output of boxPlot.R that generates a Box Plot.


Libreboot Free your system of binary blobs by using open boot firmware

Libreboot: Free your BIOS Chasing the dream of an entirely blob-free system, Neil Mohr rips open his  Lenovo X200 and performs morally questionable surgery on it.

Our expert Neil Mohr

has booted more  systems than he  cares to recall,  though there was  that one time he  booted a PC out  of the window,  well, he had  given it plenty  of warning…

You don’t want to install Libreboot on your perfectly working system—you really don’t. But you’re now thinking “why wouldn’t I want to install Libreboot?”, which is akin to someone that’s walked up a slight incline getting to work thinking “why wouldn’t I want to climb K2?” It’s not that climbing K2 is a bad idea – it’s actually a terrible idea – it’s just that for most people it’s entirely unnecessary. Hopefully that’s put off most people, but we’ll persevere in putting the rest of you off. Libreboot is a solution looking for a problem. That problem takes two forms: the first is an entirely open hardware system that requires a firmware bootstrap – Libreboot would be awesome for that – and the second is someone wanting to eradicate all closed-source, proprietary binary blobs from a system. Currently, that first scenario doesn’t exist outside of a few engineering samples or highly expensive commercial development boards. The second is rallying behind the philosophically sound idealism of the open software and open hardware doctrine championed by the likes of Richard Stallman. That latter reason is fine, if you really do live the life. Otherwise you’re taking a system that’s perfectly set to boot any operating system you’d like to throw at it and installing unsupported firmware using techniques that could potentially brick the entire system. Still here? Then let us begin. What is Libreboot? It’s a GPLv3 project that aims to replace any proprietary, closed-source firmware BIOS on a supported system. Now, as you’ll see, the number of supported systems is somewhat limited (see below for a full list of laptops and motherboards). The reasons are understandable: Libreboot won’t support systems that use binary blobs. This eliminates almost all Intel chipsets made after 2008, as they use the locked-down Intel

Compatible hardware

Motherboards
Gigabyte GA-G41M-ES2L - Intel G41 ICH7
Intel D510MO - Intel NM10
ASUS KCMA-D8 - AMD SR5670/SP5100
ASUS KFSN4-DRE - NVIDIA nForce Professional 2200
ASUS KGPE-D16 - AMD SR5690/AMD SP5100

Laptops
ASUS Chromebook C201
Lenovo ThinkPad X60/X60s - Mobile Intel 945GM
Lenovo ThinkPad X60 Tablet
Lenovo ThinkPad T60 (some)
Lenovo ThinkPad X200 - Intel GM45
Lenovo ThinkPad R400
Lenovo ThinkPad T400
Lenovo ThinkPad T500
Apple MacBook1,1
Apple MacBook2,1


Behold! The glorious Lenovo Stinkpad X200, greatest of all humankind’s laptops.

Management Engine, which can’t be disabled or removed. Similarly, most AMD systems post-2012 can’t be used with Libreboot, as AMD has its own black-box Platform Security Processor that it refuses to provide anyone access to. At this point, it’s probably worth mentioning that Libreboot itself is a parallel development of Coreboot, but one that aims to be entirely free of proprietary binary blobs and one that offers at least a semi-automatic installation. Coreboot is an open source BIOS development that’s used to create firmware for many devices, but leaves it to the user to sort out the details. For example, Coreboot is used by Google to boot its Chromebooks. Its wider use is explained by the fact that Coreboot is happy to utilise binary blobs to access hardware features—an example is with Intel systems. Intel supplies a binary blob called the Firmware Support Package, which handles the POST elements and is almost impossible to reverse engineer in any useable timeframe. This enables Chromebooks to use Intel processors along with Coreboot – which they do very successfully – and to use an open source BIOS without disclosing the source for the elements that remain proprietary. While this might be fine for Intel, it’s not fine for FLOSS fans. As an example, we’re going to take a look at the most widely known system that supports Libreboot: the eight-year-old Lenovo ThinkPad X200. It’s not because IBM/Lenovo at the time was feeling benevolent; it’s just luck, flaw and timing that Intel released the GM45/ICH7 chipset that wasn’t entirely locked down – the ME (Management Engine) can be removed and all binary blobs replaced – while the Core 2 Duo processor remains powerful enough today. This is a recurring theme for Libreboot: only a small

Libreboot Tutorial

This little lot is going to provide us with a libre system

number of chipsets are supported, and then only if they enable flashing and the removal of all binary blobs. Libreboot takes a principled stand against fixed binary blobs in firmware, so when both Intel and AMD include encrypted binary blobs that contain sub-processors and mini operating systems, Libreboot won’t touch them. It’s a testament to how many subtle things the BIOS has to do that there remain a number of odd little issues with the X200 under Libreboot. One very specific one is a suspend issue when running Samsung memory DIMMs—but it also depends on the stepping (that’s the revision) of the processor. There’s one key general issue with the Libreboot version of the firmware, and that’s the loss of the VT-x virtualisation capabilities of the Core 2 processor. If you use Qemu with VT-x and two cores on a virtual guest, the kernel panics; with a single-core virtual guest, the guest panics and the host kernel is fine. It seems the issue is fixed in one of Intel’s closed firmware updates, but this is unavailable to Libreboot users (see here for more details: install/x200_external.html.)

complete the project, and in fact that’s the recommended technique. We’re basing the project on a standard but up-to-date Raspbian installation. We’ll cover setting up in the walkthrough, but we recommend ensuring you can use SSH and that it persists after a reboot, using either a wired or wireless connection. A couple of caveats here. You’ll need a full-power supply—below we outline the power issues during flashing. The Pi needs to supply all the power required on the 3.3V line, so at least a two-amp supply is recommended. The other point is interference. This is speculation, but having a Wi-Fi antenna so close to the serial read/write lines could potentially cause interference. If you do encounter issues, you might want to switch to a wired Ethernet connection.

Quick tip Memory problems: Libreboot won't work with mismatched DIMMs. One DIMM is fine, a matching pair is fine, but mismatched ones result in a blank screen. It's also reported that it won't work with Samsung memory, go figure.

No disassemble! Yes disassemble. The Lenovo X200 is a glorious laptop: it's made to be maintained by humans, so it can easily be taken apart and rebuilt using standard tools, ie a jeweller's screwdriver kit. Find a clear work space that's well lit, ideally with a directable lamp, and have containers ready for the not-so-many screws. Note: the network MAC address is printed on the base of the laptop, but we also cover interrogating the NIC from the laptop, too. We're not going to go too in depth, as we're assuming that if you're brave enough to flash a BIOS, you've the wherewithal

Ensure you remove all the required screws. You can remove more, but that's not necessary.

Preparation For this tutorial we're using a Raspberry Pi – you'll find that many people have also had success with the BeagleBone Black. It's perfectly possible to run this headless and

What is a BIOS? Turning on a computer for the first time causes a cascade of processes and acronyms. When power first reaches a CPU, its first job is to start running the code it finds at its Reset Vector memory location; for modern x86 processors that's 16 bytes below 4GB, or FFFFFFF0h. On a hard reset the North Bridge redirects this to the device's firmware flash location. This is where the BIOS (Basic Input Output System) is stored – now replaced by UEFI, but the process is similar – and its first job is to run a Power On Self Test (POST). POST has the job of initialising and checking the base hardware: resetting CPU registers, checking memory, DMA/interrupts, parts of the chipset, video, keyboard and storage devices.
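That reset vector address is simply 16 bytes below the 4GB boundary, which a line of shell arithmetic confirms:

```shell
# 4GB minus 16 bytes gives the classic x86 reset vector address.
printf '0x%X\n' $(( 4 * 1024 * 1024 * 1024 - 16 ))   # prints 0xFFFFFFF0
```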

Importantly, if issues were encountered it tries to communicate the problem to the user via beeps, an onscreen error or a code over the serial bus. The BIOS also offers configuration of these basic features and, though it's redundant now, with older operating systems the BIOS delivered access to I/O hardware, such as the keyboard and mouse. With everything set, the BIOS calls interrupt 19h (25 decimal) and attempts to load the bootstrap from the selected boot device. UEFI is the modern replacement for the flash-stored BIOS and supports a host of enhanced features, including a boot manager, GPT partitions, services, applications and signed-system-file Secure Boot capabilities, among others.

Functional, that’s about the kindest thing we can think of to say about the BIOS.

We’re #1 for Linux! Subscribe and save at

Summer 2016 LXF214     79

Tutorial Libreboot Quick tip Be gentle with the Pomona clip – we managed to damage a SOIC connection while adjusting how it sat on the chip.

to take a laptop apart. The Lenovo X200 has a removable keyboard and a top panel that sits just in front of it. You'll find the BIOS firmware SOIC-16 just under here. Remove the battery and disconnect the X200 from the mains, if you haven't already done so. To gain access, flip the laptop upside down. You don't need to remove all the screws you see, just the ones marked with either the keyboard or the odd 'chip' symbol. Flip the laptop the right way up. If you've removed all the correct screws, the black front panel should lift off with a little coaxing. Be gentle: it's attached via a small ribbon cable, which can be gently removed. The keyboard should flip up; you don't have to disconnect its connector but, again, it's a simple push-in connector. The 16-pin firmware SOIC-16 (or 8-pin on 4MB devices) should be clearly visible. Lift up the protective layer and you're ready to get started.

Power flash To electrically flash the new ROM the firmware chip needs a steady supply of 3.3V. The Libreboot documentation makes a lot of fuss about rigging up bench PSUs, which we're not big fans of. From our experience the Raspberry Pi's 3.3V output line should supply enough current, though only enough for slow write speeds. Again, if this doesn't work for you, it's likely down to a different model and you might have to resort to the PSU trick. We live in a golden age when communications are reliable, error-checked and resent automatically. This firmware flashing nonsense is sending a large ROM over a slow serial connection that's been jerry-rigged by a chap called Steven in Donnington, possibly. The point is, it's highly likely you're going to get transfer errors unless you're lucky. Beyond checking your connections are correct and that you're using jumper wires no longer than 20cm (10cm would be best), there are other tricks you can try. The first is to simply use a slower transfer speed: change spispeed= to 128 or 256, though obviously this will double or quadruple the time taken. For writing we had to use 128

For 4MB SOIC-8 devices, this is the pinout.





There’s the firmware waiting for flashing. Pin 1 is closest to the front, next to the large Intel chip.

that took almost 25 minutes to complete. For reading, a setting of 512 worked without major issues. This website has generalised suggestions for fixing read/write issues, but it does descend into amateur electronics. One option is to create twisted pairs (this reduces external noise) by simply twisting a second wire around the main one. The next option is to increase the resistance of the transfer wire by adding a <100 Ohm resistor inline. At this point, if the flash has gone well – it can take a good few tries – your Lenovo X200 will boot to Grub and is ready for you to install GNU/Linux. LXF

[Table: SOIC-16 pin to Raspberry Pi pin mapping – only a few cells survived extraction: VDD (3.3v) connects to the Pi's 3.3v line (pin 1 or 17), and MISO (Master In Slave Out) is among the listed pin names.]

[Table: SOIC-8 pin to Raspberry Pi pin mapping – only a few cells survived extraction: MOSI (Master Out Slave In), two pins marked 'not used', and VDD (3.3v) connecting to the Pi's 3.3v line (pin 1 or 17).]
For standard 8MB SOIC-16 devices, this is the pinout.
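The spispeed settings mentioned earlier trade time for reliability. A back-of-the-envelope lower bound for moving an 8MB image over SPI shows why: the raw serial transfer alone takes minutes, and protocol overhead makes real runs much slower (our write at 128 took around 25 minutes):

```shell
# Rough lower bound on transfer time for an 8MB flash image at
# various spispeed settings (in kHz). Real flashes are slower due
# to command/protocol overhead.
bits=$(( 8 * 1024 * 1024 * 8 ))
for khz in 128 256 512; do
    secs=$(( bits / (khz * 1000) ))
    echo "spispeed=${khz}: at least $(( secs / 60 )) minutes"
done
```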

What you need? The Libreboot process requires that all binary blobs and proprietary software be removed or disabled from the processor and chipset. The upshot of this is that the firmware has to be flashed by an external device, in our case a

Raspberry Pi, though Libreboot offers a BeagleBone Black guide, too. Most devices use a Small Outline Integrated Circuit or SOIC. The X200, eg, has a SOIC-16 as it has 16 pins. A dedicated tool, the Pomona 5252, provides a

secure clip with jumper pins, making it easy to connect to your flashing device. Other models will vary; eg, the X200S or Tablet version uses a surface-mount chip and requires soldering, which isn't recommended.

Externally flashing the firmware is needed to get Libreboot working.


Libreboot Tutorial Flash Libreboot


Check thy ROM


Before doing/buying anything you need to check the type of firmware SOIC used. You have Linux installed, right? To begin with we need to know what size ROM it uses. This will either be an 8- or 16-pin SOIC. Run  sudo dmidecode | grep ROM\ Size  and it'll return either  4096  or  8192  (kB). The MAC should be on the bottom of the case; if not, use  ifconfig eth0  and note this down.
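The size check can be scripted. This is a sketch of our own that maps the reported size to the chip package; it assumes dmidecode output of the form 'ROM Size: 8192 kB':

```shell
# Map a dmidecode "ROM Size" line to the SOIC package used on the X200.
rom_to_soic() {
    case "$1" in
        *4096*) echo "4MB flash: 8-pin SOIC-8" ;;
        *8192*) echo "8MB flash: 16-pin SOIC-16" ;;
        *)      echo "unexpected ROM size: $1" ;;
    esac
}

rom_to_soic "ROM Size: 8192 kB"   # prints: 8MB flash: 16-pin SOIC-16
```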


Prepare thy ROM


Grab the latest release of libreboot_<model_Xmb>.tar.xz and extract it with  tar -xvJf ~/Downloads/libreboot_bin.tar.xz . We need to write the MAC address for your device: in  /libreboot_util/ich9deblob/armv7l  generate the MAC binary with  ./ich9gen --macaddress XX:XX:XX:XX:XX:XX  and write it with:  dd if=ich9fdgbe_8m.bin of=libreboot.rom bs=1 count=12k conv=notrunc


Backup thy ROM

sudo ./flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -c <chip_name> -r romreadX.rom
sha512sum romread*.rom
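The step below recommends three backup copies; before trusting any of them, it's worth confirming that repeated reads of the chip actually agree. This helper is our own sketch, using the romreadX.rom naming from the command above:

```shell
# Verify that repeated flash reads agree: all sha512 digests must match.
verify_reads() {
    local uniq
    uniq=$(sha512sum "$@" | awk '{print $1}' | sort -u | wc -l)
    if [ "$uniq" -eq 1 ]; then
        echo "all reads identical - backup looks good"
    else
        echo "reads differ - do NOT flash, re-read the chip"
    fi
}

# Demo with dummy files standing in for romread1.rom etc.
printf 'AAAA' > /tmp/romread1.rom
printf 'AAAA' > /tmp/romread2.rom
printf 'AAAB' > /tmp/romread3.rom
verify_reads /tmp/romread1.rom /tmp/romread2.rom   # identical
verify_reads /tmp/romread1.rom /tmp/romread3.rom   # differ
```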

Test thy flash

Wire up the Pi and the 8- or 16-pin clip as per the pinout. The Pi's 3.3V output should be enough to power the flash, but don't connect it yet – we need to test things. Without the clip connected, use  sudo ./flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=512 ; this should report calibrating as OK but no EEPROM/flash chip. Connect the clip and 3.3V power and run the command again; it should report a flash chip.


Typically the chip_name is MX25L6405D; read the physical SOIC to confirm. Back up the existing ROM and store it safely. Ideally make three copies and validate them with sha512sum.

Prepare thy Pi

Ensure Raspbian is up to date and, in raspi-config's Advanced options, set SPI and I2C to Enabled. It's possible to install the build essentials and compile the flashrom binary yourself, but for space we suggest you just grab the libreboot_utils pack from the download page. This contains pre-built versions of flashrom and the ich9gen tool for x86 and ARM platforms.
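After the reboot, you can confirm SPI is actually enabled before wiring anything up: flashrom's linux_spi driver needs the kernel's SPI device nodes to exist. A quick check:

```shell
# flashrom's linux_spi programmer talks to /dev/spidev*, so the
# nodes must exist after enabling SPI in raspi-config and rebooting.
if ls /dev/spidev* >/dev/null 2>&1; then
    echo "SPI device nodes present: $(ls /dev/spidev*)"
else
    echo "no /dev/spidev* nodes - enable SPI and reboot first"
fi
```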

Flash thy ROM

Now for the moment of truth. You can flash the downloaded Libreboot ROM to your X200, or a backed-up original one, using:
sudo ./flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=128 -c MX25L6405D -w <path/romimagename.rom>

It may require a couple of attempts, but a successful write will output: Verifying flash... Verified
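Since a couple of attempts are often needed, a small retry wrapper saves re-typing the long flashrom command. This is our own sketch, not part of Libreboot's tooling; the flashrom invocation in the comment is the one from the step above:

```shell
# Retry a command up to N times, stopping at the first success.
retry() {
    local n=$1; shift
    local i
    for i in $(seq 1 "$n"); do
        if "$@"; then return 0; fi
        echo "attempt $i failed, retrying" >&2
    done
    return 1
}

# Intended use (hypothetical paths):
#   retry 5 sudo ./flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=128 \
#       -c MX25L6405D -w libreboot.rom
retry 3 true && echo "ok"
```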



Swagger: Build a REST API

Bernard Jason exposes your APIs using REST, with a little Swagger to make them clear to use.

Our expert Bernard Jason has been developing applications on Unix since the early '90s in C, Java and Scala.

There is often a need to expose functionality in a company for consumers, whether internal or external. This kind of thing has been achieved, this century at least, using SOAP, but now there's REST, too. There are pros and cons to both methods. SOAP is very specific and well defined, and tends to cause fewer surprises due to its maturity. However, it's also heavy to define and even to implement, although the end product, if done well, leaves less room for mistakes or discussion. REST is newer and more suited to the web, and tends to be based around JSON requests and responses, though it can use XML or other data formats. REST is less rigid and more lightweight, and this can be both a pro and a con. We've seen implementations where the request was easy for HTML form submission and the response was JSON for JavaScript consumption. This is great for the browser, but more fiddly for back-end consumption. There's the other extreme, where the interface is aiming for perfection and ends up putting lipstick on a pig of a back-end. If we want to implement a utility, a set of utilities or functionality for a community and it's web-centric (and there isn't a formal or rigid requirement and we trust the consumer), we'd pick a REST implementation with Swagger to document it every time. The development time is quick and, with Swagger to document the results, it's clear to use and even to try out. In this article, we'll create a REST API to dump information from another system into our API's application datastore, then retrieve this information and display it in a graph to show some end benefit to the exercise. Hopefully, by this stage, we've piqued your interest in Swagger, as here is where things get contentious and we need to make tool choices. First of all, if a language can support the implementation of a web server then it will be able to support REST and very likely somehow

Is London hotter than Darlington?

support Swagger. We’ve always used Swagger with JVM based environments, but in this article we’ve steered away from other frameworks, such as Spring or Apache CXF, so as to implement something concisely.

Swagger with Play Play is an open source web application framework, written in Scala and Java, which follows the model–view–controller (MVC) architectural pattern. It doesn't follow the J2EE route, which means it's leaner to implement. Play should come with most of what you want built in; if it doesn't, it's easy to get the required modules as it uses SBT to manage dependencies. What makes it a good fit for REST API development is the use of routes to define resources, be they web pages or, in this case, a map of URIs to Scala functions. SQLite is easy to set up, deploy and maintain. Until requirements say differently, we'd stick to this over an H2 database. The database here will store information about stock prices or temperature. We want to put a slightly different spin on the usual CRUD-style (create, read, update and delete) hello world examples. We want to expose a REST interface that will be relatively easy to plug into a Google

What is Swagger? Swagger is a specification for documenting REST APIs. It takes out some of the guesswork, as it can define the request and response information as well as provide some useful text descriptions, along with the ability to try the APIs out. There is an HTML user interface for Swagger that, when pointed at the appropriate data, can display the information about your APIs. This appropriate data is a JSON payload that provides

all the information about your application. The JSON document is swagger.json. The interface also provides the ability to experiment with the APIs with 'Try it out!' buttons. This can help developers experiment with the APIs or test sample requests. How you provide the functionality behind this button is the trick. You can do any of the following: Provide a separate mock code base, but note

that this can lead to more code to maintain and increases the chance of discrepancies between the actual API and what has been mocked. Provide connectivity to the real APIs. This can be a safer option, but it depends what you are implementing and the cost of keeping a real API back-end. Provide a standalone implementation that has a test database or a mocked back-end.



How’s BT’s share price compared to the FTSE?

Chart. The chart should be dynamic to show data being fed into our application database and being displayed as a line chart.

Start Playing

Quick tip You may want to take the optional step of setting up Eclipse or another IDE to enable easier development by amending project/plugins.sbt; eg addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "4.0.0") can create the .settings, .project and .classpath files for Eclipse to recognise the Play project and allow it to be imported.

First, download the Play activator with:
$ wget
$ unzip
$ cd activator-1.3.10-minimal/bin/
$ export PATH=`pwd`:$PATH
Next, you'll need to change directory to where you'll create your application:
$ activator new rest-swagger-app play-scala
Fetching the latest list of templates...
OK, application "rest-swagger-app" is being created using the "play-scala" template.
To run the Activator UI for "rest-swagger-app" from the command line, "cd rest-swagger-app" then: /home/src/article/rest-swagger-app/activator ui
To make sure things are OK, ignore the above message and pick up the activator now in your PATH:
$ cd rest-swagger-app
$ activator run
Be warned that when the application is started for the first time there will be a lot of downloads from the external library repository. These are cached in $HOME/.ivy2. You should see the welcome page in your browser after visiting the application's URL. To stop activator, press CTRL+c or CTRL+d. First, let's clean up some things we don't want but that are included when a fresh project is created:
$ rm app/controllers/AsyncController.scala

This is Play’s equivalent to hello world!

$ rm app/controllers/CountController.scala
$ rm app/filters/ExampleFilter.scala
$ rm app/services/*.scala
$ rm app/Filters.scala
$ rm app/Module.scala
Edit down conf/routes to these two entries:
GET / controllers.HomeController.index
GET /assets/*file controllers.Assets.versioned(path="/public", file: Asset)
Then issue the command  $ activator run  and visit the site again to ensure that all is still working; then CTRL+d or CTRL+c again. For this project we can't, at the time of writing, use the very latest version of the Play framework, because we want to use a supported version of the Swagger Play library, which supports Play versions up to 2.4; at the time of writing, Play 2.5 had been recently released. So we must change project/plugins.sbt where it specifies the Play framework plugin to:
addSbtPlugin("" % "sbt-plugin" % "2.4.6")
Finally, we will inject route information rather than using the default static routes. Append this entry to the end of build.sbt:
routesGenerator := InjectedRoutesGenerator
This helps separate your component behaviour from dependency resolution. It's the default with Play 2.5, but not with version 2.4, which is our target.

Project specifics The aim is to get some third-party information, store it in our application database and present it as nice graphs – all with the aim of documenting the REST API so we end up with an interface that is self-documenting. There are two APIs that we will use to provide source information for our project: Open Weather Map API (https://home.openweather...). This will require you to sign up before use. After signing up you will have an API key, which the documentation shows how to use to authenticate. This API will be queried to obtain the temperature of London and Darlington. Yahoo! API: the second API will be Yahoo's, to get the stock price of a UK share (BT) and the value of the UK index, the FTSE. This will use the Yahoo Query Language (YQL), which allows for a query-style language to get information from various databases that are maintained by Yahoo. This example will extract data from the quotes table. We will use a SQLite database for storage of what we refer to as a time series. The application will store data via a POST operation and retrieve it via GET. There will not be the possibility via the REST interface to DELETE or PUT. The SQLite database will be very simple:
create table timeseries (
  id INTEGER PRIMARY KEY,
  name VARCHAR NOT NULL,
  label VARCHAR NOT NULL,
  value VARCHAR NOT NULL
);
Finally, the Swagger-Play library means at the end we just need to add some code annotations to expose our API. We will also include an initial page for loading the Swagger UI,



Explained: SOAP, REST, HTTP and JSON SOAP (Simple Object Access Protocol)

REST (Representational State Transfer)

A messaging protocol, usually over HTTP, that can expose your services and data with XML. The interface uses a WSDL (Web Services Description Language) document to define request and response information; this interface, again being XML, is the interface contract. It's a mature protocol with lots of tool support to help provide or control an interface.

A style of interface that's usually over HTTP and uses the same verbs to define operations: GET, PUT, POST and DELETE. In a nutshell, REST interfaces should be simple and stateless for scalability (ideal for the Play framework). It's not mandatory to use JSON for representing data, but it's common due to the links with web applications and REST API usage.

but the supporting JavaScript libraries for the UI will be added via dependencies, so limiting the actual source code we maintain. The Swagger UI will request the specification from the Swagger-Play library. This library will inspect the code at runtime looking for the appropriate annotations, resulting in a Swagger UI specific to your application being built. The Play Framework uses SBT as its build tool; the library dependencies are defined in the build.sbt file. Amend it to:
libraryDependencies ++= Seq(
  filters,
  "org.xerial" % "sqlite-jdbc" % "3.8.6",
  "" %% "play-slick" % "1.1.1",
  "" %% "play-slick-evolutions" % "1.1.1",
  "io.swagger" %% "swagger-play2" % "1.5.1",
  "org.webjars" % "swagger-ui" % "2.1.4",
  javaWs
)
The filters library allows us to add CORS support. This is required in our example to allow the Swagger 'Try it out!' button to work; otherwise, browsers will complain that no 'Access-Control-Allow-Origin' header is present on the requested resource. play-slick and play-slick-evolutions provide the code to create and query a local SQLite database. Play provides the ability to create the schema within a database and, because we are using SQLite, the actual database creation is very simple. Play evolutions keeps track of the state of a database and the evolution scripts. It creates a table within the database called play_evolutions. At start-up, if it detects a difference between the current state of the database and the schema SQL scripts, it will ask via the browser whether you want to apply the change. The core libraries for the annotations are provided by swagger-play2, which describes the interface and supplies the code that allows Swagger UI to query it. Swagger uses a REST interface itself to get information about the interface, and the library provides the code for inspecting the application and gathering the required information. The entry point is http://localhost:9000/swagger.json.
The swagger-ui jar contains the JavaScript needed to display the Swagger user interface, and javaWs is required to allow the application to call the external APIs. To define the application's configuration, replace the application.conf supplied with the generated project with a minimal version containing:
slick.dbs.default.driver="slick.driver.SQLiteDriver$"

JSON (JavaScript Object Notation) Originated with JavaScript but is now used by many languages. Its human readability along with suitability for web-based applications makes it a common choice for REST interfaces.

HTTP (Hypertext Transfer Protocol) Defines verbs like GET to retrieve a resource, PUT to update a resource, POST to create a new resource and DELETE to remove a resource.

Use Playful DBA to create or change the database.

.....
play.modules.enabled += "play.modules.swagger.SwaggerModule" = {
  description="a sample swagger documented api implemented on PLAY. see links <a href='/weatherchart'>/weatherchart</a> or <a href='/stockchart'>/stockchart</a>"
  title="Swagger Play with Scala and Google Chart"
}
Swagger requires two entries: play.modules.enabled and (see swagger-play/tree/master/play-2.4/swagger-play2), while we also have to define the database for the Slick library to talk to the local SQLite database. Create a file conf/evolutions/default/1.sql that describes our single table. Evolutions has a concept of what to do to upgrade and downgrade a database:
# --- !Ups

Quick tip To quickly try the tutorial out – after ensuring Java JDK 1.8 is in your path – do:
$ git clone https://.../bernardjason/rest-swagger-app.git
$ cd rest-swagger-app
$ ./activator run
Then visit and apply the Play database evolution.

create table timeseries (
  id INTEGER PRIMARY KEY,
  name VARCHAR NOT NULL,
  label VARCHAR NOT NULL,
  value VARCHAR NOT NULL
);
# --- !Downs
drop table timeseries;
Evolutions provides a reliable way to keep track of the state of a database and how to upgrade it.
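Outside of Play, the schema is easy to sanity-check with the sqlite3 command-line tool. This sketch creates the same table in a throwaway database and stores one sample reading (the London value is made up for illustration):

```shell
# Create the timeseries table in a temporary SQLite database,
# insert a sample row and read it back.
db=$(mktemp)
sqlite3 "$db" 'create table timeseries (
  id INTEGER PRIMARY KEY,
  name VARCHAR NOT NULL,
  label VARCHAR NOT NULL,
  value VARCHAR NOT NULL
);'
sqlite3 "$db" "insert into timeseries (name, label, value)
               values ('London', '2016-05-15T09:02:27Z', '15.3');"
sqlite3 "$db" "select name, value from timeseries;"   # prints: London|15.3
```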



We must map from Scala to the database and expose some functionality with the source app/models/TimeSeriesRow.scala:
case class TimeSeriesRow(id: Long, name: String, label: String, value: String)
object TimeSeriesRow extends ((Long, String, String, String) => TimeSeriesRow) {
  implicit val timeSeriesFormat = Json.format[TimeSeriesRow]
}

Let’s code The Scala case class defines our TimeSeries table model, while we override the companion object to say we have a way to create a JSON response for this class using the handy automated JSON mapping feature built into Play. We define in app/dal/TimeSeriesOperations.scala the mapping of the database schema to Scala and the data access methods:
class TimeSeriesTable(tag: Tag) extends Table[models.TimeSeriesRow](tag, "timeseries") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def label = column[String]("label")
  def value = column[String]("value")
  override def * = (id, name, label, value) <> (models.TimeSeriesRow.tupled, models.TimeSeriesRow.unapply _)
}
def create(name: String, label: String, value: String): Future[TimeSeriesRow] ...
def list(name: String): Future[Seq[TimeSeriesRow]] ...
Then app/controllers/TimeSeriesApi.scala implements an API to retrieve (and store) information:
def getByName(name: String) = Action.async {
  timeSeries.list(name).map { timeSeriesRows =>
    Ok(Json.toJson(timeSeriesRows))
  }
}
Without the Swagger annotations the code is quite small. The method getByName will hand over control to an async action. To quote the Play documentation: "Because of the way Play works, action code must be as fast as possible and not block main execution threads." The response is a future

result, which is a way of handing over control so that the execution threads can return to accepting the next call. When the future is ready, Play will return the response to the client. The method itself queries the database using the data access layer (the app/dal/TimeSeriesOperations class) and, because of what we indicated in the TimeSeriesRow companion object, it can return the response as JSON. To implement the API to get stock quotes we'll use Yahoo's YQL to query a source of stock quotes ( The API doesn't require any authentication; we just have to issue the SQL:
select * from where symbol = '?'
against the API endpoint v1/public/yql. Then the response JSON is parsed:
val dt = (yahooResponse.json \ "query" \ "created").as[String]
val lastTradePriceOnly = (yahooResponse.json \ "query" \ "results" \ "quote" \ "LastTradePriceOnly").as[String]
Note the use of the \ operator when traversing the JSON response we received from YQL. This makes it easy to get the creation date from the path query \ created, and then the quote value from query \ results \ quote \ LastTradePriceOnly. For brevity, a lot of the response has been omitted:
{"query":{"count":1,"created":"2016-05-15T09:02:27Z","lang":"en-US","results":{"quote":{"symbol":"^FTSE","Ask":.........,"LastTradePriceOnly":"6138.50",......}}}}
The weather API is similar in nature to the stock API, in that a URL is called, the JSON is parsed and the response is saved again via the TimeSeries API. However, there is an API key to authenticate against Open Weather Map. To get this key, sign up at Open Weather Map; the sign-up process will provide you with an APPID, which has to be supplied as a query parameter as defined in the API documentation. The API key is buried in the code rather than config here:
val APPID = "xxxxxxxxxxxxxxxxxxxx"
The system.scheduler.schedule call is the equivalent of Cron. Both the weather API and the stock API use a scheduled job so that the TimeSeries is automatically updated periodically.
A real-world example could use Cron and curl, Quartz or other technology as a trigger to the APIs:
POST /stock/{name}
POST /weather/{name}
For example: curl -XPOST
In this example application, we use the built-in Akka support within the Play Framework:
system.scheduler.schedule(60 seconds, 60 seconds) {
  val port = conf.getInt("http.port").getOrElse(9000)
  ws.url(s"${port}/weather/London").post("")
}
We inject the system Actor into our controller, then request that we wait 60 seconds after start-up and repeat every 60 seconds. With the function we've passed into the schedule, we call the Play REST resources using either the default port or the port set as a command-line parameter when invoking Play.

Add Swagger annotations

Swagger’s representation of weather API with the ability to ‘Try it out’.


You can annotate the class with:
@Api(value = "/weather", description = "simple api to get the weather we are interested in")
And a method:
@ApiOperation(value = "a wrapper api to populate the weather for a city…",


notes = "Returns timeseries point", response = TimeSeriesRow, httpMethod = "POST")
@ApiResponses(Array(new ApiResponse(code = 400, message = "something bad happened")))
The annotations in a class are enough for the Play-swagger library to inspect the code and generate the information for the Swagger UI. If the structure of the response or request changes, Swagger will keep up with the changes without the need to update a separate document. You may need to enhance the descriptions, but it can't do everything.

Routes For Play to know what to advertise to browsers and other consumers, and for the Play-swagger library to know what endpoints need to be queried, you have to provide a list of endpoints in conf/routes, with methods and what they map to:
GET / controllers.HomeController.swagger
GET /weatherchart
GET /stockchart controllers.HomeController.stock
GET /swagger.json controllers.ApiHelpController.getResources
The first three provide endpoints for the browser: the default / for the Swagger UI, while the other two provide web pages showing the two Google charts. The final endpoint exposes the REST resource created by Play-swagger that describes the REST interfaces and is used by the Swagger UI.
GET /timeseries/:name controllers.TimeSeriesApi.getByName(name)
POST /timeseries/:name controllers.TimeSeriesApi.postByName(name)
POST /weather/:name controllers.WeatherApi.addWeather(name)
POST /stock/:name controllers.StockApi.addStock(name)
These four endpoints define the three resources with four supported methods. This is how Play transfers control to our API application code. Finally, test it all out with:
$ activator run
and visit the /weatherchart and /stockchart pages.

Cloud deploy The example we have been using has so far been run locally, using a simple database that requires no maintenance. With some minor modifications, this application can be deployed on the Heroku platform (among others – see the Play documentation's DeployingCloud page for the full list of options), accessible to the internet for free. There are a number of jobs to address to get our example running on Heroku: Get a Heroku account. Get the Heroku toolbelt – this is a command-line utility for creating and maintaining applications on the Heroku platform; make sure you follow the instructions to log the toolbelt into your Heroku account. Change conf/application.conf so that it uses Postgres – using SQLite has disadvantages, too! Add the Postgres library to the build.sbt dependencies.

References The full source code is here:

You can leave the SQLite library in there, too; the default driver picked is SQLite, but we override this with the Heroku Procfile. Create a Procfile that gives Heroku some instructions on how to start the application. Two important ones are to not perform the automatic Play evolutions step, as the database schema is for SQLite and needs modification for Postgres, and to define the HTTP port for the application so that the scheduled job can call the REST resources. The Procfile also defines the Postgres driver information. The database schema differs between Postgres and SQLite due to the way each generates the ID field, so we cannot use Play evolutions for both. The Procfile and the changes for conf/application.conf are already applied in the example repository. Basic commands to execute:
$ git init
$ git add .
$ git commit -m 'before push to Heroku'
$ heroku create
Creating app... done, secure-retreat-29275 | https://git.
$ git push heroku master
After much scrolling, a message similar to this is displayed:
remote: Procfile declares types -> web
remote: -----> Launching...
remote: Released v4
remote: deployed to Heroku
remote:
remote: Verifying deploy... done.
To
 * [new branch] master -> master
Now fix the database using the psql Postgres terminal:
$ heroku pg:psql
create table timeseries (
  id SERIAL,
  name VARCHAR NOT NULL,
  label VARCHAR NOT NULL,
  value VARCHAR NOT NULL
);
\q
Now restart the application, as the database has been created, with  $ heroku ps:restart . To open the main page in your system's default browser, enter  $ heroku open . To check if anything has gone wrong, just look at the logs with  $ heroku logs . Even if you don't deploy to the cloud, or decide to develop using Java and the Play framework, you can use Swagger to document your API, or use other Swagger implementations not built around Play to develop new APIs. LXF

Summer 2016 LXF214 87


Rust: A guide to concurrency
Mihalis Tsoukalos explains the complex subject of concurrency, covering all that you need to know to start programming using threads in Rust.

Our expert
Mihalis Tsoukalos (@mactsouk) has an M.Sc. in IT from UCL and a B.Sc. in Mathematics. He's a DB-admining, software-coding, Unix-using, mathematical machine. You can reach him at www.

This is a small section of the extensive documentation page for std::thread, the standard Rust module that deals with thread manipulation.

Quick tip
Concurrency is when multiple computations are being executed at the same time on a computer. Sometimes, they might need to interact with each other, which can be very tricky and needs special care.

This month we'll cover concurrent programming in Rust. Concurrency and threads are first class citizens in Rust and you should use concurrent techniques as a first resort when solving a problem. Bear in mind, however, that usually there's more than one way to solve a problem concurrently! So, when developing concurrent applications, think about their design and consider that the most important property of a concurrent application is making functions as autonomous and independent as possible so that they can be executed in parallel.

This tutorial uses the current stable release of Rust, which is 1.9. The next stable release of Rust is scheduled to come out on 7 July, but if you want to try version 1.10 now, you can install the beta already. This tutorial will not use Cargo to create the presented programs because it's not necessary. As you already know, Cargo is more useful when you need to use external crates.

Concurrency is a complex subject that Rust tries to make easier than it usually is. The difficulty with concurrency is that 'a shared mutable state is the root of all problems', and to deal with that problem Rust takes a different approach: instead of dealing with the 'mutable' part, it deals with the 'shared' part. Put simply, the compiler will let you know about any errors in the 'shared' part, so if your Rust code compiles, you'll have no such errors and, as a result, it isn't dangerous to have shared memory among threads in Rust!

Concurrency in Rust
Look at the following Rust code:
&foo
&mut bar
The first line defines a reference to a variable named foo that's shared but immutable, whereas the second line defines a reference to a variable named bar that is mutable but not shared! Neither of these variables can generate a race condition when used in concurrent programming: the first because it's immutable, so it can only be read, and the second because it's not shared. This approach is fundamentally different from the C approach. The question now is what happens when you need variables that are both shared and mutable? Keep reading to find the answer.

Rust starts a new thread using the thread::spawn function, which is part of the std::thread module. (The documentation page of the std::thread module is pictured, above.) The thread::spawn function returns a JoinHandle structure that's 'an owned permission to join on a thread'. In other words, JoinHandle provides a way to join the child thread. Most often, you will use the provided JoinHandle to call the join() method to make the main() function wait for the thread to finish.
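The shared-versus-mutable distinction can be sketched in a few lines. This is our own minimal example (the names foo and bar simply mirror the snippet above), not code from the tutorial:

```rust
fn main() {
    let foo = 1;
    let r1 = &foo; // shared but immutable: any number of readers is fine
    let r2 = &foo;
    println!("foo seen through two references: {} {}", r1, r2);

    let mut bar = 2;
    let m = &mut bar; // mutable but exclusive: no other reference may exist
    *m += 1;
    // let r3 = &bar; // uncommenting this is a compile-time error:
    //                // cannot borrow `bar` while `m` is still in use
    println!("bar after mutation: {}", m);
}
```

Because every reference is either shared-and-immutable or exclusive-and-mutable, a data race simply cannot be expressed in safe code.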

Concurrency vs Parallelism
It is a common misconception that concurrency and parallelism are the same thing. As you will see, concurrency is actually better! Parallelism is the simultaneous execution of multiple things, whereas concurrency is a composition of independently executing things. In other words, concurrency is about structuring a thing in a way that it can be executed in parallel. So, the goal of concurrency is not parallelism but creating things with an independent and good structure. It is only when you manage to make two or more things that you build in a concurrent way execute simultaneously that you have parallelism. What is being executed is usually a function – this isn't compulsory but is very helpful because functions, when implemented correctly, allow for a somewhat autonomous execution. In concurrency, adding more concurrent entities can make things run faster and better – in other words, you improve the performance of a program by adding concurrent procedures to an existing design, so the parallelism comes from a better concurrent expression of the problem. It's the job of the developer to take concurrency into account when designing a solution to a problem in order to benefit from parallelism. Even if you cannot run your procedures in parallel, you'll still benefit from a valid concurrent design because a future system might be able to execute multiple things! Remember when CPUs had only one core? So, when doing concurrency, you should not think about parallelism but about breaking things into independent components that, when composed, solve the original problem.

Instead of starting with something that works, it would be more interesting to begin with something that doesn't work and start appreciating the Rust compiler, which is aware of concurrency. The following code looks valid:
use std::thread;

fn main() {
    let my_array = [1, 2, 3, 4, 5];
    let join_handle = thread::spawn( || {
        println!("The array is {:?}", my_array);
    });
}
Unfortunately, trying to compile it fails with the following error message:
9:3 error: closure may outlive the current function, but it borrows `my_array`, which is owned by the current function [E0373]

Compiler help
So the Rust compiler tells you what the error is: you are trying to borrow the my_array variable, which is owned by another function. This isn't allowed by the type system of Rust, and the compiler is here to make sure it doesn't happen! The following code will compile just fine:
use std::thread;

fn main() {
    let my_array = [1, 2, 3, 4, 5];
    let join_handle = thread::spawn( move || {
        println!("The array is {:?}", my_array);
    });
    join_handle.join();
}
All the difference here is made by the move keyword, which tells Rust that instead of just borrowing my_array you want the closure – the anonymous function passed to thread::spawn – to take ownership of it. As you will see in a while, the join() call is really important, despite the fact that it has nothing to do with the error message you saw earlier. Please note that {:?} prints the debug representation, whereas {} prints the user representation of a variable when used in println! . The general idea here is to try not to fight with the compiler; you will have much better results if you consider the Rust compiler your friend! The presented example will just display text coming from multiple threads on the screen. The Rust code is:

use std::thread;
use std::time::Duration;

fn main() {
    for _ in 0..5 {
        thread::spawn( || {
            thread::sleep(Duration::from_millis(4000));
            println!("Hello from thread!");
        });
    }
}

Quick tip
Concurrent programming is important because it allows things to run in parallel in a way that increases the performance of a program. However, you must design your software with concurrency in mind – a very simple prerequisite is that functions must be autonomous and independent.

A simple example
As you already know, you can create a new thread with the help of the std::thread module, which contains the thread::spawn function. The reason for using thread::sleep() is to mimic the processing time of a function, because real functions don't just print a message and exit. The delay introduced by sleep() in this case is four seconds. The for loop is used to create five threads. The reason for using an underscore (_) instead of a real variable in the for loop is that you don't need any of the values in the 0..5 range. If you use a variable, Rust will let you know that it's not being used:
6:7 warning: unused variable: `i`, #[warn(unused_variables)] on by default
And the output is the following:

The output demonstrates that you cannot predict the order of execution of the threads.



$ time ./simple
real 0m0.004s
user 0m0.000s
sys 0m0.000s
If you look closely at the Rust code, you would expect to get five messages on screen, because five threads are created. However, the program produces no output! As if this were not enough, it finishes in much less than four seconds. The root of the problem is simple: as soon as the main program exits, all of its threads are automatically terminated, so none of them has enough time to execute println!() . Therefore, the current implementation isn't that good; the next section will try to improve it.

Quick tip A process is an execution environment with instructions, user-data and system-data parts and other kinds of resources that are obtained during runtime. A quick and naïve way to differentiate a thread from a process is to consider a process the binary file and a thread a subset of a process.

This is the documentation page of std::sync::Arc, which automatically deals with the number of references to a variable.

Giving threads time to finish
The most difficult task when you are dealing with lots of threads is synchronising them and giving them time to finish their jobs. As you already know, the return value of thread::spawn is a JoinHandle structure that can be used for thread synchronisation. In this case, you'll use the JoinHandle structure to make the main() function synchronise with its threads – this will give the threads the time they need to print their messages and will let the main program know when all threads have finished their jobs. After that, the main() function can exit peacefully and with a clear conscience. The following code is based on the previous example:
use std::thread;
use std::time::Duration;

fn main() {
    let mut my_threads = vec![];
    for i in 0..5 {
        let join_handle = thread::spawn( move || {
            thread::sleep(Duration::from_millis(4000));
            println!("Hello from thread {}", i);
        });
        my_threads.push(join_handle);
    }
    for join_handle in my_threads {
        let _ = join_handle.join();
    }
}

As you can see from the code, it was relatively easy to correct the problem. What you need to do is collect the return value of each call to thread::spawn , store them in a vector named my_threads , and use the join() function afterwards to give the threads time to do their jobs. Executing the program generates the expected output with a small surprise:
$ time ./synch
Hello from thread 3
Hello from thread 2
Hello from thread 0
Hello from thread 4
Hello from thread 1
real 0m4.005s
user 0m0.000s
sys 0m0.000s
The surprise here is the fact that the threads aren't executed in the same order they were created – this happens for a lot of reasons, but it's mainly due to the way Unix works, so never make assumptions about the order of execution when dealing with threads! If you have more threads, the results are even more unpredictable (as you can see bottom, p90).

This section will present an example where all threads need to access the same variable, in this case an array like the one you saw earlier; the rest of the code is based on the previous example. The code of the new example is the following:
use std::thread;
use std::sync::Arc;

static NTHREADS: i32 = 5;

fn main() {
    let my_array = Arc::new([-1, -2, -3, -4, -5]);
    let mut my_threads = vec![];
    for i in 0..NTHREADS as usize {
        let ar = my_array.clone();
        let join_handle = thread::spawn( move || {
            println!("The {} element of the array is {}", i, ar[i]);
        });
        my_threads.push(join_handle);
    }
    for join_handle in my_threads {
        let _ = join_handle.join();
    }
}
Arc stands for Atomic Reference Counting and is an atomically reference counted wrapper for shared state. Put simply, when you use the clone() call on an Arc<T> , it creates another pointer to the data and increases the reference counter. This happens each time the for loop is executed. As there are many pointers pointing to the variable, multiple threads can read it. You can learn more about std::sync::Arc at its documentation page (see left).
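To watch the reference counting itself, here is a small sketch of ours (not from the tutorial) that inspects Arc::strong_count before and after cloning:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(vec![10, 20, 30]);
    println!("count before cloning: {}", Arc::strong_count(&data)); // 1

    let mut handles = vec![];
    for i in 0..3 {
        let clone = data.clone(); // each clone() bumps the counter
        handles.push(thread::spawn(move || clone[i]));
    }
    // The exact count here is timing-dependent: threads that have
    // already finished have dropped their clones, so it is 1 to 4.
    println!("count while threads run: {}", Arc::strong_count(&data));

    let sum: i32 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    println!("sum of the elements: {}", sum); // 10 + 20 + 30 = 60
}
```

When the last clone is dropped, the count reaches zero and the underlying data is freed – no garbage collector required.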
Executing the program produces the following output:
$ ./access
The 0 element of the array is -1
The 1 element of the array is -2
The 4 element of the array is -5
The 2 element of the array is -3
The 3 element of the array is -4
However, you are still unable to change any of the values in my_array , even if you make it mutable, because of the Rust type system:
14:21 error: cannot assign to immutable indexed content
14 ar[i] = ar[i] + 1;

Unsafe blocks of code
If you use a mutable static global variable, then you will need to put it in an unsafe block of code. You mainly use unsafe blocks of code for low level operations that you don't want to be handled by the strict type system of Rust. The following code illustrates the concept of the unsafe block:
static mut MONEY: i32 = 500;

fn main() {
    unsafe {
        MONEY += 10;
        println!("MONEY: {}", MONEY);
    }
}
Here, the static global mutable variable is MONEY . Being mutable means that its value can change. If you're using multiple threads, then one thread can read this variable while another one is changing its value, which may result in unsafe memory – for this reason you have to use an unsafe block. The good thing is that very little code needs to be unsafe.
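As an aside, on Rust versions newer than the 1.9 used in this tutorial, a counter like the MONEY example in the box can be shared and mutated with no unsafe block at all by using an atomic type. This is our own sketch, not code from the article (the AtomicI32 type was stabilised well after Rust 1.9):

```rust
use std::sync::atomic::{AtomicI32, Ordering};
use std::thread;

// An atomic integer may be mutated through a shared reference,
// so no `unsafe` block is needed even with many threads.
static MONEY: AtomicI32 = AtomicI32::new(500);

fn main() {
    let mut handles = vec![];
    for _ in 0..10 {
        handles.push(thread::spawn(|| {
            MONEY.fetch_add(10, Ordering::SeqCst); // atomic read-modify-write
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    println!("MONEY: {}", MONEY.load(Ordering::SeqCst)); // 500 + 10*10 = 600
}
```

Unlike the static mut version, this cannot lose updates, because each fetch_add is a single indivisible operation.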

About sharing
This section will show you an easy way to solve this problem. We'll illustrate a technique that allows multiple threads to write to a shared variable using a Mutex. This is a mutual exclusion flag that allows a single thread at a time to do something critical – in this case, change the value of a shared variable while the other threads wait for the same mutex to become available so they can grab it and do their jobs. You can achieve mutation using either RwLock<T> or Mutex<T> . The former allows multiple readers but only one writer, whereas the latter allows only one reference to the shared resource. In both cases, only one thread at a time can modify the shared resource. The code of the example, which uses Mutex<T> , is:
use std::thread;
use std::sync::Arc;
use std::sync::Mutex;

static NTHREADS: i32 = 5;

fn main() {
    let my_array = Arc::new(Mutex::new([-1, -2, -3, -4, -5]));
    let mut my_threads = vec![];
    for i in 0..NTHREADS as usize {
        let mutex = my_array.clone();
        let join_handle = thread::spawn( move || {
            let mut ar = mutex.lock().unwrap();
            ar[i] = ar[i] + 2;
            println!("The {} element of the array is {}", i, ar[i]);
        });
        my_threads.push(join_handle);
    }
    for join_handle in my_threads {
        let _ = join_handle.join();
    }
}
Most of the code is the same as before. The main differences are that you need to define my_array differently in order to make the Mutex available for use, and that you introduce the ar variable inside the closure instead of outside. Additionally, the ar variable is defined using let mut , which makes it mutable. The output is:
$ ./share
The 0 element of the array is 1
The 1 element of the array is 0
The 3 element of the array is -2
The 2 element of the array is -1
The 4 element of the array is -3
As you can see, the value of each array item is increased by two, with a single thread processing each array item.
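The RwLock<T> alternative mentioned above can be sketched like this (our own example, not from the tutorial): writers take an exclusive lock with write(), while any number of readers can hold read() locks at once:

```rust
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let shared = Arc::new(RwLock::new([-1, -2, -3, -4, -5]));
    let mut handles = vec![];

    for i in 0..5usize {
        let lock = shared.clone();
        handles.push(thread::spawn(move || {
            // write() blocks until no other readers or writers remain.
            let mut ar = lock.write().unwrap();
            ar[i] += 2;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }

    // read() locks can be held by many threads simultaneously.
    let ar = shared.read().unwrap();
    println!("{:?}", *ar); // [1, 0, -1, -2, -3]
}
```

RwLock pays off when reads vastly outnumber writes; if every access is a write, as here, it behaves much like a Mutex.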

This illustrates the use of channels in Rust as presented at http://rustbyexample.com. Channels allow threads to talk to each other.

About channels
Finally, we'll briefly mention another technique that allows you to create threads that communicate with each other. The new concept is called a channel and is used by Rust to provide asynchronous communication between threads. (The screenshot above shows a simple example as well as its output.) This is the last tutorial in the Rust series. The best thing you can do now is start writing real applications in Rust using what you have learnt from these tutorials. Good luck and have fun with Rust! LXF
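As a parting sketch, the channel communication just described might look like the following. This is our own minimal example built on std::sync::mpsc, not the one pictured above:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // A channel has a sending half (tx) and a receiving half (rx).
    let (tx, rx) = mpsc::channel();

    for i in 0..3 {
        let tx = tx.clone(); // every thread gets its own sender
        thread::spawn(move || {
            tx.send(i * 10).unwrap();
        });
    }
    drop(tx); // drop the original sender so the receiver knows when to stop

    let mut total = 0;
    for value in rx { // iteration ends once all senders are gone
        total += value; // arrival order is not guaranteed
    }
    println!("total = {}", total); // 0 + 10 + 20 = 30
}
```

Because ownership of each message moves through the channel, the threads never share mutable state at all – an alternative to the Arc and Mutex approach shown earlier.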


Got a question about open source? Whatever your level, email it to for a solution.

This month we answer questions on: 1 Copying to Android 2 Watching TV on a Pi 3 Boot Rescatux from USB


4 Mounting music players 5 Restarting crashed services ★ Storing files on a different drive

Much troubled protocol

I’m a complete newbie to Linux. After my laptop that was running Windows 10 (upgraded from Windows 7) completely failed on me, I decided to try out Linux, so downloaded Linux Mint 17.2 on my desktop PC and installed this on my laptop. Installation was very simple and within half an hour my laptop had been given a new lease of life. I’m enjoying it so far but have hit a couple of stumbling blocks. One being I just cannot

get Linux Mint to recognise my Android phone (it's an HTC One M8). I've enabled USB debugging on the phone and also installed mtptools, but I get the same error every time:
Could not display "mtp://[usb:001,004]/".

Syncthing may be the simplest way to keep your phone's music collection up to date.

Enter our competition
Linux Format is proud to produce the biggest and best magazine that we can. A rough word count of LXF193 showed it had 55,242 words. That's a few thousand more than Animal Farm and Kafka's The Metamorphosis combined, but with way more Linux, coding and free software (but hopefully fewer bugs). That's as much as the competition, and as for the best, well… that's a subjective claim, but we do sell



way more copies than any other Linux mag in the UK. As we like giving things to our readers, each issue the Star Question will win a copy or two of our amazing Guru Guides or Made Simple books – discover the full range at: For a chance to win, email a question to, or post it at to seek help from our very lively community. See page 94 for our star question.

Error: No such interface 'org.gtk.vfs.Mount'
A big part of what I use the laptop for is managing my music library, so it's vital I get the computer to recognise my phone's SD card. Do you have any suggestions? Should I try a different distro, such as the new release of Ubuntu that came with LXF211?
Steve Williams

We've found connecting Linux desktops to Android phones using MTP to be unreliable. We've not tried with other OSes, so we don't know whether this is down to Linux, Android or MTP in general. First of all, make sure you have both mtpfs and libmtpfs installed. Then you have to make sure you don't have more than one program trying to access the phone. Some music players connect to the phone via MTP, which then blocks the file manager, so make sure no such programs are running.

We've given up on MTP and use wireless syncing to keep our music and other files up to date. You can do this over SSH by installing SSH Helper on the phone and openssh-server on the computer, but there is a much simpler way of doing it with the handy Syncthing. You'll need to install it on both devices first, then connect to http://localhost:8384 on your computer to set it up. Press the 'Add Folder' button and select your music folder, then give it a useful name that must be the same on each device (the folder path doesn't have to be the same). You also need to add the phone to the list of remote devices; the easiest way to do this is to select Show ID from the Actions menu, which

Answers Terminals and superusers We often give a solution as commands to type in a terminal. While it is usually possible to do the same with a distro’s graphical tools, the differences between these mean that such solutions are very specific. The terminal commands are more flexible and, most importantly, can be used with all distributions. System configuration commands often have to be run as the superuser, often called root. There are two main ways of doing this depending on your distro. Many, especially Ubuntu and its derivatives, prefix the command with sudo , which asks for the user password and sets up root privileges for the duration of the command only. Other distros use su , which requires the root password and gives full root access until you type logout. If your distro uses su , run this once and then run any given commands without the preceding sudo .

displays a QR code. Then you can run Syncthing on your phone, go to the Devices tab and press ‘+’ and then the QR icon to open the camera. Scan the QR code and the device will be added, after you accept it on the computer. Now you can share your music folder between the devices. Unless you have an unlimited data plan (and a lot of patience) you should tick the ‘Sync only on Wi-Fi’ box in the phone’s Syncthing settings. Now your music collection will be in sync between the smartphone and computer whenever the two are connected to the same network.


TV on a Pi

I have just bought a Raspberry Pi 3 and have installed OpenELEC Media centre on it. I have learnt how to install the various add-ons, via Linux Format and YouTube. I was wondering if you could recommend a decent TV tuner which is preferably a USB type of unit. I want to extend it by being able to watch and record live TV. Bearing in mind I live in Australia, I

don't know if that makes a difference to the UK system.
Derek Martin

There are a number of DVB-T USB tuners available; DVB-T is the standard for digital OTA (Freeview) television in both the UK and Australia. It appears that DVB-T is also used for HD broadcasts in Australia, while in the UK these use DVB-T2, and USB DVB-T2 receivers are in much shorter supply. You are best off sticking with a well-known and supported brand, such as Hauppauge, although we also have a Freecom USB DVB-T stick working with Linux here. The main problem with these TV sticks is that in addition to a driver in the kernel, they also require a firmware file. The problem is compounded by the way manufacturers change the chipset inside a device without changing the name or outward appearance, so it's only possible to find out what a stick actually needs by plugging it in – so either borrow one from a friend or buy from somewhere with a good returns policy.

Once you have a stick, run this in a terminal to view the system log: $ sudo journalctl -f (assuming you are using Raspbian Jessie; otherwise it's $ tail -f /var/log/syslog ). Then plug in the stick and watch the messages it gives. You'll see something like:
Direct firmware load for sms1xxx-hcw-55xxx-dvbt-02.fw failed with error -2
This is for a Hauppauge stick; what matters is the name of the firmware file. There are some firmware files in the firmware-linux package, so start by installing that. If that doesn't help, you can grab individual firmware files from git/firmware/linux-firmware.git/tree and copy them to /lib/firmware. If you can't find the firmware there, plug the name of the file along with the word download into your favourite search engine.

If you're having trouble with this, use lsusb to show the vendor and product IDs for the device; these do relate to the chipset involved. Then search for more information. It can be a lot of messing around, but most DVB-T devices are supported in one way or another these days; it's just a case of finding the files you need. The firmware files are often extracted from the Windows drivers for the devices, so it's not always possible for them to be distributed with the distro or even as a package, but they are out there. You'll know when you have cracked it as, after plugging in the device, /dev/dvb magically shows up; then you can let your recording software at it.


Rescatux on USB

I have tried several methods to create a bootable USB for Rescatux 0.40b6, including from Windows 7 and from Mint 17.3. There was no difference in behaviour. It boots to the opening screen that offers several options, which I have tried. If Hardware Detection Tool is selected, it errors with this message: Failed to load libmenu.c32 Failed to load COM32 file hdt.c32 I did extract the files from the ISO and found that libmenu.c32 and hdt.c32 are, indeed, not available. Are these known problems that have solutions or am I just the lucky one? I have successfully created bootable USBs for Linux Mint 17.3 and the Ultimate Boot CD application, so my inclination is to assume something is wrong with Rescatux. Richard The problem almost certainly lies with the way you have tried to create the bootable USB stick. You mentioned that you tried several methods, which was the case in the past, with the likes of UNetbootin

A quick reference to...

Systemd journal


Among the other services that Systemd provides is an alternative to the old syslog daemon, called journald. In many ways this works just the same: any program uses the standard system calls to write entries to the system log, and those are handled by journald. What is different is the way in which journald stores the information, using an indexed binary format rather than plain text files. Apart from using far less disk space (log entries are highly compressible), this makes extracting information from the logs much easier, eg $ journalctl -b -p err will

show all entries since the last reboot ( -b ) with a priority of error or more serious. You can run this after installing new hardware, or a kernel upgrade, to make sure everything is working correctly. It’s also possible to search on dates and times: $ journalctl --since yesterday $ journalctl --since -2h The second example shows all entries for the last two hours. You can also match on entries from a specific command, to see all entries from Cron in the journal, you could use one of: $ sudo journalctl _COMM=crond

$ sudo journalctl SYSLOG_IDENTIFIER=CROND
The first uses the executable name, the second the identifier string in the log. Options can be combined – matches are ANDed together by default – so to find all cron errors today, you could use:
$ sudo journalctl SYSLOG_IDENTIFIER=CROND --since today -p err
There's more than one journal: the system level journal and one for each user. Normal users can only read their own; users that are members of the systemd-journal, adm or wheel groups can read the system journal too. By default, journalctl outputs entries from all journals accessible to the user.


and each distro's own USB creator software. The trouble with all of these methods is that they are each tied to particular distros and the specifics of how they boot. All of that is history now and we should do all we can to forget those methods. The vast majority of ISO images are now what are known as hybrid images, which means they work on optical discs and USB sticks. The simple way to tell if an image is a hybrid is to check whether it has a partition table with fdisk (although the Rescatux download page states that the image is for CDs and USB sticks):
$ sudo fdisk -l rescatux-0.40b6.iso
Device Boot Start End Sectors Size
rescatux-0.40b6.iso1 * 64 1234943 1234880 603M
If a partition is shown, like here, this is a hybrid image and you simply copy it to a USB stick with dd:
$ sudo dd if=rescatux-0.40b6.iso of=/dev/sdX bs=4M
Note: sdX is your USB stick. You copy to the full device, not a partition on it, so it would be, eg, /dev/sdc, not /dev/sdc1. In the rare instance that an ISO image isn't a hybrid, you can make it one with isohybrid. This is part of the syslinux package, so install that first; then you can convert a bootable ISO to hybrid with $ isohybrid somedistro-1.2.3.iso . There's no need to use sudo – you are only converting a file – then you can dd the ISO to a USB stick. If you want to create a UEFI bootable image, add the -u option. For the curious, hybrid images work by putting a partition table in the first 2,048 bytes of the image, which is unused space in the CD ISO specification. When copied to a CD this is completely ignored, but it enables a USB stick to boot, as the firmware sees a bootable partition.


Musical drives

I’m having some trouble copying some MP3 music to an MP3 player. My friend has two unbranded and a Samsung YP-U2 ZB/XSA model MP3

Star Question ★


This month’s winner is Dutch_Master. Get in touch with us to claim your glittering prize!

Where is it stored?

I have created a symbolic link (symlink) from /usr/share/<dir> to /home/<user>/. The /usr/share tree is on sda and /home is on sdb. If I store a file in /usr/share/<dir>—is that file physically stored on sda or sdb? And if it’s on sda, how do I get it to sdb? I want it to survive a re-install. Yes, I can copy it over by hand, but there’s a lot of files/packages/ directories to copy, so why not be lazy? Dutch_Master

94 LXF214 Summer 2016

players. I'm trying to copy his music to his MP3 players, as he's not very technically minded. However, I'm not able to copy any of his music across to any of the MP3 players, and I'm getting a few different error messages when they don't copy over. Initially, I suspected that there might be a permissions problem, but it seems that it may be an even more obscure issue than that. Trying to copy using the graphical user interface fails with the message Access denied. Could not write to /media/30A3-239E . I tried to mount from the terminal but couldn't work out the correct device and syntax. The bizarre thing is that when I plug in the MP3 players, they seem to be recognised, as I will get a prompt from KDE asking me what I want to do with the device. I can then mount the device, look around it and see music already there. But if I try to copy or move any music across to the MP3 player, I will get the above error message. All devices appear to use the VFAT filesystem.
GeordieJedi

Rescatux, like most live CD/DVDs nowadays, uses a hybrid image. Forget the clever tricks to get it on a USB stick and use dd.

There's definitely something odd going on here. VFAT filesystems don't have any concept of users and permissions, so they are mounted with ownership assigned to the user that mounted them. So a desktop automounter should mount them with everything apparently owned by the user running the desktop. Working in the terminal is the best way to diagnose this, as the terminal will give you feedback if something goes wrong.

The first step is to identify the device node assigned to the player when you plug it in. Players like this generally appear as USB mass storage devices, so the procedure is the same as for a USB flash drive. In a terminal, run $ dmesg -w . Then connect a device and watch the terminal output, which should show the device. If you have a single hard drive at /dev/sda, the next USB device would probably be /dev/sdb1. Most USB storage devices are formatted with a single partition filling the device, but you do get some devices that are formatted with no partitioning, in which case you would be looking at /dev/sdb. The dmesg output will help uncover the truth: if a partition appears, use that, otherwise use the whole device. Now you know the device node, you can create a mount point and mount it there:
$ sudo mkdir -p /mnt/mp3
$ sudo mount /dev/sdb1 /mnt/mp3 -o uid=$USER,umask=0
In this instance, $USER will be replaced with your user name, so the device will be owned by you, and the umask=0 means every file is world readable and writeable. If the

A symbolic link is simply a pointer to a different location, so when you write to /home/user/dir you are redirected to /usr/share/dir. You can test this for yourself by using $ df -h /usr /home to see how much space is available on each filesystem, then running: $ dd if=/dev/zero of=/home/user/dir/test bs=1G count=1 This creates a 1GB file in /home/user/dir. Now run the df command again and see which filesystem has lost a gigabyte of available space. If you want all the files to be

physically stored under home, get rid of the symlink, move the directory there and then create a symlink in the new location: $ sudo rm /home/user/dir $ sudo mv /usr/share/dir /home/user $ sudo ln -s /home/user/dir /usr/share Depending on how your distro handles reinstalls – most reformat the filesystem but Ubuntu sometimes deletes the contents, which may follow the symlink – your files may be at risk during a reinstall. However, the solution is simple, delete the symlink before installing and recreate it afterwards.

Answers

mount command supplies no output, you should have successfully mounted the device; you can verify this with:
$ df -Th /mnt/mp3
You should now be able to copy files as your user. Just remember to unmount the device before you remove it. As USB drives can cache files for a while, it is safest to run:
$ sync
$ sudo umount /mnt/mp3
This will make sure that everything is written to the drive. If you find yourself doing this often, we’d suggest installing pmount. Then you can mount the drive as a user with:
$ pmount sdb1
This will mount it at /media/sdb1; use the pumount command to unmount the drive (it takes care of the sync too).

I have a problem with the sshd service which keeps failing intermittently on my Debian 8 system. When this happens I have to restart the service, but I can only do that from the machine, because sshd is no longer running! I’m trying to find the cause of the failures, but this is very difficult because the nature of the failure itself prevents me investigating it properly! Is there a way to monitor the sshd service and automatically restart it in the event of a failure? I’m not looking at this as a solution to the problem, but I’m thinking it would make it easier for me to find out why this is happening.
Duncan Burns

A couple of lines in a text file is all it takes to have Systemd automatically restart a service should it fail.

Systemd provides an easy way to monitor and restart a service, simply by adding a few lines to the service file. However, you shouldn’t modify the system-installed service files, as your changes will be overwritten the next time the package providing them is updated. Instead, you use the mechanism that Systemd provides to make additions or amendments to unit files when they are loaded. When a unit file is loaded, Systemd looks for any *.conf files in /etc/systemd/system/<unit-name>.d. These are added to the existing unit file before it is run. There are several directives that control the restarting of a service; the two of interest here are Restart and RestartSec. The first determines under what circumstances a service should be restarted, the second sets a delay before restarting (defaulting to 100ms). So to restart the service after a ten second pause, you could put this in /etc/systemd/system/sshd.service.d/restart.conf:
[Service]
Restart=always
RestartSec=10
The always setting for Restart means it will restart regardless of the manner in which the process exited (except for a shutdown by

Restarting crashed service

Help us to help you
We receive several questions each month that we are unable to answer, because they give insufficient detail about the problem. In order to give the best answers to your questions, we need to know as much as possible. If you get an error message, please tell us the exact message and precisely what you did to invoke it. If you have a hardware problem, let us know about the hardware. If Linux is already running, use the Hardinfo program, which gives a full report on your hardware and system as an HTML file you can send us. Alternatively, the output from lshw is just as useful. One or both of these should be in your distro’s repositories. If you are unwilling, or unable, to install these, run the following commands in a root terminal and attach the system.txt file to your email. This will still be a great help in diagnosing your problem.
uname -a >system.txt
lspci >>system.txt
lspci -vv >>system.txt

Systemd, of course). Other options include on-success, which only restarts the service if it exited with a return code of zero; on-failure, which restarts the service on a non-zero exit code; and on-abnormal, which is similar to on-failure but covers more exit conditions. You can see the exit code that’s sent when the service terminates by running:
$ systemctl status sshd.service
and you can set the appropriate Restart option, or leave it at always to catch all eventualities. It would also be worth using a log watcher to catch when the service exits and restarts, possibly emailing you the output from the above command to help you in discovering the cause of your problem. You could also look at using FailureAction in the unit file. See man systemd.service for a full list of directives. LXF
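To see the drop-in layout before touching a live system, you can build it under a scratch root first. This is a sketch only: $root stands in for / here, so swap it for the real root and add sudo when you do it for real.

```shell
# Build the drop-in under a scratch root so it can be inspected safely.
root=$(mktemp -d)
mkdir -p "$root/etc/systemd/system/sshd.service.d"
cat > "$root/etc/systemd/system/sshd.service.d/restart.conf" <<'EOF'
[Service]
Restart=always
RestartSec=10
EOF

cat "$root/etc/systemd/system/sshd.service.d/restart.conf"
# On the real system, make Systemd re-read its unit files afterwards:
#   sudo systemctl daemon-reload
```

The daemon-reload step matters: Systemd only picks up new or changed drop-in files when it reloads its unit configuration.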

Frequently asked questions…

GPG

What is GPG?
GPG, or GnuPG, is Gnu Privacy Guard, an open source implementation of the PGP (Pretty Good Privacy) encryption system.

So it’s an encryption program. There are plenty of those, what’s so special about this one?
It uses public-key cryptography.

What’s that then and why does it matter?
Traditional cryptography uses a single key to encrypt and decrypt data; it’s often called symmetric cryptography because of that. The problem is that once you encrypt a message with the key, you have to get the key to the

recipient, securely, in order for them to decrypt the message. Anyone who gets hold of that key can also read the message. Yet if you had a secure means of sending the key without fear of interception, you could just send the message that way.

That makes sense, so how does GnuPG solve this?
It uses asymmetric cryptography, where the encryption key comes in two parts, generally known as the public and private keys.

You only need one of them to encrypt some data, but you need both to decrypt it? How does that help?
It means you can give your public key to anyone, even publish it on the web, and anyone can then

encrypt a message to you. But you keep the private key to yourself, so no one else, not even the person that encrypted the message with your public key, can read it.

What if I want to be able to read my copy of a message I send to you?
That’s easy: encrypt it with your public key too. You can encrypt a message with as many public keys as you like, and the holders of any of the corresponding private keys can decrypt it.

Isn’t that going to make for some huge files if it contains a copy encrypted for each of several recipients?
That’s the clever bit: GnuPG and PGP don’t encrypt the whole file with the asymmetric key (that’s a

very intensive process). Instead they use standard symmetric encryption with a large, random, one-time key, used only for that one message. Then the message header contains a copy of that key encrypted with each of the public keys, so adding recipients only increases the header size. This solves the key-distribution problem of symmetric encryption: getting the key securely to the recipient.

What software do I need to use GnuPG?
There is a command-line program, called gpg, but many programs, especially mail clients, are able to encrypt and decrypt messages as you send and receive them, making the whole process quite painless once set up.
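The multi-recipient behaviour described above can be tried from the command line. This is a sketch using a throwaway keyring and made-up identities (Alice and Bob are illustrative, and the exact key-generation options assume GnuPG 2.1 or later):

```shell
# Scratch keyring so nothing touches your real keys
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"

# Two throwaway, passphrase-free example keys
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Alice <alice@example.com>' default default never
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Bob <bob@example.com>' default default never

echo 'meet at noon' > notes.txt
# One encrypted file, readable by either recipient's private key
gpg --batch --yes --trust-model always \
    -r alice@example.com -r bob@example.com --encrypt notes.txt
gpg --batch --quiet --decrypt notes.txt.gpg
```

Adding the second -r recipient only grows the small header of notes.txt.gpg, not the encrypted body, which is the hybrid scheme at work.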

Summer 2016 LXF214 95

On the disc Distros, apps, games, books, miscellany and more…

The best of the internet, crammed into a phantom-zone-like 4GB DVD.

Distros
You may have noticed fewer 32-bit distros on our DVDs. Many of the smaller distro projects have dropped support for it. Testing on two separate architectures has become more than many of them can manage given the dwindling 32-bit user base. Now the larger distros are following suit. The current Ubuntu 16.04 desktop edition is the last that will be available with both 32-bit and 64-bit installers—16.10 will be 64-bit only. Other distros, such as OpenSUSE and even Debian, are going down the same road. All is not lost for 32-bit users: you’ll still be able to install Ubuntu 32-bit from a netinstall disc and add the desktop packages, but there will be no more 32-bit live disc and easy installer. Ubuntu 16.04 is an LTS release, so packages will be available for a while, but with 18.04 the plan is to drop 32-bit entirely. Bear in mind that 64-bit desktop CPUs became available in 2002 and were reasonably priced within a couple of years. By 2018 the AMD64 architecture will be sixteen years old—hardly bleeding edge. Still, there will always be Gentoo for those that want to build their own 32-bit distro, although the compile cycle may well finish off the hardware!

Minty minty fresh

Linux Mint 18 Cinnamon
We have an extremely refreshing DVD for you this month: three distros with a minty flavour. The first is Linux Mint 18 (codenamed Sarah). Linux Mint had a massive surge in popularity when Ubuntu switched to the Unity interface and upset the hordes of Gnome 2 diehards, who looked for an alternative. Unity has become mainstream, and popular, now, but the people at Linux Mint were clearly doing more than providing an alternative desktop, as the distro is still extremely popular among home users. While there are various desktop variants available, or will be by the time you read this, the two desktops that define Mint are Cinnamon and Mate. Both are more modern takes on the Gnome 2 experience, but in subtly different ways. Cinnamon is the fresher, more modern desktop—with hardware requirements to match. Cinnamon needs reasonable 3D hardware rendering; it will run on unaccelerated video hardware, but it has to use software rendering, which slows things down. Run it on a half-decent graphics card with 3D acceleration, such as Nvidia, AMD or Intel, and it’s really nice. As it requires fairly modern hardware, although by no means cutting edge, we have included the 64-bit version of Mint 18 Cinnamon. If you only have a 32-bit processor, or a low-end graphics card, you may prefer Mate. While each desktop is available in 32- and 64-bit versions, we only have room for one flavour of each, so it made sense to include the 64-bit version


Notice! Defective discs

For basic help on running the disc or in the  unlikely event of your Linux Format  coverdisc being in any way defective,  please visit our support site at:  Unfortunately, we are unable to offer  advice on using the applications, your  hardware or the operating system itself.



of Cinnamon and the 32-bit Mate release. If you want to try Mate on a 64-bit system, by all means use the version on the LXFDVD. You’ll find that 64-bit hardware is fully backwards compatible and will quite happily run a 32-bit OS. However, if you decide you like the Mate desktop and you have a 64-bit system, you are better off with a proper 64-bit OS. That doesn’t mean you have to download the full 64-bit Mate ISO image and burn it to a DVD: you can install the Cinnamon version from the DVD, then go into the Software Manager and install the mint-meta-mate package, which will download and install the Mate desktop. Then go to System Settings > Login Window > Options and set the default desktop to Mate.

If you want 64-bit Mint Mate, just install the Mate metapackage from the Software Manager.

New to Linux? Start here

What is Linux? How do I install it? Is there an equivalent of MS Office? What’s this command line all about? Are you reading this on a tablet? How do I install software?

Open Index.html on the disc to find out

Friendly and minty fresh


Linux Mint 18 Mate
This is the latest Linux Mint release with the Mate desktop environment, and it’s a 32-bit distro suitable for old and new hardware. The Mate desktop bears a strong visual resemblance to the old Gnome 2 desktop, but it’s been brought more up to date. It’s fast and quite light, and makes a good choice for those who find Xfce and LXDE just a bit too lightweight. If you want to try Cinnamon on a 32-bit system, you can install the distro in a similar way to that described on the previous page, except the package to install this time is mint-meta-cinnamon.

And more!

System tools Essentials

Checkinstall Install tarballs with your  package manager. Coreutils The basic utilities that should  exist on every operating system. HardInfo A system benchmarking tool. Kernel Source code for the latest stable  kernel release, should you need it. Memtest86+ Check for faulty memory. Plop A simple manager for booting  OSes, from CD, DVD and USB. RawWrite Create boot floppy disks  under MS-DOS in Windows. Smart Boot Manager An OS-agnostic  manager with an easy-to-use interface.

Cloudy and minty fresh

Peppermint 7
Cloud computing has its advantages, offloading storage and some of the work to a central server, with your data accessible from anywhere with an Internet connection. However, as any Chromebook owner will tell you, things can get a bit tricky when you lose connectivity. Peppermint tries to give the best of both worlds: it’s a light, stable OS that’s at home in the cloud but works as a standalone system too. It’s not just about unifying local and cloud data sources; Peppermint uses a tool called

Download your DVD from


Ice to treat web apps like local ones, making them available from the system menus and the desktop. The desktop is a bit of a hybrid, using components cherry-picked from various desktops, e.g. the Xfce panel and Cinnamon’s Nemo file manager. It sounds a bit messy, but care has been taken to make everything fit together and look part of the same desktop. Peppermint 7 is based on Ubuntu 16.04, so it has LTS support with security updates available for a good while.

WvDial Connect with a dial-up modem.

Reading matter

Bookshelf

Advanced Bash-Scripting Guide   Go further with shell scripting. Bash Guide for Beginners Get to grips  with Bash scripting. Bourne Shell Scripting Guide Get started with shell scripting. The Cathedral and the Bazaar Eric S  Raymond’s classic text explaining the  advantages of open development. The Debian Administrator’s Handbook   An essential guide for sysadmins. Introduction to Linux A handy guide  full of pointers for new Linux users. Linux Dictionary The A-Z of everything  to do with Linux. Linux Kernel in a Nutshell An  introduction to the kernel written by  master hacker Greg Kroah-Hartman. The Linux System Administrator’s Guide Take control of your system. Tools Summary A complete overview  of GNU tools.


Get into Linux today! Future Publishing, Quay House, The Ambury, Bath, BA1 1UA Tel 01225 442244 Email


Editor Neil Mohr Technical editor Jonni Bidwell Operations editor Chris Thornett Art editor Efrain Hernandez-Mendoza Editorial contributors Neil Bothwick, Jolyon Brown, Nate Drake, Matthew Hanson, Bernard Christopher Jason, Alastair Jennings, Nick Peers, Les Pounder, Mayank Sharma, Zak Storey, Alex Summersby, Alexander Tolstoy, Mihalis Tsoukalos, Steven Wong Cover illustration Cartoons Shane Collinge


Commercial sales director Clare Dove Senior advertising manager Lara Jaggon Advertising manager Michael Pyatt Director of agency sales Matt Downs Ad director – Technology John Burke Head of strategic partnerships Clare Jonik


Marketing manager Richard Stephens



LXF 215

Linux is 25!

will be on sale Thursday 1 Sept 2016

The world’s favourite kernel hits a quarter of a century  of development. We chart how it conquered the world!

Best distros ever!

Production controller Nola Cokely Production manager Mark Constance Distributed by Seymour Distribution Ltd, 2 East Poultry Avenue, London EC1A 9PT Tel 020 7429 4000 Overseas distribution by Seymour International


Senior Licensing & Syndication Manager Matt Ellis Tel + 44 (0)1225 442244


Trade marketing manager Juliette Winyard Tel 07551 150 984

Subscriptions & back issues

UK reader order line & enquiries 0344 848 2852 Overseas order line & enquiries +44 344 848 2852 Online enquiries Email


Managing director, Magazines Joe McEvoy Editorial director Paul Newman Group art director Graham Dalzell Editor-in-chief, Technology Graham Barlow LINUX is a trademark of Linus Torvalds, GNU/Linux is abbreviated to Linux throughout for brevity. All other trademarks are the property of their respective owners. Where applicable code printed in this magazine is licensed under the GNU GPL v2 or later. See

We look at the distros that shaped the Linux world and  how you can run them again. There will be Slackware.

Compile a kernel

Copyright © 2016 Future Publishing Ltd. No part of this publication may be reproduced without written permission from our publisher. We assume all letters sent – by email, fax or post – are for publication unless otherwise stated, and reserve the right to edit contributions. All contributions to Linux Format are submitted and accepted on the basis of non-exclusive worldwide licence to publish or license others to do so unless otherwise agreed in advance in writing. Linux Format recognises all copyrights in this issue. Where possible, we have acknowledged the copyright holder. Contact us if we haven’t credited your copyright and we will always correct any oversight. We cannot be held responsible for mistakes or misprints. All DVD demos and reader submissions are supplied to us on the assumption they can be incorporated into a future covermounted DVD, unless stated to the contrary.

With all this kernel talk it’d be a travesty not to explain  how to compile your own – it’s not that scary!

GPS explained

Get to grips with geolocation technology and use it   to create new projects that know where they are.

Disclaimer All tips in this magazine are used at your own risk. We accept no liability for any loss of data or damage to your computer, peripherals or software through the use of any tips or advice. Printed in the UK by William Gibbons on behalf of Future.

Future is an award-winning international media group and leading digital business. We reach more than 57 million international consumers a month and create world-class content and advertising solutions for passionate consumers online, on tablet & smartphone and in print. Future plc is a public company quoted on the London Stock Exchange (symbol: FUTR).

Chief executive officer Zillah Byng-Thorne Non-executive chairman Peter Allen Chief financial officer Penny Ladkin-Brand Managing director, Magazines Joe McEvoy

We are committed to only using magazine paper which is derived from well-managed, certified forestry and chlorinefree manufacture. Future Publishing and its paper suppliers have been independently certified in accordance with the rules of the FSC (Forest Stewardship Council).

Contents of future issues subject to change – we might still be installing Slackware v1.0.


Tel +44 (0)1225 442 244


