


Pages of tutorials and features Automate Ubuntu tasks Build a Pi coffee machine Get precise time with GPS Coding Academy: Phoenix framework and Jenkins data

Get into Linux today!

STOP THE HACKERS Build a digital browsing bunker Mask and hide your online identity Encrypt your files and messages Browse anonymously with Tails

Pi Zero HAT Reviews

Raspberry Pi Secure messaging

Enviro pHAT

One Time Pad Cryptography

Raspberry Pi Sonic Pi Tutorial

The Pirates of Sheffield bring us another board using their own pHAT standard for smaller add-on boards for the Pi.

In brief...

Reviews

Python/Sonic Pi: Use a dance mat

Les Pounder loves data and is always looking for new ways to integrate data into his latest project. Perhaps this time this board can fulfil his needs?


Giving you your fill of delicious Raspberry Pi news, reviews and tutorials

Enviro pHAT
A Raspberry Pi Zero-sized add-on board that offers sensors to read temperature, pressure, light levels, colours, orientation and compass headings. Designed primarily for the Raspberry Pi Zero, it's compatible with all 40-GPIO-pin Raspberry Pi boards. It's also worth noting that the board uses three pins for I2C (a simplified communication protocol), which means the other pins are free, and it is possible to attach the board to older Pis using jumper cables or a breakout board.

One Time Pad cryptography
Nate Drake explains the secrets of perfect cryptography: exchanging messages encoded with truly random numbers. Provided no one else sees the pad, not even the world's fastest supercomputer could decode the encrypted text without knowing the numbers. Microphones, mind you, can record your keystrokes from yards away...

USB dance mat
Les Pounder busts out his best moves on the dance floor and explains how to create music (or something like it) using a dance mat to control Sonic Pi.


10 million Pis sold


The Raspberry Pi Foundation has passed another monumental milestone: the sale of the 10,000,000th Raspberry Pi. From a modest initial batch of 10,000 boards, the Pi has grown to become the best-selling British computer ever – a machine designed in a garage that's now mentioned in the same breath as the Commodore 64. Sales surged with the B+ and A+; on its third birthday, with the Pi 2 newly released, sales swelled to 4.5 million; late 2015 saw the Pi Zero take them to 8 million; and the Pi 3 was launched in 2016 on the Pi's fourth birthday. The Foundation is, of course, a charity, and profits go back into teaching children computing skills and into teacher training. Nothing, it seems, can stand in the way of our insatiable appetite for Raspberry Pi!

African Pi labs
Dominique Laloux has spent several years working to bring technology to schools in Togo, West Africa. His latest project refitted a schoolroom as a 21st-century classroom, kitted out with 21 Raspberry Pis – one for each student and another for the teacher – plus an LED projector, powering education for the students of West Africa.

Pi cucumbers
The son of a Japanese cucumber farmer has used TensorFlow, Google's deep-learning AI library, together with a Raspberry Pi, to automatically sort his family's crop of spiky cucumbers. An amazing story outlining how it was done can be found on Google's blog.

PiBakery
Les Pounder, maker and creator of PiBakery, specialises in making Pi setup painless. PiBakery is a blocks-based tool for customising Raspbian SD cards via a Scratch-inspired interface: drag in blocks for tasks such as connecting to Wi-Fi, changing the password or installing a VNC server, and blocks such as 'On First Boot' and 'On Every Boot' allow setup scripts to run as soon as your Pi powers up. Work that would typically be done manually is handled for you, and all of the install scripts can be downloaded from source. Head to and follow @PiBakery on Twitter to stay up to date with PiBakery news.

Enviro pHAT
The Enviro pHAT is a compact sensor board designed to sit atop the Raspberry Pi. Its small, pHAT-sized profile enables it to sit flush with the Pi Zero and be integrated into existing projects, but it works with any board that has the 40-pin GPIO. The board comes with five sensors.

The BMP280 sensor can be used for temperature and pressure, working in absolute temperatures of -40C to +85C and pressures of 300 to 1100 hPa (hectoPascals). Next is the TCS3472 light and RGB colour sensor, which can provide a reading of light levels and colours. The ADS1015 4-channel analog-to-digital converter (ADC) enables the use of analog sensors with the Pi, which does not have an analog input by default. Attaching a sensor to the ADC will provide a reading: if it uses 3.3V logic, simply attach the sensor to the small pads present on the board; if your sensor uses 5V logic, this can be accommodated using a voltage divider, commonly made from two resistors of equal value. Last is an LSM303D accelerometer and magnetometer: the accelerometer detects motion and the orientation of the board, and a compass heading can be taken using the magnetometer. There are also two white LEDs that can be controlled in code.

The board uses the I2C pins, meaning the other GPIO pins remain free, and it is possible to attach the board to older Pis using jumper cables or a breakout board. Assembling the Enviro pHAT requires only basic soldering skills, and the build quality is exceptional, so the board can be easily integrated into, say, a home automation project.

Controlling the Enviro pHAT is handled via a robust Python 2/3 library that works with all of the sensors; individual sensors can be used by importing each class as needed. Installation is simple thanks to the Pimoroni install script available from its website. The sensors offer a plethora of data capture options for those who wish to take on advanced projects – orientation or heading data, for instance, could be used in gesture-controlled projects. It isn't a Sense HAT, but it offers more than enough for most data-gathering projects, and it's a great board for data logging: each of the sensors can be polled and the data recorded to a CSV file, which can then be imported into a spreadsheet application or integrated into any Python project.

Developer: Pimoroni
Web:
Price: £16
Features: 8/10
Performance: 8/10
Ease of use: 9/10
Value: 8/10

A cost-effective and easy-to-use add-on board for gathering data about your environment.

USB dance mat
This month we delve into our loft and dust off the USB dance mat that we bought in the 2000s. The goal of this project is to use the dance mat as a method of input for Sonic Pi. We can't just plug in and go, however; we first need to write some Python code that will talk to Sonic Pi's server and react to our dance moves.

To start our project, connect your keyboard, mouse, HDMI and SD card, and finally the USB dance mat to a spare USB port. Now power up your Raspberry Pi and boot to the Raspbian desktop. Once at the desktop, we need to open LXTerminal, the icon for which is in the top-left of the screen. Python Sonic is an amazing Python library that enables a Python program to communicate with the server that runs Sonic Pi, but in order to connect Python Sonic to Sonic Pi we first need to install python-osc. In the terminal type the following, pressing Enter at the end of each line:

$ sudo pip3 install python-osc
$ sudo pip3 install python-sonic

That concludes the installation of the software; the terminal can now be closed. Our next step is to open Sonic Pi, which can be found in the Programming section of the main menu. This is crucial, because Python Sonic requires that the Sonic Pi application be open for use. Minimise Sonic Pi, return to the Programming menu and select Python 3. Once Python 3 opens, click File > New and a new blank document will appear. Immediately save your work by clicking File > Save and name the file, meaning future saves will be quicker.

Inputs is a Python library created by Zeth; its goal is to simplify the use of game controllers in Python. It can be installed using the Python package manager in a second terminal:

$ sudo pip3 install inputs

In the blank document we start our code by importing the libraries that will provide the functionality required. From the Inputs library we import the get_gamepad class, which sees each press of the dance mat returned as a series of comma-separated values known as a tuple. If you've ever used Scratch to make your sprites react to input, the idea is the same: identify the values for each pad in turn, then use them to trigger Sonic Pi. The full code for this project, along with a step-by-step guide, can be downloaded from the LXF216-SonicPi-Dancemat repository (https://...).

One Time Pad
Put simply, a One Time Pad is a series of truly random numbers which you agree upon with the person you wish to communicate with, usually by meeting in person and exchanging pads. When sending a message, you first convert the text to numbers, then add these numbers to the numbers in the pad. Once the recipient receives the encrypted message, they work backwards, deducting the numbers using their copy of the pad to decrypt your original message.

Provided the numbers in the pad are truly random, kept secret and used only once, the encryption will resist all cracking attempts: not even the world's fastest supercomputer could decode the encrypted text without knowing the numbers. Better still, an intercepted message such as 'OSYAJ' could stand for 'LINUX', 'CHILE' or any other five-letter word – without the pad, no one can be certain which. One Time Pads had been kicking around since the 1880s, but it wasn't until 1917 that Gilbert Vernam devised the system that was later patented. This holy grail of cryptography isn't without pitfalls, though: Bruce Schneier once described OTPs as "theoretically secure, but not secure in a practical sense", because everything depends on the pads. During the 1940s the US SIGINT Venona project was able to decrypt some Soviet messages sent during wartime because pads had been reused – a crypto-cardinal sin committed simply because the Soviets couldn't generate pads fast enough. Cold War KGB agents, meanwhile, favoured dastardly hiding places for their pads, from hollowed-out nickels to walnuts.
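To give a flavour of the Enviro pHAT's data-logging side, here is a minimal sketch. The weather, light and motion classes follow Pimoroni's envirophat library as we understand it, so check its documentation before relying on them; the divider_output() helper simply shows the two-equal-resistor voltage-divider maths used to tame 5V sensors for the ADC.

```python
import csv
import time

def divider_output(v_in, r_top, r_bottom):
    """Voltage a divider presents to the ADC: Vout = Vin * Rb / (Rt + Rb).
    Two equal resistors halve a 5V signal to a safe 2.5V."""
    return v_in * r_bottom / (r_top + r_bottom)

def log_readings(path="enviro_log.csv", samples=10, delay=1.0):
    """Poll the sensors and record them to a CSV file for a spreadsheet."""
    # Third-party library installed by Pimoroni's script; class names
    # below follow its documentation as we understand it.
    from envirophat import weather, light, motion
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["temperature", "pressure", "light", "heading"])
        for _ in range(samples):
            writer.writerow([
                weather.temperature(),  # BMP280, -40C to +85C
                weather.pressure(),     # BMP280, 300 to 1100 hPa range
                light.light(),          # TCS3472 light level
                motion.heading(),       # LSM303D compass heading
            ])
            time.sleep(delay)
```

Run log_readings() on a Pi with the board attached; the resulting CSV imports straight into a spreadsheet.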
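For the dance mat, here is a minimal sketch of how the pieces fit together: a plain mapping from button codes to Sonic Pi samples, plus an event loop using Zeth's inputs library and Python Sonic. The BTN_* codes and sample choices are illustrative assumptions only – print the tuples your own mat produces to discover its real codes.

```python
# Button-to-sample mapping for the dance mat. The BTN_* codes and the
# Sonic Pi sample names are illustrative assumptions, not gospel.
SAMPLES = {
    "BTN_SOUTH": "DRUM_HEAVY_KICK",
    "BTN_EAST": "DRUM_SNARE_HARD",
    "BTN_WEST": "DRUM_CYMBAL_CLOSED",
}

def sample_for(code, state):
    """Return the sample name for a press, or None for a release or an
    unmapped button (state 1 = pressed, 0 = released)."""
    if state != 1:
        return None
    return SAMPLES.get(code)

def main():
    # Third-party libraries, installed earlier in the tutorial:
    #   sudo pip3 install python-osc python-sonic inputs
    # Sonic Pi must be open, as Python Sonic talks to its server.
    from inputs import get_gamepad
    import psonic
    while True:
        for event in get_gamepad():  # blocks until the mat sends events
            name = sample_for(event.code, event.state)
            if name:
                psonic.sample(getattr(psonic, name))

# Call main() on the Pi itself; it loops until interrupted.
```

The mapping is kept separate from the loop so you can test and tweak it without the hardware plugged in.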
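The One Time Pad arithmetic is simple enough to sketch in a few lines of Python. This is an illustrative toy rather than our project code: it handles uppercase A-Z only, and it leans on Python's secrets module as its noise source. The pad_for() helper demonstrates the deniability trick – for any ciphertext there is a pad that "decrypts" it to any message of the same length.

```python
import secrets

A = ord("A")

def make_pad(length):
    """Truly random shifts (0-25) from the operating system's noise source."""
    return [secrets.randbelow(26) for _ in range(length)]

def encrypt(message, pad):
    """Add each pad number to its letter, modulo 26 (uppercase A-Z only)."""
    return "".join(chr((ord(c) - A + k) % 26 + A) for c, k in zip(message, pad))

def decrypt(ciphertext, pad):
    """Deduct the pad numbers to recover the original message."""
    return "".join(chr((ord(c) - A - k) % 26 + A) for c, k in zip(ciphertext, pad))

def pad_for(ciphertext, message):
    """The pad that would 'decrypt' ciphertext to message -- every
    same-length plaintext is a possible reading of a ciphertext."""
    return [(ord(c) - ord(m)) % 26 for c, m in zip(ciphertext, message)]
```

Remember the cardinal rule: generate a fresh pad with make_pad() for every message and never reuse it.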



Ain't no stopping us now, we're on the move! The Raspberry Pi is one step away from a world record.

Our expert


The Notorious OTP

Entropy isn't what it used to be...
Computers are generally bad at generating true randomness, and entire books have been devoted to the subject. Suffice it to say that, whatever method you use, your OTP program will require a so-called "noise source" – usually an external one.

Take charge



You will need
Any model of Raspberry Pi
A USB dance mat
An internet connection

Features at a glance



Rag and bone

Giving a second life to ageing computers reduces the electronic waste in the world

Emmabuntüs on how Linux can revive old PCs

Roundup: 3D printers
Open source 3D printers for makers, schools and business

Master Kali 2016
Dive into the pro-level suite to stop the hackers

Quick tip

Pi cucumbers
Smart farming with the Pi

African Pi labs
Powering education


Rating 8/10

Use separate "Alice to Bob" and "Bob to Alice" pads to make sure you both don't accidentally encrypt messages with the same pad.

60 LXF216


One implementation of OTP encoding – you'd be quackers not to use it. "The falcon has flown."







Who knew cucumbers needed good grades?






Plus: Pi User





8-page Raspberry Pi companion

Make music with dance mats One-time password machine Enviro pHAT for real-time data

Welcome Get into Linux today!

What we do

We support the open source community by providing a resource of information, and a forum for debate. We help all readers get more from Linux with our tutorials section – we’ve something for everyone! We license all the source code we print in our tutorials section under the GNU GPL v3. We give you the most accurate, unbiased and up-to-date information on all things Linux.

Who we are

This issue we asked our experts: We’re all about privacy this month, so what trick or tool do you use to protect all your private things online?

Jonni Bidwell I have built up a series of false identities between which I alternate depending on the corner of the internet I’m visiting. There’s Pirate Jo (a scurvy seadog with a penchant for correcting people’s grammar), Sloop John B (a die-hard Beach Boys Fan) and Long John Bidwell (don’t ask).

Neil Bothwick Apart from the obligatory tin foil hat and matching (and very uncomfortable) underwear, I always have a browser tab open and spoofing as IE6. That should keep the script kiddies so busy they don’t see the important stuff, although it does slow down my network connection a little.

Nick Peers LastPass, the popular password manager, would be my response if that was allowed in this case, but it's neither open source nor (in the way I use it) free to use. There is, of course, a perfectly good open source alternative called KeePass. I could use this but I like wasting money for no reason.

Les Pounder Stay suspicious. It's not really a trick, more a state of mind as, eg emails are a major source of phishing! If it seems too good to be true, or just highly unlikely, then take a look at who sent the email. If it's your bank, ring them or use their online chat for help. Do not click on any links in the email.

Alexander Tolstoy Protection would be an issue if I was running my own mail server these days, but instead Google keeps an eye on my privates for me. One day it even banned my account for a few days for sending too large attachments. Luckily, the data I value the most is kept firmly away from the internet.

Privacy, privacy, privacy “Please, come into my home nameless government agent. Yes, I have no issues with you looking through my letters, my computers, my phone, my messages, my photos and my emails. You want to store copies on your own servers forever? Fine, fine, take what you want, what a wonderful use of my tax money!” That’s effectively the situation that’s happening every second of the day while you’re online. Without being told, or warned, or asked, it appears all of our internet traffic has been stored away by government agencies in a number of secret programmes around the globe. It’s all done to protect us, you understand, despite the documented misuse of this information – and that’s only the leaked cases we know about – and the almost zero documented cases of it actually stopping anything, but unlimited government surveillance is here. Thankfully with FOSS we have the tools, know-how and organisation to block prying eyes and protect sensitive data. In many ways, the tinfoil hat brigade were right all along, but it’s not just a matter of protecting your privacy, many of the techniques are just good practice. And securing sensitive information will help if equipment is stolen or hacked. So even if you’re fine with the government having unfettered warrantless access to all your data, there’s still reason to defend yourself. This issue we’ve focused on ways for you to protect yourself on and offline. We’re using the stalwart Tails and Tor in one hand, while in the other we have Kali Linux and encryption in its many forms. Together they offer an industrial-strength set of security solutions. 
Once you’re feeling more secure about things, we have a host of practical projects this issue: build a custom coffee machine with bespoke touchscreen for your artisan espresso, get accurate time through GPS, build a blog with Cassandra from Apache, leverage Jenkins for big data, automate Linux with Bash scripts, discover terminal shortcuts, get the best 3D printer and much more!

Neil Mohr Editor

Subscribe & save!

On digital and print, see p28

October 2016 LXF216 3


“These programmes were never about terrorism… They’re about power.” – Edward Snowden

Reviews Onda OBook 10 SE........... 17 Us? Buy hardware from a Chinese retailer? Never, but that sounds like the sort of post-Brexit world we live in now. The interesting part is Remix OS that comes with it.


Our 2016 guide to the tools and tricks you need to protect your privacy online. Secure chat, file protection and private browsing. All on page 30.

We can see Remix OS getting attention, the hardware not so much.

C.H.I.P. ...............................18 Sold as the first $9 computer, can the C.H.I.P. push the Raspberry Pi Zero off our IoT top spot? It's a 1GHz device with Wi-Fi and Bluetooth built in, storage and more.

Black Panther ....................19 Shashank Sharma tries out a new KDE distro that’s pleasing to look at, but discovers that sometimes appearances can indeed be deceptive.

Apricity OS 07.2016......... 20 Fearless admirer of all things Arch Linux, Shashank Sharma tries out a derivative distro to determine if it's bogroll or a chip off the old block.

Roundup: 3D printers p22

Roll around in fields of flowers enjoying the sounds of laughter.

Skype for Linux Alpha .....21 Afnan Rehman catches wind of a new version of Skype for Linux. Can it live up to expectations? Also, rhetorical questions.

Enviro pHAT ......................57 Les Pounder loves data and is always looking for new ways to integrate real-time data into his latest project, perhaps this board can fulfil his needs?

4     LXF216 October 2016

On your FREE DVD Kali Linux 2016.1, Tails 2.5, AntiX 16, SystemRescueCd 4.8.1 64-bit

32- & 64-bit



Subscribe & save! p28

Only the best distros every month PLUS: Hotpicks, Roundup & more!

Raspberry Pi User

Pi news................................... 56
10 million Pis have been sold, we've counted them all! Pis in Togo, Africa and Japan too.

Enviro pHAT ...........................57
Monitor your Pi's environment with the most accomplished of sensor HATs.

USB Dance mat .................... 58
Les Pounder busts out his best moves on the dance floor and shows us how to create music with a dance mat.

One-time pad ........................ 60
Nate Drake, in theme with this issue, trusts no man – just his Pi-packing girlfriend – so he's using it to create one-time password pads.

In-depth... Discover Kali 2016 ............... 42
Jonni Bidwell thinks everyone's trying to hack his servers, but this way he can hack them himself! That'll show them all.

Metasploit yourself right in the servers.

Coding Academy

The Phoenix framework...... 84
Build a blog with the help of Mihalis Tsoukalos as he explains how to leverage this web framework to rival Ruby on Rails.

Jenkins for big data ............. 88
Ramanathan Muthaiah explores the basic nuances of accessing Jenkins via Python, which opens up a whole new world of opportunities.

Tutorials

Terminal basics Shortcuts ..........66
Nick Peers figures out how to configure your screen resolution from the terminal and hits the configuration files hard.

Scripting Automate with Bash .......68
Alexander Tolstoy explains the basics of Bash scripting so you can get tasks done while holding down your zero-hours day job.

Pi project Build a coffee machine .. 72

Regulars at a glance

New boy, Dan Smith builds an espresso  machine hipsters would be proud of. 

News............................. 6
AMD announces its Zen CPU. Intel announces its Kaby Lake CPU. Google announces a new kernel and we're going to need a new Tor system.

Mailserver................... 11
Linux running on Mac laptops, Linux running on Windows laptops, Linux running on old desktops!

Subscriptions ...........28
Help management hit its targets, children need to be fed, gruel needs to be bought! Subscribe today.

Sysadmin...................46
Mr. Brown pits machine against machine in the DEF CON battle of the bots and takes a look at sysdig to solve your sysadmin problems.

Back issues ...............64
Want to know how the kernel works? You need to grab LXF215 now!

Next month ...............98
Build a Super Pi! Nick Peers looks at how to get the best desktop Pi, mobile Pi, IoT things Pi and more.

A frapadapadoochino, please. Punch.

Big data Cassandra & Spark ......... 76

Mihalis Tsoukalos takes on big data with Cassandra and Spark from Apache.

GPS Time sync......................... 80
With a working GPS dongle, Sean Conway explains how you can use it to synchronise your time through NTP.

User groups................15
Les Pounder runs up that hill to attend the Wuthering Bytes festival.

HotPicks ....................50
Alexander Tolstoy hasn't been complaining to the authorities about his rations, he's too stuffed on FLOSS like: Lumina, EncryptPad, Qt5-Fsarchiver, Fontforge, Museeks, Tor Browser, Pitivi, Bovo, Blobwars: Blob Metal Solid, Krop and Clementine.

Roundup .................... 22
Grab only the best 3D printer with the help of Alastair Jennings as he tests the best on the market.

Our subscription team is waiting for your call.

October 2016 LXF216    5

This issue: AMD Zen

Google drops Linux?

Tor alts

PowerShell open-sourced

Processor news

AMD fights back

Launches new Zen CPU architecture to compete with Intel.


The CPU market has been quite lopsided for a while now with Intel ruling the x86 roost with its Core i3, i5 and i7 offerings, which were roundly beating AMD's processors in the power stakes. But it looks like things could be changing with the release of AMD's next generation of processor architecture known as Zen. According to AMD's press release (which can be read at, Zen processors have been designed for maximum data throughput and instruction execute with high bandwidth, low latency cache-memory support, which gives them a 40% improvement in instructions-per-clock cycle over the previous generation AMD core, but without increasing the power consumption. Although those claims could prove to be marketing bluster, if the Zen processors do reach their full potential Intel could be in for a shock. Intel's power advantage, along with large market share, allowed it to effectively set its own prices as Intel CPUs often cost a lot more than their AMD counterparts. If AMD is closing the power gap with Intel, it could cause Intel to rethink its pricing strategy and consider more competitive prices. We're often partial to AMD's offerings thanks to the company's support for open source projects, eg AMD contributes Mantle technology to the open source Vulkan codebase as well as offering open source solutions in the ACL (AMD Compute Libraries) and providing the open source Radeon Graphics driver. AMD also contributes to the Linux kernel. We're not fans of monopolies either and while Intel hasn't quite got a monopoly over the CPU market thanks to AMD's continued presence and the

6 LXF216 October 2016

growing popularity of ARM, having AMD provide some proper competition will certainly benefit customers. It should also stop Intel from resting on its laurels and get around to releasing a major upgrade to its line of CPUs, instead of the rather uninspired iterative updates it's been tossing out lately. Zen is AMD's first new, ground-up processor design since Bulldozer, which was released five years ago. Thanks to a number of leaks, we expect the Zen chips (for servers at least) to have 24 and 32 cores in two variants. Desktop Zen CPUs look set to come with 4 and 8 cores, 8 and 16 threads and base clocks of 2.8GHz. AMD will be using 14nm FinFET lithography and support DDR4 RAM. We're not sure when the CPUs will come out, but you'll need a new motherboard to support them. With the two desktop CPUs running at 65W TDP for the quad-core and 95W for the eight-core, the Zen CPUs look set to be less power-hungry than Intel's. The CPUs range from 91W to 140W, making them a more tempting choice for embedded and server PCs. The lack of competition in the processor market has stifled innovation and led to a number of rather unexciting years, so to have AMD emerge from the wilderness and take the fight to Intel has certainly got us excited.

Meanwhile, Intel itself has been busy with its latest processor architecture, Kaby Lake, which is the follow-up to its Skylake platform. So far we've only seen the mobile Kaby Lake chips designed for tablets and other handheld devices (known as Kaby Lake-Y chips), and processors for laptops (Kaby Lake-U), which means the headline features here are lower power draws (4.5W and 15W, respectively). This means a bump in processing power while keeping battery life as long as possible. If that doesn't sound too exciting then you may have to wait for 2017, when the desktop Kaby Lake processors are likely to drop. Check your excitement, though, as Intel has all-but dropped [no competition–ED] its tick-tock processor release schedule, and is positioning this as the middle generation of chips, which means no new architecture or production process. So while the top-of-the-line Intel Core i7 7700K CPU is set to arrive with a base clockspeed of 4.2GHz and a Turbo of 4.5GHz, the speed increase may be all we're going to get. AMD's resurgence can't come quick enough.

Can AMD challenge Intel's dominance in the processor market? We are hoping it can.

"To have AMD emerge from the wilderness and take the fight to Intel has got us excited."

Newsdesk

OS news

Is Google dumping Linux?

Scrutiny of recent project code suggests that the company is working on Fuchsia, a new OS without the Linux kernel.


We don't like starting a news story with the line 'Google is up to something', but Google is up to something. It's been revealed that a project known as Fuchsia could be a completely new operating system that the search giant is working on. Code has been added to Google Git (, which highlights some interesting features of the OS. One thing that's missing is any reference to the Linux kernel, which means that this operating system could be a complete departure from Android and Chrome OS, Google's other operating systems that are based on Linux. Instead, the OS uses the Magenta kernel, which is based on LittleKernel (, which is designed for embedded systems, so there's a possibility that Google's OS could be aimed at Internet of Things and other smart devices. However, there are reports that this new OS is running on Intel NUC PCs and various laptops, and a Google developer working on the project has stated that it will soon support Raspberry Pi 3. While further details are thin on the ground, it looks like Google's Fuchsia OS could end up on a range of devices. There's certainly one person who doesn't think this upstart OS will be a success, however, thanks to the fact that it won't be made available under the GPL, and that man's name may be familiar to readers of this fine magazine: Linus Torvalds. Torvalds has commented at the recent LinuxCon that operating systems, such as Fuchsia, that shun the GPL will suffer from a lack of community.

It looks like Google is building a new OS – and Linux isn't invited.

Privacy news

Tor replacements pop up

Newsbytes

For many people the name ‘PowerShell’ will take you back to when you had to use Windows, as it’s the command-line shell for Microsoft’s operating system. It looks like PowerShell may once again be a part of your life: Microsoft is open-sourcing the command-line shell and is looking to bring it to Linux and Mac OS X. This follows on from previous news that Microsoft has open-sourced its .NET framework, and shows that the company finally recognises that people like to choose their own operating systems (which, shock, horror, may not be Windows), while still wanting to use its cloud platforms, such as Azure.

Stellar animation studio Pixar has recently revealed that it uses open source software, namely Red Hat Enterprise Linux and OpenGL, when animating its films, and uses System76 machines, which come with open source software installed. Most excitingly of all, Pixar has open-sourced its Universal Scene Description (USD) technology, which is a powerful set of tools for filmmakers. Pixar views open sourcing its technology as a way to encourage innovation, with Disney, the owner of Pixar, commenting that “we want to contribute back to the community; therefore, we have established this platform. We encourage you to investigate and use the technologies we are sharing. We also very much welcome your collaboration and contribution in these areas.” You can download the source code from PixarAnimationStudios/USD.

Network struggles under strain from security agencies.


Thanks to a number of high-profile news stories, not least involving Edward Snowden, the Tor network (see p34) became the go-to solution for many people seeking anonymity on the internet. Sadly, Tor’s increased popularity has also meant that government agencies, such as the NSA and GCHQ, have taken a keen interest in discovering ways to de-anonymise Tor users, and it looks like their attempts to crack it are making headway. A study undertaken at the US Naval Research Laboratory (UsersGetRouted) found that 80% of Tor users were at risk of being de-anonymised in the next six months, as Tor is vulnerable to traffic analysis attacks that can see traffic entering and leaving the Tor network, which can lead to users being identified. While the Tor project is looking at ways to make these attacks more difficult to launch – and improving user-friendliness by lowering the latency of its service – a number of projects aiming to dethrone it have emerged. The most notable ones are Aqua, an anonymous file-sharing network, and Herd, which is based on Aqua and concentrates on anonymising VoIP communications. Both projects use ‘chaff’, which is random noise mixed in with network traffic, making individual users’ traffic more difficult to distinguish and identify.

Is that faithful onion going to make some users cry?
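The ‘chaff’ idea is easy to picture: pad real payloads to a constant size and keep the line busy with dummy packets, so the on-wire rate and packet sizes never betray when anyone is actually talking. Here’s a toy Python sketch of the concept (the fixed slot schedule, 512-byte packet size and function name are our own illustrative assumptions, not Aqua’s or Herd’s actual protocol):

```python
def mix_chaff(real_packets, total_slots, size=512):
    """Emit exactly one fixed-size packet per send slot.

    Real payloads are padded to a constant size and sent in the first
    available slots; every remaining slot carries a dummy 'chaff'
    packet of the same size, so an eavesdropper counting and measuring
    packets sees the same constant stream whether or not real traffic
    is flowing.
    """
    if len(real_packets) > total_slots:
        raise ValueError("not enough slots for the real traffic")
    stream = []
    for slot in range(total_slots):
        if slot < len(real_packets):
            # Pad the real payload up to the fixed packet size.
            stream.append(("real", real_packets[slot].ljust(size, b"\x00")))
        else:
            # No data ready: fill the slot with chaff instead.
            stream.append(("chaff", b"\x00" * size))
    return stream
```

On the wire both kinds of packet would of course be encrypted and indistinguishable; the ‘real’/‘chaff’ labels exist here only so the sketch can be inspected.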

By open sourcing its technology, we love Pixar even more.

Mozilla will allow the root key of Let’s Encrypt, a free SSL/TLS certificate authority, to be trusted by default in Firefox 50. This means that websites with certificates from Let’s Encrypt are now more widely supported, and Let’s Encrypt doesn’t have to rely on its partnership with IdenTrust to supply trusted roots. The project has also applied to Apple, Microsoft, Google, Oracle and Blackberry to trust its root key by default.

October 2016 LXF216 7

Newsdesk: Comment

The double-pointed nature of forks

Alex Campbell

It started innocently enough. There I was, setting up my VPS with Docker, getting my blog and other services running. Eventually, it came time for me to install ownCloud, so I headed over to Docker Hub to look for the right container to pull. I looked at the most popular container that wasn’t the official one provided by ownCloud, which turned out to be Nextcloud. There have been a few famous forks to note. LibreOffice and MariaDB forked from OpenOffice and MySQL respectively, after Oracle acquired Sun. More recently, the Chromium team forked Blink from WebKit. Plex is a proprietary fork of Kodi. Even the popular WordPress is a fork of b2/CafeLog.

Fear the fork?

Forking a project can create a lot of strife, as developers are often forced to choose sides by deciding which project they will contribute code to. Users face problems whenever there’s a fork, too. Users of the old software have to choose whether to continue with the current product or switch to the new one. Compatibility becomes an issue over time, so this choice can become quite time-sensitive as the fork and the original project release new versions. New users, or users looking to install software on clean systems, face a similar problem. So, is it better to go with a fork? Maybe. The major forked projects I’ve listed, such as WordPress, LibreOffice and MariaDB, are better maintained than the projects they originated from. But forking is never a smooth path, and the fork may experience bumps in the road as it tries to find its footing. For projects like Nextcloud that you’ll be entrusting your data to, it’s a good idea to keep backups of your data, just in case something breaks. As for my VPS, I’ll install Nextcloud (via the greyltc/nextcloud Docker container) and cross my fingers.

Alex Campbell is a Linux geek who enjoys learning about computer security.


Distro watch

What’s behind the free software sofa?

AryaLinux 2016.08


The latest release of the AryaLinux distro, which is built with Linux From Scratch, is now available for download, and comes with MATE 1.15, KDE and LXQt desktop support and Qt 5. GCC has been updated to version 6 and the latest Linux kernel – version 4.7 – is also present and correct. As a source-based distro it uses source/ports package management with a custom package manager known as ALPS (Arya Linux Packaging System). To find out more about this version head to AryaLinux2016_08.

AV Linux 2016.8.30


AV Linux 2016.8.30 has been released. The Debian-based project (it doesn’t consider itself a Linux distro in the proper sense) is aimed at users working with audio and video formats. There are big changes, including fixes and improved support for AMD video cards, a new Zukitre-based theme, and the removal of some software, such as Kdenlive and KDE 5 runtime components, which the project felt were holding it back. For much more information check out the release announcement at the official site.




Write to us at Linux Format, Future Publishing, Quay House, The Ambury, Bath BA1 1UA or

Fan boy!

I recently re-subbed for two more years as Linux Format has become my favourite Linux magazine. I hope the recent vote on EU membership won’t impact you guys negatively. I’d love to know what folks at the magazine favoured and why? I have wondered if Britain, Canada and America shouldn’t be forming an AU (Anglo Union) that, unlike the EU, was just based on free trade (especially one that favoured small- and medium-sized businesses) and exchange of culture… and of course that supported free software and hardware. Thanks so much for the tutorials on Rust. I think it just might be capable of replacing C for me. I also enjoyed the firewall Roundup. I saw in a video that Linus Torvalds said they now use three separate hardware firewalls since GitHub was hacked. I wonder what they use, what Linux and firewall is inside, and the hardware firewalls you guys favour? I hope Neil Bothwick will do his magic on Fedora 24 when it’s seen to be reasonably stable.
Ed Scott, via email.

Neil says: Thanks for subscribing! It’s humbling enough that people trust us to stump up their hard-earned money for a three-month subscription, never mind for two whole years. As for politics, that’s something I’d usually avoid, wanting to stick to the software stuff. I personally would have preferred to stay in the admittedly flawed EU, as free-trade access to our biggest trading region seemed to be a no-brainer, while free movement of EU citizens is pretty handy for the holidays, along with the free health cover while in the EU. But not anymore, and anyway, everyone has their own view on the topic. I don’t think Linus has anything to do with GitHub other than creating Git in the first place, but I’d imagine they’d want to keep any security details about its internal architecture a guarded secret. Perhaps we could nudge Jolyon into revealing some of his security secrets?

Mac Linux

Over the last few months I’ve started looking at some cheap secondhand laptops to run Linux. But after trying a few distro discs and even setting up a few USB drives, I decided to convert a four-year-old MacBook Pro I’ve had since new to Linux (I’ve also completely wiped Mac OS X). Madness, I suspect some Mac lovers would say, but hear me out. A few recent products I purchased on the app store (including a well-known photo and video suite which has been plagued with problems, neither of which has been addressed by Apple or the provider) left me searching for alternatives. Also, a variety of high-profile music plug-in companies advised that El Capitan wasn’t compatible at the time of its initial release, and in some cases it took months before they advised it was OK to upgrade. I’m a newbie to Linux and the journey to find a suitable distro has been an interesting one. The first one I loaded couldn’t read SD cards directly from the laptop; one distro worked great via the Live disc, but there was a problem connecting to Wi-Fi when I loaded it permanently. With another distro I found that I could easily connect via Ethernet, but couldn’t get it to connect to Wi-Fi.

Letter of the month

Windows, tisk!


Having doubts over Windows, with its download shenanigans and its unpredictable behaviour, I decided after some thought I would try Linux. We have two Lenovo tablets, an HP laptop and a Dell Optiplex 745 desktop donated free from my sister, so I purchased your magazine with the 16.04 disc and tried the disc, but that did not work on the laptop. So I went on the Linux Mint website and proceeded to download Mint 17.3 Cinnamon 64-bit as recommended, but as the website runs at walking pace here, some of the data was missing… Off I went to work and decided to try a download on my work computer. As I was doing this, an IT friend asked what was up and instead he did two discs: one Cinnamon and one Ubuntu 16.04 desktop, both 64-bit, and decided to put it on the Dell. If I broke it while poking around in the BIOS, then a free computer is less of a loss. It took a while to adjust the BIOS to start from the DVD drive, but I did find the right place and it booted and installed from the drive. Now I have a computer running Ubuntu 16.04 desktop 64-bit. I have set a firewall, installed Chromium, and so far have Facebook, photos in Gnome and some music—all this from a 60-year-old former technophobe. Do I need antivirus, as there doesn’t appear to be any in the Ubuntu Software Center? Now all I need to do is decide to replace Windows 10 on my HP laptop!
Ray Lee-Adams, via email.

Neil says: Sounds like despite encountering all manner of the standard PC problems you managed to power through and get Linux up and running—good work! If you’ve never used the BIOS/UEFI it can be a strange place. The general thinking is that Linux doesn’t need antivirus, yet, but you need to maintain good security practices.

The UEFI and BIOS can cause all manner of headaches for people.

October 2016 LXF216 11

Mailserver

All the above problems I found comments on, and often solutions were available on different forums or YouTube. Now today, thanks to your Best Distro feature [Features, p32, LXF203], which I sourced via Zinio and then a hard copy from the library, I thought I would set up Korora (via USB flash). There were no problems with it reading SD cards and this has been the easiest system to connect to Wi-Fi. I’ve had a bit of a learning curve, but trying different distros and reading your magazine has really helped me to persevere, as well as get a better understanding of Linux… and find a distro that suits my needs. As far as running Linux on a Mac goes, loading the distros and booting up has never been a problem, but I can’t say the same about a Sony Vaio that I have been unable to boot with any Linux distro. I hope that in the future more companies come on board providing computers pre-installed with Linux, giving the public an alternative to Apple and Microsoft.
Brendan, Adelaide, Aus.

Maximum Linux

I am a long-time Maximum PC reader, read some of its recent articles on Linux topics, and picked up a subscription to LXF as a result. So far I really like the magazine. I’ve tried Linux multiple times over the years and want to completely abandon Windows. I love the idea of open source and hate proprietary technologies, but the software issues have stopped me.

Gaming on Linux is a thing and it’s bringing in new users.

As a former Mac user, I found that even with Mac OS, and now with Linux, I spend most of my time trying to make software I own run on it. LibreOffice is a fine alternative to Microsoft Office, but what about more specific things? Like games. I read the recent coverage on games and was encouraged, but for those that don’t have Linux versions, Wine isn’t always an option either. Any suggestions?
Dave Kings, NC, USA.

Neil says: Glad to hear you’re enjoying the magazine and Linux! At this stage, for people who really like to play with their PCs, Linux is the only option these days, but being pragmatic, you don’t have to abandon Windows. A dual-boot or VirtualBox installation can eliminate most problems for non-3D accelerated programs, though migrating to native Linux or browser-based software is really the only long-term option. As for games: just four years ago, Linux was a borderline unusable gaming platform—next to no games, poor graphics driver support and no optimisation. Then Steam appeared in early 2013 and now there are almost 2,000 native titles with quite a few AAA releases among them. I’ve started only buying games that offer Linux support. Drivers are now almost sorted and once Vulkan goes mainstream (in a year or two) performance will be nearing parity with Windows out of the box. For day-to-day computing you can dump Windows, but for gaming it depends how choosy you feel you want to be with titles.

Running Linux on a MacBook is more common than you might think.

Neil says: Congratulations on persevering and getting Linux how you want it on the hardware you want. There’s a bit of a DIY element to installing Linux, but then that should be the expectation—and let’s face it, installing Windows isn’t without its issues. It seems odd that at this stage Apple makes hardware that’s more open than many Windows box shifters, making them ideal for a Linux install.

Linux laptops Recently, I thought my threeyear-old laptop had finally bitten the dust. It had suffered too many careless drops, and I had thought I would have to replace my faithful companion. While my laptop was in the hands of my IT guy, I did come across what I thought was a great deal. It was a Lenovo unit and very similar to the Lenovo I own. I bought it only to discover the challenges of UEFI.

I was somewhat aware of the issues, but I really hadn’t paid close attention to the matter. Needless to say, I couldn’t boot the unit from the optical drive. In fact, I couldn’t install Linux Mint at all. With a little research, I was able to remove a number of the roadblocks, but, ultimately, the firmware on the Lenovo’s motherboard stymied my attempts. For the record, I am an old fart who has been a Linux convert for a number of years, but I’m not an administrator, a programmer or any such thing. However, I do read your magazine every month. One of the things I like about your publication is that there is something for almost every skill level. Admittedly, there is much that is over my head, but I almost always find some tidbit that is quite useful to me. I believe one of the things the Linux community needs is more information about which manufacturers, and which of their computer models, are Linux ‘friendly’. I know that Ubuntu maintains a database of units (and hardware) that are compatible with Linux. However, that database is a bit unwieldy and not completely user-friendly. I think it would be a very helpful thing if Linux Format would include a brief review of three or four computer models on which the novice would have little or no trouble installing his, or her, favorite Linux distro. I believe it would be nice if the offerings covered a wide spectrum, from the budget-minded to the extravagant. I believe this is important not just to maintain the Linux community at its current levels, but also to attract new users who are fed up with Windows. For the moment, my old Lenovo is functioning once again, but the day is coming when I will have to replace it. I only hope that when that day arrives, I will have a buying choice that includes a computer upon which I can install a Linux distribution.
D. Hunter Armstrong, Richmond, VA, USA.

Neil says: It’s a great suggestion but one that won’t work in the real world (pesky reality). No manufacturer is going to send us a unit to review knowing we’re going to install an unsupported OS. It’s hard enough and usually impossible to get them to part with Linux-running devices. Perhaps we could do a guerrilla-style invasion of PC World and force-install Linux on systems?

BackBox

I’ve used BackBox Linux for a while and it has a perfect free suite for penetration testers—you should give it a try. The latest version, backbox-4.6, is available to download from the project’s website.
Denis Zabiyako, Ukraine.

Neil says: An ideal suggestion for your security issue—we’ll try and take some time out to do a review soon too!

Dell is one of the few big names that support Linux out of the box in the form of Ubuntu.

Too much

First, let me say that I am a fully converted addict to Linux, although still in the newbie days of Ubuntu and Linux Mint. My problem with the system is where to start, as there’s too much choice, with an array of distros all seemingly vying to be the best to use. The main issue, though, is being able to correct problems. There’s almost an overabundance of forum posts, but most of it has been on the web for years. I have just had a case of my screen having an epileptic fit every time I log back on after going to sleep. The only solution was to hold down the power key. I finally found a cure, I think, by finding an old thread for Mint 17.1 about installing Intel video drivers. I thought I had nothing to lose so gave it a go and it seems to work so far. My issue is the difficulty in finding this information for the latest releases; I am now using Mint 17.3 on a brand new Acer Aspire F5-571. Like Android, it feels like there are too many versions available, causing much confusion to the meek!
Steve Herridge, Peterhead.

Jonni says: It’s a tough one, but imagine the opposite situation: a single Linux distro that offends the fewest people, with limited options for customisation. That’d be pretty boring. LXF

Write to us Do you have a burning Linuxrelated issue you want to discuss? Want to let us know what game you’d really like to see on native Linux or just have a fantastic idea for a feature or software to cover? Write to us at Linux Format, Future Publishing, Quay House, The Ambury, Bath, BA1 1UA or




Linux user groups

United Linux!

The intrepid Les Pounder brings you the latest community and LUG news.

Find and join a LUG

Blackpool Makerspace 64 Tyldesley Road, Blackpool, 10am every Saturday.

Bristol and Bath LUG Meet on the fourth Saturday of each month at the Knights Templar (near Temple Meads Station) at 12:30pm until 4pm.

Egham Raspberry Jam Meet at Gartner UK HQ every quarter.

Liverpool LUG Meet on the first Wednesday of the month from 7pm onwards at DoES Liverpool, Gostins Building, Hanover Street, Liverpool.

Lincoln LUG Meet on the third Wednesday of the month at 7:00pm, Lincoln Bowl, Washingborough Road, Lincoln, LN4 1EF.

Manchester Hackspace Open night every Wednesday at their space at Wellington House, Pollard St, Manchester M40 7FS.

Surrey & Hampshire Hackspace Meet weekly each Thursday from 6:30pm at Games Galaxy in Farnborough.

Tyneside LUG Meet from 12pm, first Saturday of the month at the Discovery Museum, Blandford Square, Newcastle.

Wuthering Bytes Walking a thin line between passion and terror.


The picturesque surrounds of the Upper Calder Valley, Hebden Bridge, play host to Wuthering Bytes for one week a year in September. This event is a celebration of maker culture and encompasses many facets of the movement. We were lucky enough to pop along for the Open Source User Group meeting, which showcases new and interesting projects. To open the event, we heard from Dr Tim Drysdale, who has been part of the LabRTC team at the Open University working to enable real-time control of hardware over an internet connection. The goal of the project is to enable anyone to have real-time access to expensive laboratory equipment, no matter their location. Next up was Ken Boak, an ex-BBC R&D engineer who is striving to open up how the next generation of engineers are trained so that we can enrich our future with the best technologies. Also presenting was Adrian McEwan from DoES Liverpool, a long-standing and standard-setting hackspace. McEwan illustrated a map resource that he has developed, showing the location and services offered by the many UK maker groups, suppliers and manufacturers. McEwan is keen for others to learn what is out there and for makers to work with manufacturers and kickstart an increase in UK makers. Another interesting talk was by Daniel Mulligan, who talked about open source film production, from open source cameras and equipment to workflows that involve open source software for video production. We enjoyed our time at the event and it was exciting to see so many makers/engineers working on projects both big and small. LXF

Adrian McEwan advises the audience that we need to make, manufacture and share more projects.

Community events news

Make:Shift:Do The UK Crafts Council is once again asking maker/hack spaces to open their doors to the public and demonstrate their skills. On October 28-29, venues around the UK will hold events where the public can get hands-on with technologies, such as 3D printers, CNC machinery and laser cutters. The events are free to attend and vary in scope dependent on the services provided by the venue. For more information head over to what-we-do/makeshiftdo

Linux Presentation Day October is becoming a busy month. The first Linux Presentation Day was held on April 30 this year and had LUGs reach out to their communities and explain the benefits of running Linux on all forms of hardware. The second Linux Presentation Day is taking place on October 22 in venues around Europe! To run an event, and for more information, head over to the project’s website.

Derby Mini Maker Faire The Maker Faire phenomenon is taking over the world! Derby’s fifth Mini Maker Faire is taking place on October 22 at The Silk Mill. This great event will contain stalls and workshops covering the breadth of the maker movement. Mini Maker Faire events are great fun and enable children and parents to learn more about skills, such as soldering and electronics. If you are a fan of the Pi, Arduino or robotics then this event is great for getting ideas and gaining knowledge.


The home of technology

All the latest software and hardware reviewed and rated by our experts

Onda OBook 10 SE

If Android and desktop Linux had an offspring, Desire Athow thinks it’d be as scary as ordering hardware from China.

Specs

OS: Remix OS 2.0
CPU: Intel Atom Z3735 quad-core 1.33GHz
GPU: Intel HD Graphics (Gen 7)
Display: 10.1-inch IPS 1,280x800
RAM: 2GB DDR3
Storage: 32GB SSD, microSD card
Comms: 802.11 b/g/n, Bluetooth 4.0
Ports: 1x microHDMI, 1x microUSB
Size: 288 x 195 x 16mm
Weight: 1,190g

Slightly dated components, but a well-built tablet.


Early in 2016, a little-known startup, Jide, announced that it was releasing a new OS based on the Android-x86 project, which ported AOSP (the Android Open Source Project) to the x86 platform (Intel and AMD). Jide is backed by Foxconn, one of the biggest technology manufacturers in the world, and was founded by ex-Google employees. The project usually lags behind its ARM-based counterpart by a few months, but the thriving community means that, with a regular release cycle, Remix OS is shaping up to be a nice little rival for Google’s own Chrome OS. Remix OS looks a lot like desktop Linux, with a very familiar appearance and feel. There’s a file manager, a menu button and a taskbar, plus a number of improvements over Android, such as better multitasking and windowing. However, in the UK, Remix OS devices are only available via import from China. The Onda OBook 10 SE tablet runs Remix OS 2.0, which is based on Android 5.1 Lollipop; a Marshmallow update (Android 6.0) should be available soon. The OBook is a reasonably well-designed tablet that suffers from one major flaw—it doesn’t have a rear camera.

This device looks and feels like a traditional 10.1-inch Windows tablet without the dedicated Windows key. There’s even an optional keyboard (despite most images showing the keyboard, it’s an extra), which costs a further £39, bringing the total price of the combination to about £150, depending on the latest exchange rate fall. The tablet adopts a flashy gold champagne colour scheme with a reasonably small bezel, a full HD webcam, a 10.1-inch glass-covered 1,280x800-pixel display and a lonely circular capacitive Home button at the front. On the sides are connectors for the keyboard dock, a microSD (TF) card slot, audio connector, microUSB, microHDMI, a DC power socket, the power button, volume rocker buttons and a speaker grille.

Old and frail

Overall, it’s a well-built tablet, although the hardware inside is a bit long in the tooth. There’s an Intel Atom Z3735F (from January 2014), 2GB of RAM, 32GB of onboard storage, a 5,400mAh battery, 802.11n Wi-Fi and Bluetooth. As for the overall experience of Remix OS, it felt natural and comes with additional apps, some of which support windowing, but none of the ones from Google Play do. You are likely to encounter issues with some Android apps, which might not support Remix OS. A quick test using Antutu showed that the Z3735F ranked behind the original OnePlus One smartphone with a score of 52,665. But at no time during our testing did we experience lagging or freezing, even given the low amount of memory and the old processor. It’s hard to recommend the tablet itself as it’s expensive, based on old technology and has unproven support—a quick read of online comments leaves us with mixed feelings about the level of support you receive. As for our first encounter with Remix OS, it went well—it’s Android but done differently. It’s finding its way into a rising number of devices so expect to see much more of it in the future. LXF

Looks the part and is a decent showcase for the Remix OS.

Verdict

Onda OBook 10 SE
Developer: Onda
Web:
Price: £110 (£39 keyboard)

Features 5/10
Performance 6/10
Ease of use 7/10
Value 6/10

Remix OS is refreshing and as for the OBook, it’s a decent little tablet but there are better options elsewhere.

Rating 6/10


Reviews Single board computer

C.H.I.P.

Les Pounder loves Pi, and loves hacking something to life using the single board computer, but could he be adding C.H.I.P.s to his plate?

Specs

CPU: 1GHz Allwinner R8
RAM: 512MB
Storage: 4GB NAND flash
GPU: Mali 400
Connectivity: Wi-Fi, Bluetooth module, combined headphone and composite TRRS connector, JST battery connector, AXP209 power management IC, 1x USB 2.0


Crowdfunding has brought many single board computers to life and the latest of these is C.H.I.P., a $9 computer from Next Thing Co. Measuring 6.1cm by 4cm by 1cm tall, C.H.I.P. is slightly larger than a Raspberry Pi Zero, a device that shares the same 512MB of RAM and 1GHz CPU speed, though in C.H.I.P.’s case the processor is an Allwinner R8 ARM Cortex-A8. What C.H.I.P. adds is integrated Wi-Fi (B/G/N), Bluetooth 4.0 and 4GB of flash storage for the OS. The board also comes with one USB 2.0 port, a micro USB for power and a combined audio/video composite jack, but it lacks digital video output. There’s an 80-pin GPIO, with some pins that can be user-controlled and others that are bespoke connections, such as connections for PocketC.H.I.P.’s LCD screen. Also present on the board is a JST connector that’s commonly used with lithium polymer batteries. It gives an instant portable long-term power supply for embedded projects thanks to an onboard AXP209 power management chip. Installing the operating system is handled via a Chrome browser plugin and requires an extra configuration step for Linux users. Once the plugin is ready, installation is simple and takes around 10 minutes. When ready, you need to attach your peripherals and power up to the desktop. C.H.I.P. doesn’t come with any HDMI or VGA interfaces and requires the user to

Features at a glance

Battery backup: comes with a handy JST connector for charging an external LiPo battery, which acts as a UPS.



To keep the C.H.I.P.’s $9 price point, add-on boards called ‘dips’ provide such things as HDMI and VGA.

C.H.I.P. is priced in between a Raspberry Pi Zero and Pi 3 and provides just enough computational grunt and plenty of connectivity and expansion options.

purchase an add-on board, known as a ‘dip’. We used the HDMI dip to connect our C.H.I.P. to a screen. Booting into the OS takes 1m 12s and once fully loaded we see an interface similar to the MATE desktop. Applications are in a menu to the top-left and configuration and notification icons are in the top-right. The interface has a noticeable lag but is still usable. We installed the htop dashboard and saw that around 70% of the RAM was being used and the CPU was maxed out. This generated a lot of heat, which was enough to make the CPU a little uncomfortable to touch.

Deep fried chips

Installing software is possible using the Debian APT packaging tool and the package manager GUI. On first boot, we updated the software to ensure we were up to date, which brought our kernel to 4.4.11, released on May 28 2016. We installed the Arduino IDE and flashed the blink sketch, the Hello World of hardware, to an Arduino Uno. Compilation was noticeably slower than on our Core i5 laptop, but that’s to be expected given the C.H.I.P.’s hardware. To test the GPIO, we installed the Python library, which is very similar to that used on the Pi. Within a few minutes we had the obligatory flashing LED test completed. As part of our test we installed Open Arena, in order to test the prowess of the Mali 400 GPU. Sadly, this test was short-lived, as loading the game menu took a long time and choosing menu options meant a delay of around 1-2 seconds, so it was clear the game would be unplayable. Looking on the official forums, it appears as though there’s no driver for the GPU, but hopefully this will be remedied. But is C.H.I.P. a games machine? Not really. Its closest competition is the Raspberry Pi Zero, and for just $4 extra it offers Bluetooth, Wi-Fi and 4GB of storage. This means that C.H.I.P. can be used for embedded Internet of Things applications, and for these applications it excels. C.H.I.P., then, is a serious piece of kit for embedded projects and provides everything you need to get going. LXF
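That ‘obligatory flashing LED’ test boils down to toggling one GPIO line on a timer. Here’s a minimal, hardware-agnostic Python sketch of the same logic; the pin name in the comment is purely illustrative, and passing the pin-writer in as a callable is our own choice so the loop can be exercised without a C.H.I.P. attached:

```python
import time

def blink(set_pin, times=3, interval=0.2):
    """Blink an LED by toggling a GPIO line.

    set_pin is any callable that accepts True/False; on real hardware
    you would pass a thin wrapper around your GPIO library's output
    call, e.g. lambda v: GPIO.output("XIO-P0", v) (pin name and
    library illustrative, not C.H.I.P. documentation).  Returns the
    sequence of states written, so the logic is testable off-board.
    """
    states = []
    for _ in range(times):
        for value in (True, False):
            set_pin(value)      # drive the pin high, then low
            states.append(value)
            time.sleep(interval)
    return states
```

Swapping the callable for a list's `append` method, as the test harness does, records exactly what would have been written to the pin.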

Verdict
C.H.I.P.
Developer: Next Thing Co
Web:
Price: From $9 (ex VAT) basic board

Features 7/10
Performance 5/10
Ease of use 7/10
Value 9/10

Priced competitively and will draw lots of attention from makers and hackers for their next embedded project.

Rating 7/10

Linux distribution Reviews

BlackPanther OS 16 Shashank Sharma tries out a new KDE distro that’s pleasing to look at, but discovers that sometimes appearances can indeed be deceptive. In brief... Although not advertised as such, its homegrown CLI utilities can be a USP with a little work. While incredibly fast, the distro falters on account of the poor package management and unexpected behaviour. See also: OpenSUSE.


The Hungarian distribution (distro) has been in production since 2003, and although it was originally based on Mandriva, the KDE-based distro is now independently developed. As with all KDE distros, BlackPanther OS is aesthetically pleasing and its colourful wallpaper is a welcome change from the string of patterns and hues that have become the norm with most other distros. The first screen when running the live media is in Hungarian and offers you the choice of several other languages, including English. You choose your language from the list and hit the button on the bottom-right to proceed, and all subsequent screens will be in your chosen language. BlackPanther uses the distro-agnostic Calamares installer and the process is fast, user-friendly and straightforward.

Caveat emptor The best thing about a live-installable distro is that you don’t have to go through the rigours of an installation before you can take it for a spin. If our tests are any indication, BlackPanther OS should come with a caveat: ‘For optimum results and to truly appreciate the efforts that have gone into producing the distro, please consider installing it to disk before forming any opinions’. Unlike most other live-installable distros, such as Ubuntu and Fedora, which offer a seamless experience between live and installed

Features at a glance

CLI utilities

The home-brewed CLI utilities, including ones for package management, are fairly easy to use.

KDE Plasma

Features KDE Plasma 5.7 which provides for a fast and highly customisable desktop environment.

environments, BlackPanther OS hiccups occasionally when running in live mode. Some applications, such as xterm, failed to launch during our tests, but such problems disappeared once the distro was installed. However, installing it brought about many other issues. Although it’s still early in this review, it wouldn’t be out of place to mention that no other distro has driven this reviewer as batty as BlackPanther did. Most frustrating was the lack of any error messages when certain tasks failed, tasks which had functioned flawlessly only moments ago. The inability to replicate this quirky behaviour makes it impossible to track down the problem and attempt a fix.

Package management is another area where the distro still requires a lot of work. Admittedly, the plan was for the distro to ship with Plasma-Discover as the graphical package management tool, but it couldn’t be incorporated into the system on account of operational errors. The release announcement advocates the option of App-Helper, Apper and other tools for this task. App-Helper, the default graphical package management utility, refused to launch. The Applications menu is chock full of useful applications, but most of them are not included with the distro. Clicking on these launches the graphical installation process using the default App-Helper tool, which failed to launch during our tests, so the applications couldn’t be installed.

The distro also includes its own command-line package management utility, which is fairly straightforward to use. To install a package, run the installing packagename command, and use the removing packagename command to uninstall packages. These commands work as advertised and can resolve dependencies. The updating command can be used to upgrade the system or an installed application. Unfortunately, the distro doesn’t offer enough documentation to fully acclimatise new users to its unique package management system. While the website hosts a wiki, it only has a few disparate and narrow tutorials, such as configuring wireless networks or installing Postfix and Dovecot with SMTP-Auth and TLS. The English-language forum doesn’t see much action, but the Hungarian forum is fairly active and so is the distro’s Facebook group.

The distro also doesn’t have a clear update policy and appears to advocate the release-when-ready philosophy. Even though the latest offering comes well over a year after the last, the distro performs like an unpredictable and unstable alpha release. LXF

Everything that’s good and fantastic with KDE sums up everything that is good about BlackPanther.

Verdict
BlackPanther OS 16.1
Developer: Charles Barcza
Web:
Licence: GPL
Features 5/10
Performance 5/10
Ease of use 7/10
Documentation 5/10
For the parts of it that work, the distro functions flawlessly. But when it falters at a task, it fails spectacularly.


Rating 5/10

October 2016 LXF216 19

Reviews Linux distribution

Apricity OS 07.2016 Fearless admirer of all things Arch Linux, Shashank Sharma tries out a distro  derived from it to determine whether it’s bogroll or a chip off the old block. In brief... Aimed at newbies and experienced users alike, Apricity is a rolling release distro based on Arch Linux. Offered as a live installable distro, Apricity produces two variants favouring Gnome and Cinnamon desktops respectively (we tested the latter). It features useful cloud-centric and file-sharing applications and multimedia support out of the box. See also: Manjaro Linux, Sabayon.


With all the documentation resources available to Linux users now, rolling-release distros are no longer considered just for experienced users. This might explain why we’ve seen so many recently. Apricity OS is based on Arch Linux, one of the most stable and awe-inspiring rolling-release distros ever. Despite its heritage, Apricity OS is a live-installable distro that comes in two variants, with Gnome or Cinnamon desktop environments. Once you’ve dd’d the 2GB ISO onto a USB drive, the live environment boots up in seconds, offering complete multimedia support out of the box and applications to suit all manner of Linux users. But while Arch Linux offers the opportunity to meticulously shape the distro to your liking, an Apricity installation – as is the norm with most contemporary distros – doesn’t even allow users to decide what packages to install. Like many of its peers, Apricity uses the fairly straightforward Calamares installer. If you don’t already have a free partition available, it will enable you to carve space out by shrinking the size of an existing partition. Unlike distros that follow a fixed-period release schedule, the advantage of a rolling-release distro is that you don’t have to worry about reinstalling the distro or applications with each new release. By its very nature, the crux of a rolling-release distro is its underlying package management system. Thanks to its Arch roots, Apricity features

Features at a glance


Syncthing
The client/server sync app can sync files between devices on a LAN and remotely over the internet.



Customise Apricity to your liking before downloading. (You first need to create an account on the website.)

Apricity means ‘warmth of the sun in winter’. And it does inspire warm feelings!

Pacman, which is an incredibly powerful and robust command-line package manager. For graphical users, Pamac is available too – it’s similar in appearance to Ubuntu Software Center or Gnome Software but without screenshots of the applications you wish to install. On top of its multimedia support, Apricity includes the ICE application, which is borrowed from Peppermint OS. If you have a list of websites you visit frequently, you can use ICE to create Site Specific Browsers (SSBs), which serve as quick launch apps to take you straight to these sites. However, this application requires the Google Chrome or Chromium browser, and while the distro ships with Chrome as the default browser, we found the app was unable to detect it. This bug can be fixed by installing the available app update.

Sunny disposition

The latest release also introduces a new feature called Freezedry. Apricity maintains two configuration files, written in TOML, for Gnome and Cinnamon. You can make changes or additions to these defaults and create your own Apricity variant. A number of such community-built variants are already on offer on the distro’s website as downloadable ISOs. Or you can download the configuration file and point the Freezedry command-line utility to these to install the changes onto your system.

The distro’s documentation section is sparse, with a quick introduction to its new features such as Freezedry and answers to a few FAQs. For more detailed help, the website also hosts a very active forum board. With excellent hardware detection, a well-rounded default package selection and interesting offerings such as ICE and Syncthing, Apricity OS easily meets its goal of being a distro suitable for new and skilled users alike. Recommended for users tired of fixed-release distros and looking for one that’ll keep their system running without needing too much coddling. LXF

Verdict
Apricity OS 07.2016
Developer: Alex Gajewski
Web:
Licence: GPL and others

Features 10/10
Performance 10/10
Ease of use 10/10
Documentation 7/10

Delivers speed and performance, and takes away the learning curve of manually piecing together your distro.

Rating 9/10

Skype client Reviews

Skype Alpha Afnan Rehman catches wind of a new version of Skype for Linux Alpha. Can it live up to expectations and become our comms app of choice? In brief... Skype on Linux has long been abandoned by Microsoft in favour of the Windows and OS X versions, and has been stuck on version 4.3 for years. However, it seems Microsoft has now realised that Skype is at its best when available on all platforms, and to that end has created this all-new Skype client for Linux.


This will come as a surprise to many Linux users, who’ve long since abandoned the defunct app in favour of other open-source options. Skype for Linux Alpha is a new web application for Linux based on the existing Skype for Web site, which Chromebook users have used for voice calls. While not a full-fledged desktop application, it is a sort of application wrapper around the web interface, giving it the look, feel and convenience of a native desktop application. Let’s start with the good: the new interface is modern and offers more convenience and accessibility than the website version. Microsoft has promised to deliver regular updates to the site and add features to make it competitive with other voice and video communication services on Linux. The latest version at the time of writing supports multiple audio and video input devices and settings, a more reliable chat interface, and the ability to run in the background to avoid cluttering up your desktop. The download offers both RPM and DEB packages, giving more options for more Linux users to download and install on different distros. So far updates have been consistently released, adding mostly small but appreciated features such as audio voice messaging and notification settings, among other improvements. The not-so-good? This version is so far essentially just a web wrapper looking like a desktop application. While certainly welcome, it is not the

Features at a glance

Proper user interface
The new UI, while still a web wrapper, is far better than the basic website interface it is based on.

Instant Messaging

Almost required in today’s VoIP programs, the IM capabilities of Skype are stable and reliable.

The Settings menu still lacks in functions but is expanding with each update.

full-featured app that most users would want. In fact, it is currently missing one of Skype’s (and most other VoIP platforms’) most important features, video calling. Although Microsoft promises it in a future update, at this time Skype for Linux Alpha is missing an essential feature that many VoIP users need to communicate with family, friends and business contacts. The lack of this feature will certainly turn off most users until it is added. Adding to this, some users have been reporting inconsistent and dropped voice calls as Skype appears to be taking the time to upgrade its network infrastructure to accommodate increased demand.

Promises, promises Another strike against the Alpha is that it is not open source – an important consideration for all Linux users. Many Linux devs and users will likely refuse to use Skype as long as it remains proprietary, which could limit interest. Should you wish to give it a spin, the application is available for download from the Skype community forum website. As noted, there are both RPM and DEB packages, and it has been tested for Ubuntu, Fedora, Debian, and OpenSuse, although your mileage may vary depending on your distro. Microsoft promises testing and optimisation for other distros in future updates to the client. If you’re looking for a Skype alternative, there are plenty

available, some of the more popular ones being Ekiga, Linphone and Yate. Drawing both cheers and jeers from the Linux community, Skype for Linux Alpha faces a long and possibly bumpy road to widespread adoption. Many will be wary, given that Microsoft dropped Linux support once before and this app is still not FOSS. Overall, this may be a way for Skype to make a comeback on Linux and win back some old users as well as gain new ones. It certainly faces hiccups as it takes its first steps back into the Linux community, although it has potential to become something great down the road. For now, given its lack of video calling features in particular, we’ll wait and watch for further updates before adopting this as our new VoIP application. LXF

Verdict
Skype for Linux Alpha
Developer: Microsoft
Web:
Licence: Proprietary ‘alpha’ release

Features 4/10
Performance 6/10
Ease of use 8/10
Documentation 5/10

With plenty of potential but also a host of drawbacks, this application is certainly still in alpha.

Rating 5/10


Roundup 3D printers

Every month we compare tons of stuff so you don’t have to!

3D printers Stuck on which 3D printer to go for to get the best balance of features? It’s a tough call, so Ali Jennings takes a look at five leading printers.

How we tested... It’s tempting to just look at the quality of prints when it comes to 3D printers, but if you do that you’ll only get half the story. There are some other major factors that should drive your decision, such as ease of set up, configuration and use. In this test, we’ve looked at how easy it is to get started, from the unboxing to the first print. We also run a speed test to see how long it takes to print out a variety of test models, including the standard calibration cube and the slightly more fun but equally testing 3D Benchy. The two most important factors are reliability – there’s nothing worse than a 12-hour print that goes wrong after 11 hours – and accuracy. To test these, the 3D Benchy model is ideal, but we’ve also created a simple model that features a 10mm cube and 10mm cylinder that we can measure with calipers.
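The accuracy check boils down to simple arithmetic: compare each caliper reading against the model's nominal dimension. A minimal Python sketch of the idea (the readings below are illustrative values, not results from the printers on test):

```python
# Dimensional-accuracy check for the 10mm calibration cube and cylinder.
# The nominal size comes from the test model described above; the caliper
# readings here are made-up illustrations, not measured results.

def percent_error(nominal_mm, measured_mm):
    """Absolute deviation from the nominal dimension, as a percentage."""
    return abs(measured_mm - nominal_mm) / nominal_mm * 100

readings = {"10mm cube": 10.06, "10mm cylinder": 9.92}  # hypothetical values

for feature, measured in readings.items():
    print(f"{feature}: measured {measured}mm, "
          f"{percent_error(10.0, measured):.1f}% off nominal")
```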

Our selection
FormLabs Form 2
LulzBot Taz 6
Prusa i3
Ultimaker 2 Extended
XYZPrinting da Vinci Jr 1.0

Predictions that 3D printers would soon be in every home have yet to be realised, but there’s no doubt that these ingenious machines are going to continue to increase in popularity and everyday use. The latest generation of printers is easier than ever to use and now we’re starting to see each manufacturer chase defined user groups, such as education, makers and professionals. Each group of users has very different demands and although the basic technology used in the majority of printers is the same, the end use for


each one is very different. In education you need speed and safety; there’s no use developing a fine-resolution printer if the students don’t have the time available to see the end result. Likewise, makers tend to like to tinker and the professionals just want a machine that gets the job done. The 3D printer field is dominated by fused filament fabrication (FFF) which

is the technology used by the likes of LulzBot and Ultimaker. The alternative is stereolithography (SLA), as used by the FormLabs Form 2. This technique generally produces better prints, but takes far more work to finish the print. Today there’s a massive range of excellent printers; it’s now just a case of selecting the one that best suits your needs.

“There’s no doubt these ingenious machines are going to continue to increase in popularity.”


Ease of set up Are we talking minutes or days?


The majority of 3D printers have evolved from maker projects and some, such as the Prusa i3, are still available in kit form. When it comes to unboxing, few printers are ready to go: most require brief calibration, material loading and a checking process that will vary greatly in time and technical ability required. All five of our printers are at the top of their game, but some are decidedly easier than others to set up and use. In the past, SLA printers have been a bit of a hassle to set up, as they have required lengthy resin commissioning compared to the lock-and-load approach of filament printers. The FormLabs Form 2 changes this with a clean and simple cartridge and resin basin system that takes a couple of minutes to install and 35 minutes to fully commission out of the box. The LulzBot Taz 6 is a huge machine and requires a little assembly before you can start: the print head and base, along with the electronics, all needed to be bolted and screwed into place prior to loading the filament. Once switched on, the Taz 6 has an auto-levelling bed

which cuts out the hard set-up work. The process is well documented and all the tools that you need are included. There really isn’t a reason why anyone should find getting started with the Taz 6 a problem, and it will take around 25 minutes to set up. The Ooznest Prusa i3 is a different creature: it arrives in a box with further boxes inside and requires building from the ground up. It does take some skill to work your way through the construction, but it’s all well documented and every box, bag and part is labelled. You’ll likely find that the full construction job will take you the best part of a day. The Ultimaker 2 Extended is the largest of the Ultimaker 2 series. Setting up this printer is exceptionally easy and only requires the filament holder to be clicked into the back before loading

it up with filament. The Ultimaker doesn’t offer auto bed-levelling, but there’s a handy step-by-step guide and the full set-up takes about 25 minutes. The XYZPrinting da Vinci Jr 1.0 is the smallest of the printers in this Roundup: out of the box, it’s a matter of switching it on and loading the filament, and the printer is ready to go with no challenge or fuss in less than 10 minutes.

Verdict FormLabs Form 2


LulzBot Taz 6

★★★★★
Prusa i3


Ultimaker 2 Extended


XYZ Printing da Vinci Jr 1.0


When it comes to set up and ease of use the da Vinci really couldn’t be easier.

The XYZ can be up and running in less time than it takes to make an LXF Cup of Tea™.

Commissioning Getting ready to print your masterpiece.


The initial set-up of the materials is generally quite straightforward. However, the difficulty often lies in changing the material from one type or colour to another. In the case of the FormLabs Form 2, it uses resin and

while this produces stunningly detailed prints, swapping material isn’t as easy as you’d think. This is mainly because the basin has to be cleaned or changed for a clean one ready for a second run with the new material.

Swapping materials in some printers can be frustrating and time-consuming.

The LulzBot Taz 6 uses a slightly adapted Wade-style extruder. This means that changing material just means waiting for the hot end to heat up, after which the old material can be quickly removed and replaced with the new. The Ooznest Prusa i3 uses a Bulldog Lite drive extruder, and its small print head neatly combines everything, similar to the Taz 6, which makes swapping materials extremely easy. The Ultimaker 2 Extended uses a Bowden-style system which, when it comes to swapping filaments, works in much the same way as the other FFF printers. However, the small feed hole where the filament enters can be hard to access if you have the printer on a desk against a wall. In contrast, the filament for the da Vinci Junior feeds through the top of the machine and back in, but despite the printer being fully encased, swapping filaments is easy.

Verdict FormLabs Form 2


LulzBot Taz 6

★★★★★
Prusa i3


Ultimaker 2 Extended


XYZ Printing da Vinci Junior


The FFF printers make swapping material exceptionally easy, but the LulzBot design just has the edge for us.



Computer to printer Setting up your model’s software for printing shouldn’t be hard.


Each of the printers either ships with its own software or uses Cura by Ultimaker. The ease of use of the software, even when it’s the same Cura package, varies greatly between printers, and not all manufacturers offer software that’s compatible with Linux. However, with the use of Wine or a 3D print package compatible with your printer, such as Simplify3D, all the 3D printers can be made to work with Linux. The three machines that use Cura have the greatest platform compatibility, with versions of Cura available for Windows, OS X and several versions of Linux – in our case, we used the Ubuntu version. This isn’t just handy if you need to swap machines; it also means that when it comes to configuring your printer to work with the software, all of the hard work has already been done for you. Of course, one of the greatest aspects of the cross-platform application and its foundation in the open source community is that there’s plenty of support and loads of presets available from others that you can download and use.

FormLabs Form 2

LulzBot Taz 6



FormLabs Form 2, at the time of writing, doesn’t support Linux directly, but there are plenty of users using Wine in order to run the software. PreForm is about the most advanced 3D printing software to come with a 3D printer. On loading, it’s a little more strict compared with other packages, as it requires registration and your printer’s serial number in order to be used. However, once set up, it offers one of the most comprehensive free applications bundled with any 3D printer. Alongside the more usual print quality and material options, the software will also automatically check the integrity of imported models prior to printing, fixing any holes and highlighting any other issues that might appear. It also fully supports network printing over a Wi-Fi network without the need for any additions. The interface and workflow are extremely well thought out, but you do have to run it through Wine in order to use it.

LulzBot uses a branded version of the Cura software and there are specially tailored versions for Debian, Fedora and Ubuntu. The software already has the Taz 6 profile coded in, so after running through the set-up process the printer is ready to go. From the software you have a choice of directly sending the print to the LulzBot or saving to SD card to print from the card, and both techniques are equally simple. Cura only handles the preparation of the STL file ready for print and enables you to shift the model around the print platform. You can select the print quality settings and either direct print the design or copy it to SD card. All material selection and advanced options for the printer can be accessed using the small interface on the front of the printer. The printer interface is pretty simple but enables a huge amount of control and fine-tuning options for your printer.

Materials support Don’t limit yourself to one material.


Four out of five of our printers are FFF (fused filament fabrication) machines, which means that there’s a huge variety of materials and colours that they can use and print, whereas the SLA (stereolithography apparatus) printer is limited to fewer than ten options. The FormLabs Form 2 is a bit of a surprise: although you only have a choice of five materials (standard, castable, flexible, tough and dental), each resin is aimed at a practical use. You don’t get a choice of colours, but you do get materials that will enable


Verdict FormLabs Form 2


LulzBot Taz 6 you to cast, prototype working devices and print yourself a set of new teeth! The Lulzbot Taz 6 takes most filament materials at 2.85mm and the hot end will reach scorching temperatures of up to 300 degrees. The list of printable materials is extensive and due to the design that reflects its maker past and future, new and adapted print heads can be fitted to expand on the materials it can take. The Prusa i3 uses 1.75mm filament and, again, with a bit of maker skill can print almost anything, with a top hot

end temperature of 195 degrees. The Ultimaker 2 Extended has the potential to print almost as many materials, with a hot end temperature of between 180 and 260 degrees, which slightly limits material choice compared with the Taz 6 and i3. The XYZPrinting da Vinci Jr 1.0 is PLA only; this plastic is biodegradable and available in a huge selection of colours. Although you’re limited to XYZ’s own brand, the hot end temperature is unstated as it only prints filament designed for this system.

★★★★★
Prusa i3


Ultimaker 2 Extended


XYZ Printing da Vinci Jr


There are very few printers that can compete with the material compatibility of the LulzBot.


Ooznest Prusa i3

Ultimaker 2 Extended+

XYZPrinting da Vinci 1.0




Ooznest also suggests Cura or Slic3r for its take on the Prusa i3 printer and, again, supplies you with profiles for the printer that can be downloaded and installed. However, you’ll also need an application called Pronterface, which will enable you to interface with the printer. As the Prusa i3 isn’t an out-of-the-box solution (see Ease of set up, p23), there’s a certain amount of computer configuration required to enable the printer to be recognised by the computer prior to use. Once the initial hardware set-up of the Prusa i3 is out of the way, the Pronterface software is used to calibrate the machine: this moves the print head around the print base so that base levelling can be achieved accurately. Once all of the initial calibration is out of the way, setting up the main Cura software only takes a few minutes and will enable access to pretty much the same features as the LulzBot version of the software.

Ultimaker is the company that maintains the popular Cura software and it’s used with all of its machines. The set-up process is pretty much the same as for the Taz 6 and Prusa i3 and, again, special profiles are built in. As you’d expect, the use of the software is a fluid experience, with the model of printer being quickly selected from a drop-down list which features all the preset quality settings. The Extended is slightly different to the other printers on test as it has been designed as a standalone printer. This means that you can’t tether it to a machine and, unlike the Form 2, there’s no network Wi-Fi feature. However, being unable to tether isn’t really an issue. As with the LulzBot, the material type is selected on the printer through the small interface, and navigation of the interface is simple enough. The Extended also enables the fine-tuning of settings, such as fan speed and hot end temperature.

XYZPrinting aims its printer at novices and education, and the XYZware is extremely easy to use, if a little limited. Unfortunately, there’s no Linux version at this time. The printer is fully compatible with the excellent Simplify3D software, which is available for Linux, but this is proprietary and will cost you $150. If you do decide to use the XYZPrinting software through Wine you’ll find, compared with the other solutions, it’s very limited, but then it caters to the beginner market. There’s nothing bad about the software (aside from being paid-for and proprietary): it enables you to adjust the position of the model on the print platform and change the quality settings, but other than that there is little that you can actively do. As with all but the Ultimaker, the da Vinci Jr enables either direct or SD card printing, and the small interface built into the printer is, again, simple and easy to use.

Printing time and size

Verdict FormLabs Form 2

Some of the printers are faster than others, but none are fast.


We have to face the fact that 3D printers just aren’t very quick currently, not even moderately so, and although speed should be taken into consideration when choosing your printer, you shouldn’t base your purchase on it. More important are the print quality options; all our printers offer different preset quality settings and these can vary the print time from a low of one hour to well over eight. Three of the printers – the LulzBot Taz 6, Ooznest Prusa i3 and the Ultimaker 2


Extended – all manage to produce prints using PLA at the lowest settings in around one hour, and at the top settings the times stretch out to around four hours. The Ultimaker has one higher quality setting that increased the printing time to six hours. The XYZPrinting at the lowest setting matches the three printers using Cura software, with a low-quality print taking an hour and high quality being achieved in four hours. The Form 2 stands out with its print times: on its lowest quality print, it took just

over two hours, and a high quality print took a staggering eight hours. The other big influence on speed is, of course, size, and despite the difference in physical footprint between the printers, the print volumes aren’t quite so varied. The FormLabs Form 2 is relatively small at 145×145×175mm, the LulzBot offers huge dimensions at 280×280×250mm, the Prusa i3 is 200×200×175mm, the Ultimaker 2 Extended is 223×223×305mm and the XYZ has a very regimented 150×150×150mm.
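Those build dimensions are easier to compare as volumes. A quick sketch converting the figures quoted above into litres:

```python
# Convert each printer's quoted build dimensions (mm) into a build
# volume in litres and list the printers from largest to smallest.

printers = {
    "FormLabs Form 2": (145, 145, 175),
    "LulzBot Taz 6": (280, 280, 250),
    "Ooznest Prusa i3": (200, 200, 175),
    "Ultimaker 2 Extended": (223, 223, 305),
    "XYZPrinting da Vinci Jr 1.0": (150, 150, 150),
}

def volume_litres(dims_mm):
    x, y, z = dims_mm
    return x * y * z / 1_000_000  # 1,000,000 cubic mm per litre

for name, dims in sorted(printers.items(), key=lambda p: -volume_litres(p[1])):
    print(f"{name}: {volume_litres(dims):.2f} litres")
```

By volume the Taz 6 is comfortably the largest at 19.6 litres, with the tall Ultimaker 2 Extended second at just over 15.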

★★★★★
Prusa i3


Ultimaker 2 Extended


XYZPrinting da Vinci Jr


With more experience, you’ll want a larger print platform and that’s probably the Ultimaker.



Resolution and quality The defining quality of all 3D printers.


Print resolution is really what gets 3D printer enthusiasts excited. For many, it’s all about microns, which represent the minimum layer height, but consistency of flow and reliability are far more important. All the printers proved exceptionally accurate, showing both holes in flat material and model heights and widths that were exactly what they should be. More revealing is the difference in overall print quality and resolution: the FormLabs Form 2 at its lowest resolution of 100 microns produces a print that’s visually the same quality as the best print from any of the other printers. Increase the quality to its minimum layer height of 25 microns, and it’s so close to an injection-moulded part that it’s difficult to tell that it has

been 3D printed. The LulzBot Taz 6 supplies consistent prints that are well produced even at low quality, and there’s a marked improvement at each quality setting down to the minimum layer height of 50 microns. The larger 0.5mm nozzle also copes well with bridges and overhangs but limits sharp detail. The Ooznest Prusa i3’s 0.4mm nozzle and 1.75mm filament are capable of producing some fine quality prints too, with plenty of detail, but it did struggle with bridging and overhangs. The Ultimaker 2 Extended performed well at all settings and with overhangs and bridges, but showed little difference between the middle- and high-quality settings of 20 microns – aside from the time each took. The XYZPrinting da Vinci Jr 1.0 was noticeably cruder in

Verdict FormLabs Form2


LulzBot Taz 6


Ooznest Prusa i3


Ultimaker 2 Extended

HHHHH The numbers are important and for 3D printers it’s microns.

quality than the other printers. There’s visible stepping between layers and a lack of fine detail although it’s always accurate and consistent—its highest quality setting is 100 microns.

XYZPrinting da Vinci Jr 1.0


When it comes to print quality the Form 2 SLA printer is streets ahead.

Reliability is key It can print once, but will it keep printing?


Reliability is the big selling point for one of our printers and, in this test and previous tests, the LulzBot Taz 6 has worked without fault. The other four printers have a challenge if they are to come close to the near-100% print success rate of the LulzBot. The FormLabs Form 2 has very little to go wrong as long as you remember to open the resin cartridge vent. During printing there were no issues, and the support structure that surrounds the print is extremely easy and satisfying to remove. The Ooznest Prusa i3 is the least reliable, with about a 70% success rate when the base plate isn't checked, but the printer's biggest issue is the thinner 1.75mm filament getting tangled on the roll. As the i3 is a DIY project, some of the reliability issues are in the hands of the builder, and generally the more time spent over the build, the better quality and reliability you can expect at the end. The Ultimaker proved consistent, but when swapping between materials some filaments did require a slight adjustment of the feed to stop slippage.

The 100% print success result for the XYZPrinting da Vinci Jr 1.0 was a surprise. In all our reliability tests, every print was usable and nothing seemed to affect its accuracy. However, in reliability terms the LulzBot continues to shine as the leading star, and this can be directly attributed to the new auto-levelling base. Although the LulzBot still looks like it was built in a garage, the 10mm rods and high-quality lead screws are solidly built and designed to just keep on going.

The Ooznest Prusa i3 may lack reliability in comparison, but this can easily be overcome as long as you check that everything is tight, levelled and aligned every few prints. A spot of Loctite on the nuts and swapping threaded screws for lead ones will also make a huge difference to the Prusa. This is part of the fun: the initial kit is just the start.

Verdict FormLabs Form 2


Lulzbot Taz 6

HHHHH Prusa i3


Ultimaker 2 Extended


XYZPrinting da Vinci Jr


The LulzBot’s reliability still can’t be beaten by the others.

Having a reliable printer will save you time and frustration.


The verdict

There's a huge selection of 3D printers available now, but we selected these five as they represent a good cross-section of the market. The performance of all five is excellent, and each is aimed at a different audience, which is why looking at the five together is so interesting. The FormLabs Form 2 comes out as the best printer for high-quality and practical printing, although the expense and the time it takes to clean up prints, as well as all the extra equipment you need, firmly place this printer in the professional or at least high-end home user category. The small 25-micron layer height is only beaten by the Ultimaker at 20 microns, but side-by-side the different use of technology and material shows that the Form 2 wins the crown for absolute quality.

Although the LulzBot Taz 6 looks like it's been designed and built in someone's shed, it's far more refined than it looks. The Taz range has long been favoured by the maker community, and it's easy to see why when it's so reliable, compatible with a huge variety of materials and great fun to adapt and upgrade. The fine detail work using the standard 0.5mm nozzle might not match that of the Ultimaker, but part of the beauty of this machine is that you can easily adapt it to your needs. The Ooznest Prusa i3 is great for the home hobbyist; it will teach you more about 3D printing in a day than the other printers could in a lifetime. If you're new to 3D printing then we would highly recommend starting here. If you need the no-nonsense solution then the Ultimaker 2 Extended is outstanding, perfect for serious home users, schools and universities. It's compatible with the majority of materials and has a huge print platform with the extra height that makes it ideal for all sorts of projects. Finally, the XYZPrinting da Vinci Junior is a real surprise for such a cheap printer. OK, the print quality is far behind the others, but it's consistent, easy to use and ideal for kids to get started in the world of 3D printing. Its fully enclosed design also makes it nice and safe.

"When it comes to absolute quality no consumer 3D printer comes close to the Form 2."

1st LulzBot Taz 6
Web: Licence: Open hardware Price: £2,250
Unmatched when it comes to quality and reliability—one for the makers.

2nd Ultimaker 2 E+
Licence: CC BY-NC-SA Price: £2,750
Amazing all-rounder that will suit most printers' needs.

3rd FormLabs Form 2
Web: Licence: N/A Price: £3,316
One for the pros who need near production-quality prints.

4th Ooznest Prusa i3
Web: Licence: Open hardware Price: £400
Great fun to build and a great way to learn about 3D printing.

5th Da Vinci 1.0 Junior
Web: Licence: N/A Price: £300
Good starter printer and solution for schools, but lacking Linux support.

The Form 2's print quality is excellent but once the print is finished you'll need to clean and finish the model.

Over to you... Did we miss your favourite 3D printer or have you done something unique with yours? Email us at

Also consider... The initial outlay for a 3D printer isn’t cheap, even if you build your own with a Prusa i3 kit you’re still looking at £400. If you’re not sure about printing but still want to print out some models or products that you have designed there are other ways to do it. The first is to find one of the many maker clubs and hackspaces around the country.

They often have a selection of printers that you can use, and the organisers will have loads of advice about which printers will suit the type of work you're wanting to do; they might even print it for you. If the prospect of going to one of these clubs seems a little daunting then why not give a go. This site links up 3D

printer owners with those looking to get something printed, and the cost of getting something printed is surprisingly low. Another great 3D resource is, where you can catch up on the latest developments. To see the latest selection of 3D printers and materials as they are released, head to the TCT show website ( LXF


Subscribe to Get into Linux today!

Choose the perfect package for you! Get the print edition

Get the digital edition

On iOS & Android!

 Every issue comes with a 4GB DVD  packed full of the hottest distros,   apps, games and loads more!

 The cheapest way to get Linux Format.  Instant access on your iPad, iPhone   and Android device.

Only £18

Only £11.25

Every 3 months by direct debit

Every 3 months by direct debit


Get the bundle deal Get both the print & digital editions for one low price!


SAVE 36%

Every 3 months by direct debit

PLUS: Exclusive access to the Linux  Format subs area – 1,000s of DRM-free  issues, tutorials, features and reviews.

Subscribe online today… Or Call: 0344 848 2852 Prices and savings quoted are compared to buying full-priced UK print and digital issues. You will receive 13 issues in a year. You can write to  us or call us to cancel your subscription within 14 days of purchase. Your subscription is for the minimum term specified and will expire at the  end of the current term. Payment is non-refundable after the 14 day cancellation period unless exceptional circumstances apply.   Your statutory rights are not affected. Prices correct at time of print and subject to change. UK calls will cost the same as other standard  fixed line numbers (starting 01 or 02) and are included as part of any inclusive or free minutes allowances (if offered by your phone tariff).   For full terms and conditions please visit: Offer ends 25/10/2016


Protect your privacy

Jonni Bidwell knows that fraudsters are just dying to get their hands on his overdraft so he's created this feature.


hether it’s pesky nation states taking an interest in Auntie Ethel’s dark web browsing, ad networks tracking users’ daily surfing routines or hackers in Eastern Europe hijacking Amazon accounts and ordering bulk confectioneries, there’s plenty to be wary of online. With so much of our lives lived online, if systems are compromised or credentials fall into the wrong hands then things can get ugly pretty quickly. Most banks will act quickly to cancel cards when fraudulent transactions

are detected, and generally they will provide refunds, though the process is tedious. But there are other risks as well, chief among them identity theft. Convincing someone that the person that was claiming to be you was not, in fact, you, but that this person talking to them now is very definitely, absolutely you, can be tricky.

The effects were presciently and amusingly illustrated in the classic movie Hackers (where a Secret Service agent sees his credit rating destroyed, unfortunate personal adverts taken out in his name and eventually finds himself declared deceased), but the reality can be devastating. Unless you're prepared to go completely off-grid there's no way to defend against a determined and well-equipped adversary. Fortunately, there are steps to thwart the more common attacks without requiring you to don the old tinfoil tricorn. Read on, gain knowledge, stay safe.

"If credentials fall into the wrong hands then things can get ugly quickly."


Who’s after your data? B

You don’t want to wake up to a screen like this, so be careful where you click.

Image credit: Bromium Labs

y now most people are aware of the old adage, ‘if something sounds too good to be true, then it probably is’ and, thankfully, the once common ‘419 emails’ purportedly from executors of recently deceased Nigerian princes offering riches in exchange for a small fee are becoming less prevalent. But phishing and social engineering attacks have evolved and represent a very real, probably even the largest, threat to online security. The miscreants who indulge in it have a battery of tools at their disposal. A common type of phishing attack is to send potential marks an email which appears to come from their bank. It’s trivially easy to copy the styles, wording and address information from an official bank email, and easy enough to register a domain name that looks something like the official domain (think replacing letter ‘o’ with number zero, or using a ‘.co’ domain instead of ‘’) from which to send the email. This email might say something like ‘following a security review, <meaningless jargon>, you need to log in here and update your details.’ From here victims are taken to the imitation domain, which looks exactly like their bank’s website (because cloning websites is trivially easy too) and unwittingly key in all the details needed by the fraudster to drain their account. Campaigns may target a specific individual (spear

be siphoning from your internet traffic, it’s probably significantly less than what many people happily give to Facebook, Google et al for free. If you have a Google account visit and have a look in the My Activity section. All those web searches, YouTube videos, directions and even audio snippets (if you’re one of the ‘OK Google’ people) have all been dutifully archived by the Chocolate Factory, so that they can ‘tailor your web experience’ (or just target ads better). Facebook has a similar tool for viewing and downloading everything they know about you. Of course, none of this data retention and analytics should come as a surprise, since these companies’ entire business models are based on selling ad space. That ad space becomes highly valuable when marketeers can target buyers in a particular area, with a particular interest, who are friends with one demographic or another… The more data, the more revenue. It makes some people feel, rightly or wrongly, a little bit queasy. Then again, it would be silly to just go giving away a neat webmail account with a ton of storage, or a way to connect with all your friends (or hold them at arm’s length and just like or emote your way through the social jungle). That would be an expensive business model. Of course, you don’t have to use these services, but if you do can always be more wary about how you use them.

“Social engineering attacks have evolved and represent a very real threat to online security.” phishing), perhaps a sysadmin or a high-ranking manager with access to valuable or incriminating data. Such an effort relies on knowing something about the individual, and in some cases a lot can be gleaned from a simple Google search. Dates of birth, employment history, email addresses and even Amazon reviews can be enough to get the ball rolling. Presented with an email that seems to know something about them, people are much more likely to open that dodgy attachment or visit that link. Worse, with the right information and the right patois, a fraudster can sweet talk their way past many companies’ security checks, allowing them to reset passwords, change addresses and generally do not nice things. It’s always worth remembering that no matter what information governmental agents or private detectives may

Shadows and equations

Recently offered for sale (by a collective going by the handle the Shadow Brokers) was a collection of high-powered hacking tools and exploits. To whet potential buyers' appetites, a free sample of the material was released. The asking price for the rest was a cool $1 million, to be paid in bitcoins. The auctioneers claimed, and subsequent analysis of the freebies corroborated, that the malware originated from the revered Equation Group, said to be a Tailored Access Operations (TAO) unit within the NSA.

The most interesting parts of the cache exploited vulnerabilities in enterprise-grade networking appliances. Cisco and Fortinet released emergency patches, suggesting that the dump included prized 'zero-day' exploits (those of which the manufacturer is not aware and for which no security patch exists). It's hard to overstate the (pre-disclosure) value of these things to a well-qualified attacker—the junction boxes of the internet see all manner of interesting traffic, and a few carefully rewritten routing rules could cause mayhem.



Ways to hide

Besides financial or identity theft, many people are concerned about governments or law enforcement eavesdropping on their communications. Whether you're a journalist criticising a brutal regime or a whistleblower that's got hold of an Excel spreadsheet detailing exactly how much the company is wasting on motivational Powerpoint designs, you should be cautious as to what you send over the wire, and how you send it. Thanks to public key cryptography, it's possible (barring several implementation barriers, as we'll see later) for two parties that have never met to communicate in secret—that's without having to agree a private key in advance. The mathematics behind public key cryptography (see Feature, p50, LXF189) has been around since the '70s, but is only recently starting to be used wholesale (see the End-to-End Encryption box, below). People have long had at their disposal the tools required to generate key pairs, use them to encrypt an email and paste the result into any standard mail client—but they don't. It's not hard (see p33) but in a world where we expect so much from a few clicks, it doesn't fly. Plus it requires the other party to play ball. What if, confounded by the decryption process, they reply, frustrated, spaffing sensitive information in cleartext?

In this sense the web has done a better job at encryption. HTTPS enables us to communicate privately with websites, and we've been doing it since the mid-90s. Ideally, HTTPS performs two duties: authentication, which guarantees the website you're communicating with is indeed the one you think it is, and confidentiality, which guarantees that information, even if it were intercepted, remains secret. But even that system is far from perfect, since it relies on implied trust of a Certificate Authority, and there have been reports of rogue CAs finding their way into major web browsers' trust lists. Beyond that, several attacks on the protocol (BEAST, CRIME) and its underlying primitives (Xiaoyun Wang's 2005 attack on MD5) have shown that it's no silver bullet. Nonetheless, it's the best we have, and it's more secure than sending passwords and card numbers in the clear. Browser extensions, such as HTTPS Everywhere, ensure that you'll always visit the HTTPS version of a website, where one exists. Thanks to LetsEncrypt ( it's free and easy for anyone to enable HTTPS on their websites.

The Tor network has received a great deal of interest, partly due to being given the shadowy-sounding title 'The Dark Web' by media pundits. Tor began as a US Department of Defence project and a great deal of its funding still comes from there and other government sources. Given the eyebrows this naturally raises, it's looking to diversify its funding. The Tor Project, though, is run independently of its funding and according to time-honoured open source principles, so we shouldn't really be worrying about governments meddling with the Tor codebase at source. That doesn't mean that governments aren't interested in breaking Tor communications, because they very much are.

The EFF's and Tor Project's HTTPS Everywhere browser extension will ensure that your web browsing will be as private as it can be.
In February this year, it emerged that researchers at Carnegie Mellon had partially de-anonymised the network, and that they had turned over their findings to the FBI. It’s believed that the attack involved setting up a number of Tor nodes and correlating traffic between them, in what is known as a Sybil attack. The fruits of this operation came in November 2014, when Operation Onymous saw widespread and co-ordinated action against users and operators of darknet marketplaces. Tor was never meant to protect against these activities, and it remains one of the most vital tools for privacy activists. (Find out how to set it up on p40).

End-to-end encryption

This year, the Facebook-owned WhatsApp messaging app rolled out so-called end-to-end encryption for all its users. This means that the key required to decrypt messages as they pass through the WhatsApp network is only ever known to the sender and receiver of those messages. The underlying technology used by WhatsApp is the same as that used in Open Whisper Systems' privacy-centric messenger client, Signal. The open source Signal Protocol was designed by privacy activist and well-known hacker, Moxie Marlinspike. That protocol will also be rolled out to Facebook Messenger shortly, although secure messages will only be available on one device, otherwise key distribution becomes problematic.

As this kind of 'strong encryption' becomes more prevalent, those depending on it will see increased attacks on the endpoints themselves, namely the devices used by the communicating parties. The woeful state of Android security, largely due to carriers' ineptitude at rolling out OS updates across their networks (because it'll mess with the stupid apps they insist on installing on contract phones), offers hackers many opportunities. The UK government seems to have toned down its asinine anti-encryption rhetoric for the moment, but other countries will not tolerate platforms that allow their citizens to communicate without fear of reprisal.


Better living through GPG

On Linux the Swiss Army knife of encryption is Werner Koch's Gnu Privacy Guard (GnuPG) suite. Its core command ( gpg ) conjures memories of Pretty Good Privacy (PGP), a crypto tool originally released in 1991 by Phil Zimmermann. Nowadays OpenPGP is a standard specifying how encrypted messages and the bits associated with them should be stored, and GnuPG fully implements this standard. GnuPG works from the command line and has a reputation for being complicated and unfriendly (see blog/gpg-and-me). It avails the user of all the modern private and public key algorithms, as well as all manner of other knobs to twiddle. As a result, there are a huge number of command-line options and the man pages make for some lengthy reading. Most distros install GnuPG as standard, so we shouldn't need to install anything for this tutorial.

Traditional, symmetric encryption (where two parties share a secret key or password) is all very well, but it relies on the communicating parties having a secure channel to share the key in the first place. Typically, this would involve a dodgy meeting in a shady car park, possibly an exchange of briefcases and ideally destroying any written record of the key or passphrase. One never knows who's lurking in the shadows in such locales, and ideally one would rather avoid such situations. So let's generate a keypair and see how public key encryption works. Enter:
$ gpg --full-gen-key
Accept the defaults for the first three questions. We'll generate a 2048-bit RSA key with an RSA subkey and that key will never expire. Then answer yes to Is this correct?. You are asked for a name and email address. This information will be stored with the public key, which it is good practice to make as public as possible, so don't use your real name or primary email address if you don't want those things being public. The email address you use here doesn't have to agree with the one from which you'll actually send your encrypted mail. You'll then be asked for a passphrase to protect the key.

Key generation requires entropy (random data) so you'll then be asked to mash your keyboard and move the mouse while the key is generated. Once that happens you can check that everything worked with gpg --list-keys . GnuPG keeps all the keys it generates safely in a keyring. They can be imported and exported as required, but the utmost caution should be exercised when moving private keys around. Since you'll want to share your public key, export the key with:
$ gpg --output lxfpublic.key --armor --export <user id>
replacing <user id> with the email address you used during key generation. The resulting file can be emailed to your co-conspirators, who can import it using the following:
$ gpg --import lxfpublic.key
Alternatively it can be uploaded to a key server so that anyone can find you. To send you an encrypted message, say instructions.txt, your colleague would do:
$ gpg --recipient <user id> --encrypt instructions.txt
and send the resulting instructions.txt.gpg file your way. They should also securely delete (eg using shred ) the original file, as should you when you receive it. And that was a very quick intro to GPG, not too painful at all. If you prefer though, there's a graphical front-end called GPA (Gnu Privacy Assistant) and also the excellent Enigmail plugin for the Thunderbird email client.

"The utmost caution should be exercised when moving private keys around."

If command-line GPG is too daunting, then why not use its graphical counterpart GPA, or try KGpg if you are using KDE/Plasma.
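The tutorial stops at encrypting; for completeness, here's a minimal sketch of the receiving end, using GnuPG's simpler symmetric mode. Everything here is made up for illustration (the file names, the passphrase), the demo works in a throwaway keyring via GNUPGHOME so it never touches your real one, and the --pinentry-mode loopback flag assumes GnuPG 2.1 or later:

```shell
# Work in a throwaway keyring so the demo can't disturb your real one
export GNUPGHOME=$(mktemp -d)

# Symmetric mode: both parties share a passphrase instead of exchanging keys
echo "meet at midnight" > instructions.txt
gpg --batch --yes --pinentry-mode loopback --passphrase "correct horse" \
    --symmetric --cipher-algo AES256 instructions.txt

# The recipient decrypts with the same passphrase
gpg --batch --yes --pinentry-mode loopback --passphrase "correct horse" \
    --output decrypted.txt --decrypt instructions.txt.gpg
cat decrypted.txt

# Securely delete the plaintext once it's no longer needed
shred -u instructions.txt
```

Decrypting a public-key message is the same `--decrypt` invocation: gpg works out which private key to use from the file itself and prompts for your passphrase.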

Webs of Trust and key signing parties

We mentioned uploading your key to a public key server earlier. If you have someone's public key, then you know that only the holder of the associated private key can read any encrypted messages you send them. The trouble is, unless you conducted the key exchange in person, there is no guarantee that the public key belongs to whom you think it belongs—it could well belong to an imposter. Thus it makes sense to advertise your key as much as possible: put it on several keyservers, your website, your email signature. Conversely, do a background check before trusting a random public key. All of those things could potentially be hacked, though, so there's another way. If you have met someone in person, or are otherwise sure of the authenticity of their public key, then you can sign it. This relationship can be built up to establish a decentralised structure known as a web of trust.

Over time, people will see whose keys you have signed and, if they trust you, then they ought transitively to trust those keys. In order to get things started it's common to hold a key signing party, where participants – amidst other revelry – meet in person, verify photo IDs and sign keys and certificates. A fictional key signing party occurs in Cory Doctorow's Little Brother on a California beach—where will you host yours?
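The signing step itself can be sketched with a pair of throwaway keys. Everything here is hypothetical (the identities, the empty passphrases, the temporary keyring) and assumes GnuPG 2.1 or later, which provides --quick-generate-key:

```shell
# Use a throwaway keyring so the demo never touches your real one
export GNUPGHOME=$(mktemp -d)

# Two throwaway identities with empty passphrases (names are hypothetical)
gpg --batch --yes --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Alice Example <alice@example.org>"
gpg --batch --yes --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Bob Example <bob@example.org>"

# Having verified Bob's fingerprint in person, Alice certifies his key
gpg --batch --yes --pinentry-mode loopback --passphrase '' \
    --local-user alice@example.org --sign-key bob@example.org

# Bob's public key now carries Alice's signature for others to see
gpg --list-signatures bob@example.org
```

In real use you would export Bob's freshly signed key and send it back to him (or to a keyserver) so the certification travels with it.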



Privacy with Tails

Nate Drake explores the latest additions to the Tails armoury to keep your data private online.


Quick tip Can’t connect to the internet using Tails? Your ISP may be blocking connections to the Tor network. Restart Tails, choose ‘Yes’ for ‘Additional options’ and in the ‘Network Configuration’ section choose ‘This computer’s Internet connection is censored, filtered, or proxied.’ You should now be able to bypass the block.

Attendees of Eurocrypt 2016 in Vienna, earlier this year, were lucky enough to receive an information sheet and a USB stick with a live version of Tails preinstalled. The conference, which has run since 1987, promotes the privacy of your data through encryption and anonymising techniques, and Tails has often been a subject of presentations. Now it seems the conference organisers have decided that privacy lovers should have their own copy. For those who are new to Tails: in simple terms, it's an OS created primarily with security in mind. This is achieved by using a carefully handpicked suite of applications as well as routing all internet traffic through the Tor network, which results in much higher anonymity and much slower speeds. Used in the past by the likes of Edward Snowden, the result is an out-of-the-box privacy OS with its own set of advantages and drawbacks. Linux lovers will be aware that most popular distros can be used in a 'live' mode whereby the system boots entirely from a DVD or USB stick. This is an excellent feature to help you get a feel for a particular flavour of Linux to see if it's really meant for you. The other advantage, which Tails exploits, is that once you remove the DVD/USB, no trace of your activities is left on the machine—the hard drive is left out of the loop entirely.

The case for Tails

When the subject of privacy comes up among Linux users, people usually fall into two camps. The first camp claims that there's no such thing as online privacy and that the only way to keep your data safe is to go and live in an underground cavern while wrapping your head in tinfoil. At the other extreme are those who cannot imagine any situation in which they would ever need an OS like Tails, as they have nothing to hide. At this point, it's usually a good idea to ask if they have curtains, if they'd give you their credit card number or indeed why they don't walk around naked with all their secrets written on their skin.

For the rest of us in the middle, who may be concerned about the Investigatory Powers Bill in the UK or Apple's fight with the FBI over weakening encryption in the US, the new features in Tails 2.5 offer stronger ways of remaining anonymous online than the previous iteration [see Reviews, p18, LXF204].

First, Tails has become much easier to download, depending on the platform you're using. Visitors to the website ( will see that the site is much more polished and all you do is select your operating system to download the right version. The team behind Tails has also closed down its IRC channel and set up a chatroom using XMPP. This is easily set up using Pidgin, the built-in instant messenger. As in previous versions, Pidgin comes with OTR (Off the Record) messaging built in, which means that messages are encrypted before they ever leave your device—a must to keep your conversations private. Since our previous review, the clunky and outdated Vidalia software has also been replaced with a simple system status icon indicating whether or not Tails is connected to the Tor network.

The latest version of Tails also patches a few major vulnerabilities from previous versions. Back in February, the Tails project announced that the email client used at the time, Claws Mail, was in fact storing plain text copies of all emails accessed by IMAP on the server. There wasn't a quick and easy way to fix this vulnerability, so the mail client has now been replaced with IceDove, an unbranded version of Mozilla Thunderbird. IceDove includes the fantastic Enigmail plugin, which not only uses the secure HKPS OpenPGP server but has an excellent setup wizard to generate your own keypair to encrypt your emails. A mail setup assistant is now also included out of the box, meaning IceDove will load your configuration settings if you have a common email provider (see the For the Key-Rings of IceDove box, p36).

Love him or hate him, Snowden was a hard man to find, thanks in part to Tails.

Tails comes with Tor browser 6.0.3. Yes, that's a picture of an onion. It's a long story.


Under the hood, both the firewall and kernel have been hardened and numerous security vulnerabilities from Tails 2.3 and 2.4 have been fixed. The Tor Browser has been updated to version 6.0.3, which is based on Firefox 45.3. The usual extensions Adblock Plus and HTTPS Everywhere have been included to remove pesky ads and enforce SSL where possible. Since February of 2016, Tails 2.x has been based on Debian 8 (Jessie) with the Classic Gnome Shell desktop environment, which makes for a much slicker look and feel than before. Live systems will usually take a little longer to respond than those installed on a hard drive, but the admittedly spartan desktop reacts with lightning speed. Although Tails isn't recommended for day-to-day use, it's good to see that some effort has been made in the past year to make it more accessible in other ways. Support for playing DRM-protected DVDs out of the box has now been included. Media die-hards will also appreciate the inclusion of updated versions of Audacity and Traverso, which are multi-track audio recorders and editors, as well as Sound Juicer for ripping CDs. Those in need of a video editor to splice an instruction video for their next diamond heist can also make use of Pitivi, which was the pre-bundled video editor for Ubuntu up until October 2011. Tails 2.5 also comes with the awesome LibreOffice preinstalled, although as with other bundled applications it's not the latest version: being based on Debian, applications are chosen for stability over novelty. This means you may not be able to use the latest features in your favourite applications. Technically, it's also possible to install additional programs or manually update others from the terminal, but doing so can undermine your anonymity through 'browser fingerprinting'.

The sting in the tail

Even if you decide to stay with the suite of default applications, you'll find that unless you copy your content to an external drive or enable persistence, everything will be lost when you next restart the machine. The Tails project website is also pretty open about the vulnerabilities of its own technology: it maintains an extensive list of attacks against which using Tails (even the most recent version) won't protect. Many of these are the same as for the Tor Browser. If, for instance, you have a global adversary like a shadowy three-letter government organisation capable of monitoring all the Tor entry and exit nodes, it may see that you were on the network around the same time your YouTube account was accessed. This can be mitigated by finding out whether a website has a deep web (.onion) address and visiting that instead, eg the main page for Riseup, which provides online communication tools for people and groups working on liberatory social change, is nzh3fv6jc6jskki3.onion. This means your traffic never leaves the Tor network. In previous versions of Tails, it was also possible to put off casual snoopers by disguising the distinctive Tails desktop so that it resembled Microsoft Windows, but this feature has been disabled for the time being pending an update. Tails is open source, so expert coders can regularly review the code and check it for bugs or backdoors. However, the security features built into Tails 2.5 won't be much use if you are a victim of DNS poisoning and are redirected to a similar-looking website to download a compromised version of the software. For this reason, it's very important to use the feature now available on the Tails website to verify the cryptographic hash of the ISO file you're downloading, to make sure it's the real deal. The Tails project also can't protect against a system where the hardware is compromised, such

Quick tip If you download Tails with Firefox version 38+ or the Tor Browser version 5+, there's now an add-on you can install to verify that you are downloading genuine Tails software. See https://tails. to install this.
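Checking a download hash needs nothing more than a terminal and sha256sum. The snippet below sketches the mechanics against a stand-in file it creates itself; for a real download you would point it at your ISO and paste in the hash published on the official Tails website, never one copied from a mirror or forum post:

```shell
# Sketch of hash verification. demo.iso is a stand-in file; for the
# real thing, substitute your downloaded ISO and the hash published
# on the official Tails website.
printf 'pretend this is an ISO' > demo.iso
EXPECTED=$(sha256sum demo.iso | awk '{print $1}')   # stands in for the published hash
ACTUAL=$(sha256sum demo.iso | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
    RESULT="OK: image matches the published hash"
else
    RESULT="MISMATCH: do not boot this image"
fi
echo "$RESULT"
```

A mismatch means the file was corrupted in transit or tampered with, so the only safe response is to delete it and download again.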

The Florence virtual keyboard. It’s possible to change the colour scheme to something less reminiscent of your first coding project.

Persistence pays off

Each time that you start Tails, you will be presented with the rather Zen choice of selecting between using persistence and transience.

If you decide that the benefits of persistence outweigh the downsides, then this is very simple to set up. Go to Applications > Tails > Configure Persistent Volume. Needless to say, this can only be done if Tails is installed on a USB stick. Once you've chosen a passphrase, click on 'Create' and wait for the volume to be ready. You'll then be presented with options as to what data you'd like to preserve. You can enable the following options: GnuPG (stores all OpenPGP keys), SSH keys (both public and private), Pidgin settings (accounts, chats and OTR keys etc), Icedove configuration and emails, Gnome Keyring, Network Connections, Browser Bookmarks, Printers, Electrum Bitcoin Client settings, APT Packages and APT Lists (enables installation of additional software).

Finally, you will also have the option to create a folder called 'persistent' (which is stored in Places > Persistent) to store any personal documents that you create or download. Note that by default the password manager KeepassX doesn't save its database here, so make sure to use the Save As… feature if you want to do so. You can deactivate any of these options at any time, but the files you've already created will remain on the persistent volume. Finally, shut down your computer and restart using your Tails USB stick. You'll have the choice each time you boot of whether to use persistence or not, so if you don't need access to your personal data, you can use Live mode.

October 2016 LXF216     35

Protect your privacy

Quick tip The Diceware website is available at http://bit.ly/DicewarePassPhrase. You may also wish to use it to generate random usernames (2–3 words strung together) instead of those you usually use.

as a USB keylogger which records everything that's typed. Users can reduce the risk of this by using Tails' built-in virtual keyboard, Florence, located at the top right. For those who do choose the persistence route, it's important to bear in mind that Tails doesn't strip metadata from your files by default, eg the name of a document's author. Fortunately, Tails does come with MAT (Metadata Anonymisation Toolkit), which can remove authors' names, GPS locations etc from documents. Additionally, to quote the website directly, Tails also "doesn't make your crappy passwords stronger." A weak password can be brute-forced by a moderately fast computer in minutes or hours, no matter which ultra-secure OS you choose. Mercifully, Tails comes to the rescue here, offering PWGen, which can generate secure passwords. This application is actually surplus to requirements, as the excellent password manager KeepassX also has a feature to generate passwords using randomness obtained from wiggling your mouse. For those who don't wish to enable persistence, it may be an idea to write down new passwords (using a Diceware word list is one excellent way to generate them). Tails 2.5 also comes with paperkey, a command-line tool which allows you to back up your OpenPGP keys on paper too. If, like many dark web users, you have some bitcoins and want somewhere safe to put them, Tails comes with the brilliant lightweight Electrum Bitcoin wallet. You can either enable the bitcoin client persistence feature to make sure
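For the curious, the Diceware idea itself is easy to sketch in shell. This is only an illustration with a toy word list, not the method PWGen or KeepassX actually use; a real Diceware list has 7,776 words, each contributing roughly 12.9 bits of entropy:

```shell
# Diceware-style passphrase: pick five words at random from a list.
# WORDS is a toy eight-word list for illustration only -- use the
# real 7,776-word Diceware list for genuine passphrases.
WORDS="correct horse battery staple orbit velvet quartz lantern"
PASSPHRASE=""
for i in 1 2 3 4 5; do
    WORD=$(printf '%s\n' $WORDS | shuf -n 1)   # one random word per 'dice roll'
    PASSPHRASE="$PASSPHRASE $WORD"
done
PASSPHRASE="${PASSPHRASE# }"
echo "$PASSPHRASE"
```

Five words from the full list give around 64 bits of entropy, which is far beyond what casual brute-forcing can reach, yet the result is much easier to memorise than a random character string.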

Be warned: if found, Customs may be within their rights to seize your Tails USB stick.

your coins are available to spend, or you can leave Tails in live mode and write down your wallet’s ‘seed’, a chain of words that will let you recover your wallet at any time.

Isolating applications On a technical note, since switching to Debian, the Tails team has been trying to isolate specific applications using AppArmor. This is enabled on the kernel command line whenever you use Tails, and tries to make sure, for instance, that Pidgin Messenger can't access your GnuPG keyring. The development team has so far had mixed success with the live version of Tails, so it's safe to say this privacy measure is a work in progress. In addition, in case your USB stick is ever seized, the LUKS persistent volume is encrypted, but by default any documents you save elsewhere will not be. Therefore, if you decide to go undercover in North Korea (or in IKEA) and save your report to another USB stick, anyone in possession of that stick will be able to read it. However, Tails' Disk Utility does allow you to encrypt an external drive with a password before transferring files over. Note that, as the tinfoil hat brigade are fond of pointing out, the very fact that you are using an OS such as Tails can draw unwanted attention: running an OS with a strong emphasis on privacy can suggest that you have something to hide. Additionally, by default, Tails does nothing to disguise itself when installed on a DVD or USB stick, so if it's found in your Louis Vuitton bag next time you're at a border checkpoint, you may find you're asked some difficult questions. An excellent if expensive workaround for this is to visit the Tails website as outlined in the setup guide (see p37) each time you need to use it and install it to a fresh DVD. Finally, as noted on the website, Tails is a work in progress. A look at all the security vulnerabilities fixed in the various versions is enough to make a grown person weep. Take the time to explore the operating system and its limitations, and if you feel there's an application that would better suit your purposes, don't be afraid to head to the project's website to provide your feedback.

Getting your key for Icedove Make sure to start Icedove when Tails informs you that Tor is ready. If you're privacy conscious, it's probably best to create a new email address to be used exclusively with Tails. If you're feeling lazy you can just enter the details of a common mail provider such as Gmail and Icedove will do the rest. If, however, you want to keep all your email traffic inside the Tor network, it's best to make sure you and your contacts use a mail provider that offers a dark web (.onion) address. A few providers that do this include Mail2Tor, Riseup and OnionMail. Just click 'manual config' and enter the server settings from the relevant website. Once this is set up, select Enigmail from the Icedove menu and start the Setup Wizard.


Choose the 'extended' setup for advanced users and choose to create a new keypair. Even if you have used GPG before, it's important to have a separate keypair for your dark web identity. Next, you will be asked to create a passphrase for your GPG key. Size matters, so do make sure to choose a good-quality one. Next, Tails will gather some entropy to make your keypair. This could take some time. At this stage you may also want to generate a revocation certificate, in case you ever lose access to your key or want to change it. You'll need to save the file to your persistent folder, or to another medium such as a USB stick, to keep it safe.
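The wizard walks you through these steps interactively, but GnuPG can also generate keys unattended from a parameters file fed to gpg --batch --gen-key. A sketch of such a file follows; the name, email, key size and expiry here are illustrative, not Tails or Enigmail defaults:

```
%echo Generating a keypair for a dark web identity
Key-Type: RSA
Key-Length: 4096
Name-Real: Anon Example
Name-Email: anon@example.onion
Expire-Date: 1y
%ask-passphrase
%commit
%echo Done
```

The same principle applies either way: the keypair is only as safe as the passphrase protecting it and the revocation certificate you stash somewhere secure.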

It may take a few minutes to generate the keys. You can speed up the process by wiggling your mouse and bashing at the keyboard so that the computer can generate enough entropy. [Or a LXF feature–ED]


Set up Tails on a USB stick


Tails Installation Assistant


Visit the main page for the Tails installer at install/index.en.html. Click on 'Let's start the journey' to continue. Make sure you have both a blank DVD and a USB stick (ideally at least 8GB, but 4GB will do at a pinch) inserted into your computer.


Verify and install Tails


Use your favourite BitTorrent client to do this by clicking 'Download Torrent file'. You may verify the download at this stage if you wish. On Ubuntu, open the Software Center, then click Edit > Software sources > Other > Add. You'll need to add the following archive: ppa:tails-team/tails-installer.


Run the Tails installer

Software Center will now update and you should be able to search for and install tails-installer. For Debian users, you will need to open the Synaptic package manager and ensure that jessie-backports is enabled. You should then be able to search for and install the tails-installer package.

Install to USB stick

Choose the first option to install Tails onto a USB stick. You'll need to navigate to the ISO image of the Tails DVD and then click 'Install Tails'. Read the warning message, then click 'Yes' to continue. You'll need to enter your admin password at the end of the install.

Choose your operating system

For the purposes of this guide, we're assuming you're using a Debian/Ubuntu-based system, but if you're using another flavour of Linux, feel free to click and follow the instructions there. Click 'Let's go' at the bottom of the next page to proceed to the next step.

Boot into Tails

Restart your machine to boot into Tails. If it doesn't start automatically, you may need to look at your machine's boot menu to configure booting from USB first. For now, choose Live mode. You're now free to set up persistence, a new GPG keypair and so on. LXF


Emmabuntü Collective

In the North Togo Savannah region, the first training room equipped with Emmabuntüs.

Jonni Bidwell talks to Patrick from the Emmabuntüs collective to see how its distribution is helping humanitarian efforts in France and around the world.

Emmabuntüs has been used in award-winning digital projects in West Africa, see p41.

A Distro for All Seasons

Emmabuntüs is an Ubuntu (and now Debian) based distribution (distro) designed for lower-spec hardware and suitable for users of all ages. The project is based in France, where donated machines are refurbished by volunteers and sold at bargain prices to raise funds. But it's also part of a bigger picture, being allied with all kinds of charity work around the world, particularly in Africa. We spent some time with lead developer Patrick to find out more.


Linux Format: Let's talk about the charity side of things first. Tell me about the history of the Emmaüs communities, how they've grown, where they are etc?
Patrick d'Emmabuntüs: The Emmaüs movement was initiated in 1949 by a Catholic priest named Abbé Pierre (although the movement has no religious affiliation), who really wanted to put solidarity into action in helping "those who suffer most" and also being "the voice of those without a voice". You will find more historical information on the Emmaüs International website. The official Emmaüs France association was officially created in 1985 to regroup and somewhat unify the various flavours of the Emmaüs movement. As of today, this association federates 240 communities across the entire world, among which 115 are located in France for historical reasons. These communities are living, working and welcoming places cemented by social solidarity, and they function thanks to the collecting and recycling work of the Emmaüs companions. These people (around 4,000 in France today) are usually homeless people hosted unconditionally and for an unlimited duration. The main activity of these communities is to receive donations from individuals (furniture, clothes, ornaments, bicycles and computers etc), to repair them if necessary and to resell them to the public.

LXF: How did you get involved with them?
P d'E: As far as I am concerned, I started to help as a volunteer in the computer refurbishing activity of the Emmaüs community of Neuilly-Plaisance in May 2010. I started by developing a set of scripts to handle the software installs in Windows XP, making sure we didn't mess around with the original Windows licence. After that, I noticed that the majority of the PCs were donated without a hard drive, so I had the idea of creating a script to install free software and a Dock on an Ubuntu distribution, in line with the scripts developed for Windows XP.
I presented this work at the Ubuntu-party 10.10 in Paris; I wanted to increase the

A rare photo of Patrick d'Emmabuntüs. Emmaüs helpers prefer to put the collective before the individual, so surnames (and mugshots) usually aren't provided.

awareness of other people of the necessity of: developing and promoting a free distribution well suited to refurbishing the machines in the Emmaüs communities; helping these communities to refurbish and sell these PCs to beginners without any previous knowledge of Linux distributions; and reducing the waste generated by the overconsumption of raw materials, by extending hardware lifetimes. During this Ubuntu party, I had the chance to meet Gérard and Hervé. They convinced me to create an ISO image dedicated to installations without an internet connection, and they had the idea of calling it 'Emmabuntüs', a portmanteau obviously made of Emmaüs and Ubuntu. After that, David Rochelet and Morgan Duerte joined the core of the actual Emmabuntüs Collective. They are responsible for publishing Emmabuntüs on Sourceforge and on Freetorrent respectively. The first version of Emmabuntüs, based on Ubuntu 10.04, was released on March 29, 2011.

The LXDE desktop environment consumes only 155MB of memory at startup.

In short, the Emmabuntüs Collective develops and maintains the various flavours of the Emmabuntüs distribution, and in parallel its members help Emmaüs, and other associations, to refurbish old donated computers. The synergy between these two tasks is obvious. Due to historical and deep sociological reasons in France, many in the Emmaüs communities are computer illiterate, and selling them used computers at a very attractive price is a good way to close the digital gap in France between poor and rich people.

LXF: So hardware is donated by the public, tested/fixed and Emmabuntüs installed by volunteers, and then sold back to the public. What sorts of other work are the volunteers involved with?
P d'E: Our Emmabuntüs volunteers have a regular job during the week, and they spend their weekends on the collective's goals, refurbishing old computers. They are between 25 and 75 years old. At the beginning of the Emmabuntüs adventure they were mainly engineers or technicians in electronics or computer science, but we see more and more non-geek persons joining us. Teachers, for example, who see in Emmabuntüs a great set of tools to educate children and promote the 'Free Culture'. At any rate, there is no entry selection: all volunteers are very welcome. We must also mention Montpel'libre, the Software Libre User Group, which has helped us for three

“Selling used computers at a very attractive price is a good way to close the digital gap.”


years to promote Emmabuntüs and free software at the Emmaüs Community of Montpellier, by performing sales animations once a month, including an Emmabuntüs presentation and a quick tour of the system. Besides the machine refurbishment and on-selling, the Emmabuntüs community contributes in a number of other ways too, eg we train and support other associations like CaLviX, which does the Emmabuntüs installations for the humanitarian charity Ailleurs Solidaires, helping needy children in Nepal; and we refurbish donated computers in our own lab and give them back, in turn, to support the projects run in partnership with partner communities. There are also some training sessions around Paris, or we travel to the regions during our vacations, and sometimes we do remote training (for example in Africa for YovoTogo and JUMP Lab'Orione) by having remote access to the local computers (using TeamViewer) and holding the hand of the trainee until he is up to speed. We also collaborate with associations specialised in computer refurbishing, like THOT Cis, Les PC de l'Espoir, and Trira, which was founded by the Emmaüs community in Lyon and is using Emmabuntüs in the frame of their hacker workshops, where they conduct training sessions explaining how to reuse components of obsolete machines to build a computer within a plastic can: Jerry Do-It-Together.

LXF: Installing Linux on one machine (usually) doesn't take too long. But when

Computer class in the Akashganga Intl Academy equipped by Ailleurs-Solidaires.


We ran a test by launching, in parallel, Chromium, the Clementine music player, the Thunar file manager, the Geany text editor and the htop system utility; this gives a memory utilisation of 346MB, which still leaves another 156MB of free space if you have only 512MB of RAM installed in your computer.

you have to do it on hundreds, it's nice to automate things. Have you managed to streamline the installation process?
P d'E: Volunteers spend 30 minutes on average per computer, using an automatic cloning technique based on a USB key, which installs the system in five minutes; then we load the various Free Culture components. These include ePub books in the public domain, free music, the Vikidia kids' encyclopedia, plus some language customisation when the computers are sent to foreign countries (we do have an Albanian version of Emmabuntüs). This year alone, and besides the Emmaüs activity, the Emmabuntüs collective prepared and donated about 130 machines for various projects run by humanitarian associations, like YovoTogo and JUMP Lab'Orione (in Togo; see the Emmabuntüs blog), RAP2S (actions in Togo and Ivory Coast; RAP2S stands for 'Réseau Afrique Partage Savoir Solidaire', which translates as Africa Solidarity Network Sharing Knowledge) and Emmaüs Solidarity for Albania; and we also equipped four preschools in the Parisian area.

LXF: If you have an old motherboard lying around doing nothing, then it's tempting to update the BIOS, find the fastest CPU it can handle on eBay and fill it with RAM. This is fine from a hobbyist point of view, but it takes time and money and sometimes doesn't work. Do you get involved with this kind of thing? Do you provide any after-sales support for the machines that you sell?
P d'E: No, BIOS upgrades and sourcing suitable hardware would take too much of our time and cost too much, without being sure of a good result. It's just not worth it when you sell a system for between 50 and 70 Euros. On the other hand, in the frame of an Emmaüs sale, there is a three-month warranty on the hardware. The customer can return the machine without any justification and get, in exchange, a purchase voucher of the same

value. During the sale transaction we spend half an hour training the new user and showing them the first steps to get used to the system. And yes, we also handle some post-sale support, sometimes six months after the purchase, eg to install a new printer. For some customers we replace – free of charge – their computer with a more powerful one, eg because there were driver issues with full-screen video. Sometimes big companies give us a lot of depreciated computers which are still in very good shape. They are easy to refurbish, and we give them, in turn, to other humanitarian associations, eg the YovoTogo and JUMP Lab'Orione project in Togo.

LXF: Linux can run on pretty much anything, but if you want to run a desktop and browse modern websites then there must be some minimum hardware requirements. What are the recommended specs for Emmabuntüs? What can be done with hardware that falls below these?
P d'E: We recommend the following minimum configuration: a 2.0GHz CPU, 40GB of hard disk space and 1,024MB of RAM. If the system is at the low end of the performance range, you can launch LXDE instead of Xfce. We dispose responsibly of computers which are really too old and hardware-limited.

LXF: Major distros are starting to talk about dropping 32-bit support. Ubuntu has said that we won't be seeing 32-bit live images for 16.10. How will this affect you?
P d'E: Yes, we are concerned by the 32/64-bit issue. As a matter of fact, we started developing Emmabuntüs Debian Edition because of the impending Ubuntu situation. We hope Debian will continue to support the 32-bit architecture for a long time. To build the Debian image we are using the LiveBuild utility and the same source for both 32- and 64-bit systems. And in order to improve co-ordination with different partners, we are using a Git solution on the collaborative Framagit.

LXF: There are a few different software options for a lightweight desktop system—can you explain some of the choices made in Emmabuntüs? How do you balance wanting to give users a choice against baffling them with too many alternatives?
P d'E: The heart of our distribution is the Cairo Dock (which was actually developed at the beginning, under a different name, when I was refurbishing the first XP machines). The Dock gives us some degree of independence from the regular desktop of the underlying distribution (be it Xubuntu or Debian). In addition, the Dock has three different profiles (Experts, Beginners, Kids) which give you access to different sets of applications and activities. Kids really love Cairo Dock, and it's great that they can have easy access to Wikipedia (or a subset of it), even when no internet connection is available. The educational tools embedded in this distribution are also quite fun to use (speech synthesis, eg). At the same time, we kept the Xfce menu in its corner for people who do not want to use the Dock, and later on we added the option to use LXDE instead, to reduce the desktop's memory footprint even further.

"It's not a distribution for poor people, but a distro for all the people." – Patrick d'Emmabuntüs, on the project's values

LXF: Are you looking for any help? How can volunteers get involved?
P d'E: Yes, we welcome all people who want to help. We need more people in the refurbishing labs; more people to staff training centres; more people to write user documentation and articles. As mentioned above, we once shipped an Albanian version of Emmabuntüs, but besides the French version we support several languages (Arabic, English, German, Italian, Portuguese, Spanish) in our regular Emmabuntüs releases. This implies a lot of translation work in the code itself, but also for the user documentation online, the wiki, the forum etc. And, as a matter of fact, one of our volunteers, Arpinux, just completed an excellent Debian Beginner's Book of 280 pages, and we need to translate it quickly into various languages. Any volunteers out there?

Akashganga Academy were so happy and enthusiastic after the Ailleurs Solidaires visit that they repainted the main gate.

LXF: Can you talk about the other communities you partner with?
P d'E: As mentioned before, Ailleurs Solidaires installed computers running Emmabuntüs in two places: one is the Akashganga International Academy (a school based in Kathmandu, with 250 children); the other is the Disabled Service Association (a centre for 55 disabled – blind, deaf and dumb, physically impaired – children, all of them very poor). We also mentioned the Jerry DIT project. Jerry is an open source hardware project which is fully up-cycled and very low-cost. It gives a new life to computer components that would otherwise be dumped directly in the trash bin. A couple of years ago this project chose Emmabuntüs as its favourite distribution on the Jerry desktop version, and on the JerryClan Ivory Coast work. Thanks to this, the JerryClan de Côte d'Ivoire and FabLab AyiyiKoh teams were able to build SMS-based services aimed at medical aid and information, for which they were awarded more than five prizes during the Digital innovation challenge in Africa. These services are based on a mobile application using SMS to monitor patients with tuberculosis, or follow up pregnant women, and provide more accurate information (see JerryTub, m-Pregancy, OpenDjeliba, GBATA, Môh Ni Bah, Gbamé, JerryCyber). And you can watch the video of the JerryMarathon at Attécoubé at http://www. In the coming years we will support important projects in Cameroon with David, who is a founder member of our collective, and who recently settled in Douala to establish its hackerspace: the DouaLab. One project, running in 2017, is to equip an orphanage in partnership with the SAVAS association (to support women who are victims of sexual abuse). To remain in Africa, we also have a project in Togo with YovoTogo and JUMP Lab'Orione to help disabled children. These two associations are taking care of the transport, the installation and the maintenance of the computers, as well as building classrooms. In October 2016, there will be seven operational training rooms with 140 computers under Emmabuntüs in the North Togo Savannah region. The full story can be read in the post The march of the YovoTogo children. Finally, we also have a project in Benin, to refurbish used computers, donate them to schools in less favoured areas and teach the students how to use free software.

LXF: What do you think are the best things about being part of the Emmabuntüs collective?
P d'E: The Emmabuntüs collective thinks that Information Technology should be accessible to everybody, whatever their revenue level. We believe that giving a second life to aging computers reduces the electronic waste in the world. But to facilitate this choice, the cost is an important criterion, and this is why we offer computers at a very attractive price. In addition, we recycle waste that nobody wants to reuse or knows how to dispose of correctly, with an estimated value under one euro. We transform it into a very useful object for training, knowledge and information, which costs between 50 and 70 euros. Brilliant, isn't it? In short, Emmabuntüs is not a distribution for poor people, but a distribution for all the people.

Interview conducted with David, Patrick and Yves for the Emmabuntüs collective. LXF


Kali Linux

Hack Wi-Fi, break things

Jonni Bidwell wonders how the goddess Kali feels about being associated with so many script kiddies.


The fact you'll find the amazing Kali Linux on the LXFDVD this month is no coincidence. It's not the full edition, but it's certainly enough to introduce some of the tools and basic penetration testing strategies. Before we do anything though, a standard disclaimer: do not use any of these techniques against a machine that's not under your control, unless you have been given explicit permission to do so. This guide could potentially be used to access things that you're not supposed to, and if you get caught (and, believe us, you will get caught if this guide is your only source) you might find yourself at the wrong end of the Computer Misuse Act, or whatever the equivalent legislation is in your locale. Even if it doesn't get to the courts, being woken up at 6am by law enforcement officers demanding that you surrender all your hardware is no fun (is there something you're

not telling us—Ed). Also if, eg, you’re using Wireshark to collect packets from your home wireless network, then as a matter of course you should tell other members of your household what you’re up to. With that out of the way, we can get on with some introductory penetration testing. You

can use Kali straight from the disc, install it, or just install the tools (Wireshark and aircrack-ng are available in most repos) on your preferred Linux distribution (distro). For our first trick, we'll show you how trivially easy it is to crack a WEP-secured wireless network. The underlying attacks used by aircrack-ng first came into being about fifteen years ago, and everyone should be using WPA2 for their password-protected networks now (the original WPA has been deprecated, but is still much more secure than WEP).

"Do not use any of these techniques against a machine that's not under your control."

Cracking wireless networks (not just WEP ones) isn't just a matter of repeatedly trying to connect using different passwords, as most routers would blacklist the MAC address of any device that tried that. Instead, a more passive approach is required, so we set our wireless adaptor to a special mode where it silently sucks up all packets as they fly through the air, rather than sending any of its own. This is often called 'monitor' mode. We won't cover setting up a WEP network here: you can do it with an old router or even on your current one, so long as everyone else in the household knows their network activities are potentially all visible. Our preferred solution is to set up a Raspberry Pi running hostapd [which we covered in Tutorials, p80, LXF196]. The relevant hostapd config file looks like:

interface=wlan0
driver=nl80211
bridge=br0
ssid=WEPnet
hw_mode=g
channel=6
auth_algs=3
wep_default_key=0
wep_key0="short"

Our 5-character key corresponds to 40 bits, which is the best place to start. Cracking longer keys is certainly possible, but requires more packets and more time. We should be able to crack a 40-bit key in around one minute (and that includes the time taken to capture enough packets). Once you've got a target WEP hotspot set up, we can focus on our Kali Linux-running attack machine.
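Where does the 40-bit figure come from? Each ASCII character of the key contributes 8 bits, and WEP combines the key with a 24-bit initialisation vector to form the RC4 seed (hence the marketing label '64-bit WEP'). A quick sanity check:

```shell
# WEP-40: a 5-character ASCII key is 5 * 8 = 40 bits of secret.
# WEP prepends a 24-bit IV, giving the so-called 64-bit RC4 seed.
KEY="short"
KEY_BITS=$(( ${#KEY} * 8 ))
SEED_BITS=$(( KEY_BITS + 24 ))
echo "key: $KEY_BITS bits, RC4 seed: $SEED_BITS bits"
```

The 13-character keys of '128-bit WEP' work the same way: 104 bits of key plus the same 24-bit IV, which is why longer keys need more captured packets but fall to the same attack.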

Preparing the attack Getting wireless devices working in Linux is traditionally a source of headaches. Some adaptors require extra firmware to work, and many have other peculiar quirks all their own. As such, we can't really help you, but in general if your device works in another distro, it should do so in Kali Linux too. Unfortunately, even if you do get it working normally, many wireless drivers still won't support monitor mode. Some (such as Broadcom's wl driver for the BCM2235-2238 chipsets commonly used in laptops) do, but require you to activate it in a non-standard way; others claim to but don't. All in all it's a bit of a minefield, but the aircrack-ng website maintains an up-to-date list showing the state of various chipsets. Before we attempt to activate monitor mode, it's a good idea to disable NetworkManager or any other process which talks to the network card (wpa_supplicant, avahi etc). These might interfere with things, and the last thing we need is interference. Once the device is in monitor mode it will no longer be a part of the network, so you won't be able to browse the web etc unless you also have a wired connection. To test if monitor mode is available on your device, fire up Kali Linux, open up a terminal and run:
# airmon-ng start wlan0 6
replacing wlan0 with the name of your wireless interface (which you can find out from iwconfig) and 6 with the channel of the target network (although at this stage it doesn't matter). You'll get a warning if NetworkManager or friends were detected, along with their PIDs so that they can be duly killed. Hopefully at the end of the output there will be a message such as:
(mac80211 monitor mode vif enabled for [phy0]wlan0 on [phy0]wlan0mon)
We end up with a new network interface called wlan0mon. Different drivers will result in different names (mon0 is common too), so keep a note and adjust any subsequent
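If you're not sure which interface name to feed to airmon-ng, it can be picked out of iwconfig output rather than eyeballed. The sketch below parses a canned sample, since the exact format varies between drivers; on a live system you would pipe in the real iwconfig (or iw dev) output instead:

```shell
# Pick out wireless interface names from iwconfig-style output.
# SAMPLE is canned text for illustration -- wireless interfaces
# report an "IEEE 802.11" line, wired ones say "no wireless
# extensions."
SAMPLE='wlan0     IEEE 802.11  ESSID:off/any
lo        no wireless extensions.
eth0      no wireless extensions.'
WIFI_IFACE=$(printf '%s\n' "$SAMPLE" | awk '/IEEE 802.11/ {print $1}')
echo "$WIFI_IFACE"
```

On a real box, replacing the canned sample with `iwconfig 2>/dev/null` gives you the same one-liner for scripting.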

commands accordingly. You can check that monitor mode is indeed active by running iwconfig wlan0mon. Note that in Kali Linux, unlike pretty much every other distro, the default user is root. Just as well, because most of these commands need privileged access to the hardware. Kali isn't really intended to be a general-purpose distro, so the usual concerns about privilege separation don't apply. Now the fun can begin with:
# airodump-ng wlan0mon
Airodump will have your adaptor hop between channels and tell you everything it sees—access point names (ESSIDs) and MAC addresses (BSSIDs) and any clients connected to them. Note the BSSID and channel of the network you wish to attack; we'll refer to the fictitious 00:de:ad:be:ef:00 and channel 6. Knowing the MAC address of a client connected to the network may come in handy later on when we come to inject packets. You can generate traffic by connecting to the WEP network and doing some web browsing or other activity. You should see the #data column increase as more packets are collected. When you begin to feel slightly voyeuristic, press Ctrl+c to stop the capture. In a genuine penetration testing scenario, though, it would be cheating to generate traffic this way (we're not supposed to know the key at this stage; that's what we're trying to figure out). But we have a cunning trick up our sleeve, hardware permitting. Test if your card can inject packets; this works best if the attacking machine is close to the router (which might be hard if said machine isn't portable):
# aireplay-ng -9 -e WEPnet -a 00:de:ad:be:ef:00 wlan0mon
Hopefully you'll see something like the following: the replay attack won't work well unless packets can be injected reliably:
02:23:13 00:13:EF:C7:00:16 - channel: 6 - 'WEPnet'

DVWA is all kinds of vulnerable, we wonder what havoc this query will wreak?

We need to talk about WEP

Apart from short keys (the original WEP specified 40- or 104-bit keys, and early routers were forced into choosing the former), the protocol itself is vulnerable to a statistical attack. Besides the 40-bit key, a 24-bit initialisation vector (IV) is used to encrypt each data packet. The most practical attack against WEP involves collecting many IVs and their associated packets and doing some number crunching to derive the key. For a 40-bit key, we can get away with as few as 5,000 packets. If the network (or rather the nodes of it within earshot of our wireless device) is busy, this will not be a problem. If not, we can use a sneaky trick to get the router to generate some. Specifically, we can listen for ARP request packets (used to map IP addresses to MAC addresses), capture them and inject them back at the router, so that it sends out corresponding ARP replies. We can recognise ARP request packets by their size, so it doesn't matter that we can't decrypt their contents. Each ARP reply will give us a new IV, which will be another rap at the door of our WEP network's undoing.
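The underlying weakness is keystream reuse. The toy sketch below (plain Python, with a random byte string standing in for WEP's RC4 output, so it's an illustration of the principle rather than real RC4) shows that XORing two ciphertexts encrypted under the same keystream cancels the keystream entirely, leaking the XOR of the two plaintexts:

```python
# Two packets encrypted with the same keystream (same key + IV in WEP):
# XORing the ciphertexts cancels the keystream and leaks the XOR of the
# plaintexts -- the statistical foothold that WEP attacks build on.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)            # stand-in for RC4(key || IV)
p1, p2 = b"attack at dawn!!", b"retreat at noon!"
c1, c2 = xor(p1, keystream), xor(p2, keystream)

assert xor(c1, c2) == xor(p1, p2)     # keystream cancels out entirely
```

An eavesdropper who can guess one plaintext (ARP packets, remember, are recognisable by size) immediately recovers the keystream for that IV.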

October 2016 LXF216     43

Kali Linux

02:23:14 Ping (min/avg/max): 1.384ms/7.336ms/21.115ms Power: -39.73
02:23:14 30/30: 100%
If that works, we can inject packets of any shape or size to the router. Unfortunately, it will generally ignore them because (a) we aren't authenticated with the network and (b) we still can't encrypt them properly because we don't know the key. What we can do, if we can figure a way around (a), is listen for ARP requests and send them back out into the ether. The same ARP request can be used many times: the more replays, the more IVs. If packet injection isn't working, then just stick with generating traffic directly via the WEP network.

We were able to crack our short key with just 5,000 packets, so without further ado, let's recommence the packet capture. This time we'll restrict the channel and BSSID so that we only capture relevant packets:
# airodump-ng -c 6 -b 00:de:ad:be:ef:00 -w lxfcap wlan0mon
The -w switch tells airodump-ng to save the packets to disk with the prefix lxfcap . They are saved as raw data (.cap) as well as in .csv and Kismet-compatible formats, for use in further analyses with other programs. With the capture running, open another terminal and attempt a fake authentication with the router:
# aireplay-ng -1 0 -e WEPnet -a 00:de:ad:be:ef:00 wlan0mon

If you don't see a reassuring Association successful :-) then the next step most likely won't work as is. However, if you add the MAC address of a device associated with the WEP network with the -h switch, that ought to fix it. Start the replay attack with:
# aireplay-ng -3 -b 00:de:ad:be:ef:00 wlan0mon
Generating WEP traffic will speed this up, and remember there won't be any ARP requests to replay unless something is connected to the network, so you may have to cheat a little here to get things going. Eventually you should see the numbers start increasing. The packet count in the airodump-ng session should increase accordingly, and it shouldn't take long to capture the required packets: sometimes you'll get away with as few as 5,000, but generally 20-30k will suffice (some packets are better than others). At the top end, this is only around 10MB of data. Ctrl+C both the dump and the replay processes. We'll cheat a little by telling aircrack-ng to only search for 64-bit (40-bit key plus 24-bit IV) keys:
# aircrack-ng lxfcap-01.cap -n 64
If you have enough packets, aircrack-ng will likely figure out the key almost immediately. Even without the -n 64 hint, with enough packets the attack can still be swift and deadly. You may be unlucky, though, and sent off to get more packets, in which case run airodump-ng and aireplay-ng again. If you see a message about being disassociated during the replay attack, then you will need to do another fake authentication. The output filenames will be incremented, and you can use wildcards on the command line, eg lxfcap*.cap , to use all of them at once.

Once you've cracked a 40-bit WEP key, the next logical step is to try a 104-bit (13-character) one. The procedure is exactly the same, only more packets will likely be required (we managed it with 25,000 IVs).
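The reason a few thousand packets suffice is the birthday paradox: with only 2^24 possible IVs, repeats come surprisingly fast. This little estimate (our own back-of-envelope calculation, not part of the aircrack-ng suite) gives the probability that at least two captured packets share an IV, and hence a keystream:

```python
# Birthday-bound estimate of an IV collision: with 2^24 possible
# 24-bit IVs, the chance that at least two captured packets reuse
# an IV climbs quickly with the packet count.
import math

def iv_collision_prob(packets: int, iv_space: int = 2**24) -> float:
    """Approximate P(at least one repeated IV) among `packets` captures."""
    return 1 - math.exp(-packets * (packets - 1) / (2 * iv_space))

for n in (1000, 5000, 20000):
    print(n, round(iv_collision_prob(n), 3))
```

By 5,000 packets a collision is more likely than not, and by 20,000 it's a near certainty, which matches the packet counts quoted above.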
Cracking WPA2 keys is a whole different ball game: there are no such nice attacks, but if you are able to capture the four-way handshake as a new device connects, then a dictionary attack can be used.

Exploits and injections

Even 128-bit WEP keys can be trivially cracked with just a handful of packets.

Reggae Wireshark

As a pen tester, once you've got hold of a wireless key there's no reason to stop there. Besides having access to any resources on that wireless network, you can also decrypt its traffic. Wireshark is a great tool for capturing and viewing packets. You'll find it in Kali's Sniffing & Spoofing menu, or you can install it on any decent distro. We've already captured a bunch of WEP-encrypted packets, so let's have a look at those. Go to File > Open and choose one of the lxfcap*.cap files. Initially there's not much to see: most packets will just be listed as amorphous IEEE 802.11 data, and there will be some other boring network


requests and acknowledgements. However, we can tell Wireshark our key and these packets will surrender all of their secrets. Go to Edit > Preferences > Protocols > IEEE 802.11 and tick the Enable decryption box. Click the 'Edit' button next to Decryption Keys, and then click on the '+' to add a new key. Ensure the type is set to WEP and enter the ASCII codes of each character of the password, optionally separated by colons; eg, our initial password 'short' would be entered 73:68:6f:72:74 . Once you leave the Preferences dialog, all the packets will have been delightfully colour-coded, with all sources and destinations revealed.
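If converting a passphrase to ASCII codes by hand sounds error-prone, a few lines of Python can do it. This helper is our own convenience script (not part of Wireshark); it renders a passphrase in the colon-separated hex form the dialog accepts:

```python
# Convert an ASCII WEP passphrase into the colon-separated hex form
# accepted by Wireshark's 802.11 decryption-key dialog.
def wep_key_hex(passphrase: str) -> str:
    return ":".join(f"{ord(c):02x}" for c in passphrase)

print(wep_key_hex("short"))  # 73:68:6f:72:74
```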

It would be remiss of us to feature Kali Linux and not mention Rapid7's Metasploit Framework (MSF). MSF allows security mavens to submit modules to test for (and optionally exploit) all manner of vulnerabilities, whether it's the latest use-after-free bug in Flash, an SQL-injection bug in Drupal or some new way of sending the Windows Update service into spasm. It's immensely powerful and we hardly have space to even scratch the surface of it here.

Nonetheless, we can illustrate some of MSF's powers by taking liberties with the Metasploitable 2 virtual machine. There wasn't space to include it on the disc, but those who don't mind an 800MB download can get it in exchange for some details, or elsewhere if you'd rather get it quietly. Unzip the file and you'll find a VMware virtual machine. The actual disk image (the VMDK file) can happily be used in VirtualBox (with the 'Choose an existing virtual hard disk' option) or Qemu.

In order for the VM to be visible on the network, it needs its virtual network adaptor to be configured in bridged mode as opposed to NAT. In VirtualBox, we can achieve this by going to the Network tab and setting 'Attached to' to 'Bridged Adapter'. It will then act just like a regular device attached to your network: if DHCP is available everything should just work, otherwise a static IP can be configured. Start the VM, and then log in as user msfadmin with the password the same. Find the device's IP address using ip a . If devices on the network need a static IP configured, this can be done from /etc/network/interfaces (the VM is based on Debian Lenny).

There are a number of terribly configured and vulnerable services running on this VM, so it's a particularly bad idea to run it on an untrusted network. The extra cautious should even disconnect their routers from the internet at large. We'll use MSF to exploit the Tomcat service, which you can connect to by pointing a browser at port 8180 on the VM's IP address. This particular instance of Tomcat has a manager application running at /manager/html with easy-to-guess credentials (hint: it's tomcat/tomcat). The manager allows arbitrary applications (packaged as WAR archives) to be uploaded, which is not something you really want anyone to be able to do. No matter how you exploit a service, a common

Metasploit Framework can only show you the door; you must walk through it. Or buffer-overflow your way through it.

ready to launch the attack with exploit . All going well, you should see something like:
[*] Sending stage (46089 bytes) to
[*] Meterpreter session 1 opened ( ->…
followed by a new meterpreter prompt. Type help for a list of commands; they're different to Bash, although that sort of shell is available from the shell command. If we execute meterpreter's getuid command, we can see that we have the access privileges of the tomcat55 user. We could probably do some damage like this, but the Holy Grail is getting root access. As luck would have it, there's a privilege escalation vulnerability in another part of the system (the distcc daemon) which you can read about in the Unintentional Backdoors section at https://community.

Before we go, though, we'll look at a textbook attack. DVWA, the Damn Vulnerable Web Application, should also be accessible on the VM. As you can probably fathom, it features somewhat underwhelming security. This is immediately obvious from the login page, which kindly tells you what the admin password is. Log in with those details, then select 'DVWA Security' from the left-hand column and set the script security to low. As if things weren't bad enough already. Now go to the SQL Injection page. The idea is that you enter a User ID (in this case a number from 1 to 5) and the script returns that user's first and last names. It works; try it.

Sadly, DVWA is also vulnerable to a classic SQLi. Look at what terrible things happen if you put this code into the User ID field: 1' or 1=1 # . Zoiks! The script got very confused and just returned all of the IDs. The reason for this oversharing is the underlying PHP query, which looks like:
$getid = "SELECT first_name, last_name FROM users WHERE user_id = '$id'";
By crafty quote mismatching, the last part of our query is then interpreted as: WHERE user_id = '1' or 1=1 #' . The trailing quote is commented out by the # and the clause or 1=1 ensures that the WHERE expression is always true.
So all records are returned. There's really no excuse for this sort of coding blunder: PHP includes perfectly good functions to sanitise user input and prevent this sort of thing.

Here must end our brief foray into Kali Linux, but do explore it further. For fun and games, why not download a Windows XP virtual machine (which you can do entirely legitimately, provided you delete it after 30 days) and see how much damage you can cause with Metasploit. Hint: we enjoyed MS12-020. LXF
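The whole blunder, and its fix, can be reproduced outside DVWA in a few lines. This sketch uses Python's built-in sqlite3 module with made-up table contents (and SQLite's -- comment marker where MySQL would use #): string interpolation lets the payload return every row, while a parameterised query treats the same input as a harmless literal.

```python
import sqlite3

def setup_db():
    # Hypothetical stand-in for DVWA's users table.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (user_id TEXT, first_name TEXT, last_name TEXT)")
    db.executemany("INSERT INTO users VALUES (?, ?, ?)",
                   [("1", "Admin", "Admin"), ("2", "Gordon", "Brown")])
    return db

def lookup_vulnerable(db, user_id):
    # DVWA-style: user input pasted straight into the SQL string.
    query = f"SELECT first_name, last_name FROM users WHERE user_id = '{user_id}'"
    return db.execute(query).fetchall()

def lookup_safe(db, user_id):
    # Parameterised: the driver binds the input as data, never as SQL.
    return db.execute(
        "SELECT first_name, last_name FROM users WHERE user_id = ?", (user_id,)
    ).fetchall()

db = setup_db()
payload = "1' OR 1=1 --"   # SQLite comment syntax; MySQL uses # instead
print(len(lookup_vulnerable(db, payload)))   # every row comes back
print(len(lookup_safe(db, payload)))         # no user has that literal ID
```

The safe version is no harder to write, which is exactly why this class of bug is so inexcusable.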

Quick tip We didn't have time to cover Burp Suite here, but it's a very powerful tool for finding holes in web applications. Unfortunately, some of the more interesting features are only available in the paid-for edition.

"For fun and games why not download a Windows XP virtual machine."

goal is to get shell access to the target machine. This is usually done by starting a reverse shell on the recently exploited machine. Once initiated, the shell will 'call back' its master and enable them to enter commands with whatever privileges the exploited service had. We'll use a Java payload to achieve just this in MSF.

Start MSF on the Kali machine; it's in the 08. Exploitation Tools menu. If you see an error, wait a minute and try again: it has to create its database on first run and this sometimes takes longer than it's prepared to wait. At the msf> prompt enter:
use exploit/multi/http/tomcat_mgr_deploy
Note the prompt changes. You can find out more about the exploit by typing info . Next, we set some parameters for the exploit module. Change the RHOST according to the results of the ip a command on the Metasploitable VM earlier:
set RHOST
set RPORT 8180
set USERNAME tomcat
set PASSWORD tomcat
set PATH /manager/html
set TARGET 1
These are all self-explanatory except the last one, which tells MSF to create a Java payload, as opposed to something OS-specific, which won't work for this exploit. We're now


Mr Brown’s Administeria

Jolyon Brown


When not consulting on Linux/DevOps, Jolyon  spends his time bootstrapping a startup. His  biggest ambition is to find a reason to use Emacs.

Esoteric system administration goodness from the impenetrable bowels of the server room.

The sound of silence


This month, I'm writing this column whilst on holiday sans electricity. The owner of the property we're renting cheerfully warned me that the power would be out for a few hours one day during our stay for maintenance work. We brushed it off, assuming we'd be out at the beach anyway. Unfortunately, we didn't take into account the British weather, and today coincided with the type of rain and wind that made the dog look at me as though I was insane when I tried to take her out this morning. So we're sat in our temporary home watching it lash down. Of course, everyone ignored my warnings to charge up devices overnight, so they sit silent, drained of power. Even if they were useable, there's no internet, and phone signals are so weak as to be useless. I charged my laptop and phone up, but I'm pretending they too are out of commission for the time being—I'm drafting this up with paper and pencil. (Note to self: when did my handwriting become so atrocious?)

I have to say, this enforced information diet is enjoyable. The kids are reading (gasp) physical books rather than watching YouTube. There have been actual conversations between family members. A few weeks ago, rushing to catch a train, I thought I'd left my phone at home. I momentarily experienced the kind of panic that should be reserved for lost children: how would I pay for parking? What about picking up reserved tickets? Subsequently, I found the phone in the car footwell and chided myself for the panic, and for a reliance on a single point of failure that I'd never accept in any infrastructure I was managing. This holiday outage, then, is the perfect cold turkey for my state of constant connectedness. Now, if you'll excuse me, I'm going to go around switching things off at the wall so that even if the power does come back on, I can enjoy being offline for a little bit longer.


Mayhem reigns The DARPA Grand Cyber Challenge plays out in  Las Vegas as machines battle each other for prizes.


Back in 2013, DARPA (the US defence agency which had a vital role in the research that led to the internet) announced its intention to hold a 'Cyber Grand Challenge', consisting of teams creating automated systems that could compete against each other to evaluate software, test for vulnerabilities, generate security patches and apply them to protected computers on a network. In a press release, the agency pointed out that the process of finding and countering bugs, hacks and other attack vectors "is still effectively artisanal", with professional bug hunters and other security professionals dedicating a huge amount of effort to searching millions of lines of code to find and fix vulnerabilities that could be taken advantage of by users with ulterior motives. Many teams

Mayhem won $2m for fixing bugs and using them against other automated systems.

took up this challenge, using technologies such as machine learning to try to perform this work automatically. After preliminary rounds, the Cyber Grand Challenge final took place at DEF CON 24 in Las Vegas in August, with seven finalists aiming to capture the top prize of two million dollars for the team which outperformed the competition in a special 'capture the flag' contest. The systems, running on identical high-end hardware, battled for over eight hours to find and repair bugs in specially prepared software while at the same time using the flaws uncovered to attack their competitors. The eventual winner was Mayhem, which was developed by ForAllSecure of Pittsburgh.

In order to fuel follow-up research, all of the code produced by the automated systems during the final event has been released to allow others to learn from it. The systems themselves run on a Linux-based environment known as DECREE, which includes support for the specialised binaries used in the challenges themselves. Mayhem was invited to participate in the annual human DEF CON capture the flag contest where, as expected, it placed last, but it did manage to complete one task ahead of the other teams. DARPA hopes that the research and development generated as a result of the competition will lead to new technologies and products, in much the same way as its autonomous vehicle competition did in 2005.


Sysdig Discover the system exploration and troubleshooting tool that brings a lot of  your favourite problem-solving tools together in one awesome package.


Back in the day, one of my opening interview questions for prospective admins was to come up with a simple problem and ask them to name some command-line tools they'd use to investigate it. It was a basic filter for people who'd actually worked with Linux/Unix, but it would often trip candidates up. Almost everyone had heard of top, but a surprising number hadn't used the likes of iostat, vmstat or tcpdump. That was then, of course, when I was concerned with physical servers running a single operating system. Logging into a Linux system being used to run containers now and looking at a process listing can feel quite alien. If using top for troubleshooting wasn't really the answer I was looking for in the noughties, what would I like to see people using these days?

Can you dig it, Sysdig

As Linux has moved on, I'm glad to say the tools have too. This month, I'm going to take a look at one of my favourites, sysdig, which is sold as being a "Linux system exploration and troubleshooting tool with first-class support for containers". The tool is open source (of course!) and the company behind it (formerly known as Draios but now rebranded as Sysdig) offers a commercial service, Sysdig Cloud, which is a cloud and on-premises packaged version of the software. There's a tagline on the website which says "Think of sysdig as strace + tcpdump + htop + iftop + lsof + transaction tracing + awesome sauce. With state of the art container visibility on top". Sounds promising to me. So how can I get it and what can it do?

The open source version of sysdig (see the Sysdig Cloud box, bottom, p49 for details on the commercial offering) was available for me on Ubuntu 16.04 with a simple sudo apt-get install sysdig . This gave me version 0.8.0, whereas the downloadable version from the project website was 0.11.0. I went with the latter (the recommended approach of the project is to use the latest version available) for the purposes of this article. There are a variety of other install options; take a look at the project website for a list. I'm sure anyone reading this will be able to work it out for themselves.

Sysdig works by loading a kernel module and then capturing system calls and other operating system events. The best diagram I've found showing where sysdig fits into the operating system as a whole (along with a really

comprehensive list of just about every Linux performance and analysis tool I've ever heard of) shows sysdig sitting in the system call interface (which handles the communication between the kernel and userspace programs). Anyone who has ever run strace on a process will have seen the types of calls I'm talking about (eg read, write, open and exec will all commonly be shown in an strace dump). Running $ sudo sysdig with no parameters takes this a step further, dumping every system call taking place to stdout (I found it a simple way to see whether everything is installed correctly). Sysdig can be used to look at these events in real time, or it can dump them into a file for later analysis, similar to, say, Wireshark, but for system calls rather than network packets. According to the developers, the easiest way to get started with sysdig is to run it via its curses-based interface, csysdig.

Curses!

Running $ sudo csysdig plonks me straight into what looks like familiar territory when it comes to command-line analysis tools. At a glance, a casual observer could be forgiven for

This is the website for the open source version of sysdig and the wiki contains many useful examples of what the tool can do.

Falco - snort, ossec and strace in one

While I can't shake off associating the word Falco with the Austrian singer from the 80s, this spin-off from the main sysdig project is worth investigating (and probably worth a column of its own). It's a behavioural activity monitor designed to "detect anomalous activity in your applications", according to the website. Running as a daemon, Falco uses sysdig to watch a system for behaviour matching a set of predefined (open source) rules. This makes it similar to Snort, the popular intrusion prevention system, albeit at the system call level rather than the network.

As with the rest of the project, much emphasis is placed on the ability to use it with containers (which is fair enough, given the marketplace at the moment). It can be run inside a container itself, of course. Examples of the types of things it can detect are listed on the project wiki. These include an unexpected shell being started inside a container, reads of sensitive files and outbound network connections from binaries like ls (indicating they've been replaced by trojans). Falco can also use recorded trace files from sysdig, which can help with rule development (being able to run the same condition as many times as needed to help tweak new rules is very handy). It's very easy to configure Falco when it comes to handling generated alerts: they can be pushed to syslog, a file, stdout or to an external program, making it a good citizen in terms of fitting in with any enterprise monitoring system.


assuming I was running a fancy version of top, to be honest. At the bottom of the screen, though, are some handy menus which can be accessed by the function keys (or the mouse). If the screenshots look a little odd, it's because I've recently switched to using the i3 window manager, which I thoroughly recommend, but I haven't quite managed to get it looking how I want yet.

The csysdig menus include a nice help function and one labelled 'list', which in turn brings up some examples of what sysdig can do. There are quite a few, including listing which files and directories are being accessed, what errors are occurring and what connections have been established. These menu items are known as views and are basically Lua scripts which use sysdig filter fields, process the data and then display it on screen in various ways. The nice thing about csysdig, which sets it apart from the aforementioned top and its variants, is the ability to drill down into a particular process or container and have the views use that as their new context. For example, by highlighting something I'm interested in and hitting the return key, I can get more detail about it—I could drop down into an Apache process running in a container and get a trace of the traffic it's processing by way of the echo option. Similarly, I can see exactly what system calls are involved by way of the dig option. Csysdig can process captured trace files from the command-line version too, which is nice when looking around for some inspiration as to what exactly happened when a problem occurred. It's a handy tool to have in your back pocket, and easily customised too: modifying or adding new views is pretty straightforward (they're Lua scripts).

Having whetted my appetite with the friendly interface, I think it's time I headed back to the command line proper and tried to get my head around how the heart of sysdig works. I've mentioned using files to record sysdig output a couple of times now.
This is a fundamental operation, so it's a good place to start:
$ sudo sysdig -w capture.scap
The output file could be called anything, of course, but the sysdig project's suggestion of using .scap seems like a sensible one to me. There are options to limit the number of events stored in a file, to ensure files don't grow beyond a certain size and to control how many files to keep at any one time.

In quite possibly the world's most boring screenshot [Ed – we've seen worse just in this issue], Sysdig does a live trace of my vi session.


This is handy when wanting to do continuous captures of, say, an intermittent issue. We've all had them, I'm sure: some problem that occurs at random, usually around three am. Doing something like:
$ sudo sysdig -G 3600 -W 8 -w capture.scap
limits a capture file to an hour's worth of data, with eight rotations of the file to be kept. In this case sysdig starts writing to capture.scap0, continues up to capture.scap7 and then begins overwriting capture.scap0 again.

Of course, writing files is great, but what about reading them? These are binary files (actually pcap-ng format—one of the sysdig founders worked on pcap originally) and so we need sysdig to read them. Using the -r flag (eg $ sudo csysdig -r "capture.scap" ) will do this. It's worth pointing out here that sysdig is available for Windows and OS X. This might be of use to those poor unfortunate souls forced to do their work on the mandated corporate desktop environment. Sysdig on Windows allows trace files to be examined at least, so captures can be examined at leisure after being generated on, say, a production system. Assuming IT support allows it to be installed there in the first place, of course (I'm so glad I don't have to worry about that kind of thing these days). Naturally, I haven't tried this out myself. There's only so much pain I can go through on your behalf, dear reader.

Filtering for fun and profit

I've already mentioned the nice way csysdig was able to drill down into selections but, of course, every avid follower of this column recognises that there's nothing like the command line when it comes to control. The power of sysdig comes from it capturing everything, then allowing us to pick out the elements we want. If I open a vi session, then in another terminal window run the following:
$ sudo sysdig proc.name=vi
then as I edit my file, sysdig will show the system calls being made—it actively filters everything out barring my vi process, thanks to the filter argument. Sysdig has a ton of these filter fields (listed via $ sysdig -l ). Even better, they can be combined with Boolean and comparison operators, eg:
$ sudo sysdig evt.type=open and proc.name!=cat

This traces all file-opening events, unless they are the result of my using the cat command. So what other events can I use? Sysdig dumps a huge list of everything it knows about if I issue $ sysdig -L . While remembering every single one is certainly beyond me, it's worth knowing that the parameters to each event type (eg the execve type has multiple parameters, like ERRNO and PID) can also be used as a filter by use of the evt.arg field. Perhaps I want to see every access attempt on /etc/shadow:
$ sudo sysdig evt.type=open and evt.arg.name=/etc/shadow
Sysdig can format the output of these commands by use of the -p flag, which is a bit like printf when it comes to syntax. To add to the file monitor above, I can print out the names of the users reading a particular file:
$ sudo sysdig -p "%user.name" evt.type=open and evt.arg.name=/etc/shadow
My favourite at the moment comes straight from the sysdig wiki pages, however. This prints out the directories visited by a user, following them around the system. It makes me think sysdig would be a great tool for creating a honeypot with:
$ sysdig -p "user:%user.name dir:%evt.arg.path" evt.type=chdir

Chiseling away

These one-liners are great and hopefully give you some indication of how powerful sysdig can be. But won't it take an age to build up a library of these? Can I create something a bit more useful? The solution to both of these questions is to use what sysdig calls 'chisels'. These are the Lua scripts I mentioned and can be used to extend csysdig. As you might expect, a whole load of them are included with the installation and I can get a listing of them—this time with $ sysdig -cl . For a more detailed explanation of what an individual script can do, use the -i flag (eg $ sysdig -i topcontainers_file ) or simply take a look at how they are written—on my Ubuntu system they were installed under /usr/share/sysdig/chisels. I can't claim to be a Lua expert by any means, but it's quite a simple, flexible language (and one we've covered in the past, see Tutorials, p84, LXF205). The provided examples make pretty easy reading for anyone familiar with other scripting languages at any rate. I'd recommend putting self-written scripts in ~/.chisels, which sysdig looks in by default when listing or running chisels.

The included examples are really useful—being able to quickly run $ sysdig -c topcontainers_cpu is great, for example. Chisels can be combined too; it's just a case of adding an extra -c <chisel_name> argument. I've tended to use them more with filters, though:
$ sudo sysdig -c spy_users ""
Writing a chisel is reasonably straightforward. All sysdig requires, as a minimum, is that any chisel must contain four variables: a description, a short description, a category and a list of arguments (which can be empty). While space unfortunately prevents me from 'digging' (sorry) into writing chisels from scratch, a full tutorial can be found on the sysdig wiki and there are many more examples out there on the internet, as you'd expect.

Finally, I didn't want to end this article without a quick mention of tracers—a relatively new addition to the sysdig package. By writing specially formatted strings to /dev/null to start and end tracing, I can instrument anything I want to run on my computer, then analyse the results using sysdig. This is fantastic and so simple to use. Have a look at the small video on the sysdig site and tell me you're not impressed at how easy it is. Even better, sysdig includes a handy spectrogram view which makes it easy to spot outliers. I've only really just started to look at using this, if I'm honest, but it shows great promise for helping to identify bottlenecks in a process.

I hope this introduction to sysdig has piqued your interest into taking a look at it—I really do think it's a valuable addition to any sysadmin's toolbox. It also might just help you ace your next interview! LXF

Sysdig Cloud

It sometimes comes as a surprise to many end users, but open source developers have to eat too (and not just spend their time responding to support tickets and pull requests). Luckily, SaaS and paying for support of open source products are pretty much accepted as the norm these days.

The commercial entity behind sysdig (found at the .com domain, as opposed to the open source .org address) has gone down the SaaS route with the launch of Sysdig Cloud, which sits firmly in the container monitoring niche. Available as an internet or on-premise solution, Sysdig Cloud builds on the ability of the open source version to intercept system calls, and collects them together for as many hosts as the user is willing to pay for ($20 per month per host, with up to 20 containers on each). These get slurped up to Amazon's cloud, where sysdig provides a set of nice real-time dashboards, integrations, the ability to replay system calls and nice drill-down topology.

Given Sysdig Cloud's focus on containers, it's nice to see a monitoring tool that's designed to report on the health of a service too (ie I may have some containers down, but overall my application is up and running). Obviously, this is a very competitive marketplace, but for someone who is building a microservices/container-based application and looking for a monitoring solution, Sysdig Cloud is worth a place on the shortlist in my opinion. There's a free trial for anyone wanting to give it a whirl.

Built to impress even the most pointy-haired of bosses, Sysdig Cloud is a SaaS service that provides a real-time overview of a platform using system calls and metadata.

October 2016 LXF216     49

The best new open source software on the planet

Alexander Tolstoy offers a tasty side order of hot and spicy (free) sauce to go with this month’s rack of hand-picked open source apps for your delectation.

Lumina Pitivi

EncryptPad Qt5-FSArchiver Fontforge Museeks Tor Browser Bovo Blobwars: Metal Blob Solid Krop Clementine

Desktop environment

Lumina Version: 1.0 Web:


Many Linux users will be accustomed to the fact that KDE Plasma is the most configurable of the Linux desktop environments, although it’s quite a heavyweight. Those who like lighter solutions often stick to LXQt and enjoy snappier performance, but LXQt development has stalled lately and for some users this has become quite a problem. Luckily, we have another more robust, light and feature-rich Qt-based desktop that we can turn to and it’s called Lumina. This desktop environment was initially developed to complement the user-friendly PC-BSD operating system—the BSD flavour that’s aimed

at desktop users. Recently, a shiny new version of Lumina was released and version 1.0 is the first official stable edition of this exciting desktop. While there are already other lightweight desktops, Lumina is special in that it has minimal external dependencies, which results in very good portability. In fact, you don’t need to install PC-BSD in order to enjoy Lumina. You’ll find that there are many carefully prepared binary packages for

Lumina is surprisingly solid for a super-lightweight desktop environment and it has few dependencies.

“You get multi-monitor support, a fast and responsive interface.”

Exploring the Lumina interface... Monitor configuration Lumina developers specifically highlight the decent screen configuration dialog that supports several monitors.

Main menu You can slide menu items sideways to reveal extra options, and the text input field is for quick searching.

Bottom bar

Text editor

A traditional Windows – or KDE-like – panel which includes a main menu launcher, task list, clipboard manager, clock and battery indicator.

To complement the consistent look and feel of different desktop components, there’s a plain text editor made by the Lumina project.

50     LXF216 October 2016

Settings modules Some settings in Lumina open up as editable configuration files. You’ll need to fine-tune them by hand.

nearly all popular Linux distros. You’ll also discover that the loaded desktop occupies as little as 150MB of RAM and feels very snappy. Lumina includes a desktop with a classic bottom panel, an application menu and several utilities, such as lumina-fm for file management, the lumina-search search engine and app launcher, along with the quite self-explanatory lumina-screenshot tool and lumina-fileinfo for viewing file properties. Lumina also offers its own configuration dialogs for certain desktop settings, including screen resolution, fonts and styles. As for the remaining core features, Lumina relies on system-wide tools, such as Pavucontrol for dealing with audio and NetworkManager for web access. There is, however, no home-brewed window compositor for Lumina yet—the project recommends using Compton for compositing. Overall, Lumina offers a solid base for those unafraid to apply heavy customisations and quite a lot of post-install polish to their desktop. In return you get multi-monitor support and a fast, responsive interface with consistent menus and dialogs.

LXFHotPicks Text editor

EncryptPad Version: Web:


The arguments about the most secure place for storing valuable data and sensitive information are pretty much endless, and regardless of the type of storage – whether a local hard drive, a cloud service or removable media – nobody doubts the importance of data encryption. EncryptPad is a free and open source text editor for such information, with built-in data protection. It protects files with passwords, security key files, or both, and can also be used to encrypt binary files, such as images and videos. EncryptPad has plenty of security-related features, such as a random key file and password generator, support for downloading keys from remote locations, an optional read-only mode and more. The editor uses its own EPD file format, where your data is protected with cipher algorithms (CAST5, TripleDES, AES128, AES256), file

integrity is maintained via hash sums (SHA-1, SHA256) and the whole thing is compressed with ZIP. There’s very little chance that your *.epd files could be read outside EncryptPad, so you’d best not forget your passwords! Commonly there are two passwords for each file: one is the direct password and another is for unlocking the key. You can use the same key for multiple files (and there’s a checkbox for making a key persistent), but the primary password is always unique for each file. Since files can be protected with both a key and a password at the same time, it means EncryptPad is a good solution when you need to store sensitive information on unprotected

Somewhat annoyingly, EncryptPad supplies binary packages for Windows and OS X but not Linux.

“A good solution when you need to store sensitive information.”

media, such as a notebook, memory stick or unencrypted cloud storage. When running, EncryptPad stores unencrypted text in memory, so it’s recommended to shut EncryptPad down when you’re not editing or reviewing text documents. The official website provides binary packages for Windows and OS X but not for Linux. To build it, download its source tarball and run the $ ./configure.sh --all command. Provided you have the Qt5, gcc and Python developer packages installed, that single command will swiftly do the job.

Backup utility

Qt5-FSArchiver Version: 0.6.19 Web:


The name of this project on Sourceforge is a little misleading, as it’s listed as Qt4-FSArchiver. In fact, the application smoothly transferred to modern Qt5 bindings some time ago. Nevertheless, it’s still a GUI front-end to the FSArchiver command-line utility, which belongs in the same category as Clonezilla and Partitionmanager. The main use case for running Qt5-FSArchiver is when you need to back up or restore a partition or a directory. In other words, you can do either a partial or global backup of your system – not just the directory structure but also your drive setup (MBR or GPT/GUID). If you need to restore a broken Linux installation or clone your OEM image to other machines, Qt5-FSArchiver is exactly what you need. The application requires root privileges to run and carefully retains all

permissions when creating an image of your drive. The interface is densely populated with various flags and options, but it’s still easy enough to get around. The top-right area lists all found partitions and the area below shows your directory tree, where you can choose where to store backups. Take note of the flags to the left: here you can optionally use an encryption key; split backups into parts; save the PBR (this preserves the bootable flag of a partition) or set the number of CPU cores for multi-threaded backup. By default, Qt5-FSArchiver uses the LZO compression method for backups, but other options are also available

A few mouse clicks and your partition is carefully backed up, ready for you to wreak havoc.

“You can do either partial or global backup of your system.”

(GZIP, BZIP2, LZMA). The main Actions menu of the application contains quick links to other frequently used features, such as network partition or directory backup/restore, MBR/GPT backup and restore, and a few other options. We tested Qt5-FSArchiver by transferring our bootable /dev/sda1 (/boot/efi) and /dev/sda3 (/) partitions to another drive and it worked magically. Of course, the same could be done from a bare command line (see $ fsarchiver --help ), but it was much more convenient thanks to Qt5-FSArchiver.

October 2016 LXF216     51

LXFHotPicks Font editor

Fontforge Version: Git Web:


Linux has always been super-friendly for desktop environment customisation—it goes far beyond themes and styles in different desktops. However, whatever interface you like, you probably can’t live without fonts. Most people stick to one of the default sans typefaces that comes with their Linux distro, but there are more ways of playing with fonts than you might think. Fontforge is a font editor that enables you to be more creative and not only choose fonts, but also make your own. Fontforge is one of very few applications of its kind for Linux and it’s definitely the most respected, tried and tested. Fontforge has been around since 2000 and despite its age, it’s still in active development. That also means the user interface is organised in a quite old-fashioned way (we warned you!). The first time you launch Fontforge, it will ask you to open a font

of any supported format. Fontforge boasts a very broad list of formats it can read, including TTF, OTF, Type1-3, SVG, Postscript, BITMAP and even fonts extracted from PDFs. If you are not sure where to start, guide Fontforge to /usr/share/fonts and choose anything you want. There are two main interfaces within Fontforge: the first is a character table browser with all the symbols contained in a font, while the second is an editor that pops up when you double-click a letter in the browser. The editor has manipulation tools that use shapes, nodes, selections and other elements of a letter. From one perspective this looks like a custom-

Performing surgery on a font is very exciting and helps you get more creative.

“Fontforge is a font editor that enables you to be more creative.”

tailored vector editor—but you can draw letters right in the Fontforge editor or, if you’re more accustomed to other tools such as Inkscape, you can prepare letters there and transfer them as SVG images to Fontforge and adjust certain metrics. Bringing standalone letters to a font is exciting, but quite complex— you’ll have to deal with kerning, baseline, glyph adjustments and many other aspects in order to create a smooth and good-looking font. Some metrics can only be optimised manually, which is why creating professional fonts can take a long time. But you can always try your luck with simple fonts in Fontforge—it’s easy and a lot of fun.

Music player

Museeks Version: 0.6.0 Web:


The Chrome OS concept is popular these days and you may be inclined to think that classic desktop applications will gradually disappear, and everything will be replaced by browser-based OSes and extension-based desktop apps. We don’t know for sure if this will happen, but right now extension-based apps are a growing trend: take a Chromium-based web engine, add some JavaScript code and encapsulate it as a standalone app that does some specific job—this is exactly the core of any modern application based on Electron and Node.js. We have text editors, image writers, email clients and messengers based on this platform, which all contain the same code that powers the Chromium web browser. Museeks is the new audio-focused addition to this family. It’s a music player with a basic feature set, which

52     LXF216 October 2016

can shuffle tracks, sort tracks into playlists and put them in a playback queue. Museeks is a very simple player. There aren’t any mind-blowing killer features (although the support for a dark theme is something that gets many late-night Linux users excited). However, there’s one very useful and fun feature in Museeks: a playback speed control. Press the ‘gear’ button in the lower-left corner of the Museeks window and go to the Audio tab. Note the default ‘1’ value in the lonely input field. If you change it to something else, the player will instantly react, so you can listen to your favourite tune and experiment with its speed in real time.

Museeks works best as a player for those who just need to sort some tracks and enjoy good audio.

“There’s one useful and fun feature in Museeks: playback speed control.”

Change the value to ‘2’ and the audio track will run twice as fast, while setting it to ‘0.5’ will make it run at half speed. In other respects, Museeks proves itself a very simple and robust player. The search field in the upper-right corner is very fast, even when, in our case, we used a test library of 10,000 tracks. There’s nothing in Museeks’ interface that’s an obstacle to a first-time user, except for the mechanism for adding music to the database. For some reason, Museeks neither scans ~/Music by default nor asks you to add a directory. So you’ll need to add one and hit the ‘Refresh Library’ button.

LXFHotPicks Web browser

Tor Browser Version: 6.0.4 Web:


Most modern web browsers are clones of Chromium. We could go further and suggest that once someone successfully builds Chromium from source (and the build system is complex), they could proudly announce another brand-new product, as if it were created from scratch. Tor Browser is totally different. To start with, it’s derived from Mozilla Firefox. If you don’t know what Tor is about (the name’s derived from The Onion Router, the original project), it’s focused on privacy, security and anonymous internet access. Tor Browser applies a collection of hacks, custom settings, tweaks and workarounds to vanilla Firefox and, aside from the changes to the product itself, all network traffic is redirected through the Tor network to deliver greater anonymity. Tor Browser helps people bypass certain government regulations (such as ‘The Great Firewall of China’)

and generally leaves less of a digital trail on the internet. Specific tools used by Tor Browser are the HTTPS Everywhere Firefox extension, another extension called NoScript for resisting JavaScript-based threats, and the Fteproxy utility to prevent traffic blocking and inspection along with a few other components. Tor Browser deliberately removes the regular set of Firefox system extensions as they don’t qualify for what the Tor developers define as ‘true privacy’. In the case of the 6.0.4 release, it’s more evolutionary than revolutionary. The new browser relies on the updated Tor framework (v0.2.8.6), which starts faster and is more flexible for when you

If you’re concerned that you’re leaving too much of a trace on the internet—try Tor Browser, see page 30!

need to set custom ports for Tor traffic. Tor Browser looks and feels like Firefox, and that’s because it’s based on it, minus Mozilla’s updates, sync service and other ‘desktop’ features. It can also be run as a portable version, allowing you to write it to a USB thumb drive and take your secure web browser with you.

“Tor Browser helps people bypass certain government regulations (such as ‘The Great Firewall of China').”

Video editor

Pitivi Version: 0.97 Web:


Linux is known as a well-established platform for video editing, with lots of top-quality applications for non-linear video processing. Pitivi is one of the most prominent applications, and one you should definitely try even if you’re not planning on directing movies. Pitivi has been moving towards its 1.0 release and has just reached version 0.97. The list of new features includes a rewritten video rendering dialog and more accurate GStreamer codec support, which means that if a codec or a container is listed as available, it’s guaranteed to work. In other respects, Pitivi is a classic editor that balances simplicity and a readiness for professional use. The interface has four areas: the top-left is for storing your imported clips and surfing through the Effects library. Next, there’s a small panel where you can

control what transitions and effects are applied to what clip. The top-right area is a small preview window with playback controls below it. As you might expect, the largest part of the screen below the three other blocks is dedicated to the timeline. This is where your new movie is being created from clips. You can drag clips on the timeline, group and ungroup them, align them and add a personal touch with lots of eye-catching visual effects. Pitivi has a very nice set of fancy effects for simulating retro colour films, various distortions, colour overlay

Make your own movie with this easy to use non-linear video editor.

effects etc. In order to export your work into something you can share with your friends or publish online, you’ll need to make sure you’ve installed the extra GStreamer codecs, as Pitivi depends heavily on the whole GStreamer framework. Pitivi is part of almost all GNU/Linux distros, so you don’t need to surf the internet or add third-party repositories in order to install it.

“A classic editor that balances simplicity and a readiness for professional use.”

October 2016 LXF216     53

LXFHotPicks HotGames Entertainment apps Board game

Bovo Version: 16.08 Web:


The KDE Games package includes lots of simple games, many of which don’t have analogues on Linux. One such game is Bovo, which is based on Gomoku (or Gomoku Narabe, as it was originally called), a Japanese board game that dates back to the Heian period (roughly a thousand years ago). It’s a two-player game played on a square wooden board – in Bovo this is a grid of 22x22 cells – where X and O pieces are placed in turns. The goal of the game is to construct a row of five pieces before your opponent does the same. It doesn’t matter if a row is horizontal, vertical or diagonal, it only has to be unbroken. This style of Gomoku is very popular among students and is also known as the five-in-a-row game.

By default, Bovo looks like a copybook (you can change the look in Settings > Theme) and invites you to play against its AI. Note the drop-down difficulty menu, which has as many as eight levels, from ‘Ridiculously Easy’ to ‘Impossible’. The game certainly helps to train your concentration: once you miss your opponent’s winning combination you can no longer stop it and will lose the match. The only way to win is to arrange your marks in such a way that you’re going to construct two rows at a time, as your enemy won’t be able to defend both rows at the same time.

Bovo is a five-in-a-row game found in KDE Games.

“You can undo as many moves as you like without penalty.”

Bovo also has a Hint button on its toolbar, but the advice given doesn’t necessarily lead you to victory, so use this button with caution. If you make a move and immediately regret it, there’s also a handy Undo feature in Bovo. You can undo as many moves as you like without penalty—something that is totally impossible if you play against a real human using a sheet of paper.
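The five-in-a-row winning condition is simple to express in code. Here’s a hypothetical Python sketch (not Bovo’s actual implementation, which is C++) that checks a board for an unbroken row of five in any direction:

```python
# Hypothetical sketch: the board is a dict mapping (col, row) -> 'X' or 'O'.
def five_in_a_row(board, player):
    # Right, down and the two diagonals cover every possible line,
    # since each candidate line is scanned from its starting cell.
    directions = [(1, 0), (0, 1), (1, 1), (1, -1)]
    for (x, y), piece in board.items():
        if piece != player:
            continue
        for dx, dy in directions:
            if all(board.get((x + i * dx, y + i * dy)) == player
                   for i in range(5)):
                return True
    return False

# A horizontal row of five Xs wins
row = {(i, 0): "X" for i in range(5)}
print(five_in_a_row(row, "X"))  # True
```

A real game loop would call a check like this after every move, which is also how a hint or AI routine can score candidate positions.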

Platform game

Blobwars: Metal Blob Solid Version: 2.0 Web:


As the introductory subtitles say, the Blobs were a happy and peaceful nation that never showed any anger and were quite busy studying quantum physics and harvesting cherries. Something went terribly wrong when these peaceful creatures were attacked by aliens from outer space. Most of the civilian population was killed, but a few surviving blobs managed to hide. In Blobwars, you play as Bob, a fearless blob agent whose mission is to infiltrate the occupied territory and save the helpless blobs. The game marks the first episode of Blobwars, dubbed Metal Blob Solid. The whole series has been in development by Parallel Realities, a

54     LXF216 October 2016

UK-based game developer, since 2003, but as it’s an open source project it lives on even now, having received updates and new portions of refactored code in 2016. The game used to include a lot of blood, which made it unsuitable for young audiences, but now, when you hit an enemy, there’s a spray of super-colourful fireworks. The game also has very captivating and heroic background music, as well as satisfyingly solid weapon sounds.

A blob on a mission and unlimited ammo.

“When you hit an enemy there’s a spray of colourful fireworks.”

Playing Metal Blob Solid is fun and exciting. Your character can shoot with different weapons, but you can only hold one at a time, eg you’ll want to avoid picking up a pistol if you’re holding a laser gun. The trade-off, however, is that you get unlimited ammo. Blob explores a level by walking, jumping and diving under water. The developers promise nine hours of gameplay, which we personally couldn’t verify after being defeated by one particularly tricky boss. Overall, the game is thrilling and has a great atmosphere.

LXFHotPicks PDF tool

Krop Version: 0.4.11 Web:


Some developers tend to pack as many features as possible into their software, while others prefer many small utilities for different jobs. The latter is exactly what Krop is about—it’s a concise graphical tool for cropping PDF documents. What is interesting is that the source code of Krop doesn’t have anything that could be compiled into a binary—it’s just a set of Python scripts that draw a fully fledged graphical UI thanks to the PyQt library. PDF handling is performed using the Python bindings to Poppler, a widely used libre PDF manipulation library. So now you know Krop’s basic dependencies and you can install it using this single command: $ sudo python setup.py install Krop looks surprisingly solid for a tiny application. You’ll see the main area where a PDF’s contents are displayed and a handy sidebar to the left of it. To use it, you just open a PDF file which

you need to crop and draw at least one selection on the first page. By default, Krop applies your selection to all pages, so if, eg, you need to crop a book of many pages using the same margins, you only need to draw the correct frame once. If you draw several selections, Krop will split your pages according to their count (selection=page). Again, this can work for scanned spreads with two or more pages on the same sheet. The Advanced tab has additional options for applying selections and configuring page splitting. Krop’s developer has proudly announced that his software can also automatically break pages into parts to fit the limited screen size of

If you only need to correct margins or split pages into parts, Krop has all this in one neat interface

some eReaders (particularly low-end devices with a 4:3 screen ratio that lack convenient scrolling support). You can choose your device from the drop-down list and also set page margins here. When all preparations are complete you’ll need to hit the ‘Krop’ button (the one with a smile), and Krop will silently produce a copy of your file with a -cropped.pdf suffix.
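The page-splitting idea boils down to dividing each page box into equal strips, with a little overlap so a line of text on a boundary isn’t lost. This is a hypothetical Python sketch of the geometry only, not Krop’s actual code (which lives in its PyQt/Poppler scripts):

```python
def strip_boxes(page_w, page_h, parts, overlap=0.05):
    """Divide a page into `parts` horizontal strips, top to bottom.
    Each strip is padded by `overlap` (a fraction of the strip height)
    so text on a boundary appears in both neighbouring strips.
    Returns (left, top, right, bottom) boxes."""
    step = page_h / parts
    pad = step * overlap
    boxes = []
    for i in range(parts):
        top = max(0.0, i * step - pad)
        bottom = min(page_h, (i + 1) * step + pad)
        boxes.append((0.0, top, page_w, bottom))
    return boxes

# Split an A4-sized page (595x842 points) into two strips for a 4:3 eReader
for box in strip_boxes(595, 842, 2):
    print(box)
```

Each returned box would then become one output page in the cropped PDF.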

“Krop can automatically break pages into parts to fit the limited screen size of some eReaders.”

Music player

Clementine Version: 1.3.1-97 Web:


The last time we picked Clementine [in HotPicks, LXF145], it was in the days when the dust had yet to settle after Amarok completely changed its look and feel in the upgrade from version 1.4 to 2.0. Clementine used to be – and continues to be – a clone of the old Amarok 1.4, but in recent years it has gained more features and more polish. Nowadays, regardless of your attitude to Amarok, you can use Clementine as your default Qt5-based audio player and enjoy very fast library surfing and lots of online music services that are tightly integrated into the player. That said, even if your ~/Music directory is completely empty, you can still populate your playlist with tracks of any genre by searching through music services, such as Magnatune or Spotify. Clementine offers many ways to access music thanks to the very handy

vertical menu along the left-hand side of the window. You can surf through a library of file tags, explore your albums via a built-in file manager or add music from remote devices (including your optical drive). The Tools menu has a lot of features to complement listening to your favourite tunes, while the Extras menu has some funny items to fool around with. Clementine has a lot of additional features that you should be aware of, such as Wii Remote support, web access (you can connect to your Clementine player from another machine), a file format transcoder, and lyrics and cover art auto-fetching.

Enjoy spankingly fast library surfing and lots of online music services tightly integrated into the player.

The Clementine 1.3 update also offers some extra goodies, including support for (the largest music library in the world, which recently resolved its licensing issues), a new ‘Rainbow Dash’ visual analyser, as well as a ‘Psychedelic Colour’ mode addition to all analysers (great for stressed out sysadmins), so we can safely say the new version is more colourful than ever before. LXF

“Enjoy very fast library surfing and lots of online music services that are tightly integrated into the player.”

October 2016 LXF216     55


Pi user

Giving you your fill of delicious Raspberry Pi news, reviews and tutorials.

David Ferguson Raspberry Pi enthusiast and creator of PiBakery.



My Raspberry Pi is amazing, however, like many others I’ve found that the initial set up of the SD card can be a challenge. Depending on which operating system you use, the process ranges from tricky to break-out-the-command-line, which isn’t something you really want from a computer that’s designed to be used by people of all ages and skills.

To try and solve this, I created PiBakery, a blocks-based, easy to use setup tool for the Pi. Put simply, PiBakery enables you to easily create customised SD cards for the Raspberry Pi, using a Scratch-inspired interface. If you’ve ever used Scratch before, you’ll know how to use PiBakery!

With Scratch, you drag blocks out to make your sprites do things. In PiBakery, the idea is the same, except that the blocks make your Pi do things instead. There’s a block for connecting to Wi-Fi, one for changing the password and another for installing a VNC server. Instead of Scratch’s ‘On Green Flag Clicked’, you have ‘On First Boot’ and ‘On Every Boot’, allowing you to easily run scripts and perform setup tasks as soon as your Pi powers on.

What’s more, if you insert an SD card that’s been created with PiBakery back into your computer, you’ll be given the option to modify the scripts that you’ve originally set. Go to a Raspberry Jam and want to connect to the Wi-Fi there? Use the Wi-Fi block to get your Pi to connect the next time it boots up. Head to the PiBakery website to learn more and download it, and follow @PiBakery on Twitter to stay up to date with PiBakery news.

56     LXF216 October 2016

10 million Pis sold

Ain’t no stopping us now, we’re on the move! The Raspberry Pi is one step away from a world record.


The Raspberry Pi Foundation has been in contact to inform us that the Raspberry Pi has passed another monumental milestone, saying: “Please join us to celebrate the sale of the 10,000,000th Raspberry Pi. In four years, we’ve grown from a single pallet in a garage to become the best-selling British computer ever.” There’s never been anything modest about Pi sales, other than the initial batch of 10,000 built in China. Even after the first 12 months, 1.5 million had been shipped. If anything, the rate slowed in the second year, with another million being sold, but that was because the Raspberry Pi Foundation was plotting, planning and preparing. In its third year, with the release of the refined Model B+ and A+, sales surged to 4.5 million. On its third birthday the Pi 2 was released and this swelled sales in 2015 to 8 million. Late 2015 saw the Pi Zero released and the Pi 3 was launched on its fourth birthday in 2016. These were more

than enough to add another 2 million sales. So it looks like the next record will be the 12.5 million sales of the Commodore 64. An amazing achievement for a charity where all the profits go back into teaching children and training.

Nothing can stand in the way of our insatiable appetite for a bit of Raspberry Pi!

Pi cucumbers African Pi labs Smart farming with the Pi

Powering education

The son of a Japanese cucumber farmer has decided to combine the power of Arduino, the Raspberry Pi 3 and Google Cloud TensorFlow to enable deep-learning AI to sort his family’s crop of spiky cucumbers – an arduous process for his poor mother that’s currently done manually. While the system still has some way to go from its 70% accuracy, it’s another example of how disparate technologies can come together for novel solutions.

The Pi Foundation’s blog has an amazing story outlining how the Pi – and a lot of person power – has brought computer education to Togo, West Africa. Dominique Laloux has worked on a number of projects in Africa and his latest is refitting a schoolroom with power and networking. The classroom was kitted out with 21 Pis: one for each student, one for the teacher and another for driving the LED projector.


Who knew that cucumbers needed good grades?


Bringing education to all corners of the world.

Pi Zero HAT Reviews

Enviro pHAT Les Pounder loves data and is always looking for new ways to integrate real-time data into his latest project. Perhaps this board can fulfil his needs? In brief... A Raspberry Pi Zero-sized add-on board that offers sensors to read temperature, pressure, light levels, colours, orientation and compass headings. Designed to provide data via a robust Python library which can be easily integrated into existing projects. The board uses only the I2C pins of the Raspberry Pi, enabling other boards to be used as unique methods of output.


The Pirates of Sheffield bring us another board using their own pHAT standard for smaller add-on boards, primarily designed for the Raspberry Pi Zero but compatible with all 40-pin GPIO Raspberry Pi boards. It’s also worth noting that the Enviro pHAT only uses three I2C pins (I2C being a simplified communication protocol) and two other pins for power, which means it is possible to attach the board to older Pis using jumper cables or a breakout board. The Enviro pHAT is a platform for data gathering, similar to the Sense HAT used for the Astro Pi project. The Enviro pHAT comes with a plethora of sensors. The BMP280 temperature and pressure sensor can work between temperatures of –40C and +85C, and pressures of 300 to 1100 hPa (hectopascals). The TCS3472 light and RGB colour sensor can provide a reading for the light level, enabling it to be used as a trigger for light-dependent projects, and can also identify colours, returning the value detected as a comma-separated list of values, known as a tuple. The sensor’s accuracy can be increased thanks to two white LEDs, one on each side of the light sensor. Next is an LSM303D accelerometer and magnetometer. The accelerometer detects the orientation and motion of the board, while a basic compass heading can be taken using the magnetometer. So with this single sensor we can create an input based on the orientation or heading of the board, handy for gesture-controlled projects.

Features at a glance


The Enviro pHAT has sensors for temperature/ pressure, light/colour and motion. These are simple to integrate into any Python project.


The Raspberry Pi GPIO does not have an analog input by default. The Enviro pHAT comes with its own ADC for extending the number of sensors.

The Enviro pHAT is a compact sensor board designed to fit atop the Raspberry Pi Zero and sit with a flush profile, enabling small, neat projects.

As well as sensors, the board comes with an ADS1015 four-channel analog-to-digital converter (ADC) that can be used with external analog sensors. Attaching a sensor to the ADC is rather simple if it uses 3.3V logic: we simply connect the sensor to the header pins present on the board. If your sensor uses 5V logic – this can be identified in the data sheet for the sensor – then you will need to use a voltage divider, commonly three resistors of equal value.
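To see why three equal resistors do the job, here’s a quick sketch of the standard divider formula in plain Python (just arithmetic): tapping the output across two of the three resistors steps 5V down to roughly 3.3V.

```python
def divider_out(v_in, r_top, r_bottom):
    """Output of a resistive voltage divider: Vout = Vin * Rb / (Rt + Rb)."""
    return v_in * r_bottom / (r_top + r_bottom)

# Three equal 10k resistors: one on top, two in series on the bottom,
# so the 5V sensor output is tapped across two-thirds of the chain.
print(round(divider_out(5.0, 10_000, 20_000), 2))  # 3.33
```

The exact resistor value doesn’t matter much here, only the 1:2 ratio; 10k is a common choice that keeps the current through the divider small.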

Take charge

Assembling the Enviro pHAT requires some basic soldering skills. Controlling the Enviro pHAT is then handled via a robust and easy-to-use Python 2/3 library that can be used with all of the sensors, or individual sensors can be used by importing each class as needed. Installation of the Python libraries is automated thanks to an install script available from the Pimoroni website. Typically, using install scripts from random websites is not the done thing, but in this case we can trust the source. For those who wish to install manually there are full instructions and a step-by-step guide. The Enviro pHAT comes into its own in a data logging project. Each of the sensors can be polled and data recorded into an external file, such as a CSV file, which can then be imported into a spreadsheet application, or the

data can be used with an online resource such as As mentioned earlier, the board only uses the I2C pins of the GPIO, meaning that other boards can also be connected. For example you can use the Enviro pHAT to gather data and then display that data via a chain of WS2811 LEDs, commonly known as Neopixels, where different colours denote different weather conditions or react to severe accelerometer input. The Enviro pHAT is a great board that can be easily integrated into a home automation project or an experiment. The build quality is exceptional and the supporting Python libraries are easy to use while providing an extensive range of data capture options for those who wish to take on advanced projects. LXF
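A minimal sketch of the data-logging approach described above. The readings are passed in as arguments so the logging logic stands alone; on a Pi you would obtain them from Pimoroni's envirophat library (the commented import is an assumption about that library's layout, so check its documentation):

```python
# Sketch of a CSV data logger for Enviro pHAT readings. On real hardware
# the values would come from the envirophat library, e.g. (assumed API):
#   from envirophat import weather, light
#   row = (weather.temperature(), weather.pressure(), light.light())
import csv
import time

def log_reading(path, temperature, pressure, light_level):
    """Append one timestamped reading to a CSV file, writing a header first."""
    new_file = False
    try:
        with open(path) as f:
            new_file = f.read(1) == ''   # existing but empty file
    except FileNotFoundError:
        new_file = True
    with open(path, 'a', newline='') as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(['time', 'temperature', 'pressure', 'light'])
        writer.writerow([time.time(), temperature, pressure, light_level])

log_reading('enviro_log.csv', 21.5, 1013.2, 300)
```

The resulting file imports cleanly into any spreadsheet application.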

Verdict
Enviro pHAT
Developer: Pimoroni
Web:
Price: £16

Features 8/10
Performance 8/10
Ease of use 8/10
Value 9/10

A cost-effective and easy-to-use add-on board for gathering real-time data about your environment.

Rating 8/10 October 2016 LXF216 57

Raspberry Pi Tutorial: Sonic Pi

Python/Sonic Pi: Use a dance mat

Les Pounder busts out his best moves on the dance floor and explains how to create music (or something like it) using a dance mat to control Sonic Pi.


Our expert Les Pounder

is a maker who specialises in the Raspberry Pi and works with the Raspberry Pi Foundation to deliver their Picademy teacher training. He has a blog at

For our tutorial this month we delve into the loft and dust off the USB dance mat that we bought in the 2000s. The goal of this project is to use the dance mat as a method of input for Sonic Pi. We can't just plug in and go, though; first we need to write some Python code that will talk to Sonic Pi's server and react to our dance moves.

To start the project, connect your keyboard, mouse, HDMI and SD card, and finally the USB dance mat to a spare USB port. Now power up your Raspberry Pi and boot to the Raspbian desktop. Once at the desktop, open LXTerminal; its icon is in the top-left of the screen and resembles a monitor. In the terminal we'll first install the library that will enable our dance mat to be used with Python. Inputs is a Python library created by Zeth, with the goal of simplifying the use of game controllers in Python. We'll install Inputs using the Python package manager. In the terminal, type the following:
$ sudo pip3 install inputs
Once Inputs is installed the terminal will be returned for use, and the next thing to do is install the second Python library. Python Sonic is a Python library that enables a Python project to talk to the server that runs Sonic Pi. But in order to connect Python Sonic to Sonic Pi we need to install python-osc. In the terminal, type the following, pressing Enter at the end of each line:
$ sudo pip3 install python-osc
$ sudo pip3 install python-sonic
That concludes the installation of the software, and the terminal can now be closed.

Our next step is to open Sonic Pi. This is crucial, because Python Sonic requires the application to be open. Sonic Pi can be found in the Programming section of the main menu. Minimise Sonic Pi and return to the Programming menu; this time select Python 3. Once Python 3 opens, click File > New and a new blank document will appear. Immediately save this by clicking File > Save and name the new file; future saves will then be quicker.

In the blank document we start our code by importing the libraries that provide the functionality required. From the Inputs library we import the get_gamepad class; this is used to communicate with our dance mat. Next we import the entire Python Sonic library using * – this removes the need



You will need
Any model of Raspberry Pi
A USB dance mat
An internet connection for your Raspberry Pi
All of the code for this tutorial can be downloaded from

Yes, we are kind of assuming you’ve got a dance mat up in your loft, if not on your floor right now. Hasn’t everyone?



Python Sonic

This isn't our first project using Python Sonic: in LXF211 we dipped our toe into what was then a very bleeding-edge library. Thankfully the library has since matured and is even being used by the Raspberry Pi Foundation as part of its Picademy training courses. Installation has been improved thanks to being packaged for the pip Python package manager, which handles installation and dependencies. Now that the installation has been improved the project loses some of its experimental status, but it is still in heavy beta, so it is prone to a few bugs along the way. The team behind Python Sonic are still working hard to improve the library and have provided an extensive reference guide via their GitHub page (python-sonic).

to preface every function call with the name of the library. Our last import brings in the Thread class from threading (more on this later).
from inputs import get_gamepad
from psonic import *
from threading import Thread
Our next section of code creates two functions: named blocks of code that execute when called. Our first function, beat(), plays a heavy kick drum beat every half second using Python Sonic. The kick drum is a pre-recorded sample contained within Sonic Pi. Python Sonic provides a sleep() function which mirrors the sleep command used in Sonic Pi.
def beat():
    while True:
        sample(DRUM_HEAVY_KICK)
        sleep(0.5)
Our second function plays another sample, the classic Amen loop. This time we set the amplitude of the sample to 0.5 to ensure that the volume is consistent with the other samples. We add a sleep of 1.753 seconds, the exact duration of the Amen sample.
def Amen():
    while True:
        sample(LOOP_AMEN, amp=0.5)
        sleep(1.753)

Moving to the beat

We now move on to the main body of our code. Here we use a try…except construct that will attempt to run the contained code, starting with an infinite loop, while True.
try:
    while True:
Inside the loop we create an object called events that links to the attached dance mat using the Inputs library. We then use a for loop to extract the event code – the button that has been pressed on the dance mat – and the state of the button, on or off, shown as 1 or 0. We store the pressed button in a variable called button and print the information to the Python shell for debugging. The last line in this section changes the default instrument for Sonic Pi to a Tron-inspired digital noise.
        events = get_gamepad()
        for event in events:

            print(event.code, event.state)
            button = event.code
            print("The button pressed was ", button)
            use_synth(PROPHET)
We now enter a conditional test that checks for each button press and reacts accordingly. This uses a series of if…elif (short for else if) tests that check the state of the dance mat. Our first test checks whether the Select button, which Inputs refers to as BTN_BASE3, has been pressed, identified by its state changing to 1. If both of those conditions are met, the indented code is executed; in this case it prints the button for debugging. The Select button has a special function in that it provides a background beat for us to dance to. Earlier we created the Amen function; here it is called and run in a separate thread of Python code. In essence we are running two sequences at once, giving us a layered sound.
            if button == "BTN_BASE3" and event.state == 1:
                print("SELECT")
                Amen_thread = Thread(target=Amen)
                Amen_thread.start()
The next elif condition is a similar test to check whether Start has been pressed; if so, another thread is launched to play a drum beat. We'll skip this elif condition here, but it is present in the full code download for this tutorial. Our next elif condition tests whether the X button has been pressed. If so, a cymbal sample is played, and the button is printed to the shell for debugging.
            elif button == "BTN_BASE" and event.state == 1:
                print("X")
                sample(DRUM_CYMBAL_CLOSED, amp=0.5)
For each button present on the dance mat there is a corresponding elif condition. Once a condition is proven true, the code within is executed, and it's this that gives us the different sounds. We now come out of the conditional tests and out of the infinite loop to the except section of code. If the code contained in try fails, or the user presses Ctrl+C to exit, the project closes and prints EXIT on the screen.
except KeyboardInterrupt:
    print("EXIT")
With the code completed we can now save our work and click Run > Run Module from the menu. Now get ready to dance to the beat of your own music! LXF
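As the number of buttons grows, the chain of if…elif tests can become unwieldy. One optional refactor (not part of the tutorial's code; the BTN_BASE2 mapping and sample names here are illustrative stand-ins for the psonic constants) is to hold the button-to-sample mapping in a dictionary and test it separately from any hardware:

```python
# Hypothetical refactor of the tutorial's if...elif chain: map Inputs
# event codes to the name of the Sonic Pi sample to trigger. The strings
# stand in for psonic constants such as DRUM_CYMBAL_CLOSED; BTN_BASE2
# is an assumed code for the Start button.
SOUND_MAP = {
    "BTN_BASE": "DRUM_CYMBAL_CLOSED",
    "BTN_BASE2": "DRUM_HEAVY_KICK",
    "BTN_BASE3": "LOOP_AMEN",
}

def sound_for(code, state):
    """Return the sample to play for a button press, or None on release
    or for an unmapped button."""
    if state != 1:          # ignore button-up events
        return None
    return SOUND_MAP.get(code)
```

Keeping the mapping pure like this means the dispatch logic can be unit-tested without a dance mat plugged in.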

Quick tip
Sonic Pi has lots of built-in samples, synths and effects, so if you'd like to try out something new, open the Sonic Pi Help, located at the bottom left of the Sonic Pi application. Try out a few different sounds to find your favourites and then introduce them into the code for this project as you like.



Raspberry Pi Secure messaging

Cryptography: One Time Pad

Nate Drake explains how your Pi can bring you to the very pinnacle of cryptography by exchanging messages in perfect secrecy.


Our expert Nate Drake

Nate is a freelance journalist specialising in cybersecurity and retro tech. His girlfriend won't let him buy a kitten, so he has to make do with photos of kittens.

Quantum computers. Microphones so sensitive that they can record your keystrokes from yards away. Networks of zombie computers working round the clock to brute-force passwords. Government-designed backdoors in code. It has never been harder to be entirely certain that any message you send will be transmitted and received in absolute secrecy. This holy grail of cryptography has long frustrated security experts, and most people are willing to settle for encryption programs like gpg, which, while theoretically breakable, will resist all cracking attempts long after you're pushing up the daisies. What if, however, there were a way to be certain that your personal emails, pictures of your pet kitten, backups of your tax returns for the past decade and so on were safe even if intercepted? Enter the One Time Pad.

The Notorious OTP

In simplest terms, a One Time Pad is a series of random numbers that you agree upon with someone with whom you wish to communicate, usually by meeting in person and exchanging pads. When sending a message, you first convert it to numbers, then add each of these numbers to the corresponding number in the pad. When the recipient receives the message, they work backwards using their copy of the pad, subtracting the numbers to retrieve your original message.
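The add-then-subtract scheme just described can be sketched in a few lines of Python, working on the letters A–Z modulo 26. The pad here is hard-coded purely for demonstration; a real pad must be truly random, kept secret and never reused:

```python
# Illustrative One Time Pad on letters A-Z (mod 26): encryption adds the
# pad numbers, decryption subtracts them. The fixed pad below is for
# demonstration only -- a real pad must be random, secret and single-use.
def to_nums(text):
    return [ord(c) - ord('A') for c in text]

def to_text(nums):
    return ''.join(chr(n + ord('A')) for n in nums)

def encrypt(message, pad):
    return to_text((m + p) % 26 for m, p in zip(to_nums(message), pad))

def decrypt(ciphertext, pad):
    return to_text((c - p) % 26 for c, p in zip(to_nums(ciphertext), pad))

pad = [23, 4, 19, 7, 11]          # agreed in advance, used exactly once
ct = encrypt("LINUX", pad)
print(ct, decrypt(ct, pad))
```

Because every pad value is equally likely, the ciphertext carries no information about the plaintext at all.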




Quick tip Regardless of what method you use to create your OTP it’s a good idea to have separate “Bob to Alice” and “Alice to Bob” pads to make sure you both don’t accidentally encrypt messages with the same pad.

One implementation of the OTP encoding the message “The falcon has flown.” You’d be quackers not to use it.


Provided that the numbers are truly random, no one else sees the pad, and the same text isn't encoded twice with the same pad, then even the world's fastest supercomputer couldn't decode the message. The strength of the encryption lies in the randomness of the pad numbers. Without knowing these, anyone who intercepted a message might see the word 'LINUX' encrypted as 'OSYAJ', but would have no way of knowing it isn't another five-letter word, like 'CHILE'. The One Time Pad has been kicking around in some shape or form since the 1880s, but it wasn't until 1917 that Gilbert Vernam et al formally patented a machine for automating the process. In this case two reels of punched tape were used, one holding the original message and one the pad of random numbers. KGB agents in the US were quick to exploit the technique during the Cold War, placing small pads inside hollowed-out nickels, fake walnuts and any number of dastardly hiding places. In the 21st century, computers like the Raspberry Pi lend themselves well to being carried around easily and are perfect for generating and processing One Time Pads. But in order to understand why, it's necessary to understand the pitfalls of the One Time Pad.

Entropy isn't what it used to be...

Bruce Schneier once described the One Time Pad as "theoretically secure, but... not secure in a practical sense". This reflects the fact that OTPs have at times been broken in practice despite their theoretical security. In the 1940s, for instance, US SIGINT's counterintelligence program Venona was able to decrypt a number of Soviet OTP messages simply because some pads had been reused. This crypto-cardinal sin was committed because the Soviets simply couldn't generate pads fast enough for the thousands of daily messages sent during wartime.

A similar chink is found by German codebreakers in Neal Stephenson's Cryptonomicon. The British employ a raft of old ladies with small bingo machines, drawing numbered balls to generate pads. Unfortunately the old dears don't always obey best practice: they fail to look away each time they draw a ball as instructed, meaning they subconsciously select predictable numbers.

Fast forward to the 21st century and the issue hasn't improved much. Entire books have been devoted to this subject, but suffice it to say that computers generally aren't very good at generating true randomness. Usually, when randomness is required, a website or program will ask you to wiggle your mouse to provide a so-called "noise source" to work from.

OTP goes Thermal

In order to proceed, you'll need to have your Adafruit printer set up and working; fortunately the website has an excellent guide to this. First install rng-tools as indicated in Step 1 of the walkthrough on p63. You'll also need to edit /etc/default/rng-tools in your favourite text editor: remove the # at the beginning of the line HRNGDEVICE=/dev/hwrng, then save and exit. Use sudo /etc/init.d/rng-tools restart to be certain the Pi is now using only the hardware RNG. Next, download the otp-gen software:
$ git clone otp-gen.git
At this stage, if you wish, you can go into the otp-gen/ folder and run off a sample pad to see what it looks like:

$ cd otp-gen
$ sudo ./
$ nano otp.txt
Next we need to make sure the software starts automatically when the machine boots:
$ sudo nano /etc/rc.local
Scroll to the bottom of this file and insert the following three lines above the words 'exit 0':
cd /home/pi/otp-gen
python ./
If you have downloaded the otp-gen folder anywhere other than /home/pi/, change the first line accordingly. Use Ctrl+X to come out of the text editor, and press Y to save the changes. Next use sudo reboot to restart your Pi. The printer should print out a message saying it's connected to the network along with your IP

Mostly, however, when a computer requires larger amounts of randomness it will form a string of pseudo-random data from your entropy pool, which, while ideal for determining where the next block will fall when you play Tetris, is less than perfect when it comes to security. Thankfully, one of the lesser-known features of the Raspberry Pi is that it has its own built-in hardware random number generator, which in combination with the rng-tools suite can generate exactly the kind of high-quality randomness needed for an OTP.

The second obstacle faced by those using an OTP has to do with key generation and distribution. The Soviets were unable to keep up with the demands of war, and in real life the bingo machines of Britain's little old ladies would probably end up smoking with the number of times they'd need to be churned. Fortunately, the hardware RNG built into the Pi can generate large amounts of data in a short time. For instance, a pad with 10,000 sets of five-digit random numbers can be generated in seconds by activating the hardware RNG (see Step 1 of our guide on page 63) and then entering:
$ sudo base64 /dev/hwrng | tr -dc '0-9' | fold -sw 5 | head -c 10000 > bobtoaliceotp.txt
The resulting text file can then be printed out, for instance by Adafruit's thermal printer. This printer has the advantage that, unlike most laser printers, it doesn't record the serial number, make or model on each sheet it prints (on this, see It also holds up to 15m of paper, which will be plenty for generating long messages. Using this in combination with the excellent program otp-gen, you can print off your own pads automatically at the touch of a button too (see the 'OTP goes Thermal' box on this page).

Once the reams of paper are safely printed and tucked away, next comes the issue of physically distributing pads. This is no doubt the main reason why the OTP hasn't seen much widespread use.
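If you want to experiment on a machine without a hardware RNG, the shell pipeline above can be approximated in Python using the standard-library secrets module, which draws from the operating system's cryptographic random source. This is a sketch equivalent in spirit, not in output format, to the hwrng pipeline:

```python
# Software sketch of the pad-generation pipeline above, using Python's
# secrets module in place of the Pi's hardware RNG (/dev/hwrng).
import secrets

def make_pad(groups, group_len=5):
    """Return `groups` space-separated groups of random decimal digits."""
    return ' '.join(
        ''.join(str(secrets.randbelow(10)) for _ in range(group_len))
        for _ in range(groups)
    )

print(make_pad(10))
```

The output is a row of five-digit groups ready to be printed and exchanged.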
It’s rumoured that the red phone running between Washington and Moscow for instance is secured by an OTP, mainly because both Superpowers are wealthy and melodramatic enough to have men with dark glasses to handcuff themselves to briefcases and swap pads at regular intervals.

address, then pause. Simply press the button to generate your own one time pad. Repeat as many times as you like.

Sample printout of an OTP. It’s a good idea to leave the first five characters unencrypted so it’s easy for your contact to know where to start.

Of course, if you have printed paper pads it is possible to mail them to your recipient, but then your OTP would be no safer than a regular letter, because it may have been intercepted and copied along the way. The same applies to using regular encryption programs like GnuPG. The only way to be sure that your pad cannot be intercepted is to deliver it in person. Again the humble Raspberry Pi comes to the fore here, because it's extremely small and easy to carry. Upon meeting the person with whom you want to exchange messages, you can give them a copy of your Pi's MicroSD card or the Pi itself. To ensure perfect security of your messages, it's necessary to delete pads that have previously been used. If you have printed yours out, a little tearing off and a Zippo lighter is likely to be helpful here. (Other lighters are available.) Otherwise, running the shred command on the pad you just used should be enough to prevent recovery. Both SD cards and Pis are inexpensive, so if you really feel you have to destroy them once the pads have changed hands, you can do this too. The low cost of a Pi is also a great answer to another common criticism, which is that an OTP is usually very

Quick tip If you use your Pi to generate pads, for the sake of security try to disconnect from the internet when creating pads as well as encoding/decoding messages. Adafruit’s website suggests performing all encoding inside a Faraday cage, but unless you have a handy secret volcano lair, your home or office should be fine.

A hollowed-out nickel and microfilm as used by the KGB. The Kremlin awarded Brownie points to Soviet spies who didn't accidentally spend them.



difficult to scale beyond two people exchanging messages. If you do decide to form a secret society, it may be best to designate one person to meet each member and exchange pads regularly. That person can then sit at the centre of the web and act as a clearing house for messages, forwarding them between members as need be. Even using a Pi, however, it is still possible for pads to be intercepted, and communicating can be cumbersome. This is why it's good to employ some best practices for your OTP.

Supersize your OTP

Quick tip
If you decide to use code words or numbers, it may be a good idea to have a special number – say, 99 – to indicate that the next word is a code, so that it isn't taken out of context.

For pen-and-paper OTPs, although it's technically possible to convert each letter to a number (A = 1, B = 2, etc) and then add them to the numbers in the pad, this is rather cumbersome and doesn't allow you to send any special characters. One very easy way around this is to write a message on your Pi and then combine it with a block of random data using Karl Fogel's excellent program OneTime, as explained in the walkthrough on the next page. If you prefer going old-school, Russian spies used a device called a straddling checkerboard to avoid long nights struggling with walnut shells. Search online for an image of this and you'll see that, although there are many variations, the most common letters typically sit along the top row, which means they can be enciphered as a single digit. Less common letters are represented by their row and column – for example, the letter C might be represented by the number 21. This also allows special characters, such as 62, which switches between letters and numbers. The alphabet can be rearranged in any order you like for extra security.

One rather dramatic way to be certain data has been erased. For the sake of safety, it might be better to consider secure erasing tools before reaching for a blowtorch.

Another way to save on scribble time is to borrow a trick from thrifty business owners in the 1800s: using codes for common words and expressions. To avoid having to pay for long messages, Bolton's Telegraph Code, for example, uses the number 0446 to represent the classic excuse, "The cheque was sent to you in the last post." Sadly there is no corresponding code for "It was like that when I got here." Books like Bolton's aren't meant to disguise the meaning of what you say, just to save time. However, if you are going to the trouble of meeting and exchanging keys with a friend, there's no harm in agreeing your own code names for common people and places. For instance, if the members of your secret society regularly meet by a weeping willow in Hyde Park, you might refer to that location as "Sweden" and to each of the members by animal names. This would mean that if the decoded message "Meet me with Penguin in Sweden" is intercepted, shadowy government spooks will be left scratching their heads. As you meet to exchange more pads, you can decide on code names for new people and places. Any OTP system is only as good as the security of the pads, so regardless of whether you use a computer program
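The straddling checkerboard idea can be demonstrated in code. The layout below is an assumption for illustration (historical boards varied widely): eight common letters take single digits, the digits 2 and 6 are reserved as row prefixes, and every remaining character becomes a two-digit code, which keeps decoding unambiguous:

```python
# An illustrative straddling checkerboard. The layout is an assumption:
# eight common letters get single digits, while 2 and 6 act as row
# prefixes for everything else, so decoding is always unambiguous.
BOARD = {
    'E': '0', 'S': '1', 'T': '3', 'O': '4',
    'N': '5', 'I': '7', 'A': '8', 'R': '9',
}
ROW2 = 'BCDFGHJKLM'   # enciphered as 20-29
ROW6 = 'PQUVWXYZ./'   # enciphered as 60-69
for i, c in enumerate(ROW2):
    BOARD[c] = '2' + str(i)
for i, c in enumerate(ROW6):
    BOARD[c] = '6' + str(i)

def encode(text):
    """Turn letters into digits; characters not on the board are dropped."""
    return ''.join(BOARD[c] for c in text.upper() if c in BOARD)

def decode(digits):
    inverse = {v: k for k, v in BOARD.items()}
    out, i = [], 0
    while i < len(digits):
        # a prefix digit (2 or 6) always starts a two-digit code
        width = 2 if digits[i] in '26' else 1
        out.append(inverse[digits[i:i + width]])
        i += width
    return ''.join(out)

print(encode("LINUX"))   # each letter becomes one or two digits
```

The resulting digit stream is then added to the pad digits as usual; common letters costing one digit instead of two is what saves the scribbling.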

From Bloomer’s Commercial Cryptograph: a telegraph code and double index holocryptic cipher. Given the purpose of the book, you’d think it would have had a shorter title...

Max out your MAC

Before sending any private information with an OTP, it's best to send a challenge-response message first. Technically you could agree on two code words with your recipient: one to indicate that it's you and you're able to talk freely – for example, "Everest" – and another to disclose that you're talking under duress – for instance, "Sparrow". This means, however, that the same text is being encrypted each time, making messages easier to crack. A better system is to choose from a list of prearranged words or phrases. One way to do this is to agree on a book – directories and almanacs are perfect for this – and in your "challenge" message send an arbitrary page and line. For example, you could agree to use the 1992 edition of Wisden Cricketers' Almanack. Bob can message Alice saying, "613-1". Alice can reply with the first line of page 613, which is: "Worcestershire were the only county to win two trophies in 1991." She can then add a challenge of her own to the message, for example asking for page 582, line 4. If Alice replies with anything other than the correct words, Bob will know that it's not her or that she's under duress, and the same applies to his reply to her. You can further increase the security of this system by agreeing beforehand that the response should not be the line requested in the message but, say, the one three lines after, or the same line on the following page.

If you insist on using a printout, a few pearls of Wisden can’t hurt.


or paper, it's important to destroy both the pad and your 'plaintext' message once you have sent a message, and both the pad and the 'ciphertext' message once you've decoded any message you've received. If you use the OneTime program in combination with a large file of random data, say 1GB, the program will only use as much data as is needed to encode your files – so the pad data consumed by a 128K PDF will only be around 128K in size. Each encoded file records the offset in bytes of the pad data used, so your contact's copy of OneTime will be able to decode it. Drawing on one very large file, however, means you cannot delete pad data that is no longer in use without removing the entire file. This is why it's best to split your large file of random data into multiple smaller chunks, which you can delete regularly. By default OneTime will prevent you from encoding files with the same random data; see the walkthrough below for more information on this.

OTP also doesn't have any built-in way to make sure that the person you're talking to is the person to whom you gave the pads. If you're using your Pi to send and exchange messages, it's best to use gpg to digitally sign any messages you send. If you're using pen and paper, you can use a less secure form of message authentication (see 'Max out your MAC' on the previous page). Finally, there is no reason you can't use OTPs in combination with other forms of security. For instance, you can encrypt a zip file with a ridiculously long password and send just the password via OTP instead of the whole message. In particular, the OneTime program encrypts files in text format, so you can also place these files on a password-protected drive to boost your security. Feel free to experiment and decide if this is right for you. LXF
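The offset-tracking idea described above can be illustrated with a toy sketch. This is emphatically not OneTime's actual file format, just the underlying principle: XOR the message against pad bytes starting at a recorded offset, and advance the offset so the consumed region is never reused:

```python
# A toy sketch of offset-based pad consumption, NOT OneTime's real
# format: XOR against pad bytes from `offset` and record how far we got.
def otp_xor(data: bytes, pad: bytes, offset: int):
    """XOR data against pad bytes starting at `offset`.

    Returns (result, next_offset); the caller must never reuse the
    consumed region of the pad.
    """
    chunk = pad[offset:offset + len(data)]
    if len(chunk) < len(data):
        raise ValueError("pad exhausted - never reuse pad material")
    return bytes(d ^ p for d, p in zip(data, chunk)), offset + len(data)

pad = bytes(range(256)) * 4                      # stand-in for random pad data
ct, next_offset = otp_xor(b"attack at dawn", pad, offset=100)
pt, _ = otp_xor(ct, pad, offset=100)             # XOR is its own inverse
```

Because XOR is self-inverse, the same function both encrypts and decrypts, given the same pad and offset.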

Next issue: Build a super Pi!

Use digital One Time Pads


Install OneTime and create folders


The OneTime application is available as part of the Debian Jessie repository – simply run the following:
$ sudo apt-get install onetime
If you're using the Adafruit printer, you'll also need rng-tools:
$ sudo apt-get install rng-tools
At this stage you may wish to create a folder for your pads:
$ mkdir -p onetimepad/{bobtoalice,alicetobob}
Use cd to go to the first folder, for example:

Generate random pads

The following commands create a 10MB block of random data and split it into numbered 1MB chunks, named bob_to_alice_0000 and so on. Feel free to change the numbers:
$ sudo dd if=/dev/hwrng of=bob_to_alice.pad bs=1000 count=10000
$ sudo split -b 1000000 -d -a 4 bob_to_alice.pad bob_to_alice_
$ sudo shred -uz bob_to_alice.pad

Repeat this for the “Alice to Bob” pad. Give your contact a copy of  both pads.

$ cd /home/pi/onetimepad/bobtoalice


Encrypt your data with OTP


OneTime has a simple format for encoding files:
$ onetime -e -p ~/pathto/your.pad yourfile.ext
So, for example:
$ onetime -e -p ~/onetimepad/bobtoalice/bob_to_alice_0001 ~/Desktop/kitten.jpg
(The file must be smaller than the pad.) You'll see alongside the original file a file with the same name and the .onetime extension. Make sure to run the shred command on the pad you just used and on the original file.

Decrypting OTP messages

Once 'Alice' receives your message and has installed OneTime, the command to run is simple, provided she has a copy of the same pads:
$ onetime -d -p ~/pathto/your.pad yourfile.ext
So in our example:
$ onetime -d -p ~/onetimepad/bobtoalice/bob_to_alice_0001 ~/Downloads/kitten.jpg.onetime
The decrypted file will appear in the same folder as the .onetime file. 'Alice' in turn should be sure to run the shred command on the pad and on the encrypted file once decoded.


Back issues Missed one?

Get into Linux today!

Issue 215 September 2016

Issue 214 Summer 2016

Issue 213 August 2016

Product code: LXFDB0215

Product code: LXFDB0214

Product code: LXFDB0213

In the magazine

We celebrate 25 years of the kernel and get excited about… accounting tools! If that wasn’t exciting enough, how about the best distros? Plus: loads of tutorials, AI-brewed beer and a drone flying lesson.

In the magazine

In the magazine

LXFDVD highlights

Neon 5.7.2, Fedora 24, Voyager 16.04 and Ultimate Boot CD 5.3.

Get a (minty) fresh start with Mint’s biggest upgrade. Pick from our screencasters to record your adventures or build your very own Pi drone and head outside or stay indoors to multithread Rust and Swagger REST.

LXFDVD highlights Linux Mint 18 Cinnamon, Linux Mint 18 Mate and Peppermint 7.

Build your perfect home server for streaming games, sharing files and all kinds of servery stuff. Plus, we go media mad to edit photos and audio, look forward to open hardware, and round up lightweight browsers.

Issue 212 July 2016

Issue 211 June 2016

Issue 210 May 2016

Product code: LXFDB0212

Product code: LXFDB0211

Product code: LXFDB0210

In the magazine

In the magazine

In the magazine

Hack! Code! Build! Er, Read! Yes, read our top 100 open source tools. We also round up the best info managers and help you avoid SIP fraud. Meanwhile Jonny gets all Fuzzy and Mihalis continues to Rust.

LXFDVD highlights Kubuntu, Lubuntu and Xubuntu 16.04, 4M Linux 17.0 and more.

We light up the runway for the release of Ubuntu 16.04 LTS, compare fiery walls, hack together a Pi-powered audio streamer and sling around lots of data with R and R Studio. Oh, and Jonni explains Vulkan.

LXFDVD highlights 32- & 64-bit Ubuntu 16.04 LTS, Bodhi Linux 3.2.0 and more.

To order, visit Select Computer from the All Magazines list and then select Linux Format.

Or call the back issues hotline on 0344 848 2852 or +44 344 848 2852 for overseas orders.

Not happy with your desktop? Build your own with our guide! Want to build more? Good! How about a NAS! Now get busy collaborating with our pick of editors. Tired? Have a rest and read about the Micro:bit.

LXFDVD highlights

Ubuntu Server 16.04, Debian 8.4, sharing/backup tools, and more.

LXFDVD highlights

Ultimate Ubuntu 15.10, Window Maker Live, ArchBang and more.

Quote the issue code shown above and have your credit or debit card details ready

Get our digital edition! Subscribe today and get 2 free issues*

Available on your device now

*Free Trial not available on Zinio.

Not from the UK? Don’t wait for the latest issue to reach your local store – subscribe today and let Linux Format come straight to you.

“If you want to expand your knowledge, get more from your code and discover the latest technologies, Linux Format is your one-stop shop covering the best in FOSS, Raspberry Pi and more!” Neil Mohr, Editor

To subscribe

Europe?

From only €117 for a year


From only $120 for a year

Rest of the world?

From only $153 for a year

It's easy to subscribe... call +44 344 848 2852. Lines open 8am–7pm GMT weekdays, 10am–2pm GMT Saturdays. *Savings compared to buying 13 full-priced issues. You will receive 13 issues in a year. You can write to us or call us to cancel your subscription within 14 days of purchase. Your subscription is for the minimum term specified and will expire at the end of the current term. Payment is non-refundable after the 14-day cancellation period unless exceptional circumstances apply. Your statutory rights are not affected. Prices correct at time of print and subject to change. *UK calls will cost the same as other standard fixed-line numbers (starting 01 or 02) and are included as part of any inclusive or free minutes allowances (if offered by your phone tariff). For full terms and conditions please visit


Terminal Handy shortcuts and timesavers for command line users

Terminal: save time and effort

Nick Peers goes hunting for time-saving tips, tricks and shortcuts that will enable you to use the Terminal in a more efficient manner.


Our expert Nick Peers

estimates that by following the tips he’s uncovered in this tutorial, he may actually reduce his typing in the Terminal by up to 50 per cent.

Transitioning from a graphical user interface to the Terminal? Then you’ll want to speed things up. The good news is that the Terminal is packed full of timesaving commands and shortcuts; the trickier part is actually finding them. Never fear, because this issue we’ve dug out a collection of handy command-line tricks that will transform the way you interact with the Terminal going forward.

Repeat previous commands One of the first time-saving tips you’ll learn is that pressing the up and down arrow keys at the command line cycles you through the most recently used Terminal commands. That’s fine if the command you want was typed a short while ago, but a much quicker way to find what you’re looking for is to press Ctrl+R and then start typing a few letters – you’ll see the most recent match appear in the list. Hit Enter to run it again, or press the right arrow to insert it into the command line, allowing you to modify it first. If the command isn’t the one you’re looking for, you can keep hitting Ctrl+R to cycle through previous matches until you find the one you’re looking for, or alternatively press Ctrl+C to exit and then type the following: $ history

The !! command allows you to repeat the last command entered. A great use for this is to insert a missing sudo when a command needs admin privileges.


This will list all the commands stored in the Terminal’s buffer. To repeat one, type the following, replacing ‘1’ with the number next to the command you want to run: $ !1 You can also use !! to simply repeat the previous command. A particularly handy use for this is when you’re told the command you tried to run requires root privileges. When this happens, simply type sudo !! and hit Enter.

Faster directory management In a similar way, !$ allows you to reuse an argument from the previous command in your current one – for example, the following commands create a new directory then change to it: $ mkdir ~/Documents/work $ cd !$ The !$ shortcut is just one way to speed up the way you interact with files or navigate your filesystem, which leads us neatly on to some more tips and tricks. One you may already know is tab completion: as you start typing a command or path, press Tab to attempt to autocomplete the command or folder. If nothing happens, press Tab twice to bring up a list of potential matches. Type enough letters to make it clear what command or path you’re aiming for, then press Tab and it should pop up. Do you ever find yourself working within several folders at the same time? Thanks to the pushd command, you can create a folder stack, a list of folders you can quickly navigate to using the cd command. Let’s start by adding the current directory to the stack: $ pushd You’ll see the folder is listed next to ~. To add a different directory to the stack and then switch to it, use this syntax: $ pushd /path/to/folder/ You’ll move to this directory, and also see that it’s added to your original folder in the stack. For a better view of the stack’s contents, type the following: $ dirs -v You’ll see each folder is listed with a number next to it. You can use this number in conjunction with the cd command to quickly jump between these folders (replace ‘0’ with the

Streamline outputs Many commands output to your screen, which is fine in many cases but awkward in others. You can instruct a command to send its output to a file rather than the screen using the > operator: $ ps -ax > processes.txt This will list all the processes running on your PC in a newly created processes.txt file. Repeat the command, though, and the file will be overwritten. If you’d rather append the

information on to an existing file, use >> instead: $ ps -ax >> processes.txt You can also pass the output of a command to another command for additional processing too. This requires the | separator, and is commonly used with the grep search tool for filtering output to include specific search terms. To filter your command history, for example, try the following: $ history | grep command

number of the folder you wish to access): $ cd ~0 This folder stack is temporary – once you close the Terminal window it’s lost. Here’s another shortcut: if you’ve created a folder inside which you now need to create a collection of sub-folders, speed things up by using mkdir with the following argument to create the named sub-folders in one go (you can specify as many sub-folders as you like): $ mkdir -p ~/Documents/{work,home,admin,letters} The same argument applies with the rmdir command too, allowing you to quickly delete empty subfolders as well. One final shortcut: need to quickly ascertain the differences between two directories? Make use of the diff command, thus: $ diff /folder1 /folder2 This will do the gruntwork for you, and the output will reveal which files are exclusive to which folder.
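As a quick sanity check, the stack, brace-expansion and diff tricks above can be combined in one runnable sketch; the paths under /tmp/lxf are just examples:

```shell
# Create four sub-folders in one go, build a small stack, then compare folders.
mkdir -p /tmp/lxf/{work,home,admin,letters}
cd /tmp/lxf/work
pushd /tmp/lxf/home > /dev/null   # remember work/, switch to home/
dirs -v                           # numbered view of the folder stack
touch report.txt                  # a file that exists only in home/
diff /tmp/lxf/work /tmp/lxf/home || true   # non-zero exit just means "differences found"
cd ~1                             # jump back to stack entry 1 (work/)
```

Note the `|| true`: diff exits non-zero when the folders differ, which is exactly the interesting case here.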

Replace ‘command’ with your search term or terms and the resultant list will only include those commands that match your terms – for example, history | grep mkdir . Another use of the pipe is to control the output flow. For example: $ dmesg | less This allows you to scroll up and down the output of dmesg using your cursor keys – to exit, press Q.
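A minimal sketch of the >, >> and pipe operators described in the box; the file name is arbitrary:

```shell
# Redirect output to a file, append to it, then filter and count with a pipe.
printf 'alpha\nbeta\n' > /tmp/out.txt     # > creates or overwrites the file
printf 'beta again\n' >> /tmp/out.txt     # >> appends instead of overwriting
grep -c beta /tmp/out.txt                 # count the lines matching 'beta'
wc -l < /tmp/out.txt                      # the file now holds three lines
```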

The alias command is great for speeding up repetitive commands, but its temporary nature requires a workaround.

Set up aliases Some commands – particularly those with lengthy arguments – can be a real bind to type if you use them regularly. The alias command enables you to bind a shortcut to the lengthy command and its argument. Ubuntu’s Terminal provides some examples by default, revealed when you type alias and hit Enter. These include different ways of listing a directory using the ls command. To create your own aliases, invoke alias using the following syntax: $ alias shortcut='command' Replace ‘shortcut’ with the name of your chosen shortcut, noting that it cannot contain any spaces (if you need a separator, use a dash or underscore). If you use an existing command – such as ls – then the alias will circumvent it when the command is used without an argument; in other words, it allows you to effectively change the command’s default behaviour. Once you’ve set up an alias or two, review what they are by typing alias and hitting Enter again. To remove an alias, use the following: $ unalias shortcut The alias command is very useful, except for one critical shortcoming: its effects only last as long as the current Terminal window is open. Close the window and all your aliases are gone. All is not lost, however, as you can permanently embed aliases into each Terminal session by adding them to a hidden file called .bash_aliases in your home directory: $ nano ~/.bash_aliases

This creates a new, empty document, so add your alias commands one at a time using the same syntax as before, then save your file. From here either close and reopen the Terminal or use the following command to reload the bashrc file, which includes your Terminal preferences (and references the bash_aliases file you just created): $ source ~/.bashrc You’ll find that your aliases should now be preserved across Terminal sessions.
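One wrinkle worth knowing: inside a script, alias expansion is off by default, so a sketch like the one below has to switch it on first. In an interactive shell you would simply keep the alias lines in ~/.bash_aliases as described above; the alias names here are invented examples:

```shell
# shopt -s expand_aliases is only needed in scripts; interactive shells have it on.
shopt -s expand_aliases
alias count='wc -l'        # the kind of line you'd keep in ~/.bash_aliases
alias ll='ls -lh'
printf 'one\ntwo\n' | count > /tmp/alias_demo.txt
cat /tmp/alias_demo.txt
unalias ll                 # and this is how you remove one again
```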

Quickfire tips To round things off, here are some more handy shortcuts. First, when you launch a graphical app like Firefox or the Nautilus file manager through the Terminal, you’ll see how the Terminal window remains in place, unable to do anything until you close the application. To prevent this (or the need to open a second Terminal window), simply append ‘&’ to the end of your command, like so: $ firefox & Struggling to find the right command to use? The apropos command can steer you in the right direction – just use it in conjunction with some text describing what you’re looking for, such as: $ apropos "download" Last but not least: you know that the clear command quickly clears the Terminal screen, but what if you have a command lined up and ready to go? Simply press Ctrl+L to clear the screen but leave your command in place. LXF

Next issue: Screen explained



Tutorial Bash Learn about commands and Bash attributes to make some handy scripts

Bash: How to write scripts

Alexander Tolstoy shows you how to get creative, flip the script and have a bash at writing your very own Bash scripts… What? What did I say?

Our expert Alexander Tolstoy

is our resident picker of hot things and has been a freelance writer for many Linux publications since 2006.


Quick tip There are always things that can be optimised, eg avoid excessive variable declaration. The following example: color1='Blue' color2='Red' echo $color1 echo $color2 Can be easily shortened to: colors=('Blue' 'Red') echo ${colors[0]} echo ${colors[1]}
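As a quick check of the tip above, the array form behaves like this (colors is just an example name):

```shell
# One array replaces a run of numbered variables.
colors=('Blue' 'Red')
echo "${colors[0]}"      # individual elements, by zero-based index
echo "${colors[1]}"
echo "${colors[@]}"      # all elements at once
echo "${#colors[@]}"     # how many elements the array holds
```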

For many Linux users a command-line interface is still something that’s best avoided. Why drop to the command-line shell when virtually all activities can be performed perfectly well in a graphical desktop environment, after all? Well, it depends. While modern Linux distributions (distros) are very user-friendly and in many areas outperform Windows and OS X, there’s a huge amount of power hidden away in Bash (Bourne Again Shell). The shell is a cornerstone of the GNU Project and has been with us since 1989. It’s also been a standard Unix shell in many operating systems (not just Linux) and there’s no reason to think it will lose its importance in the future. The reality is that there are many command-line applications that can replace their desktop analogues. From a pragmatic point of view graphical applications introduce huge system resource overheads, when you could instead enjoy robustness, speed and efficiency. Learning Bash scripting can help you understand how many Linux commands work and also help you automate certain routines. There’s no better way to approach programming than Bash scripting. The barrier is low, the possibilities are endless—so it’s time for the real deal!


We’ll start with basics and create a small but useful script. A script is a text file with (roughly) a sequence of commands and a few other special attributes. The first is the obligatory first line, which should look like #!/bin/bash , where #! is a magic number (a special marker that designates that this is a script) and /bin/bash is the full path to the command interpreter. The second attribute is specific to filesystems used in Unix and Linux: in order to tell Bash that our file is a script (not just a plain text file), we have to make it executable: $ chmod +x Using the .sh extension is optional, as Bash looks for that ‘executable bit’ and tolerates any extension. So the simplest structure of our test will be like this: #!/bin/bash command_1 command_2 … You can replace our placeholders with something working and useful. Say, we want to clean up temporary files in Linux: #!/bin/bash cd /var/tmp rm -rf * This way you can put any commands in your scripts, line by line, and each command will be executed in sequence, one after another. Before we move forward, let’s pay some attention to special characters, ie characters that aren’t executed but are treated differently by Bash. The first one is ‘ # ’, which enables you to put a comment after it. There’s no need for any ‘closing tag’, just put # and whatever follows will be treated as a comment until the end of the line. The second useful special character is the semicolon (‘ ; ’), which separates one command from another within the same line. You can use it like this: #!/bin/bash cd /home/user; ln -s Images Pictures; cd Pictures; ls -l The practical use of ; is that you can save some vertical space by combining similar or related commands in one line, which helps keep things tidy. Another helpful hack is to use dots (‘ . ’). Not only do they indicate full stops, but in Bash dots introduce special logic. A single dot means ‘current directory’, while two dots move you one level up. 
You may not know beforehand where a user will place your script, so the cd . sequence will move Bash to the current directory, whatever it is. Similarly, cd .. will bring you to the parent directory. There are also other special characters, such as ‘ * ‘ which is wildcard for ‘anything’, backslash (‘ \ ‘) for quoting the

Tips, tricks and timesavers The more time you spend in Bash, the more you may feel that some aspects of it could have been much better optimised. However, this isn’t entirely true. Bash has been used in system administration for 27 years, and it can be excellent when it comes to optimisation as long as you’re willing to put in a little effort to make it your own. Remember that Bash stores its settings in the ~/.bashrc file (individually for each user), which you can populate with your own features. Let’s start by automating mkdir $dir and cd $dir with a single command. Add the following to .bashrc:

mkcd() { mkdir $1; cd $1; } And then don’t forget to run $ source ~/.bashrc to apply the changes. After that when you run, say, $ mkcd build , you will immediately enter the newly created build directory. For easier navigation you might want to add bookmarks for frequently used directories. Use the CDPATH variable in .bashrc for that: $ echo CDPATH=/home/user/Documents/subscriptions/LXF >> ~/.bashrc && source ~/.bashrc Once done, try using $ cd LXF from

anywhere in your system and you’ll be taken to the right place. CDPATH uses the last element in the path for the bookmark name. Finally, the easiest and quickest way to save some time is to populate your .bashrc file with command aliases. You can assign an alias to a long command that’s tiresome to type, or simply because you don’t want to mix up habits from another OS. Let’s add a couple of DOS commands as an example: alias dir='ls -l'; alias del='rm -rf' This way you can create a custom-tailored Bash that will work lightning-fast!
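A runnable sketch of the mkcd helper from the box; the directory name is arbitrary, and -p plus quoting are small hardening touches on top of the printed version:

```shell
# mkdir and cd rolled into one function; quoting $1 copes with spaces in names.
mkcd() { mkdir -p "$1"; cd "$1"; }
mkcd /tmp/build
pwd   # we are now inside the freshly created directory
```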

It is also permissible to declare several variables in one line. See the next example: a=Mary b=had c="a little" d=lamb echo $a $b $c $d Sometimes you’ll need to assign a command output to your variable. There are two ways to do it. In the following example each line produces the same result: a=$(df --total) b=`df --total` echo $a $b The assignment to $b will only work if you use backquotes (` `), not ordinary single quotes.
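The two capture styles can be compared directly; note that the modern $( ) form nests cleanly, while backquotes do not:

```shell
# Command substitution: modern $( ) form versus the older backquote form.
a=$(date +%Y)
b=`date +%Y`
echo "$a"
[ "$a" = "$b" ] && echo "both forms captured the same output"
```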

For an extra source of Bash inspiration, visit Bash’s official GNU website at

Conditionals

following character, the exclamation mark (‘ ! ’), which negates the command after it, and many more. Luckily, in many cases special characters are self-explanatory thanks to the context.
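Here is a compact script body exercising the special characters described above; /tmp/specials is a made-up path:

```shell
#!/bin/bash
# '#' starts a comment that runs to the end of the line
mkdir -p /tmp/specials
cd /tmp/specials; touch a.log b.log; ls   # ';' separates commands on one line
ls *.log                                  # '*' is the wildcard for "anything"
cd ..                                     # '..' moves to the parent directory
pwd                                       # a single '.' would mean "right here"
```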

Variables A variable is a word or any sequence of allowed characters that can be assigned a value. You can use a calculation result or a command output, or anything you want, as the value for a variable, and then use it far more conveniently than repeating the value directly. In Bash, the name of a variable is a placeholder for its value, so when you are referencing a variable by name you are actually referencing its value. Assigning a value to a variable is done via the equals sign (‘ = ’), while referencing is done using the dollar sign (‘ $ ’) followed by the variable name. Let’s have a look: #!/bin/bash a=175 var2=$a echo $var2 In this example we first assign a numeric value to variable a then assign its value to another variable var2 and then print the value of the latter ( 175 ). You can use a text string as a value and, in case there is at least one space or punctuation mark, you’ll need to put the string in quotes: a="Mary had a little lamb" echo $a

Quick tip Some words and characters in Bash are reserved and cannot be used as names for variables: do, done, case, select, then and many more. Also avoid whitespaces and hyphens, though underscores are allowed.

Bash supports traditional Unix constructions for conducting tests and changing the behaviour of a script depending on given conditions. There are two basic things that you’re advised to remember: the ‘if/then/else’ tree allows a command to be executed when a condition is met (or not), whereas ‘while/do/done’ is a tool for looping parts of a script, so that some commands will be executed over and over again until a certain condition is met. It is always best to illustrate this theoretical part with some working examples. The first one compares the values of two variables and if they match, the script prints the happy message: a=9 b=8 if [ "$a" = "$b" ]; then echo "Yes, they match!"; else echo "Oh no, they don’t…"; fi Please take note that once you introduce ‘if’, don’t forget to put ‘fi’ at the end of your construction. Now it seems that

Writing scripts in a good editor is beneficial at least because it’ll have features like syntax highlighting.




Add some Brony appeal to your .bashrc profile with fortune | ponysay.

we’ve reached a point at which we can make some use of our script. Let’s, eg, check the day of the week and once it’s Friday, print a reminder: #!/bin/bash a=$(LC_TIME="en_US.utf-8" date '+%a') if [ "$a" = Fri ]; then echo "Don’t forget to visit the pub"; else echo "Complete your work before deadline"; fi Notice that we included the extra declaration of the LC_TIME variable for the sake of compatibility. If we didn’t, then our script wouldn’t work on Linux systems with a non-English locale. However, let’s advance a little further and see how we can use the while/do method. This script runs in the background and cleans the temporary directory every hour: #!/bin/bash a=$(date +%H) while [ $a -ne "00" ]; do rm -rf /var/tmp/* sleep 3600 a=$(date +%H) done Attentive readers may notice that this script will stop working at midnight, because the condition stated above (while the hour does not equal 00) will no longer be met once the day is over. Let’s improve the script and make it work forever: #!/bin/bash

while true; do a=$(date +%H) if [ "$a" -eq "00" ]; then sleep 3600 else while [ "$a" -ne "00" ]; do rm -rf /var/tmp/* sleep 3600 a=$(date +%H) done fi done Pay attention to how you can check conditions using if and while: if your variable contains an integer number, you can use alternative operators, such as -eq , -ne , -gt or -lt as ‘equal’, ‘not equal’, ‘greater than’ and ‘less than’ respectively. When you use text strings the above substitution will not work and you’ll need to use = or != instead. Also, the while true construction means that our script will not exit until you manually interrupt it (Ctrl+c or by killing it) and will continue running in the background. Both if and while can be put in cascades, which is called ‘nesting’ in Bash. Just don’t forget to put the proper number of fi and done elements at the end to make your script work.
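The two operator families can be checked side by side; the values here are arbitrary:

```shell
# -eq/-ne/-gt/-lt compare integers; = and != compare strings.
a=9; b=8
if [ "$a" -gt "$b" ]; then result_num="greater"; fi
day="Fri"
if [ "$day" != "Mon" ]; then result_str="not Monday"; fi
echo "$result_num / $result_str"
```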

Simple loops Using conditions may be a brilliant way to control your script’s behaviour, but there are other use cases when you need to optimise your routines with Bash, say, for recurring actions with variable data, eg you need to run one command several times with different arguments. Doing it manually can be unacceptable for many reasons, so let’s try to solve the problem gracefully using the for loop. In the following example, we shall create several resized versions of the original image rose.png using the convert tool from ImageMagick: #!/bin/bash for a in 10 25 50 75 90 do convert rose.png -resize "$a"% rose_resized_by_"$a"_percent.png done The example above declares a as an integer and lists all its values. The number of times the convert command will run matches the number of values. Sometimes you need a longer list of values, so let’s optimise our script using start/stop values and a step: #!/bin/bash for a in {10..90..5}

Real-world one-liners You can make practically any Bash script a one-liner, even though it would be hard to read. But there are a lot of useful yet short Bash scripts created by people from around the world. Even if you feel like you don’t need any more practice at writing scripts, looking at others’ best practices will not hurt. A one-liner means that you can use it directly as a command. The first example script we’re going to show off is for music lovers, and it converts all .flac files that it finds in the current directory to MP3

files at a good quality setting (320 kbps) using FFmpeg. $ for FILE in *.flac; do ffmpeg -i "$FILE" -b:a 320k "${FILE[@]/%flac/mp3}"; done; Another tip is to copy something to the clipboard from the command line. First make sure you have the xclip package installed, then try the following: $ xclip -in -selection c # or $ echo "hi" | xclip -selection clipboard The first command will copy the contents of

your script to the clipboard, while the second one will put the word ‘hi’ there. The next example shows the 10 largest files opened by currently running processes in your system: # lsof / | awk '{ if($7 > 1048576) print $7/1048576 "MB" " " $9 " " $1 }' | sort -n -u | tail It’s extremely useful when you need to identify the origin of a high load and the standard system monitor doesn’t clear things up. You have to be root to run this command.
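The suffix swap inside the FFmpeg one-liner is plain parameter expansion, which you can test without any media files; the file name below is invented:

```shell
# ${var/%pattern/replacement} substitutes only at the end of the value.
FILE="albums/track01.flac"
newname="${FILE/%flac/mp3}"
echo "$newname"   # albums/track01.mp3
```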



do convert rose.png -resize "$a"% rose_resized_by_"$a"_percent.png done The syntax used can be simply described as {$start..$end..$step} and allows you to build plain arithmetical progressions. There are also a couple of alternative ways to do the same thing. First, let’s use GNU seq, which is shipped with all Linux distros: for a in $(seq 10 5 90) As the line suggests, we’re using it as $(seq $start $step $end) . Second, we can write the loop conditions in this way: for (( a=10; a<=90; a+=5 )) As you might guess, +=5 increments the value of $a by five. If we wanted to increment by 1, we’d use ++ . Looping is also a very quick way to number elements using variable incrementing. See below where we list the days of the week: #!/bin/bash a=1; for day in Mon Tue Wed Thu Fri do echo "Weekday $((a++)) : $day"; done This script will return the list with numbered days (Weekday 1 : Mon; Weekday 2 : Tue; Weekday 3 : Wed etc).
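The three loop styles above should all produce the same progression, which is easy to verify:

```shell
# Brace expansion, GNU seq and the C-style for all yield the same sequence.
s1=$(echo {10..90..5})
s2=$(seq 10 5 90 | tr '\n' ' ')
s3=$(for (( a=10; a<=90; a+=5 )); do printf '%s ' "$a"; done)
echo "$s1"
# ${var% } trims the single trailing space left by tr and printf above.
[ "$s1" = "${s2% }" ] && [ "$s1" = "${s3% }" ] && echo "all three match"
```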

Some advanced tips We already know how to put commands in scripts and even how to add arguments to certain commands. But Bash can do a lot more — for instance, it provides a way to pass a multiline text string to a command, a variable or even to a file. Within Bash terminology this is called the ‘heredoc’ format. To illustrate it, let’s create another script from our script: #!/bin/bash cat <<EOF > #!/bin/bash echo "I’m another script"

EOF The construction <<EOF … EOF lets you wrap almost anything into one logical object that you can manipulate in the same way you do a variable’s value, eg you can assign the data selected from a DB to your variable: $ sql=$(cat <<EOF SELECT foo, bar FROM db WHERE foo='baz' EOF ) Earlier we used a script to downsize an image by a given percentage. However, what if we needed to define that percentage by hand? Let’s use script variables that accept arguments when the script is run: $ cat #!/bin/bash a=$1 convert rose.png -resize "$a"% rose_resized_by_"$a"_percent.png When you run the script and give the desired percentage as an argument ( $ ./ 50 ), your image will be resized by the given value (50%). However, if you run the script without any argument ( $ ./ ), it will create an unchanged copy of rose.png under another name. To fix this, we need to check that the variable holds a non-empty string, using the -n test: #!/bin/bash a=$1 if [ -n "$1" ]; then convert rose.png -resize "$a"% rose_resized_by_"$a"_percent.png else exit; fi Of course, there’s so much more you can do with Bash, but we hope that these basics will encourage you to dive into this powerful command-line system that exists in every distro. LXF
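To round the section off, the heredoc capture and the -n argument check can be combined in one self-contained sketch; the function name run_demo is invented, standing in for the script above:

```shell
# A heredoc captured into a variable, then a guard against a missing argument.
sql=$(cat <<EOF
SELECT foo, bar FROM db WHERE foo='baz'
EOF
)
echo "$sql"
run_demo() {
  if [ -n "$1" ]; then
    echo "resizing by $1%"     # the argument is present, proceed
  else
    echo "no percentage given" # the -n test failed, bail out gracefully
  fi
}
run_demo 50
run_demo
```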

Bash itself has a number of startup options. Explore them via $ bash --help.

Quick tip To make your code readable, use indents, especially for nested elements. Also commenting parts of your script after a # sign is a good practice.


Tutorial Pi and coffee Build a Raspberry Pi Kivy/Python touchscreen espresso coffee machine

Project: Build a coffee machine Dan Smith mods out his coffee machine with a Raspberry Pi, 7-inch touchscreen and sweet Kivy GUI in pursuit of that perfect espresso shot.


Our expert Dan Smith

Half man, half caffeinated beverage. With brew running through his veins, he takes you on a quest to find the holy grail of the perfect espresso.


This project will invalidate your device’s warranty. It involves modifying a mains-connected device that produces super-heated steam and you do so entirely at your own risk.

Quick tip Customise your interface: FloatLayout provides absolute freedom of positioning widgets within a Kivy app. This is great for customisation of your GUI and for adding your own creative touch. Tune in next month for Part 2 of this tutorial for more tips on GUI customisation.

Coffee snobs are everywhere, especially in the software industry! Coffee fuels those long nights of gaming, coding and programming. No wonder they called it Java! You might have started drinking coffee at university because the cheap and nasty coffee machines gave a more financially sustainable caffeine hit than Red Bull. Fast forward ten years, and the majority of graduates morph into full-blown coffee snobs who don’t skimp when it comes to quality. If you are one of these people, you will most likely have a beautiful home espresso machine. Maybe even a Rancilio Silvia. The Silvia is a sleek, simple, robust, high-quality machine. However, one downside of this coffee machine is that, like many others, it uses thermostats in order to control the brew temperature. Utilising thermostats to obtain precise and continuous temperature control is not very effective. That is why you see high-end commercial coffee machines use Proportional-Integral-Derivative (PID) controllers. This hack will show you how to create a GUI that incorporates precision temperature control into any simple coffee machine. Hacking it up with a Raspberry Pi also means the capability to integrate the Internet of Things into the machine. Step one will be to down a couple of double espressos. Then, it is time to get down to business. This hack will require some sweet skills in Linux (obviously), Kivy (for the front end GUI) and Python. But don’t worry if you’re just a beginner – this tutorial is a great way to build up your skills. You’ll also need a soldering iron, basic electronic tools and a solid supply of coffee beans (for your own consumption). Depending on how far you want to go with the build, you can also fabricate a new front panel (although this makes it a bit more expensive).

The Silvia combined with the Raspberry Pi – in full coffee snob lingo one would say this machine is pulling a caffè doppio (Italian for double espresso) with good crema.

actual brew temperature plot – plus it gives your machine the hi-tech, precision-control image it deserves. The Espresso Standard does, after all, specify 88°C at the grouphead outlet. There is no point in settling for anything less than perfection. If you’re interested, check out the Espresso Standard here:

The touchscreen

In order to troubleshoot, debug and get familiar with the functionality, you will need to start by setting up that 7-inch touchscreen on the Raspberry Pi. Go to and check out their Raspberry Pi 7-inch Touchscreen Display Tutorial. Once you’ve got it hooked up, have a play around with how beautifully Debian, the Pi and the touchscreen work together.

Getting familiar with Kivy

Kivy will be used to build the GUI. Why? Kivy is based on a simple app widget system that is easy to figure out, and it has good API support and tutorials on the Kivy website. Kivy also has a built-in settings widget manager that makes use of JSON configuration files, which makes building apps quick and easy. You can modify the standard Kivy settings widget to include sleep and wake-up times. This allows you to set the grouphead to be nice and warm by the time you get out of bed and save power during off-peak times. Kivy has a super useful plugin manager called Kivy Garden, as well as cross-platform functionality (Linux, Android, iOS, OS X, etc). Kivy Garden has some cool plugin widgets this hack will use, such as Graph, which provides the real-time plotter. Coding with a FOSS IDE such as Eclipse

A Graphical User Interface The GUI specification will be as follows: a real time plotting graph, a coffee button for pulling an espresso shot, a steam button to froth the milk, and a hot water button. The plotting graph will enable you to see how effective the tuning of your PID controller is by incorporating the set point plot and the



Ingredients list Hardware required for the Silvia-Pi build includes: 1x Raspberry Pi 2 1x Raspberry Pi 7-inch touchscreen 1x Rancilio Silvia (or any coffee machine that could do with better temperature control) 1x Solid State Relay (SSR) 2x Double Pole Single Throw (DPST) relays 2x transistors & diodes (for the driver circuits on the DPST relays) 1x integrated power supply (your country’s input V AC in, 5 V DC out) 1x k-type thermocouple 1x thermocouple amplifier (1-wire MAX31850K)

Why are you using an SSR? Why don’t you use a mechanical relay to control the boiler? Well… This is because of the potentially high switching rate of the controller. In actual practice the switching rate is relatively low, and some low-end machines do use mechanical relays. However, mechanical relays typically fail open. SSRs typically fail closed. This is definitely something to keep in mind when thinking about safety on your application. In addition, mechanical relays are only good for a specified number of cycles. What’s more, mechanical relays make a noise, while SSRs are quiet. For this application a good

and Secure Shelling into the Pi through your desktop is an effective way to implement this hack. This will mean you have to set up Kivy on both your desktop and on your Pi. Go ahead and do this by logging into your terminal and inputting $ pip install kivy then $ pip install kivy-garden followed by $ garden install graph .

Building your Kivy app Once Kivy is installed you can start building Kivy apps and getting familiar with the Kivy modules. Go to the Kivy website at and look through the API library, or even follow the “first app” tutorial – a Pong app – to get your head around the code and the general layout of building Kivy apps. Here we will be building a CoffeeApp , which will be a combination of Kivy widgets such as BoxLayout, Button, Label and Graph. So, time to get after it. In
#!/usr/bin/kivy
import kivy
from import App
from kivy.uix.boxlayout import BoxLayout

class CoffeeApp(App):
    def build(self):
        # Add parent widget
        root = BoxLayout(orientation='horizontal')
        verticalBtns = BoxLayout(orientation='vertical', size_hint_x=0.25)
        # Add child widgets here then add them to the parent "root"
        return root

# Run the script
if __name__ == '__main__':
    CoffeeApp().run()
The above code will create a stock-standard BoxLayout Kivy app with a parent widget named root . You will also notice the verticalBtns BoxLayout – you will use this to separate your buttons from your graph and display them vertically in the right quarter of the app, thanks to size_hint_x=0.25 . You won’t be able to see this size hint in effect until you add the graph later on. Adding buttons and graphs into the widget is as simple as creating the widget coffeeButton=Button(text='coffee') and then adding it to the parent widget with verticalBtns.add_widget(coffeeButton) . In your case you will add the three buttons (coffee, steam and water) by repeating this code for

SSR to use is the Kudom KSI240D10-L SSR: 10A 240V AC, 4–32V DC. The K-type thermocouple’s temperature range is typically between –250°C and 1,250°C, and it’s accurate to ±1°C. For signal-processing purposes the K-type is easy to accommodate. There are numerous integrated circuits that package together amplifiers, filtering, cold-junction compensation and analog-to-digital converters specifically built for the K-type thermocouple, and they are low cost too. Because of this the K-type is perfect for this coffee machine application.

each button. You use the simple BoxLayout, which handles the position and the size of the buttons within the app’s parent widget root; therefore you need to add verticalBtns to the root widget by adding the following:
root.add_widget(verticalBtns)

Buttons, bindings and events Now to get your three buttons sorted. Run the code and you’ll see three buttons arrayed vertically down the app. If you are running via SSH or directly on your Pi you will see the app run straight to the 7-inch touchscreen. Try pressing the buttons to see what happens… Not much? You will see the buttons change from grey to light blue, but that’s about it. Time to bind those buttons to get some functionality. By using the bind method and defining on_press() and on_release() callbacks you can specify what happens. Start by adding functionality to the coffeeButton in your code. Between creating the buttons and adding the buttons to root, call the bind method by adding the following code: coffeeButton.bind(on_press = self.coffeePress_callback) and coffeeButton.bind(on_release = self.coffeeRelease_callback)

Quick tip Functions vs Classes: To eliminate repeating code and to minimise lines of code, create a class that inherits from Button and uses the name of the object combined with some simple logic to determine what happens on a press or release.

The ol’ single line diagram for the Silvia Pi build. Looks a lot like hieroglyphics from Ancient Egypt. Probably because they were working on a similar project.

Don’t miss part 2 in the next issue Head to

October 2016 LXF216     73

Tutorial Pi and coffee

Quick tip GPIO.BCM or GPIO.BOARD: Setting GPIO. setmode(GPIO. BCM) allows IOs to be set as per GPIOs of the Broadcom SOC channel. Setting GPIO. setmode(GPIO. BOARD) allows IOs to be set as per the pin numbers on the Pi’s header. Example IOs on an RPi2: BCM 21 = BOARD 40. Keep a header GPIO cheatsheet nearby to avoid confusion.
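As a concrete companion to this tip, here is a small plain-Python sketch of a few BCM-to-physical-pin correspondences on the 40-pin header. The pin facts are the standard Pi header layout, but the helper itself is our own illustration, not code from the tutorial, and the table is deliberately partial:

```python
# A few BCM channel numbers and their physical pins on the
# 40-pin Raspberry Pi header (partial list for illustration).
BCM_TO_BOARD = {
    2: 3,    # BCM 2 (SDA1) is physical pin 3
    3: 5,    # BCM 3 (SCL1) is physical pin 5
    4: 7,    # BCM 4 (the OneWire default) is physical pin 7
    19: 35,  # BCM 19 (the water pump relay in this build) is pin 35
    20: 38,  # BCM 20 (the coffee DPST relay in this build) is pin 38
    21: 40,  # BCM 21 is physical pin 40, as the tip notes
}

def to_board(bcm_pin):
    """Translate a BCM channel number to its physical board pin."""
    return BCM_TO_BOARD[bcm_pin]
```

With GPIO.setmode(GPIO.BCM) you would pass 20 straight to GPIO.setup(); under GPIO.BOARD you would pass the physical pin instead, i.e. to_board(20), which is 38.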

Now you need to define the methods within the CoffeeApp class: coffeePress_callback(self, *args) and coffeeRelease_callback(self, *args) . Do this above the build method within the class. Add some print statements in there as tracers to see if anything happens on press and release, and run the app again. You will now find that on pressing the coffee button your print statements are output to the terminal. Repeat the above steps for the steam and water buttons.

Outputting from the Pi The buttons now need to actually control mechanical or solid state relays, or set points within your coffee machine. To do this you need to tell your Pi to drive outputs high or low, or just change set point values. Luckily this is super easy. First of all, set which GPIOs are going to be outputs. Grab the Raspberry Pi Python module by adding the following to the top of your script:
import RPi.GPIO as GPIO
In this case you will be referring to the IO pins on the Raspberry Pi as per the Broadcom SOC channel, so add the following directly below the previous line:
GPIO.setmode(GPIO.BCM)
Now to set the outputs. coffeeButton will switch a Double Pole Single Throw relay on GPIO 20; set this up by adding the following below the previously explained code:
GPIO.setup(20, GPIO.OUT)
One pole will turn the pump on or off. The other pole will direct heated water through the 2-by-3 way solenoid valve’s vent or through the group head. Most coffee machines will have a 2-by-3 way solenoid valve to vent pressure when deactivating the coffee switch. This is a safety mechanism

to ensure no trapped pressure is exerted over the user when removing the group head. No need to use a GPIO for steam, because you don’t need to output to a relay. You will only need to increase the set point. Coffee’s set point will be around 105°C and steam’s will be around 140°C. The set point will drive the PID controller to heat the boiler to 140°C. The steam pressure will be the driving force, therefore you won’t need to activate the pump. However you will need to control the boiler, so set the boiler output with GPIO.setup(23, GPIO.OUT) . The last output is water, which will only turn the pump on. You only need a Single Pole Single Throw relay for this output:
GPIO.setup(19, GPIO.OUT)
You could use another DPST for this to keep parts standard on the printed circuit board (as per the ingredients list). Now that your outputs are set, time to hook them up into your button callback methods. Do this simply by calling the output() method of the GPIO class. When you press ‘coffee’ you want the pump to kick in and the 2-by-3 way solenoid valve to output to the group head. Therefore drive the output high under the coffeePress_callback() method by adding GPIO.output(20, 1) . When you release the coffee button you want the pump to stop and the 2-by-3 way solenoid valve to vent, therefore drive GPIO 20 low by adding GPIO.output(20, 0) under the coffeeRelease_callback() method. For steam you are only changing the set point, therefore create and define an attribute within the CoffeeApp: add SP = NumericProperty(105) just below the CoffeeApp class definition. Now add self.SP = 140 and self.SP = 105 under the steam press and release methods respectively. You want the pump to kick in/out when you press/release the water button, so add GPIO.output(19, 1) and GPIO.output(19, 0) under waterPress and waterRelease respectively.

Adding a graph widget Finally, time to get some plots plotting. Every IoT gadget needs a dashboard and every good dashboard needs a plot. Import the Graph and LinePlot modules by adding the following to the top of your script:
from kivy.garden.graph import Graph, LinePlot
Now create a widget class for your graph using class PlotterWidget(Graph) and of course initialise specific attributes within your plotter widget by defining the __init__(self, **kwargs) method. There are a lot of attributes to define, so check out the LXFDVD or GitHub user de-man for the code. Such attributes include grid sizing, border colours, y/x max/min, etc – it’s all very customisable. Now when you run the script you can see the coffee GUI taking form. The graph is on the left and the buttons should be arrayed vertically on the right.

Animating your graph

Oh no! Looks like R2-D2 undergoing brain surgery! Nah, not really… It’s the Pi being forged into the Silvia along with the requisite relays and driver circuits.

Incorporate animated line plots into your graph so you can get a visual look at the temperature control and the temperature history of your coffee machine. You will need to add two plots to the graph. One will be called SPplot . This is the set point, which tells the PID controller what value to drive to. The other will be called PVplot . This is the process value that is read from your machine’s temperature probe. You will need to add in a couple of time methods and attributes in



order to animate the graph. Again, refer to the LXFDVD for the code. The main code snippet to note is a Kivy clock event:
Clock.schedule_interval(plotter.update, SAMPLE_RATE)
This will call plotter.update at a defined interval set by SAMPLE_RATE .

Binding control to your graph Now the animated line plots are scrolling along autonomously on your graph, but they aren’t actually linked to any of the controls. The SPplot needs to be hooked up to the steam button, which controls the actual set point. Do this by adding one simple line of code after all your previous binding calls:
self.bind(SP=plotter.setter('currentSignal'))
Go ahead and run that code. When you press the steam button now and keep it pressed, the SPplot goes from 105 to 140. Now release the button, and you will see SPplot drop back down to 105. Success. Time for a coffee break.

Getting temperature via OneWire Temperature control is next, and is a bit more involved. This is because you need to set up reading the temperature input from the K-type thermocouple via the amplifier – a MAX31850K. This thermocouple amplifier is actually an integrated circuit that combines everything needed in order to read the thermocouple accurately and in digital form using OneWire. Luckily, the Raspberry Pi was built to utilise the OneWire capability. You will now need to look at setting up OneWire on the Raspberry Pi. First enable OneWire with:
$ sudo raspi-config
Go to Advanced Settings, then Enable OneWire. You can also enable OneWire via:
$ sudo nano /boot/config.txt
and adding dtoverlay=w1-gpio to the file. After either option you must then:
$ sudo reboot
Once the Pi is back up and running, hook up the MAX31850K (as per the MAX31850K OneWire instructions) to the Pi’s OneWire pin, which is GPIO 4. Now add the appropriate modules into the Linux kernel – in this case w1-gpio and w1-therm:
$ sudo modprobe w1-gpio
$ sudo modprobe w1-therm
Now go to the device directory where the OneWire temperature folder and file is stored:
$ cd /sys/bus/w1/devices
$ ls
Take note of the folder name as this will be specific to each OneWire device (an example would be 3b-000000191766). Navigate to that folder and there will be a w1_slave file. This file is where your read-in temperature value is stored. Now read from that file with $ cat w1_slave – the output should be some hexadecimal values, and if the data is being read successfully you will see, for example, crc=[hex_value] YES with t=23401. This is the temperature value read in by the probe, which actually means 23.401°C. Great success!
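The t=23401 value can be pulled out of the w1_slave text with a few lines of Python. This parsing helper is our own hedged sketch (the tutorial’s actual code is on the LXFDVD), but the file layout it assumes is the standard w1_therm output described above:

```python
def parse_w1_slave(contents):
    """Extract the temperature in Celsius from the text of a
    /sys/bus/w1/devices/<id>/w1_slave file. Returns None when the
    CRC line doesn't end in YES (i.e. a bad read)."""
    lines = contents.strip().split('\n')
    if not lines[0].strip().endswith('YES'):
        return None
    # The second line ends with t=<millidegrees Celsius>
    _, _, milli = lines[1].rpartition('t=')
    return int(milli) / 1000.0

# Example using the reading shown in the tutorial: t=23401 -> 23.401
sample = ("3b 01 4b 46 7f ff 0c 10 a3 : crc=a3 YES\n"
          "3b 01 4b 46 7f ff 0c 10 a3 t=23401")
print(parse_w1_slave(sample))  # 23.401
```

In the real script you would read the file contents with open('/sys/bus/w1/devices/<id>/w1_slave').read() before passing them to the helper.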

Even though the GUI looks super basic at the moment, it doesn’t take much to turn it into a visual masterpiece. Tune in next month for Part 2 of this hack.

Now incorporate what you have just learnt into your script. Add the os module to your script so that you can issue terminal commands from within it. Use import os , then use os.system('modprobe w1-gpio') and os.system('modprobe w1-therm') . Remember, when running this script you may need to execute it with sudo. With your script having direct access to the temperature you can now write the appropriate code to validate and process the temperature values. Don’t hesitate to borrow the kettle from the kitchen.

Building the PID controller Lastly you need to incorporate the PID controller. Make a specific class for this named PIDcontroller . Within the class create attributes and methods that are applicable to the controller, such as error, lastError, SP, PV and updatePID. You need to call the controller at set intervals. Do this by binding another clock schedule interval to the controller. Also you need to make sure the temperature plot is hooked up to the actual temperature readings. In this case you read the temperature through the PID class. Do this again by using the bind() and setter() methods to link the attributes between the classes. So that is that. You now have a basic Kivy app that can control a coffee machine to a better standard than most mid-range machines on the market. Next month, Part 2 of this hack will show you how to pimp out your GUI by creating customised buttons and backgrounds, quick-booting the Raspberry Pi straight into the GUI, and even adding your own splash screen that will make it look real pro. Remember to check out the code on the LXFDVD and GitHub user de-man to get the most out of this tutorial. LXF
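As a rough idea of what a PIDcontroller class along those lines might look like, here is a minimal positional-PID sketch. The class and attribute names follow the article’s description (SP, lastError, updatePID), but the gains and the body are illustrative placeholders, not the code on the LXFDVD:

```python
class PIDController(object):
    """Minimal PID loop: output = Kp*e + Ki*integral(e) + Kd*de/dt.
    Gains here are placeholders -- tune them for a real boiler."""
    def __init__(self, Kp=1.0, Ki=0.0, Kd=0.0, SP=105.0):
        self.Kp, self.Ki, self.Kd = Kp, Ki, Kd
        self.SP = SP            # set point in degrees C
        self.integral = 0.0
        self.lastError = 0.0

    def updatePID(self, PV, dt=1.0):
        error = self.SP - PV    # how far the boiler is from target
        self.integral += error * dt
        derivative = (error - self.lastError) / dt
        self.lastError = error
        return (self.Kp * error + self.Ki * self.integral
                + self.Kd * derivative)

pid = PIDController(Kp=2.0)
print(pid.updatePID(95.0))  # proportional-only: 2.0 * (105 - 95) = 20.0
```

In the app the returned value would decide whether to drive the boiler GPIO high or low on each Clock tick.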

Relay driver circuits The Pi’s IOs generate 0V on a low and 3.3V on a high. The specified DPST relays need a DC coil voltage of 5.0V in order to switch, meaning that a driver circuit will be needed to boost the voltage of the Pi’s output up to 5.0V and drive the coil to switch the relay. Building a driver circuit is straightforward. First, grab a general purpose transistor such as a 2N3904. Then, create a circuit for the transistor that will allow it to run in the

saturation region of operation during an ON (GPIO high). To do this, calculate the correct resistor value to obtain the saturation current needed and hook it up in series with the GPIO of the Pi’s output to the transistor’s base. Hook up the transistor’s emitter leg to ground. The collector leg will be hooked up to the coil’s low rail on the relay. The coil’s input of the relay will be hooked up to a supply of 5V. This is a generic transistor switch setup. When the Pi outputs

3.3V to the base of the transistor, the current will be enough (thanks to the calculated resistor value) to drive the transistor into saturation. This will then allow full current to flow from the 5V rail through the relay’s coil and then to ground. The coil in the relay will then actuate the mechanical switch controlling a higher load. Remember to incorporate a diode across the relay’s coil in order to eliminate back-EMF. This is a must as back-EMF has the potential to hurt the Pi.
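The resistor calculation described in the box can be sketched numerically. All of the component values below (coil current, base-emitter drop, forced beta) are typical illustrative assumptions, not measurements from this build:

```python
# Rough base-resistor sizing for a 2N3904-style transistor switch.
V_GPIO = 3.3      # Pi output high, volts
V_BE = 0.7        # typical base-emitter drop when saturated, volts
I_COIL = 0.070    # assumed 5V relay coil current, amps
FORCED_BETA = 10  # drive the base ~10x harder than beta alone needs

# Base current required to guarantee saturation
i_base = I_COIL / FORCED_BETA         # 7 mA
# Series resistor between the GPIO and the transistor base
r_base = (V_GPIO - V_BE) / i_base     # (3.3 - 0.7) / 0.007
print(round(r_base))                  # 371
```

In practice you would pick the nearest standard value at or below this (e.g. 330 ohms) so the base is driven slightly harder, keeping the transistor firmly in saturation.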


Tutorial Cassandra Use the database for large

data processing with Spark and Python

Cassandra: Processing data Mihalis Tsoukalos covers the essentials for talking to the distributed  database, Cassandra, using Spark and inserting data using Python.


Our expert Mihalis Tsoukalos

(@mactsouk) has  an M.Sc. in IT from  UCL and a B.Sc. in  Mathematics. He’s  a DB-admining,  software-coding,  Unix-using,  mathematical  machine. You can  reach him at www.

Cassandra is a database server created by the Apache Software Foundation and Spark is an engine for large-scale data processing that’s also created by Apache. This tutorial will teach you how to use the Spark shell to talk to Cassandra; how to use the Cassandra shell to insert data; how to use Python to insert data into Cassandra; and how to talk to Spark using Python. But first, you’ll learn how to install Cassandra and Spark, because there are some subtle points in their installation process. Although Cassandra is a distributed database, which means that a Cassandra cluster can have many nodes, this tutorial uses a single-node Cassandra cluster just to keep things simple. Cassandra, Spark and large-scale data processing are difficult subjects that need a lot of practice. However, after reading this tutorial and experimenting a little, these subjects shouldn’t be so obscure anymore. Before you install Cassandra please make sure that you have Java installed on your Linux machine. If not, execute:
$ sudo apt-get install default-jdk

Installing Cassandra

Quick tip Using the CQL Python module is not the only way to connect Cassandra and Python. You can also use the Cassandra Driver from DataStax. You can find information about the cql module at https://pypi.python.org/pypi/cql/1.0.4 and about the Cassandra driver at https://datastax.github.io/python-driver/installation.html.

As there’s not an official package for Cassandra on Ubuntu 16.04, which is the distro we’re using, you should manually install the necessary binary files:
$ sudo groupadd cassandra
$ sudo useradd -d /home/cassandra -s /bin/bash -m -g cassandra cassandra
$ wget cassandra/3.7/apache-cassandra-3.7-bin.tar.gz
$ sudo tar -xvf apache-cassandra-3.7-bin.tar.gz -C /home/cassandra --strip-components=1
$ cd /home/cassandra/
$ sudo chown -R cassandra.cassandra .
$ sudo su -l cassandra
cassandra:~ $ export CASSANDRA_HOME=/home/cassandra
cassandra:~ $ export PATH=$PATH:$CASSANDRA_HOME/bin
The first two commands create a group and a user account that will own the Cassandra files and processes, which offers greater system security. The third command downloads a binary distribution of Cassandra and the fourth command extracts the archive to the right place. The last two commands should be put in the .bashrc file of the user to ensure they are executed on log in. Most of the Cassandra-related commands are executed by the Cassandra user—you can tell that by the prompt used.


The next command starts the Cassandra server process:
cassandra:~ $ cassandra
...
INFO 07:25:44 Node localhost/ state jump to NORMAL
cassandra:~ $
By default, Cassandra runs as a background process. Should you wish to change this behaviour, use the -f switch when starting Cassandra. The database server will write its log entries inside the ./log directory, which will be automatically created the first time you execute Cassandra. If your Linux distribution (distro) has a ready-to-install Cassandra package you’ll find the log files at /var/log/cassandra. In order to make sure that everything works as expected execute cassandra:~ $ nodetool status . The previous command checks whether you can connect to the Cassandra instance or not using nodetool, which is used for managing Cassandra clusters. If you want to get a list of all commands supported by nodetool, you should execute nodetool help . The output of the nodetool status command shows the real status of the Cassandra node—the UN that’s in front of the IP of your node means that your node is up and running, which is a good thing.

This screenshot shows the list of commands that are supported by the Cassandra shell.

Cassandra Tutorial

About Apache Spark

The Spark Cassandra Connector creating a key space, a table and inserting data to the table from spark-shell.

You can also connect to the node using the cqlsh utility by executing cassandra:~ $ cqlsh . Please note that, by default, cqlsh tries to connect to a server on the local machine. You can find out where the Cassandra server processes are listening with:
cassandra:~ $ grep listening logs/system.log
INFO [main] 2016-08-16 11:46:18,603 Starting listening for CQL clients on localhost/ (unencrypted)...
Getting the following kind of error message when trying to use cqlsh means that there’s something wrong with the Python driver provided by the Cassandra installation:
Connection error: ('Unable to connect to any servers', {'': TypeError('ref() does not take keyword arguments',)})
In order to resolve this particular problem, you should do:
$ sudo pip install cassandra-driver
cassandra:~ $ export CQLSH_NO_BUNDLED=TRUE
The first command must run as root whereas the second command should be executed by the user that owns Cassandra. The new value of CQLSH_NO_BUNDLED tells cqlsh – which is implemented in Python – to bypass the Python driver bundled with Cassandra and use the external Cassandra Python driver you’ve just installed. The cqlsh utility supports a plethora of commands (pictured bottom, p76). Getting help for a specific command will either open a browser or display a text message with an external URL. You can stop the Cassandra process as follows:
cassandra:~ $ ps ax | grep cassandra | grep java | awk {'print $1'}
13543
cassandra:~ $ kill 13543
The first command finds the process ID of the Cassandra server process and the second command terminates it. You are done with installing Cassandra—you now have a single-node cluster, which is more than adequate for learning Cassandra.

Spark, which is the successor to Hadoop, is an engine for large-scale data processing. The engine also includes an SQL engine and supports stream processing, machine learning as well as graph processing. Software such as Spark needs to get its data from other sources—the good thing is that Spark can access a plethora of data sources including Cassandra, HDFS, HBase and Hive as well as any Hadoop data source. You can use Spark interactively from the Python, R and Scala shells, which also allows you to test and try new things while getting instant feedback before moving your

code onto a production system. You can try Spark using Python with the help of the pyspark executable, and using Scala with the spark-shell executable. The fundamental data structure of Spark is called the RDD (Resilient Distributed Dataset), which is an immutable distributed collection of objects. You can create an RDD by parallelising an existing collection or by referencing an external dataset. You can see the full Spark API by visiting docs/latest/programming-guide.html and download Spark from http://spark.

$ tar zxvf spark-1.6.1.tgz
$ cd spark-1.6.1/
$ ./build/sbt assembly
Please bear in mind that the ./build/sbt assembly command might take a while to finish. Spark can be used interactively from the Scala, Python and R shells. In order to make sure that everything works with your Spark installation, execute the following commands:
$ ./bin/pyspark 2>/dev/null
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.6.1
      /_/
Using Python version 2.7.11+ (default, Apr 17 2016 14:00:29)
SparkContext available as sc, SQLContext available as sqlContext.
$ ./bin/run-example SparkPi 30 2>/dev/null
Pi is roughly 3.142997333333333
You can also use your favourite web browser to see whether Spark is working as expected by pointing it to http://localhost:4040 while ./bin/pyspark is running. If everything is OK you will see an output similar to below.

Quick tip You can find more information on Cassandra at http://cassandra. Similarly you can learn more about the Spark project at http://spark.

Getting and Installing Spark Installing Spark is easier than installing Cassandra, despite the fact that you’ll need to build Spark from source:
$ wget spark-1.6.1.tgz

Spark offers a web interface with useful information about the current Spark installation which is accessed via your browser.




The insertData.py Python code shows how to insert data into Cassandra using Python and the cql module.

This web page also shows additional information about the Spark installation, including Spark jobs, stages, storage and executors. The following steps are required for making Spark available to all users on your Linux distro:
$ sudo mv spark-1.6.1 /usr/local/bin
$ vi ~/.bashrc
In the ~/.bashrc file you should add the following line, which adds the bin directory of Spark to the PATH variable:
export PATH="/usr/local/bin/spark-1.6.1/bin/:$PATH"

Inserting sample data into Cassandra

Quick tip Depending on the way you installed Cassandra, you might need to manually start the RPC server by executing nodetool enablethrift while Cassandra is running, and enable Cassandra connections on port number 9160.

The following commands, executed from the Cassandra shell, will create a new key space and two tables, and insert sample data into the two tables:
CREATE KEYSPACE IF NOT EXISTS lxf WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': '1'} AND durable_writes = true;
USE lxf;
CREATE TABLE issue214 (key int PRIMARY KEY, value text);
CREATE TABLE issue215 (key int PRIMARY KEY, value text);
INSERT INTO issue214 (key, value) VALUES (1, 'Linux');
INSERT INTO issue214 (key, value) VALUES (2, 'Magazine');
INSERT INTO issue215 (key, value) VALUES (1, 'Mihalis');
INSERT INTO issue215 (key, value) VALUES (2, 'Tsoukalos');
When you have a single-node cluster, it’s important to use SimpleStrategy instead of NetworkTopologyStrategy when creating a new key space. Additionally, the value of replication_factor controls the number of replicas that need to be updated for changes to take effect. A value of 1 means that when you write data to your cluster, it will be stored on only one node. You can then use cqlsh to verify that the data is there:
cqlsh:lxf> SELECT * FROM issue214;
cqlsh:lxf> SELECT * FROM issue215;
This project will be the equivalent of the “Hello World!” program. However, no “Hello World!” message will appear on your screen! But first you will need to install a connector that makes Spark able to communicate with Cassandra:
$ git clone

connector.git
$ cd spark-cassandra-connector/
$ ./sbt/sbt assembly
...
[info] Packaging /home/mtsouk/code/sparkCass/spark-cassandra-connector/spark-cassandra-connector/target/scala-2.10/spark-cassandra-connector-assembly-2.0.0-M1-19-ge3f1042.jar ...
...
What you get from executing the previous commands are two JAR files in a directory called target. One file is for Scala, which is the one that will be used, and one is for Java. You can find the Scala version inside ./spark-cassandra-connector/target/scala-2.10/:
$ cd spark-cassandra-connector/target/scala-2.10/
$ ls -l spark-cassandra-connector-assembly-2.0.0-M1-19-ge3f1042.jar
$ cp spark-cassandra-connector-assembly-2.0.0-M1-19-ge3f1042.jar ~
$ cd ~
In order to use the connector you’ll need to start the Spark shell:
$ spark-shell --jars ./spark-cassandra-connector-assembly-2.0.0-M1-19-ge3f1042.jar
...
scala>
You will now be able to access the Cassandra server using spark-shell :
scala> sc.stop
scala> import com.datastax.spark.connector._, org.apache.spark.SparkContext, org.apache.spark.SparkContext._, org.apache.spark.SparkConf
scala> import com.datastax.spark.connector.cql.CassandraConnector
scala> val conf = new SparkConf(true).set("spark.cassandra.", "")
scala> CassandraConnector(conf).withSessionDo { session =>
  session.execute("CREATE KEYSPACE lxf2 WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 1 }")
  session.execute("CREATE TABLE lxf2.issue217 (word text PRIMARY KEY, count int)")
  session.execute("INSERT INTO lxf2.issue217 (word, count) VALUES ('Linux Format', 123)")
}
You first have to stop the current Spark context and create a new one that will be connected to the local Cassandra server as specified by the conf variable. Next, you’ll need to import some necessary classes, create the conf variable and directly access the Cassandra server in order to execute three commands (pictured, top, p77).
The following is the proof that the previous code worked:
cassandra:~ $ cqlsh
Connected to Test Cluster at
[cqlsh 5.0.1 | Cassandra 3.7 | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh> use lxf2;
cqlsh:lxf2> SELECT * FROM issue217;
 word | count
--------------+-------


 Linux Format |   123
(1 rows)
If you’re going to use the Spark Cassandra Connector you should definitely visit its documentation and check the table for version compatibility issues.

Using Python to add data This section will teach you how to insert data into Cassandra using Python. The data is hard-coded inside the code but you can easily change the Python code to read data from other sources. First, you’ll need to install a Python module with:
# pip install cql
Before executing the Python script, you should already have a table called issue217 in the lxf2 key space. If not, create it as follows:
cqlsh:lxf2> CREATE TABLE issue217 (word text PRIMARY KEY, count int);
The Python code, saved as insertData.py, is the following:
import cql

con = cql.connect('', 9160, 'lxf2', cql_version='3.0.0')
print("Connected!")
cursor = con.cursor()
query = "INSERT INTO lxf2.issue217 (word, count) VALUES (:c1, :c2)"
values = [dict(c1="k1", c2=11), dict(c1="k2", c2=12),
          dict(c1="k3", c2=13), dict(c1="k4", c2=14)]
for oneValue in values:
    cursor.execute(query, oneValue)
The cql.connect() command is where you define the parameters of the connection, including the key space that will be used (lxf2). You then have to create a cursor that is used for interacting with Cassandra. Last, you use the cursor and a for loop to insert the desired data into Cassandra. You should now verify that insertData.py did its job correctly using cqlsh. (The running script as well as the interaction with cqlsh is pictured, top, p78.)

Counting lines using Spark You will now learn how to use Spark to count the lines of a text file that contain a given string. But first you will need to install another Python module:
$ sudo pip install py4j
The Python code is:
import os
import sys

About Apache Cassandra Cassandra is a consistent, distributed key-value store. Being a distributed key-value store essentially means that Cassandra fetches its data from multiple locations, which improves the software’s performance. Cassandra offers an interactive shell called cqlsh. Executing cqlsh will show the following information:
$ cqlsh
Connected to Test Cluster at
[cqlsh 5.0.1 | Cassandra 3.7 | CQL spec

3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh>
You should use CQL (Cassandra Query Language) to interact with a Cassandra database; it allows you to insert data, make queries and define schemas. However, software such as Cassandra is usually used in combination with other kinds of software, such as Spark, because Cassandra doesn't have much in the way of processing capabilities.

informing the program where to find the pyspark Python module that comes with Spark. For the Spark installation used in this tutorial, pyspark can be found at /usr/local/bin/spark-1.6.1/python/pyspark. (See the script running below; the generated result is verified using grep.)
The biggest advantage of Cassandra over other NoSQL databases is that it supports an SQL-like language, which means that you will not need to learn a completely new query language. Additionally, it’s a highly scalable and highly available database with no single point of failure, while also being easy to learn. The fact that it's a NoSQL database means that its schema can easily change without downtime. Cassandra is also very fast as most operations happen in memory; in order to avoid data loss Cassandra keeps a commit log. Cassandra is excellent at handling real-time data and time-series data. It’s not a coincidence that Twitter, Digg and Facebook all use Cassandra!
If you are already familiar with a relational database, you will need some time to get used to a NoSQL database. Additionally, you will need to write some test applications before using Cassandra in production. Cassandra doesn’t support joins, which means that joins must be implemented programmatically by the developer. Recovery from a failure must be done manually using nodetool. Last, Cassandra doesn’t support atomic operations, which means that a failed transaction might leave traces. However, as all Cassandra operations are idempotent, you can retry the same operation until it succeeds without any side effects. As always, the best way to evaluate Cassandra is by writing applications that use it. LXF
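That last point, retrying an idempotent operation until it succeeds, can be sketched generically in Python. This helper is our own illustration and is not part of any Cassandra driver:

```python
import time

def retry(operation, attempts=3, delay=0.0):
    """Run an idempotent operation until it succeeds or the attempts
    run out; because it is idempotent, re-running it after a partial
    failure has no extra side effects."""
    for i in range(attempts):
        try:
            return operation()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)

# Simulate an insert that fails twice, then succeeds.
state = {'calls': 0}
def flaky_insert():
    state['calls'] += 1
    if state['calls'] < 3:
        raise RuntimeError('node briefly unavailable')
    return 'inserted'

print(retry(flaky_insert))  # 'inserted', on the third attempt
```

In a real client the operation would be a cursor.execute() call such as the INSERT statements shown earlier.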

Next issue: Get into Wordpress

if 'SPARK_HOME' not in os.environ:
    os.environ['SPARK_HOME'] = '/usr/local/bin/spark-1.6.1'
if '/usr/local/bin/spark-1.6.1/python' not in sys.path:
    sys.path.insert(0, '/usr/local/bin/spark-1.6.1/python')

from pyspark import SparkContext

logfile = "/var/log/syslog"
sc = SparkContext("local", "LXF App")
data = sc.textFile(logfile).cache()
count = data.filter(lambda s: 'error' in s).count()
print "Lines containing error: %i" % count
You'll note that the first thing that the Python code does is

The script shows how to use Python to interact with Spark. You should execute it by using spark-submit.
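If you want to see what the Spark job computes without a Spark installation, the same filter-and-count logic can be expressed in plain Python, with an in-memory list standing in for the RDD (the sample log lines are invented for illustration):

```python
# What data.filter(lambda s: 'error' in s).count() does, minus Spark:
lines = [
    "Aug 16 11:46:18 pi kernel: all good",
    "Aug 16 11:46:19 pi ntpd: error: no reply",
    "Aug 16 11:46:20 pi app: another error logged",
]
count = sum(1 for s in lines if 'error' in s)
print("Lines containing error: %i" % count)  # Lines containing error: 2
```

The difference with Spark is that the filter runs in parallel across partitions of the dataset, which is what makes the same one-liner scale to files far larger than memory.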


Tutorial NTP and GPS Configure a GPS HAT to provide

time synchronisation for NTP

GPS: Precise timekeeping

Sean Conway follows up last issue’s tutorial on installing a GPS HAT by configuring the setup to produce a signal for use by NTP for accurate time.
and hence determines the precise time based on the four (or more) corrected time signals. Using both the GPS data stream and the PPS signal, NTP can achieve time accuracy in the microseconds. (If you’re thinking this couldn’t be quite as precise as taking the time directly from something super accurate, such as a caesium clock – like the clock inside a GPS satellite itself – then you’d be correct.) That’s why a GPS receiver is classed as a “stratum 2” time source in the diagram representing the process (pictured below). With an operational GPS receiver attached to a Pi and the GPS knowledge gained from our tutorial in LXF215, let’s use the GPS HAT to produce the signals that can be used by NTP for time support. To be clear, for this tutorial you will require a Pi computer hosting an Adafruit GPS HAT receiver (so if you missed last issue, turn to page 64 and order a copy now).
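As a back-of-the-envelope illustration of the signal travel time being corrected for (the 20,200km figure is the nominal GPS orbital altitude, general knowledge rather than something from this article):

```python
# Time for a GPS signal to travel straight down from orbit.
C = 299_792_458.0        # speed of light in m/s
ALTITUDE_M = 20_200e3    # nominal GPS orbital altitude, metres

travel_time = ALTITUDE_M / C
print("%.1f ms" % (travel_time * 1e3))  # roughly 67.4 ms
```

Tens of milliseconds of flight time (and more for satellites low on the horizon) is enormous next to the microsecond accuracy NTP is after, which is why the receiver must correct each satellite's time signal before combining them.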

Our expert

Sean D Conway had formal training in electronics and half a career spent in aviation, so he really knows where he is with implementing a GPS receiver on a Pi.

Let’s talk to each other

Quick tip A mismatch in the file checksum can be the result of someone tampering with the software tarball or a bad download. It is not recommended to use a tarball without a matching checksum.
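The manual cat/md5sum comparison used later in this tutorial can also be scripted. Here is a minimal Python sketch using the standard hashlib module (the demo filename and contents are illustrative, not the NTP tarball):

```python
import hashlib

def md5sum(path, chunk_size=65536):
    """Return the hex MD5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, expected):
    """Compare a computed checksum against the published one."""
    return md5sum(path) == expected.lower()

# Illustrative check against a known digest (the MD5 of b"hello").
with open('demo.bin', 'wb') as f:
    f.write(b'hello')
print(verify('demo.bin', '5d41402abc4b2a76b9719d911017c592'))  # → True
```

For the real tarball you would pass ntp-4.2.8p3.tar.gz and the digest read from the .md5 file.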

Here at LXF, we're always trying to keep up with the times. Last issue we showed you how to add GPS capabilities to your Raspberry Pi by pairing it with an Adafruit Ultimate Global Positioning System (GPS) HAT, enabling you to be up-to-the-minute in knowing precisely where you are. As it happens, GPS receivers are useful for more than simply finding your location. (Who'd have thought it?) In fact, GPS receivers can be used by the NTP daemon for ultra-precise time synchronisation. Keeping up with the times, right? Network Time Protocol (NTP) is a protocol used to distribute time over a network. Time servers requiring the highest possible accuracy can achieve this goal by using a GPS receiver as a primary reference source. GPS receivers that are used to support precise timekeeping are configured to provide a pulse per second (PPS) signal, which doesn't provide the time per se but indicates the instant a second starts. This is combined with the satellite metadata received by the GPS receiver – which includes position and velocity information from a minimum of four satellites as well as a time signal from each. The speed of radio waves is known, so the GPS receiver takes this position and velocity information, corrects for how long each radio signal has taken to reach it,


In order for the Pi to support NTP time synchronisation, the configuration we used to get the GPS HAT up and running will need to be tweaked for NTP optimisation. Using your favourite (vi replacement) text editor, modify the /boot/config.txt file to contain the following configuration options:
force_turbo=1
dtoverlay=pps-gpio,gpiopin=4
The Pi OS kernel has the cpufreq driver enabled by default. This driver raises and lowers the processor frequency depending on the processor load: when the load pushes the CPU above baseline, turbo kicks in. This dynamic clocking is supposed to reduce heat and thus help extend the life of the ARM chip, but it can be a detriment to providing accurate time, so we need to disable it. That's what the first line here does.

A journey so time can be synchronised.

A faster TTFF

The GPS chip needs to know the time and the location of each satellite to establish a fix. The satellite sends information about its location and time in data sentences. In order for the GPS to retrieve the information, the satellite signal must be of a specific strength for a specific duration. If the signal is degraded because of obstructions while the unit is trying to get a fix, the data is lost and the GPS unit must start over. To assist the GPS receiver in locating satellites, the manufacturers make ephemeris data available from their FTP site. The data can be downloaded and then uploaded into the GPS unit's flash memory. When the GPS unit starts, it has information on the satellites that helps the receiver establish time to first fix (TTFF) more quickly.

The second line causes the loader to look for the pps-gpio module when the OS boots. The same line also defines the GPIO pin used for the PPS signal; the pin used can differ depending on the GPS vendor. There's a schematic drawing of the Adafruit Ultimate GPS HAT we're using (pictured overleaf, p82), in which you will see the reference to the 1PPS pin output. If PPS output is not going to be used, a bridging trace pad on the HAT printed circuit board can be cut in order to disconnect the PPS signal and free GPIO pin 4 for other uses. Reboot the Pi to ensure that the configuration change works and is used on initialisation:
sudo shutdown -r now
Confirm that the kernel modules to support the PPS signal have been loaded:
sudo lsmod | grep pps
pps_gpio 2555 0
pps_core 7092 1 pps_gpio
Next, in order for the GPS receiver's PPS signal to be used in helping to keep accurate time, some additional Pi software will be required, so run the following:
sudo apt-get install pps-tools
When done, execute the following test command to confirm that the once-per-second transition of the PPS signal is now available. The output should look like that below.
sudo ppstest /dev/pps0
trying PPS source "/dev/pps0"
found PPS source "/dev/pps0"
ok, found 1 source(s), now start fetching data...
source 0 - assert 1456608486.008323610, sequence: 308
clear 0.000000000, sequence: 0
source 0 - assert 1456608487.008326578, sequence: 309
clear 0.000000000, sequence: 0
source 0 - assert 1456608488.008330692, sequence: 310
clear 0.000000000, sequence: 0
^C ← Press Ctrl+C to break out of the command
Before jumping into compiling the NTP daemon itself, we need to ensure that the GPS unit will provide the correct input for NTP to use. To do this, we'll revisit a few of the tools we introduced in LXF215 and use them to reconfigure the GPS device to provide the data sentence required for time reference, along with the receiver's position coordinates and time data.
The unit already uses this information to help it obtain a faster time to first fix (TTFF), as explained in the box above. We simply need to configure it to share this information.
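The assert values in the ppstest output earlier are Unix epoch seconds (with a nanosecond fraction). Converting one to a human-readable UTC time is a one-liner in Python; a quick sketch using the first timestamp from that output:

```python
from datetime import datetime, timezone

# First PPS assert timestamp from the ppstest output (seconds.nanoseconds).
ts = 1456608486.008323610

# The whole-seconds part carries the date and time; the fraction is the
# sub-second offset of the pulse edge.
utc = datetime.fromtimestamp(int(ts), tz=timezone.utc)
print(utc.strftime('%Y-%m-%d %H:%M:%S'))  # → 2016-02-27 21:28:06
```

The ~8 ms fraction on each assert line is the measured offset of the pulse within that second, which is exactly what NTP uses to discipline the clock.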

First, find the latitude and longitude in decimal degrees for your location. An easy way to do this is to record the latitude and longitude readings from the GPS toolset's gpsstatus program. You will also need to note the UTC time the readings were taken. In our case the coordinates are 49° 48' 10.49" N 097° 06' 46.01" W at 0259 UTC. Paste the lat and long values into Google Maps, right-click on the pin icon that is shown on the map and select the menu option 'What's here?', then record the latitude and longitude in decimal degrees shown in the lower part of the screen.
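The degrees/minutes/seconds-to-decimal conversion that Google Maps performs here is simple arithmetic, so you can also do it yourself. A small Python sketch, using the coordinates from the text:

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds to decimal degrees.

    Southern and western hemispheres are negative by convention.
    """
    dd = degrees + minutes / 60.0 + seconds / 3600.0
    return -dd if hemisphere in ('S', 'W') else dd

# 49° 48' 10.49" N, 097° 06' 46.01" W from the gpsstatus readings.
lat = dms_to_decimal(49, 48, 10.49, 'N')
lon = dms_to_decimal(97, 6, 46.01, 'W')
print('%.6f,%.6f' % (lat, lon))  # → 49.802914,-97.112781
```

Note that these values differ slightly (in the fourth decimal place) from the ones read off Google Maps in the text; at this precision a small disagreement between the pin position and the raw reading is to be expected.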

Create a config file

Now, using your favourite (vi replacement) text editor, create a configuration file, /etc/gpsinit_loc.conf. Construct a line in the file that contains the decimal degrees and UTC time. The syntax structure is as follows:
-l 49.802919,-97.112889,0259 -t
Save and close the file. Now, in the same directory in which you uncompressed the tarball, edit the gpsinit_reset.conf file to reflect the changes shown below.
# Now set to 115200
setspeed 9600
# Set NMEA Sentence Output
PMTK314,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
Save and close the file. Now initialise the GPS unit with the mt3339 command, using the option file we modified:
sudo ./gpsinit -s 9600 -f gpsinit_reset.conf /dev/ttyAMA0
The gpsinit_loc.conf file is called within this script. If you watch the messages flow by, you should see the values you entered being used. The GPS unit is now chugging away with the required data, so next we need to establish an NTP daemon that supports the PPS signal from the receiver.

HATs on your Pi for this tutorial.

Quick tip The time server is made available through the author's ISP. You will need to substitute an alternative source supported in your location.

NTP reference clock drivers

The NTP configuration file options use the type 20 (NMEA) reference clock driver. The mode switch tells the driver which NMEA sentence is being used for time; in the example, mode 0x11 means data sentence $GPRMC with a line speed of 9600. It is recommended that only one NMEA sentence per second be used. $GPRMC or $GPGGA are used most often; don't enable both, because doing so can cause problems with the driver. The option flag1 1 enables PPS signal processing. By default PPS is disabled (flag1 set to 0). PPS is supported for NTP in the NMEA type 20 reference clock driver.
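The mode word packs both choices into one integer. Here is a small Python sketch decoding it, assuming the driver 20 layout in which bits 0–3 select the NMEA sentence and bits 4–6 the line speed (check the driver documentation for your NTP version before relying on this):

```python
# Sentence-select bits and speed field per the assumed NMEA driver layout
# (bit 0 = $GPRMC, bit 1 = $GPGGA, and so on).
SENTENCES = {0: '$GPRMC', 1: '$GPGGA', 2: '$GPGLL', 3: '$GPZDA'}
SPEEDS = [4800, 9600, 19200, 38400, 57600, 115200]

def decode_mode(mode):
    """Return (enabled sentences, line speed) for an NMEA mode word."""
    sentences = [name for bit, name in SENTENCES.items() if mode & (1 << bit)]
    return sentences, SPEEDS[(mode >> 4) & 0x7]

print(decode_mode(0x11))  # → (['$GPRMC'], 9600)
```

Decoding 0x11 this way agrees with the text: $GPRMC at 9600 baud.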



Next issue: Multi-boot USB drive

Think of the ntpq -p command as "print NTP query".

The GPS receiver's PPS signal is not often used with NTP, and for this reason the default NTP software installation doesn't provide support for PPS. In order to obtain PPS support, the existing Pi NTP software will need to be removed and NTP software with PPS support installed. The Pi kernel supports the PPS signal, so when the replacement NTP software is compiled on the Pi (look for checking ATOM PPS interface... yes during configure), PPS signal support will be available. So first we will remove the old software and then compile the replacement:
sudo service ntp stop
sudo apt-get remove ntp

Quick tip The Nottingham Linux Users Group has a great link if you are looking for details on the output of the ntp query tool: http:// uk/2012/01/ ntpq-poutput/831
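If you don't want to keep a browser tab open, the tally character in the first column of ntpq -p output can be decoded with a small lookup table. A Python sketch, with meanings as commonly documented for ntpq (treat it as a convenience, not a reference):

```python
# Tally codes printed in column one of `ntpq -p` output.
TALLY = {
    ' ': 'discarded (no contact or failed sanity checks)',
    'x': 'falseticker (discarded by the intersection algorithm)',
    '-': 'outlier (discarded by the cluster algorithm)',
    '+': 'candidate (included in the final selection set)',
    '*': 'system peer (current primary reference)',
    'o': 'PPS peer (synchronisation derived from a PPS signal)',
}

def explain(line):
    """Explain the tally code of one `ntpq -p` peer line."""
    return TALLY.get(line[:1], 'unknown tally code')

print(explain('o'))  # → PPS peer (synchronisation derived from a PPS signal)
```

Feeding it the peer lines from the screenshot would flag the server as the system peer (*) and the PPS source as the sync peer (o), matching the description in the main text.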

If you wish to use GPIO pin 4 for other purposes, cutting the printed circuit board trace is required.

The right NTP

Create a directory that will be used to hold the software download and also during the compile process. Download the latest NTP software tarball and its checksum from the NTP archive:
sudo wget
sudo wget
Confirm that the checksum in the .md5 file matches the checksum generated for the NTP software downloaded:
sudo cat ntp-4.2.8p3.tar.gz.md5
b98b0cbb72f6df04608e1dd5f313808b ntp-4.2.8p3.tar.gz
sudo md5sum ntp-4.2.8p3.tar.gz
b98b0cbb72f6df04608e1dd5f313808b ntp-4.2.8p3.tar.gz
Yes, a match. Now unroll the tarball:
sudo tar -zxvf ntp-4.2.8p3.tar.gz
Then change to the directory named in the tarball with cd ntp-4.2.8p3. Now, this will come as a shock, but things don't always go smoothly, even for us at LXF. During the make step, we encountered an error: /usr/bin/ld: cannot find -lcap

Our install was missing a library, libcap-dev. The solution was revealed in a forum post by Dougie Lawson of Basingstoke, UK:
sudo apt-get install libcap-dev
Now compile a new build of the NTP software:
sudo ./configure --enable-linuxcaps
sudo make -j5
sudo make install
sudo cp /usr/local/bin/ntp* /usr/bin/; sudo cp /usr/local/sbin/ntp* /usr/sbin/
Make the GPS data available to NTP:
sudo ln -s /dev/ttyAMA0 /dev/gps0
Using your favourite (vi replacement) text editor, modify the NTP configuration file /etc/ntp.conf by adding the following lines:
# exchange time but don't allow any modification to be made
restrict -4 default kod notrap nomodify nopeer noquery
# time source using GPS supporting the PPS signal
server mode 0x11 minpoll 4 maxpoll 4
fudge flag1 1 refid NEMA stratum 15
server minpoll 4 maxpoll 4
fudge refid PPS stratum 1
server prefer
Save the file when complete and start/restart the NTP daemon to load the updated configuration file:
sudo /etc/init.d/ntp restart
After a minute of the daemon running, examine the output using the following commands to determine the clock status:
sudo ntpq -p
sudo ntpq -c rv
See the screenshot (at the top of this page). The server is being used as the primary reference (*) and system sync is derived from the PPS peer (o). The time is stratum 2 referenced (stratum 0 being GPS, PPS being stratum 1, and the server being stratum 2). If you completed the first of these two tutorials, you will have established a GPS receiver on a Raspberry Pi, with a configuration that could obtain GPS lat and long information. With the simple modification of the GPS receiver configuration explained in this tutorial, a PPS output signal is provided that can be used by the NTP daemon on the Pi for time synchronisation. Now you're on time! LXF




Phoenix: how to build a blog

Mihalis Tsoukalos explains all you need to know to start creating websites  using Phoenix and shows you how to build a basic blog site from scratch.

Our expert

Mihalis Tsoukalos (@mactsouk) is a Unix administrator, a programmer, a DBA and a mathematician who enjoys writing articles and learning new things.

Figure 1: This is the default web page of an empty Phoenix project, as defined inside the ./web/templates/page/index.html.eex file.


Quick tip Ecto is a package for writing queries and interacting with databases in Elixir and hence Phoenix. You can find more about it at https:// ecto/Ecto.html and at https:// Ecto currently supports PostgreSQL, MySQL, MSSQL, SQLite3 and MongoDB.

Phoenix is a web framework written in Elixir that works similarly to Rails but is faster and more scalable. This tutorial will show you how to create a simple blog site using Phoenix and some Elixir code. You can find more about Phoenix on its website, but don't spend too much time on reading, because the correct way to learn Phoenix is by writing applications! Before installing Phoenix you will need to install some additional packages to create a proper development environment and make your life easier:
$ mix local.hex
$ ps ax | grep -i postgr
The first command installs Hex, which is the package manager of Elixir. The second one confirms that PostgreSQL is both installed and up and running; if not, please install it and start the PostgreSQL server process. You can verify that you can successfully connect to the running PostgreSQL server process as follows:
$ sudo -u postgres psql
psql (9.5.3)
Type "help" for help.
postgres=#


If PostgreSQL is not running for some reason, you will get an error message similar to the following:
$ psql
psql: could not connect to server: No such file or directory
Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Then, you can use mix to install the Phoenix project generator:
$ mix archive.install archives/raw/master/phoenix_new.ez
You can find the version of Phoenix you're using as follows:
$ mix phoenix.new -v
Phoenix v1.2.1
Although it is not absolutely necessary to install and use Node.js for your Phoenix projects, Node.js can be very helpful and is used by Phoenix, so visit the Node.js website and install it before continuing with the tutorial. You can find your Node.js version as follows:
$ node --version
v4.2.6
Enough with the dependencies; it is now time to start using Phoenix to create a simple website.

Phoenix project

This project will be the equivalent of the elementary "Hello World!" program that everyone tries out when starting to learn to code. First, you will have to create a new Phoenix project using mix as follows:
$ mix phoenix.new hw
...
Fetch and install dependencies? [Yn] Y
...
$ cd hw

The mix tool

The Elixir tool for creating and maintaining projects is called mix. Each Elixir project created by mix, including projects that use Phoenix, contains a file named mix.exs that is used for configuring your project. The mix.exs file is divided into three parts, named "project", "application" and "deps", which are also valid Elixir functions. The "project" part is where you put all project-related information, such as the project name and version. The "application" part is for information describing the application you are developing. The "deps" part contains the most important information

because it is where the dependencies of the project are listed – the mix tool will either download all dependencies based on the information found in the “deps” function or give you the details you need to get them yourself. Usually, you will not have to deal with the mix.exs file when developing Phoenix projects because the mix tool does all the work for you. The mix clean command erases the generated application files and allows you to start with a fresh project. The mix compile command compiles your Elixir project. The mix run command executes your project, whereas the

$ mix ecto.create
$ mix phoenix.server
The first command creates a new Phoenix project in the current directory, named hw. After executing mix ecto.create you might get some error messages that have to do with the database connection; in that case, edit ./config/dev.exs and make sure that you put the correct username, password and database information near the end of the file. However, for this simple project, this is not necessary. The mix phoenix.server command starts the Phoenix router, which you can also start with the interactive Elixir shell:
$ iex -S mix phoenix.server
The previous command starts an HTTP server that usually listens on port 4000 and is useful for testing your application. The good thing is that it also displays information about user requests:
$ mix phoenix.server
[info] GET /
[debug] Processing by Hw.PageController.index/2
Parameters: %{}
Pipelines: [:browser]
[info] Sent 200 in 24ms
So, if you visit http://localhost:4000/ you will see Figure 1. The HTML code for displaying Figure 1 is hard-coded inside the ./web/templates/page/index.html.eex file; should you wish to display something different, you should carefully change the contents of this file. At this point, you usually define your routes, which you can think of as the paths supported by your web application, and write the necessary code so that each URL displays the right output. Because this is a simple project, it will support just one path, which will only display a simple message, so the contents of router.ex need not be changed. After learning the basics, it is about time to develop something practical using Phoenix.

mix test command executes the tests of the project – by default, mix generates some dummy tests that are difficult to fail; should you wish to have real tests, you will have to write your own. There are plenty more; the mix --help command will show you all supported mix commands. Should you wish to get information about a specific command, you can simply execute mix help <command_name>. Please note that it is highly recommended to create your Phoenix projects using the mix tool. You can find more information about mix at

$ mix do deps.get, compile
$ mix local.hex
Once again, you will need to edit ./config/dev.exs and insert the correct PostgreSQL information, which will be the subject of the next section.

Talking to a database

Phoenix uses PostgreSQL by default; although it is possible to use another database server if required, it is wise to stay with PostgreSQL because Phoenix has better support for PostgreSQL than for the alternatives. For the purposes of this tutorial the name of the database will be "LXF" and the user will be "LXFuser" with a password of "aPassword" – you will learn more about the tables that are going to be created in a while. The next thing you should do is execute the following commands from the PostgreSQL shell:
mtsouk=# CREATE USER LXFuser WITH PASSWORD 'aPassword';
CREATE ROLE
mtsouk=# CREATE DATABASE LXF;
CREATE DATABASE
mtsouk=# GRANT ALL PRIVILEGES ON DATABASE LXF to LXFuser;
GRANT
mtsouk=# ALTER USER LXFuser CREATEDB;
ALTER ROLE
Now put the correct information inside ./config/dev.exs. That's it for the database-related things, so we can now continue building the project:

Blog it

Figure 2: This shows how to connect to PostgreSQL to query the table that holds the data for the blog posts (of which we have two).

You will now learn how to create a blog site using Phoenix. In order to create a Phoenix project named "blog", you will need to execute the following commands:
$ mix phoenix.new blog
...
Fetch and install dependencies? [Yn] Y
...
$ cd blog




Quick tip You can find the documentation of the Phoenix project at https:// phoenix/Phoenix. html and learn more about Elixir at

Figure 3: The left-hand window shows the list of all available posts whereas the right-hand window shows the web page used for adding new blog posts.

$ mix ecto.create && mix ecto.migrate
$ npm install
$ mix phoenix.server
The npm install command installs the Node.js dependencies, whereas the mix ecto.create && mix ecto.migrate commands create and migrate your database. This time the mix phoenix.server command should generate no database-related errors. The next command will execute a Phoenix generator that will create some things for us:
$ mix phoenix.gen.html Post posts title:string body:text
...
Add the resource to your browser scope in web/router.ex:
resources "/posts", PostController
Remember to update your repository by running migrations:
$ mix ecto.migrate
What we did here is declare the name of a web page in both singular (Post) and plural (posts) forms, as well as its fields (title, body) along with their types (string, text). As the output of the previous command suggests, it is time to add a new route. Routing is the process Phoenix performs so that each HTTP request is served by the appropriate Elixir code. You will need as many routes as your project has static web pages; if you have dynamic pages, then you will need fewer routes. The file with the routing information for the blog project is ./web/router.ex. At this point, you will need to add just one route:
$ diff router.ex router.ex.orig
20d19
< resources "/posts", PostController
After this, you will need to execute the following command for the changes to take effect in the PostgreSQL part:
$ mix ecto.migrate
The following command shows the routing list of our project:
$ mix phoenix.routes
Compiling 9 files (.ex)
Generated blog app
page_path GET / Blog.PageController :index
post_path GET /posts Blog.PostController :index
post_path GET /posts/:id/edit Blog.PostController :edit
post_path GET /posts/new Blog.PostController :new
post_path GET /posts/:id Blog.PostController :show
post_path POST /posts Blog.PostController :create
post_path PATCH /posts/:id Blog.PostController :update
PUT /posts/:id Blog.PostController :update

post_path DELETE /posts/:id Blog.PostController :delete
You now have a fully working blog site and you are allowed to stop here. Figure 2 (previous page) shows some of the contents of the table that holds the data for the blog posts, after we've added two posts. But why not go further? The following section will briefly show how to add support for comments.

Adding comments

Making it possible to add comments to your blog posts is not as difficult as it might sound. First you will need to execute two commands:
$ mix phoenix.gen.model Comment comments name:string content:text post_id:references:posts
* creating web/models/comment.ex
* creating test/models/comment_test.exs
* creating priv/repo/migrations/20160813080840_create_comment.exs
Remember to update your repository by running migrations:
$ mix ecto.migrate
$ mix ecto.migrate
Compiling 1 file (.ex)
Generated blog app
11:08:55.467 [info] == Running Blog.Repo.Migrations.CreateComment.change/0 forward
11:08:55.467 [info] create table comments
11:08:55.472 [info] create index comments_post_id_index
11:08:55.474 [info] == Migrated in 0.0s
The post_id:references:posts part of the first mix command tells Phoenix how a comment should reference a blog post in the database. You will now need to edit ./web/models/comment.ex and make the following changes:
$ diff comment.ex comment.ex.orig
7c7
< belongs_to :post, Blog.Post, foreign_key: :post_id
---
> belongs_to :post, Blog.Post
12,14d11
< @required_fields ~w(name content post_id)
< @optional_fields ~w()
<
18,20c15,18
< def changeset(model, params \\ %{}) do
< model
< |> cast(params, @required_fields, @optional_fields)
---
> def changeset(struct, params \\ %{}) do
> struct
> |> cast(params, [:name, :content])
> |> validate_required([:name, :content])
23d20
<
Lastly, you will need to edit ./web/models/post.ex to let it know that it supports multiple comments:
$ diff post.ex post.ex.orig
3d2
< import Ecto.Query
8d6


Basic PostgreSQL administration

PostgreSQL is a capable DBMS that can serve a large variety of applications. PostgreSQL offers psql, which allows you to interact with it from a terminal and perform most tasks. If you execute psql without any parameters you will most likely get an error message similar to the following:
$ psql
psql: FATAL: database "mtsouk" does not exist
You can list all available PostgreSQL databases as follows, which is usually the first command you will need to execute:
$ psql -l
Then, you can specify the database you want to connect to as follows:
$ psql -d postgres
psql (9.5.3)
Type "help" for help.
postgres=#
You can find the version of PostgreSQL you're using by executing this command under psql:
postgres=# SELECT version();
You can create a new PostgreSQL user called "LXF" as follows:
postgres=# CREATE USER LXF WITH PASSWORD 'aPassword';

CREATE ROLE
Alternatively, you can use the createuser command-line utility, which is provided by PostgreSQL, to create a new PostgreSQL user. You can create a new database as follows:
postgres=# CREATE DATABASE phoenix;
CREATE DATABASE
Alternatively, you can use the createdb command-line utility, which is provided by PostgreSQL, to create a new database. You can give an existing user full access to an existing database as follows:
postgres=# GRANT ALL PRIVILEGES ON DATABASE phoenix to LXF;
GRANT
postgres=# ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON tables TO LXF;
ALTER DEFAULT PRIVILEGES
postgres=# ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, USAGE ON sequences TO LXF;
ALTER DEFAULT PRIVILEGES
You can now connect to the phoenix database as user "lxf" as follows:
$ psql -d phoenix -U lxf -W -h

< has_many :comments, Blog.Comment
13,15d10
< @required_fields ~w(title body)
< @optional_fields ~w()
<
19,28c14,17
< def changeset(model, params \\ %{}) do
< model
< |> cast(params, @required_fields, @optional_fields)
< end
<
< def count_comments(query) do
< from p in query,
< group_by:,
< left_join: c in assoc(p, :comments),
< select: {p, count(}
---
> def changeset(struct, params \\ %{}) do
> struct
> |> cast(params, [:title, :body])
> |> validate_required([:title, :body])
Now execute mix ecto.migrate from the root directory of your Phoenix project. You will now need to change the routing table (./web/router.ex) and add an Elixir function (:add_a_comment) in order to implement comments:
$ diff router.ex router.ex.orig
20,22d19
< resources "/posts", PostController do
< post "/comment", PostController, :add_a_comment
< end
Now you should edit ./web/controllers/post_controller.ex – please see the provided files on the LXFDVD for the final version. Put simply, you create a new plug, implement the add_a_comment function and make a small change to the existing implementation of the show function.

psql (9.5.3)
Type "help" for help.
phoenix=>
With a little help from SQL, you can create a new table:
phoenix=> CREATE TABLE test_table (
id bigserial primary key,
name varchar(20) NOT NULL,
comments text NOT NULL,
date_added timestamp default NULL );
CREATE TABLE
You can find out more about the fields of an existing table as follows:
phoenix=> \d+ test_table;
You can delete the contents of an existing table without deleting the actual table as follows:
postgres=# TRUNCATE table_name;
You can delete an entire table, including its contents, as follows:
phoenix=> DROP TABLE test_table;
DROP TABLE
Finally, you can delete an entire database, including all its tables and their contents, as follows:
postgres=# DROP DATABASE phoenix;
DROP DATABASE

Figure 4: This shows how our blog site displays an existing blog post, including its comments.

You will now need to create ./web/templates/post/comment_form.html.eex, which will be the web page for writing comments. Then, you need to make a change to ./web/templates/post/show.html.eex to turn on comments. Now, create ./web/templates/post/comments.html.eex, which will be used for displaying the comments, and make it active inside ./web/templates/post/show.html.eex. Then, you will need to make sure that the final versions of ./web/models/post.ex, ./web/controllers/post_controller.ex and ./web/templates/post/index.html.eex you are using are the same as the ones from the provided source code files on the LXFDVD. Now that you are done with the development of the blog site, you can start using it. Figure 3 shows the home page of the site as well as the web page for creating new blog posts, while Figure 4 shows the page that lists the contents of a blog post, including its comments. Although the blog site is far from complete, it works without requiring you to write too much Elixir code! Should you wish to improve it, you could add user support and the ability to add images to your blog posts. The last step would of course be to deploy your website to a web server for the world to use and enjoy, but the details of how to do this are beyond our brief for this tutorial. LXF

Next issue: Enigma machine



Jenkins: Create a CI pipeline

Ramanathan Muthaiah explores the basics of accessing Jenkins via Python, which opens up a whole new world of opportunities.


Our expert

Ramanathan Muthaiah began his career in the mid-90s, flirting with legacy Unix systems. After assembling a PC on a shoestring budget, running Slackware on a 486 processor, and getting very excited, the Unix fever has never gone away.

Quick tip Code snippets used in this article are available here, https:// mramanathan/ apache_buildmon/ tree/master/V2. Readers can access the complete source code for citool.py and ciproject.py along with an outdated readme.

Continuous integration (CI), to quote Martin Fowler, is a "…software development practice where members of a team integrate their work frequently… leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible." Substantiating how this practice helps, he further adds: "Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly."
Various CI tools are available in the market – both open source (eg Jenkins/Hudson and Travis) and commercial (eg TeamCity and Bamboo). Each tool has several pros and cons, along with its ecosystem of plugins for monitoring and for integration with various tools for source code management (eg Git), bug tracking (eg Jira) and code review (eg Gerrit).
In this article, we'll focus on developing a Python-based tool from the ground up for monitoring various parameters of a CI pipeline. For the sake of brevity and discussion, we'll call this tool citool.py, and to build it we'll consider projects hosted at the Apache Build Infrastructure. Note: prior programming experience in Python will be handy, and the code examples in this tutorial use Python v2.7.3. To achieve the use-cases we've outlined, we've used a handful of Python modules from the standard library, such as urllib2, logging, sys, collections, json and time.
Various projects hosted at the Apache Build Infrastructure (ABI) have their CI driven by Jenkins 2.7.2 (as of September 2016). To start with, let's try and list some basic use-cases for this tool. In addition, as in any command-line tool, it would be nice to have a few more options, like verbosity.
Dump For listing all the projects hosted in ABI.
Query For listing those projects in ABI that match a specific string.
Show For displaying basic information on a specific ABI project, as requested by the user. The information may include the status of the last build; what event triggered the last build; at what time the build was triggered; and the status of the last ten builds (a basic indicator of the project's health).
Option For turning the proxy on and off.
Verbose Desired level of debug output.
In this article, we'll focus on exploring Jenkins' Python REST API (remote access API). We'll also assume that access

88     LXF216 October 2016

to ABI works from web browsers and programmatically (via scripts) too. Handling of user input, passing command-line values to the relevant (user-defined) functions is managed in If you recall, this is the user-facing program that we’ll invoke from the command line. Interactions with ABI, processing and refining the data retrieved from ABI, proxy handling and outputting debug messages is managed in which has the necessary abstract class definitions. We’ve done it this way to isolate the data-handling logic from the main program and to keep maintenance to the minimum in, which we’re treating as the main program. In ( user inputs are managed using argparse module. In the Python class file, (, which has the complete set of class definitions, the following modules are used: urllib2 is (for opening or accessing URLs), re (for pattern matching), sys (for a graceful exit), collections (for custom data structure), logging (to trace program flow that may induce warnings/errors), essential (to debug the program flow) and time (to convert Unix timestamp to human readable format). Of course, many of these modules have not been used to their fullest potential, eg the sys module could be used for abnormal interruptions received by the program during its execution (using the Ctrl+c key combination). If the reader is accessing ABI from behind a proxy or firewall (typical of enterprise networks) then the proxy setting should be modified to hold appropriate value for the proxy URL. Here’s the snippet of code, in, that shows the section to set proxy URL along with the port number. if self.proxyset == "ON": # proxy settings for urllib2 proxy = urllib2.ProxyHandler( { 'https' : 'proxy-url-goeshere:port_number' } ) opener = urllib2.build_opener(proxy) urllib2.install_opener(opener)

Jenkins's REST API: The Jenkins wiki mentions three flavours of remote access API: XML, JSON (with JSONP support) and Python. These APIs are “offered in a REST-like style. That is, there is no single entry point for all features, and instead they are available under the '…/api/' URL where the '…' portion is the data that it acts on”. For a full explanation of the APIs, read the Jenkins remote access API documentation. In the following sections, we'll be using the Python REST API to query ABI.
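To illustrate the URL convention, a JSON-flavoured query boils down to appending '/api/json' to any Jenkins page and parsing the response with the json module. This is a minimal sketch: the host name and the canned payload below are invented placeholders, not real ABI data, and a live query would feed the built URL to urlopen instead.

```python
import json

def api_url(base, flavour="json", pretty=True):
    """Build a remote access API URL for any Jenkins page."""
    url = base.rstrip("/") + "/api/" + flavour
    return url + "?pretty=true" if pretty else url

# In a live query, urlopen(api_url(...)).read() would supply this string.
sample_response = '{"jobs": [{"name": "HBase-0.94"}, {"name": "Abdera-trunk"}]}'
names = [job["name"] for job in json.loads(sample_response)["jobs"]]
```

The same suffix scheme selects the other two flavours: '/api/python' and '/api/xml'.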


However, the same can be achieved using the other APIs. (For demonstration purposes, an equivalent code snippet using the JSON API is also available.) Before we begin coding the main functionality, let's spend some time building the help options. As already mentioned, provision is needed to: turn the proxy on or off; set the verbosity level; and invoke the respective options to list all projects, search for one or more projects, or show details of a specific project. Using the argparse module, these are defined as shown in the snippet below:
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Help to track the CI status of projects hosted at ABI")
    parser.add_argument("-v", "--verbosity", type=int, default=0, choices=[0, 1, 2], help="print debugging output")
    parser.add_argument("-p", "--proxy", default="off", choices=["on", "off"], help="Jenkins access outside the corporate network")
    parser.add_argument("-d", "--dump", metavar="all", action="store", help="List all Apache projects")
    parser.add_argument("-q", "--query", metavar="project-name", action="store", help="List projects that match this project name")
    parser.add_argument("-s", "--show", metavar="project-name", action="store", help="List build status for the specified project")
    parsed_args = parser.parse_args()
Now, let's have a quick look at the output of the help menu. The listing below shows the various arguments and the list of valid options they accept:
$ python -h
usage: [-h] [-v {0,1,2}] [-p {on,off}] [-d all] [-q project-name] [-s project-name]
Help to track the CI status of projects hosted at ABI
optional arguments:
  -h, --help            show this help message and exit
  -v {0,1,2}, --verbosity {0,1,2}
                        print debugging output
  -p {on,off}, --proxy {on,off}
                        Jenkins access outside the corporate network
  -d all, --dump all    List all Apache projects
  -q project-name, --query project-name
                        List projects that match this project name
  -s project-name, --show project-name
                        List build status for the specified project

Listing all projects
A word of caution: the list of projects hosted at ABI is huge, so expect a lot of scrolling output when this option is invoked on the command line. To achieve this functionality, we need access to all the projects at ABI. For this, we shall harvest the data made available via Jenkins's Python REST API, using the root or top level of the remote access API, ie api/python?pretty=true. Sticking to our original intention of separating the core logic from the user-facing program, the code below lives in the class file:
class Citool(object):
    # Base class that implements queries of Jenkins
    def __init__(self, proxyset, verbosity):
        """
        URL parts that shall be used by the various methods.
        If a proxy is set, then change the proxy URL to match your corporate setting.
        Works with Python v2.7.x; not tried in v3.x.
        """
        self.pyapi = 'api/python?pretty=true'
        self.buildurl = ''
        self.proxyset = proxyset
        self.verbosity = verbosity
    .....
    def query(self, *thisProject):
        """
        If 'thisProject' is empty, list all project names set up in Jenkins.
        If 'thisProject' is invalid, quit with a message.
        If 'thisProject' is given, return the project's Jenkins URL.
        """
        logging.debug("Python API for CI tool: %s" % (self.buildurl + self.pyapi))
        allProjects = eval(urllib2.urlopen(self.buildurl + self.pyapi).read())
Skipping certain obvious variable definitions for ABI's build URL and its REST API, let's jump to the query function. Here the URL is constructed and passed to urllib2, and the entire response is treated as a Python expression using eval. The output is stored in the Python object allProjects, which becomes the de-facto object from which we extract the data needed to meet our requirements. The de-facto object, allProjects, has various members available; using one of them, we shall list all the projects at ABI:
if len(thisProject) == 0:
"Dumping the names of all projects hosted at ABI")
    for project in allProjects['jobs']:
        print project.get('name')
Now, we'll execute the tool with a bunch of arguments and valid values.

The nested data structure from Jenkins's Python REST API requires a certain level of patience if you want to extract data.
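One caveat with the approach above: eval will execute any expression the server returns. A more defensive sketch, our own suggestion rather than the article's code, uses ast.literal_eval, which only accepts literals such as dicts, lists and strings, and raises an error for anything else:

```python
import ast

# A response in the style of the Python-flavour API: a printed dict literal.
# This payload is an illustrative stand-in for urlopen(...).read().
raw = "{'jobs': [{'name': 'Accumulo-1.7'}, {'name': 'HBase-0.94'}]}"

all_projects = ast.literal_eval(raw)   # rejects non-literal (executable) input
project_names = [job['name'] for job in all_projects['jobs']]
```

The trade-off is purely defensive: the parsed structure is identical to what eval would produce for well-formed responses.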




$ python -v 0 -p on -d all
Verbose (-v) is set to level 0, the proxy (-p) is set to on, and the dump argument (-d), which lists the projects, is given the value all. Interestingly, the output includes some useful debug info highlighting which line within the user-defined function is being executed. Below is a sample dump of the output listing all the projects. It's curtailed to show only a few projects, as the entire list is quite humongous:
{{ 08/22/2016 02:51:46 PM == INFO ==Module:citool Function:query Line:59 }} Dumping the names of all projects hosted at ABI
Abdera-trunk
Accumulo-1.6
Accumulo-1.7
Accumulo-1.8
Accumulo-Master

Search for a project
It's pretty obvious from the previous section that the project listing spans many lines of output, and looking for useful information in this scenario can be quite painful. Under such circumstances, it would be helpful to be able to search for projects based on the user's input. As we know, all the projects are accessible via allProjects, the de-facto Python object. Iterating over one of the object's members, allProjects['jobs'], we try to match each project name against the input string. If a match is found, the complete name of the matched project is recorded in a Python list. Here's the code snippet for achieving this:
"Collecting names of all projects...")
for i in allProjects['jobs']:
    projects.append(i['name'])
"Checking %s in project list..." % (self.projectString))
lookupStr = re.compile(self.projectString, re.IGNORECASE)
for i in projects:
    lookupResult = re.findall(lookupStr, i)
    logging.debug("Lookup results: %s" % (lookupResult))
    if len(lookupResult) != 0:
        matched.append(i)
for prj in matched:
    print("{0} project matched with query string".format(prj))
Now, let's execute the tool with the valid option to query for a specific project. To query 'hbase', we'll use Python's

The official Jenkins website has plenty of documentation.

regular expression module to fetch the matching projects that contain this string (case insensitive). The results are collected and displayed on standard output:
$ python -p on -q hbase
The value for proxy is set to on and the verbose option is skipped entirely this time. Here's the output:
{{ 08/23/2016 09:28:28 AM == INFO ==Module:citool Function:showProjects Line:185 }} Collecting names of all projects...
{{ 08/23/2016 09:28:28 AM == INFO ==Module:citool Function:showProjects Line:189 }} Checking hbase in project list...
Flume-1.6-HBase-98 project matched with query string
Flume-trunk-hbase-1 project matched with query string
HBase Website Link Ckecker project matched with query string
HBase-0.94 project matched with query string
We've finally arrived at the last use-case we listed, ie displaying basic information about a specific project. First, the user-provided project name is validated by comparing it against each project, as indicated in this code snippet:
"Checking %s in project list..." % (thisProject[0]))
for i in allProjects['jobs']:
    if thisProject[0] == i['name']:
"Matched {0} with {1}".format(thisProject[0], i['name']))
"Project URL to access more info is {}".format(i['url']))
        return i['url']
Where the user's input is determined to be valid, the program goes on to retrieve and display information about the project, ie the last completed build and the status of the last ten builds:
self.projectName = projectName
projectUrl = self.query(self.projectName)
newBuildurl = projectUrl + "/" + self.pyapi
projectInfo = eval(urllib2.urlopen(newBuildurl).read())
self.showLatestBuild(projectInfo)
self.showLastTen(projectInfo)
Information on the last completed build includes the event that triggered the build and its completion status.
For the sake of brevity, we'll skip discussing some of the functions, eg fetching the build time (converting Unix time to a human-readable format) and determining the build cause (whether the event was timer-triggered or due to the latest commit in one or more repositories relevant to the given project):
buildStartedAt, buildEndedAt = self.getBuildTime(lastBuildInfo)
startedBy = self.getBuildCause(lastBuildInfo)
if lastBuildInfo['building'] == False and lastBuildInfo['result'] == "SUCCESS":
    print("Build was started by {0} at {1} and completed in {2}".format(startedBy, buildStartedAt, buildEndedAt))
    print("And the build passed without any errors")
if lastBuildInfo['building'] == False and lastBuildInfo['result'] == "FAILURE":
    print("Build was started by {0} at {1} and completed in {2}".format(startedBy, buildStartedAt, buildEndedAt))
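A helper along the lines of getBuildTime can be sketched as follows. This is our own illustration, not the article's code: the field names and the millisecond convention (Jenkins build records usually carry 'timestamp' and 'duration' in milliseconds since the epoch) are assumptions, and the sample record is invented.

```python
import time

def get_build_time(build_info):
    """Convert an assumed Jenkins build record (millisecond 'timestamp'
    and 'duration' fields) into human-readable strings."""
    started = build_info['timestamp'] / 1000      # ms -> seconds since epoch
    duration = build_info['duration'] / 1000
    started_at = time.strftime("%a %b %d %H:%M:%S", time.gmtime(started))
    ended_in = time.strftime("%H:%M:%S", time.gmtime(duration))  # elapsed time
    return started_at, ended_in

# A made-up record: a build started 13 Jan 2016 06:10:11 UTC, running 1h04m47s.
sample = {'timestamp': 1452665411000, 'duration': 3887000}
started_at, ended_in = get_build_time(sample)
```

Using gmtime keeps the output independent of the local timezone; a real tool might prefer localtime for operator-friendly timestamps.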



Issues with urllib and urllib2
In the early months of 2011, Niels Heinen of the Google Security Team reported a redirect vulnerability in urllib/urllib2 (bug 11662 on bugs.python.org). The patch to fix this issue was committed within a month of it being reported. Post-commit, more details were shared about the nature of the issue and the fix on the official blog, Python Inside. Another bug in urllib was reported in 2013 (issue 17322); however, this bug's category was

'normal', unlike issue 11662, which was classed as a 'release blocker'. Both are fixed now. It should be noted that when using urllib or urllib2, SSL verification isn't performed when urlopen is invoked. For anyone serious about security this is surely a concern. This is where the requests module scores and stands apart from the others. The list of features supported by requests is quite comprehensive: HTTP(S) proxy support, connection timeouts, basic/digest authentication and more. For the complete list and overview, check the requests

print("And the build failed with errors")
if lastBuildInfo['building'] == False and lastBuildInfo['result'] == "ABORTED":
    print("Build was started by {0} at {1} and completed in {2}".format(startedBy, buildStartedAt, buildEndedAt))
    print("And the build was aborted")
To obtain the status of the last ten builds, we make use of the project's own builds remote access API; in this instance, for HBase-0.94, that is api/python?pretty=true appended to the project URL. With this kind of project-specific API, it's possible to automate various things, eg you can communicate build availability or inform project members if any critical event has occurred that may impact a major public release:
allBuilds = projectInfo['builds']
for b in allBuilds:
    if counter <= 10:
        thisBuildInfo = eval(urllib2.urlopen(b['url'] + self.pyapi).read())
        buildUrls.append(b['url'])
        if thisBuildInfo['building'] == True:
            buildResult.append("Build in progress")
        else:
            buildResult.append(thisBuildInfo['result'])
        counter += 1
for job, status in zip(buildUrls, buildResult):
    print("Status of build job, {0} is, {1}".format(job, status))
    buildStats.update({job: status})
With the coding done, we'll now query the details for HBase-0.94 (output shown below):
$ python -p on -s HBase-0.94
First, there will be confirmation that the project exists (if the project name input by the user is valid) and then the project's build info will be printed to standard output:
{{ 08/23/2016 02:39:09 PM == INFO ==Module:citool Function:query Line:64 }} Checking HBase-0.94 in project list...
{{ 08/23/2016 02:39:09 PM == INFO ==Module:citool Function:query Line:67 }} Matched HBase-0.94 with HBase-0.94
{{ 08/23/2016 02:39:09 PM == INFO ==Module:citool Function:query Line:68 }} Project URL to access more info is
{{ 08/23/2016 02:39:10 PM == INFO ==Module:citool Function:showLatestBuild Line:121 }} Last completed build of HBase-0.94 is

module documentation. Enthusiastic members of the Python community have come up with solutions to overcome or fix the shortcomings of both the urllib and urllib2 modules. One group in particular has tried to patch urllib2 to use CONNECT for HTTPS proxies. Meanwhile, urllib3 (see https://pypi.python.org/pypi/urllib3) encompasses many critical features, such as thread safety, connection pooling and proxy support.

…/HBase-0.94/1483/
Build was started by an SCM change at Wed Jan 13 06:10:11 and completed in 01:04:47
And the build failed with errors
Status of build job, is, FAILURE
Status of build job, is, FAILURE
Status of build job, is, FAILURE
Status of build job, is, FAILURE
Status of build job, is, SUCCESS
With this basic understanding of how to use Jenkins's Python REST API, we'd encourage you to experiment with your own ideas to become familiar with what you can do. You never know, you might have some innovative ideas and be able to share them with the rest of the community. Avid users of Jenkins may be aware of existing projects that exploit the REST API to provide capabilities in areas such as automation (remote control) of common Jenkins tasks related to jobs and the retrieval of the latest results. A few such projects are listed below along with their documentation. Note: compared to the topics we've covered, many of these features would be considered advanced.
Python-Jenkins: hosted on PyPI; repo: openstack/python-jenkins.
jenkinsapi: forked from another GitHub project; repo and docs on GitHub.
AutoJenkins: repo: autojenkins; docs at latest.
If you want to go further with the tool, it can be extended to use matplotlib, plotting the data collected from ABI as simple graphs for a visual representation of the CI pipeline's health and a project's trend as it evolves and grows. To reduce the flakiness that can arise from depending on external data sources, and thus increase robustness, a unit-testing framework can be built using unittest (part of the standard library), and the tool can be integrated with third-party APIs (eg Twilio) to send out text messages when a critical event occurs. LXF
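As a sketch of the unit-testing suggestion above (the function, class and payloads are our own illustrations, not the article's code), the trick is to test the data-handling logic against canned payloads so the tests never touch the network or depend on ABI being reachable:

```python
import unittest

def job_names(api_payload):
    """Extract project names from a parsed remote-access-API payload."""
    return [job['name'] for job in api_payload['jobs']]

class JobNamesTest(unittest.TestCase):
    """Runs against canned payloads: no network, no ABI dependency."""

    def test_names_extracted(self):
        fake = {'jobs': [{'name': 'HBase-0.94'}, {'name': 'Abdera-trunk'}]}
        self.assertEqual(job_names(fake), ['HBase-0.94', 'Abdera-trunk'])

    def test_empty_job_list(self):
        self.assertEqual(job_names({'jobs': []}), [])

# Run with: python -m unittest <module-name>
```

The same pattern extends to the URL-fetching layer by injecting a fake urlopen that returns recorded responses.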


Got a question about open source? Whatever your level, email it to us for a solution.

This month we answer questions on:
1 Slow Ubuntu after upgrade
2 Deleting a nonexistent file
3 Disk filling up
4 Finding missing files
5 Enabling Ethernet drivers
★ Shell scripting

Slow upgrade

You were kind enough to advise me on upgrading from Ubuntu 14.04 to 16.04 in LXF206. In the event I decided to wait for the official release of 16.04 LTS. I have now done the upgrade, which has prompted some further questions. The upgrade took a bit over three hours, which surprised me. Is this normal? Internet connection here is very slow, which doesn’t help. Now when I boot up, Lubuntu on the splash screen has changed to Kubuntu,

Smartctl will tell you if your hard drive is showing signs of imminent failure. If so, back it up!

which eventually disappears to leave a blank screen for what seems like ages. The boot process is very slow. Is this normal, or is there a way to speed it up? Peter Ratcliffe

Enter our competition
Linux Format is proud to produce the biggest and best magazine that we can. A rough word count of LXF193 showed it had 55,242 words. That's a few thousand more than Animal Farm and Kafka's The Metamorphosis combined, but with way more Linux, coding and free software (but hopefully fewer bugs). That's as much as the competition, and as for the best, well… that's a subjective claim, but we do sell



way more copies than any other Linux mag in the UK. As we like giving things to our readers, each issue the Star Question will win a copy or two of our amazing Guru Guides or Made Simple books – discover the full range on our website. For a chance to win, email us a question, or post it in our forums to seek help from our very lively community. See page 94 for our star question.

First of all, the version of Ubuntu 16.04 on the LXFDVD was the official release. While we had enabled extra desktops for those wishing to try them, the install ISO was an unmolested Ubuntu image; we wait for the proper releases of such distros rather than include beta releases that would already have been superseded by the time the magazine hits the shelves. The change of splash screen is purely cosmetic and not related to your speed issues. The upgrade process is slower than a clean install because it removes individual files before adding the new ones, instead of the quick reformat of the drive that a clean install performs, which takes far less time. However, it shouldn't take that much longer. That, and the slow boot process, may indicate a problem with your hard drive. The first thing to do is try to find out why the boot is slow. As soon as the splash screen appears, press Esc to get rid of it. You can now see the startup process, and the text should scroll past fairly quickly. If there are any significant pauses, make a note of the messages on screen: it means that part of the boot process is probably taking longer than it should. Once you have that information, a web search may help find a solution. A common cause of this is the kernel searching for a driver or firmware for a piece of hardware that is not present. In that case, installing the relevant package should help. If there is a general slowness, it could be a hard drive problem. The first step is to check your filesystem with fsck. You need to boot

Answers Terminals and superusers We often give a solution as commands to type in a terminal. While it is usually possible to do the same with a distro’s graphical tools, the differences between these mean that such solutions are very specific. The terminal commands are more flexible and, most importantly, can be used with all distributions. System configuration commands often have to be run as the superuser, often called root. There are two main ways of doing this depending on your distro. Many, especially Ubuntu and its derivatives, prefix the command with sudo , which asks for the user password and sets up root privileges for the duration of the command only. Other distros use su , which requires the root password and gives full root access until you type logout. If your distro uses su , run this once and then run any given commands without the preceding sudo .

from a live CD/DVD to do this, then open a terminal and run:
$ sudo fsck -f /dev/sda1
This will check the consistency of the filesystem on the first partition and prompt you if it finds anything that needs to be corrected. You can also use the graphical version on the Rescatux live CD if you are uncomfortable with the terminal, but this automatically tries to fix errors without asking you first. If the filesystem is clean, you should check your disk hardware, which you can do from within Ubuntu. First install smartmontools from the software centre. Then open a terminal and run:
$ sudo smartctl --health /dev/sda
This does an initial health check – note that you use the whole drive name, not a partition. If that is fine, run a diagnostic with:
$ sudo smartctl --test=short /dev/sda
The output will give an estimate of the time needed, after which you can see the results with:
$ sudo smartctl --log=selftest /dev/sda
If this is clean, you can run a more extensive test by specifying --test=long. Both tests allow you to continue using the computer while they are running. There is also a graphical interface called gsmartcontrol that can be used to run the tests and view the results. Whichever way

you choose to run smartctl, if it shows errors then your drive is in danger of failing soon, in which case you should back up your data and replace the drive as soon as possible.


Ghost file

I am using PCLinuxOS, 64-bit, fully updated. I have a folder on my external USB hard drive. The folder is empty – all files were deleted – but when I try to delete the folder I get the following: Could not delete file /media/95dbb47b-03a9-4fc2-8030-ded483b8ac10/Travel/Booked/Angola/Booking form.pdf. The file Booking form.pdf does not exist, so how can I delete the folder? I tried various permission settings and there are no hidden files. Should I hide the folder if I cannot delete it? Dave Pritchard
The first thing is to verify that the folder is truly empty, so check this by running the following, which will show all the contents including hidden files, with details of permissions etc:
$ ls -lA /media/95dbb47b-03a9-4fc2-8030-ded483b8ac10/Travel/Booked/Angola
If this shows no files, it is likely you have some filesystem corruption. This is more likely with an external drive, since it can be caused by unplugging or powering down the drive without unmounting it. Corrupted filesystems often return strange results when listed, so try:
$ ls -lAR /media/95dbb47b-03a9-4fc2-8030-ded483b8ac10/
If this shows any strange entries, like rows of question marks, you know something is wrong. You can check for, and usually fix, filesystem corruption with fsck. The drive must be unmounted and fsck needs to be run as root. If your drive is at /dev/sdb1, you would run:
$ sudo fsck /dev/sdb1
This will report any errors it finds and ask you whether you want it to repair them. This can be tedious, so you can add the -y option to automatically answer yes to each prompt; use

this with caution! The alternative, if you have the space, is to copy the contents of the drive elsewhere – or at least those you need. This exercise is often a good excuse for a clearout. Reformat the drive and copy everything back. Hiding the folder won’t help – it only hides the current symptom of the corruption. Left alone, filesystem corruption only gets worse, never better. Next time it could lose you an important file. Hiding the folder exhibiting corruption is the equivalent of fixing low oil pressure on your car by disconnecting the warning light!


Disappearing disk space

My server's root filesystem is 20GB, but after installing a PHP script it filled up very quickly – gigabytes each time I run it. Before long it had hit 100 per cent. I found /proc/kcore at about 120TB in size. I've read that it doesn't actually use any space as it's not a real file, but I cannot find anything else that big. The Disk Usage Analyser said I was using about 16GB. I reinstalled CentOS 7 with a 100GB root partition but it still filled up very quickly. Jonathan Cameron
The reason for the discrepancy between 16GB and 20GB is twofold. First, the filesystem's available space is never as large as the full partition, because of overhead. Secondly, there are two ways of measuring file and disk size, in GB or GiB, although both are sometimes referred to as GB – see the Quick Reference box below for more. You can use the terminal du command to show the space used by each directory, and pass the output through sort to show the largest:
$ du -scxh /* | sort -h
There is a more useful alternative, although not generally installed by default, called ncdu (NCurses Disk Usage). This should be in most distros' repositories, to be installed through the package manager in the usual way. Then run it like this:
$ ncdu -x /

A quick reference to...

Disk space


You've just bought a lovely new 1TB drive, partitioned and formatted it in GParted, but your desktop's tools are reporting that it has a size of around 900GB. Have you been robbed? Where is the 1TB you paid for? There are two main factors here. The first is overhead – space is needed for the partition tables, and any filesystem also needs space for its own purposes. Ext4 also reserves 5 per cent (a default that can be

changed) of the filesystem for the root user only, to prevent lowly users like us from filling up the drive and causing problems. The other factor is the way in which drive (and memory) capacity is measured. Memory works in powers of two, so the nearest to 1,000 is 1,024 – one kilobyte of RAM is actually 1,024 bytes. However, the SI standard specifies that a kilo is 1,000 units, mega is 1,000 kilos and giga is 1,000 mega. There is a “binary” version of this where 1KiB is 1,024 bytes, 1MiB is 1,024KiB and so on, but these terms are often used

interchangeably, and incorrectly. So your 1TB drive is actually 1,000,000,000,000 bytes, which is approximately 0.91TiB, and there is the difference. You can see this with the df command, which reports size and free space on filesystems: df -h will report in binary units while df -H uses SI units (multiples of 1,000). The latest fdisk and gdisk also use binary units. For example, the 3TB drive in my computer is reported as 2.7TiB in gdisk – that’s a 10 per cent difference just because of the method of counting bytes that is used.
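The arithmetic above – 1TB coming out at roughly 0.91TiB, and a 3TB drive at about 2.7TiB – can be checked in a few lines of Python:

```python
# Decimal (SI) units are powers of ten; binary units are powers of two.
TB = 10**12          # 1 TB as sold on the box
TiB = 2**40          # 1 TiB as reported by tools using binary units

one_tb_in_tib = TB / TiB          # ~0.909: the "missing" nine per cent
three_tb_in_tib = 3 * TB / TiB    # the 3TB drive that gdisk calls ~2.7TiB
```

The gap widens with each prefix: at kilo scale the difference is only 2.4 per cent, but it compounds to roughly 10 per cent at tera scale.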


Answers The -x option is important: it tells ncdu to stay within the current filesystem and ignore any other filesystems mounted in it. This includes the likes of /dev, /sys and /proc, which are virtual filesystems that do not actually reside on disk. /proc/kcore is your computer’s memory (physical and virtual) displayed as a file, so it looks huge but takes up no space. The list is sorted by size, and you can drill down through the directories to find the culprit(s). If this PHP script is using up gigabytes each time it is run, then it is most likely sending information, and a lot of it, to a log file. If it is running through a web server, you may find that /var/log/apache is getting rather full. Using a single partition for everything on a server is not a good idea – it’s not ideal for a desktop either. Server daemons often store their data in /var or /srv (depending on the distro). It is good practice to use a separate filesystem for these, in which case 20GB is plenty for /, especially if you are not installing a massive desktop environment. It is always a good idea to separate your data from the operating system – it makes backing up and updating easier, and prevents runaway software from filling up your root filesystem. Even though Linux filesystems are robust enough to tolerate being filled up like this, making a crash unlikely, you will still end up with fragmentation on the filesystem and subsequent performance loss.


Finding missing files
Is there an easy and quick way to tell if I am missing a file in multiple directories, and then make a list and

Star Question ★


output it to a text file? I'm running Kodi on my Raspberry Pi and NAS. Not all of my films (and music) have folder/cover art. Rather than go through each directory one by one and check, I thought that there must be an easy way to check this with a terminal command or Bash script. Something like this: “Look at all these directories (and subdirectories) – do they contain the file folder.jpg? If not, then name the directory it's missing from and make a list in a text file”. I was given this script:
basedir=/media/mymusic
for x in `find $basedir -type d` ; do

    [ -f "$x/folder.jpg" ] || echo "$x"
done
When I run this, I get too many false positives, because the directories I've called “alt_thumbs” and “extrafanart” don't have or need folder.jpg but are added to the list regardless. My directories are organised like this:
Films/A/Name of film
Films/B/Name of film
And within each film directory, the contents are organised like this:
Film.mkv
alt_thumbs (DIR)

This month’s winner is “guy”. Get in touch with us to claim your glittering prize!

String bashing

Using Bash, I am trying to parse a string of undefined length to check whether the last five characters are correct, in particular whether the value of NAME does not yet have the dot extension “.epub” appended. It needs to cope if the string is shorter than the extension. Trying variations on the following – if [ ! NAME=*.epub ] – the endless variations on dollar prefixes, enclosing quotes, white space and doubleequals in the Bash documentation are driving me mental. Can you explain what is the correct way to do this and why? guy This is Linux! There is generally more than one correct way to do something, but several approaches, each with its own merits. You cannot use filename wildcards like this; they are interpreted by the shell when referring to files. There are countless variants

94 LXF216 October 2016

Use ncdu to find out where your hard drive space is going, and even delete any space hogs.

on using sed, awk or cut to process a string like this, but for what you want, the built-in variable expansion of Bash will do the job. If you have a variable called NAME and you want to check whether it lacks an .epub extension, this test will do it:
if [ "${NAME}" = "${NAME%.epub}" ] ; then ...
The % operator strips the given suffix from the end of the variable, if possible. So mybook.epub becomes mybook, while mybook is unchanged. The above test therefore checks whether the % operator changed the variable: the two sides are equal, and the test is true, only when there was no .epub extension to strip. If all you want to do is make sure the name has an extension of .epub, you could use this:
NAME="${NAME%.epub}.epub"
This strips any .epub extension and then adds one, so the name now has the extension, whatever its original value. Bash has a number of useful operators like this. Using # instead of % removes the string

from the start of the variable instead of the end. %% and ## are greedy versions of these, removing the longest possible match instead of the shortest. You can also do string replacement anywhere in a variable, rather like sed's s operator:
NEWNAME="${NAME/oldstr/newstr}"
You can even specify default values for variables that may not have been defined, or are set to null: ${NAME:-hello} returns the value of NAME if it is set, otherwise it returns “hello”. Using := instead of :- also sets the variable to this value if it was unset. Note that while the braces are optional for simple variable usage – $NAME is equivalent to ${NAME} – they are required when performing any form of substitution. Also, these operations are not unique to Bash and can be used even if your script uses the more universal /bin/sh.
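The expansions discussed above can be exercised in a quick throwaway script; the variable values here are just examples:

```shell
#!/bin/sh
# Suffix stripping: %.epub removes the extension only if it is present
NAME="mybook.epub"
BARE="${NAME%.epub}"              # becomes: mybook
UNCHANGED="${BARE%.epub}"         # nothing to strip, stays: mybook

# Normalise: strip any .epub, then append exactly one
NORMALISED="${BARE%.epub}.epub"   # becomes: mybook.epub

# Prefix stripping with #, and a default value with :-
PATHNAME="books/mybook.epub"
FILE="${PATHNAME#books/}"         # becomes: mybook.epub
TITLE="${UNSET_VAR:-unknown}"     # UNSET_VAR is unset, so: unknown

echo "$BARE $UNCHANGED $NORMALISED $FILE $TITLE"
```

Every expansion used here is plain POSIX sh, so the script runs unchanged under dash or busybox as well as Bash.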

Answers

extrafanart (DIR)
folder.jpg
fanart.jpg
GeordieJedi
You can do this using find as it has many options and you can negate them to exclude certain matches. So you could do something like the following:
for DIR in $(find Films -type d ! -name alt_thumbs ! -name extrafanart) ; do
  [ -f "$DIR/folder.jpg" ] || echo "$DIR"
done
The ! negates the following condition, so this means: find everything in Films that is a directory (-type d) but not called alt_thumbs and not called extrafanart. Matches for find are “anded” by default – a file has to match everything for it to be selected. (There are operators to allow OR matches.) Note that this will also find the Films directory itself and the A, B, etc directories. You can avoid this by giving find a minimum depth to search only directories below a certain level:
$ find Films -mindepth 2 -type d ...
As your folders are organised in a strict hierarchy, with all film directories at the same level and the ones you want to ignore below that level, you could also use maxdepth to ignore them:
$ find Films -mindepth 2 -maxdepth 2 -type d
This lists all the directories at the second level. If you really are using such a strict structure, however, then you can do away with find altogether and use only shell globbing:
for DIR in Films/?/*; do
  [ ! -d "$DIR" ] || [ -f "$DIR/folder.jpg" ] || echo "$DIR"
done

This checks that the item is a directory – you didn’t specify whether there might also be files in Films/A – and that folder.jpg exists, and prints the name if not.
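You can try the globbing version on a throwaway tree before running it on your real collection – a sketch using made-up film names:

```shell
cd "$(mktemp -d)"    # work somewhere disposable
# Two film directories: Alien has artwork, Amelie does not.
mkdir -p Films/A/Alien Films/A/Amelie
touch Films/A/Alien/folder.jpg
# Print only directories that lack a folder.jpg.
for DIR in Films/?/*; do
  [ ! -d "$DIR" ] || [ -f "$DIR/folder.jpg" ] || echo "$DIR"
done    # prints Films/A/Amelie
```

The quotes around "$DIR" matter: film names often contain spaces, and an unquoted variable would be split into separate words by the shell.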


Missing driver

I’m trying to install Gentoo on my new computer. However, I have no Ethernet interface any more when rebooting, and with lspci I figured out that no driver was loaded for the following device:
Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 12)
With the live CD I figured out that the right driver was r8169. However, I can’t find it in the kernel options, so I guess I have to download it manually. I found some links, but none of them matches the 4.4.6 kernel I’m trying to build. Where can I find a newer version of this driver, or another that works as well?
Lucas Stevens
I can assure you that this Ethernet adaptor works with Gentoo, as I’ve been using one for the last five years! The correct driver is indeed r8169, and you can confirm this by booting from a live CD where the controller works and running lspci with the -k option, which shows the kernel module in use. This driver is in the kernel, but you might not be able to find it by simply browsing the configuration menus. That is because many options are conditional on others being set, so cannot show up until everything they need is in place. However, the kernel’s search option will find it. If you are using make

Help us to help you
We receive several questions each month that we are unable to answer, because they give insufficient detail about the problem. In order to give the best answers to your questions, we need to know as much as possible. If you get an error message, please tell us the exact message and precisely what you did to invoke it. If you have a hardware problem, let us know about the hardware. If Linux is already running, use the Hardinfo program, which gives a full report on your hardware and system as an HTML file you can send us. Alternatively, the output from lshw is just as useful. One or both of these should be in your distro’s repositories. If you are unwilling, or unable, to install these, run the following commands in a root terminal and attach the system.txt file to your email. This will still be a great help in diagnosing your problem.
uname -a >system.txt
lspci >>system.txt
lspci -vv >>system.txt

menuconfig to configure the kernel, press / to pop up the search dialog; if you are using make xconfig, use Ctrl+F. Then type 8169 in the search box to find the driver. Make sure that everything on the Depends on: line is set to y or m, then you can enable the module and compile your kernel. There is also an r8168 driver package in Gentoo’s portage tree that uses Realtek’s own driver. This is far less convenient – your networking will stop after a kernel update until you emerge the package again. However, it does support the most recent version of the chips, which the in-kernel driver may not. I would only use it if the in-kernel driver failed to work with my hardware. LXF

Frequently asked questions…

fsck

You have mentioned fsck in response to a couple of questions this month. Assuming it’s not an expletive, what is it?
Fsck is a filesystem check and repair tool. It scans a filesystem for errors and optionally fixes them if it can.

Sounds useful, but which filesystems does it support?
Each type of filesystem comes with its own set of tools, usually including an fsck program. For example, the ext2/3/4 tools include e2fsck, also installed as fsck.ext2, fsck.ext3 and fsck.ext4.

So I have to use the one for my particular filesystem? That’s not what you said in the answers.
There is a general fsck program that looks at the filesystem, determines its type and runs the corresponding fsck.filesystem with the arguments you gave it. That’s the one you use – let it decide what to run next.

Arguments? So I don’t just run fsck?
It needs at least the name of the device holding the filesystem, such as /dev/sda1. There are other options you can pass. For example, the ext2/3/4 filesystems will not check a filesystem if it is marked as clean – that is, it was unmounted cleanly. Adding -f overrides this behaviour.

You mentioned unmounting. Is that important?
Very. You can only repair a filesystem that is unmounted. Trying to manipulate directory tables while something else writes to the disk is asking for trouble. Most fsck tools won’t even check a mounted filesystem.

So how do I check or repair my root filesystem?
You need to boot from a live CD and run fsck from there. There are other options, but this is the only truly safe one.

But can’t this be done automatically? Why do I need to use fsck, and how often should I do it?
It is done automatically, at least with the default ext filesystems. An internal counter keeps track of how many times the filesystem has been mounted since the last

check and runs fsck in the background when a threshold, usually around 30, is exceeded. This is one reason a reboot sometimes takes longer than normal. You only need to run fsck manually if you suspect a problem. I’ve run fsck and it keeps telling me it has detected a problem and do I want it to fix it. I must have pressed y 50 times and it keeps asking. The default is to ask for every single fault, which can be a lot of seeming identical faults on different blocks. If you add the -y or -a argument, fsck will assume you answered yes to every one of these questions and try to fix things for you. But use this option with care – the answer you want might not always be yes!
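If you want to experiment with fsck without risking a real disk, you can run it against a small file-backed filesystem image – a sketch assuming the e2fsprogs tools (mkfs.ext2 and e2fsck) are installed:

```shell
# Create a 4MB file, format it as ext2 and check it.
# No root privileges or real block device are needed.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=1M count=4 2>/dev/null
mkfs.ext2 -F -q "$IMG"     # -F: force use of a plain file
e2fsck -f -y "$IMG"        # -f: check even if marked clean
rm -f "$IMG"
```

e2fsck exits with status 0 when the filesystem is clean, and 1 when errors were found and corrected – useful to check in scripts.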


On the disc Distros, apps, games, books, miscellany and more…

The best of the internet, crammed into a phantom-zone-like 4GB DVD.



This month’s features are about security, an area in which Linux has a good reputation. I’d go so far as to say that the main security flaw in Linux is its reputation for security. Windows users know their computers are at risk (that’s a polite way of saying ‘full of gaping security holes’) and act accordingly. They install virus checkers and outgoing firewalls to stop software doing the things it tries to do in the background. The trouble with Linux is that so many of its users think it’s secure, so they don’t have to worry about it. It’s true that it’s much harder to inadvertently introduce malware onto a Linux box; you can’t do it by carelessly clicking on an email attachment, but it’s still possible. Sticking to software provided by your distro is a good start, and compiling other software from source, where someone has had the chance to check it, is another good practice. Bear in mind that at least one Linux trojan was distributed as a screensaver package on a reputable site, so it’s possible to get infected. Don’t think that the limitation of a user account will protect you, either. If you can send emails, so can a trojan spambot you’ve installed. Security is good, complacency is not.

Privacy distro

Tails 2.5 Tails is a secure and anonymous live distro. It uses the Tor network to hide the source and destination of all traffic to prevent others snooping on your activities. It also encrypts everything. Tor is used by many who value their privacy while online, but Tails offers more than that. As a live distro, Tails will leave no trace of your activities on the computer when you reboot. Everything is gone: browsing history, cookies, everything. This also makes it ideal for sensitive operations, such as online banking, when using someone else’s computer.

Security testing distro

Kali Linux 2016.1 Kali Linux forms the basis of one of our main features this month, so we thought it would be nice to give you a copy. There are two versions of Kali: the full version is huge and too big for the LXFDVD. This is partly because it uses Gnome but mainly because it includes a massive range of software. The light version goes too far the other way, using


Notice! Defective discs

For basic help on running the disc or in the unlikely event of your Linux Format coverdisc being in any way defective, please visit our support site. Unfortunately, we are unable to offer advice on using the applications, your hardware or the operating system itself.


32-bit & 64-bit


Xfce and including a minimal set of software. So we used Kali’s build scripts to create a remix that still uses Xfce but includes the most popular packages. Kali is unusual in that it boots to a desktop logged in as root, because many of its tools require root privileges. As such you shouldn’t need to know the root password, but in case you do, it’s toor.

New to Linux? Start here

What is Linux? How do I install it? Is there an equivalent of MS Office? What’s this command line all about? Are you reading this on a tablet? How do I install software?

Open Index.html on the disc to find out

Lightweight distro


AntiX 16


Checkinstall Install tarballs with your package manager.
Coreutils The basic utilities that should exist on every operating system.
HardInfo A system benchmarking tool.
Kernel Source code for the latest stable kernel release, should you need it.
Memtest86+ Check for faulty memory.
Plop A simple manager for booting OSes, from CD, DVD and USB.
RawWrite Create boot floppy disks under MS-DOS in Windows.
Smart Boot Manager An OS-agnostic manager with an easy-to-use interface.

32-bit & 64-bit

SystemRescueCd 4.8.1 We like to include a small(ish) rescue CD on the LXFDVD when there’s space. You never know when you’re going to need one so knowing you can just grab our DVD when bad things happen is a comfort. SystemRescueCd doesn’t have the friendly interface of something like Rescatux, but it more than makes up for that with the sheer range of tools available. Whatever has gone wrong with your operating system, be it Linux or Windows, the chances are the SystemRescueCd has the tools to

And more!

System tools

AntiX is a lightweight and easy-to-use distro suitable for new and more experienced users alike. Being lightweight, it’s well suited to older hardware, but it also gives a speed boost to more modern systems. New Linux users are often surprised by the variety and choice of desktops: Gnome, KDE, Xfce and friends are all popular. But here’s another one: antiX uses the IceWM desktop. This is less well publicised but has been around for a long time and is a mature, lightweight and fast desktop. Underneath all of this is a Debian base, so antiX should be stable and dependable too.

Saviour CD

Download your DVD from

fix it. It defaults to booting to a text console but you can also load a lightweight desktop, which is handy if you need to search the web for instructions to fix your current problem. SystemRescueCd has 32-bit and 64-bit kernels, and two versions of each. The standard kernel is an older, stable version. If you have newer hardware, the alternative kernel should be a better choice, but you get the same range of tools whichever kernel you choose.

WvDial Connect with a dial-up modem.

Reading matter

Bookshelf

Advanced Bash-Scripting Guide Go further with shell scripting.
Bash Guide for Beginners Get to grips with Bash scripting.
Bourne Shell Scripting Guide Get started with shell scripting.
The Cathedral and the Bazaar Eric S Raymond’s classic text explaining the advantages of open development.
The Debian Administrator’s Handbook An essential guide for sysadmins.
Introduction to Linux A handy guide full of pointers for new Linux users.
Linux Dictionary The A-Z of everything to do with Linux.
Linux Kernel in a Nutshell An introduction to the kernel written by master hacker Greg Kroah-Hartman.
The Linux System Administrator’s Guide Take control of your system.
Tools Summary A complete overview of GNU tools.


Get into Linux today! Future Publishing, Quay House, The Ambury, Bath, BA1 1UA Tel 01225 442244 Email


Editor Neil Mohr Technical editor Jonni Bidwell Operations editor Chris Thornett Art editor Efrain Hernandez-Mendoza Editorial contributors Desire Athow, Neil Bothwick, Jolyon Brown, Sean Conway, Nate Drake, Matthew Hanson, Ali Jennings, Nick Peers, Les Pounder, Ramanathan Muthaiah, Afnan Rehman, Mayank Sharma, Shashank Sharma, Dan Smith, Alexander Tolstoy, Mihalis Tsoukalos Cover illustration Cartoons Shane Collinge


Commercial sales director Clare Dove Senior advertising manager Lara Jaggon Advertising manager Michael Pyatt Director of agency sales Matt Downs Ad director – Technology John Burke Head of strategic partnerships Clare Jonik

LXF 217

how to build a…

will be on sale Tuesday 25 Oct 2016

super Pi!

Marketing manager Richard Stephens


Production controller Nola Cokely Head of production UK & US Mark Constance Distributed by Seymour Distribution Ltd, 2 East Poultry Avenue, London EC1A 9PT Tel 020 7429 4000 Overseas distribution by Seymour International


Senior Licensing & Syndication Manager Matt Ellis Tel + 44 (0)1225 442244


Trade marketing manager Juliette Winyard Tel 07551 150 984

Subscriptions & back issues

Get the most from your Pi by running it as a desktop, laptop, media streamer and smart device.

Single-board PCs

Raspberry Pi isn’t the only fruit-based board, but is it  still the best? We look at the new competition.

Roundup: Secure chat

Have we made you paranoid yet? Good. We round up  the  most secure of all the chat clients.

Multi-boot USB

You asked for it and we’ve delivered: how to transform your USB drive into a multi-booting Linux machine. Contents of future issues subject to change – we might be too busy saving the world, cheerleader.



UK reader order line & enquiries 0344 848 2852 Overseas order line & enquiries +44 344 848 2852 Online enquiries Email


Managing director, Magazines Joe McEvoy Editorial director Paul Newman Group art director Graham Dalzell Editor-in-chief, Technology Graham Barlow LINUX is a trademark of Linus Torvalds, GNU/Linux is abbreviated to Linux throughout for brevity. All other trademarks are the property of their respective owners. Where applicable code printed in this magazine is licensed under the GNU GPL v2 or later. See Copyright © 2016 Future Publishing Ltd. No part of this publication may be reproduced without written permission from our publisher. We assume all letters sent – by email, fax or post – are for publication unless otherwise stated, and reserve the right to edit contributions. All contributions to Linux Format are submitted and accepted on the basis of non-exclusive worldwide licence to publish or license others to do so unless otherwise agreed in advance in writing. Linux Format recognises all copyrights in this issue. Where possible, we have acknowledged the copyright holder. Contact us if we haven’t credited your copyright and we will always correct any oversight. We cannot be held responsible for mistakes or misprints. All DVD demos and reader submissions are supplied to us on the assumption they can be incorporated into a future covermounted DVD, unless stated to the contrary. Disclaimer All tips in this magazine are used at your own risk. We accept no liability for any loss of data or damage to your computer, peripherals or software through the use of any tips or advice. Printed in the UK by William Gibbons on behalf of Future.

Future is an award-winning international media group and leading digital business. We reach more than 57 million international consumers a month and create world-class content and advertising solutions for passionate consumers online, on tablet & smartphone and in print. Future plc is a public company quoted on the London Stock Exchange (symbol: FUTR).

Chief executive officer Zillah Byng-Thorne Non-executive chairman Peter Allen Chief financial officer Penny Ladkin-Brand Managing director, Magazines Joe McEvoy

We are committed to only using magazine paper which is derived from well-managed, certified forestry and chlorinefree manufacture. Future Publishing and its paper suppliers have been independently certified in accordance with the rules of the FSC (Forest Stewardship Council).

Tel +44 (0)1225 442 244


