
FREE DVD WITH 5 LIVE BOOTING DISTROS www.linuxuser.co.uk

THE ESSENTIAL MAGAZINE FOR THE GNU GENERATION

ELECTRONICS FOR

PI HACKERS Take maker projects to the next level by learning how to hack hardware

AUTOMATE PROCESSES Use Red Hat’s Ansible to run a variety of project tasks

MASTER FILE I/O IN GO Learn how to develop Go programs that read and write to files

REVITALISE AN OLD LAPTOP Give an aging device a new lease of life with Linux

HACK A TOY PART 2 Add new functions

THE BEST OPEN SOURCE MUSIC PLAYERS The best ways to play & organise tunes

PLUS

» Find files with Bash scripts » Run RISC OS on Raspberry Pi » Linux Mint 18 on test » Manage the Ubuntu system » Build an explorer robot - part 3

ISSUE 169


Welcome

THE MAGAZINE FOR THE GNU GENERATION

Imagine Publishing Ltd Richmond House, 33 Richmond Hill Bournemouth, Dorset, BH2 6EZ ☎ +44 (0) 1202 586200 Web: www.imagine-publishing.co.uk www.linuxuser.co.uk www.greatdigitalmags.com

to issue 169 of Linux User & Developer

Magazine team Editor April Madden

april.madden@imagine-publishing.co.uk ☎ 01202 586218 Designer Rebekka Hearl Photographer James Sheppard Senior Art Editor Stephen Williams Editor in Chief Dan Hutchinson Publishing Director Aaron Asadi Head of Design Ross Andrews

This issue

Contributors Dan Aldred, Mike Bedford, Joey Bernard, Christian Cawley, Sanne De Boer, Kunal Deo, Alex Ellis, Tam Hanna, Oliver Hill, Phil King, Jon Masters, Paul O’Brien, Swayam Prakasha, Richard Smedley, Jasmin Snook, Nitish Tiwari, Mihalis Tsoukalos and Kevin Wittmer

Advertising

Digital or printed media packs are available on request. Head of Sales Hang Deretz ☎ 01202 586442 hang.deretz@imagine-publishing.co.uk Sales Executive Luke Biddiscombe ☎ 01202 586431 luke.biddiscombe@imagine-publishing.co.uk

FileSilo.co.uk

Assets and resource files for this magazine can now be found on this website. Support filesilohelp@imagine-publishing.co.uk

International

Linux User & Developer is available for licensing. Head of International Licensing Cathy Blackman ☎ +44 (0) 1202 586401 licensing@imagine-publishing.co.uk

Subscriptions

For all subscriptions enquiries LUD@servicehelpline.co.uk ☎ UK 0844 249 0282 ☎ Overseas +44 (0) 1795 418661 www.imaginesubs.co.uk Head of Subscriptions Sharon Todd

Circulation

Circulation Director Darren Pearce ☎ 01202 586200

Look for issue 170 on 22 Sep. Want it sooner? Subscribe today!

Production

Production Director Jane Hawkins ☎ 01202 586200

Finance

Finance Director Marco Peroni


Founder

Group Managing Director Damian Butt

Printing & Distribution


Printed by William Gibbons, 26 Planetary Road, Willenhall, West Midlands, WV13 3XT Distributed in the UK, Eire & the Rest of the World by: Marketforce, 5 Churchill Place, Canary Wharf London, E14 5HU ☎ 0203 148 3300 www.marketforce.co.uk Distributed in Australia by: Gordon & Gotch Australia Pty Ltd 26 Rodborough Road Frenchs Forest, New South Wales 2086, Australia ☎ +61 2 9972 8800 www.gordongotch.com.au

Disclaimer

The publisher cannot accept responsibility for any unsolicited material lost or damaged in the post. All text and layout is the copyright of Imagine Publishing Ltd. Nothing in this magazine may be reproduced in whole or part without the written permission of the publisher. All copyrights are recognised and used specifically for the purpose of criticism and review. Although the magazine has endeavoured to ensure all information is correct at time of print, prices and availability may change. This magazine is fully independent and not affiliated in any way with the companies mentioned herein. If you submit material to Imagine Publishing via post, email, social network or any other means, you automatically grant Imagine Publishing an irrevocable, perpetual, royalty-free license to use the material across its entire portfolio, in print, online and digital, and to deliver the material to existing and future clients, including but not limited to international licensees for reproduction in international, licensed editions of Imagine products. Any material you submit is sent at your risk and, although every care is taken, neither Imagine Publishing nor its employees, agents or subcontractors shall be liable for the loss or damage.

© Imagine Publishing Ltd 2016

» Electronics for Pi hackers
» Ultimate rescue kit
» Revitalise an old laptop
» Manage the Ubuntu system

Welcome to the latest issue of Linux User & Developer, the UK and America’s favourite Linux and open source magazine. The Raspberry Pi has been a runaway success story since its launch in 2012, and it’s at the heart of many fantastic maker projects. We often talk about the Pi being a great device for getting people coding, but what if you’re an experienced coder who’d like to create the kind of advanced hardware projects that have inspired so many of us? That’s where our feature on p58 this issue comes in. It explains all the electronics a budding Pi hacker needs to know, from the essential tools and components you need, to translating the symbols on circuit diagrams and getting to grips with soldering. Take your maker projects to the next level today!

Also this issue you’ll find the ultimate rescue kit on our free DVD. It’s packed with five live-booting distros plus two essential data recovery tools to help you treat any PC problems. Deal with boot errors, lost data, system repairs and more with this handy disc – you can read up on everything you need to know about the distros and FOSS on it in our feature on p18. If you read the digital edition of the magazine, don’t worry – it’s all on our secure FileSilo repo for you to download (I strongly recommend you grab the lot and burn it to a disc or flash drive just in case you ever need it!).

Enjoy the issue!
April Madden, Editor

Get in touch with the team: linuxuser@imagine-publishing.co.uk
Facebook: Linux User & Developer
Twitter: @linuxusermag
Buy online
Visit us online for more news, opinion, tutorials and reviews: www.linuxuser.co.uk

ISSN 2041-3270



Contents

Subscribe & save!

Check out our great new offer! US customers can subscribe on page 98

OpenSource

08 News – The biggest stories from the open source world
12 Interview – Boken Lin from Onion on the Omega2 board
16 Kernel column – The latest on the Linux kernel with Jon Masters

Tutorials

30 Bash masterclass: Find files – Computers are good at storing files. Bash scripts make finding them again easier
34 .NET reboots on Linux with .NET Core – Explore the new open source .NET runtime and SDK
38 Revitalise an old laptop – Give an old laptop a new lease of life with lightweight Linux
42 Automate your project tasks with Ansible – Learn how to install and use Ansible and Ansible Tower to automate tasks
46 Manage the system on Ubuntu Linux – Take a closer look at how you can manage the system within Ubuntu
50 Learn Go: Develop programs that read and write to files – Learn how to develop and use Go packages
57 Practical Raspberry Pi – Continue our Explorer robot and toy hacking tutorials, get RISC OS on your Pi, check mail with a Python script and discover key electronics skills

Features

18 Ultimate rescue kit – Ten ways to fix your PC with our rescue disc
58 Electronics for Pi hackers – Learn to build electronic circuits to interface with your Pi and take your maker projects further

Reviews

85 Music players & organisers – Which music player is best for discerning Linux users? Spotify, Audacious, Tomahawk and Clementine
90 Linux Mint 18 – Distrowatch’s favourite continues in leaps and bounds
92 Free software – Richard Smedley recommends some excellent FOSS packages for you to try
96 Free downloads – Find out what we’ve uploaded to our digital content hub FileSilo for you this month

Join us online for more Linux news, opinion and reviews: www.linuxuser.co.uk


On the disc

On your free DVD this issue

Find out what’s on your free disc

Welcome to the Linux User & Developer DVD. Save your system from data loss, fix boot problems, interrogate partitions and hard drives, and much more with the ultimate rescue kit. This live-booting disc is your one-stop triage for PC problems. Just restart your computer and boot from the DVD to access the five live-booting distros. Plus there are two great FOSS programs for recovering data from hard drives and storage media.

Featured software:

KNOPPIX
KNOPPIX is a bootable live system consisting of a representative collection of GNU/Linux software, automatic hardware detection, and support for many graphics cards, sound cards, SCSI and USB devices and other peripherals.

Puppy Linux Slacko 6.3
Puppy Linux is a special build of Linux meant to make computing easy and fast, even allowing you to do magic by recovering data from destroyed PCs or by removing malware from Windows.

SystemRescueCd
SystemRescueCd is a Linux system rescue disk available as a bootable CD-ROM or USB stick for administrating or repairing your system and data after a crash. It aims to provide an easy way to carry out admin tasks on your computer.

Clonezilla
Clonezilla is a partition and disk imaging/cloning program similar to True Image or Norton Ghost. It helps you to do system deployment, bare metal backup and recovery.

Trinity Rescue Kit
Trinity Rescue Kit, or TRK, is a free live Linux distribution that aims specifically at recovery and repair operations on Windows machines, but is equally usable for Linux recovery issues.

ddrescue 1.16
GNU ddrescue is a data recovery tool. It copies data from one file or block device (hard disc, CD-ROM, etc) to another, trying to rescue the good parts first in case of read errors.

safecopy 1.7
safecopy is a data recovery tool which tries to extract as much data as possible from a problematic (i.e. damaged sectors) source – like floppy drives, hard disk partitions, CDs or tape devices – where other tools like dd would fail due to I/O errors.


Load DVD

To access software and tutorial files, simply insert the disc into your computer and double-click the icon.

Live boot

To live-boot into the distros supplied on this disc, insert the disc into your disc drive and reboot your computer.

Please note:
• You will need to ensure that your computer is set up to boot from disc (press F9 on your computer’s BIOS screen to change Boot Options).
• Some computers require you to press a key to enable booting from disc – check your manual or the manufacturer’s website to find out if this is the case on your PC.
• Live-booting distros are read from the disc: they will not be installed permanently on your computer unless you choose to do so.

For best results: This disc has been optimised for modern browsers capable of rendering recent updates to the HTML and CSS standards. So to get the best experience we recommend you use:
• Internet Explorer 8 or higher
• Firefox 3 or higher
• Safari 4 or higher
• Chrome 5 or higher

Problems with the disc? Send us an email at linuxuser@imagine-publishing.co.uk. Please note, however, that if you are having problems using the programs or resources provided, then please contact the relevant software companies.


Disclaimer: Important information

Check this before installing or using the disc For the purpose of this disclaimer statement the phrase ‘this disc’ refers to all software and resources supplied on the disc as well as the physical disc itself. You must agree to the following terms and conditions before using ‘this disc’:

Loss of data

In no event will Imagine Publishing Limited accept liability or be held responsible for any damage, disruption and/or loss to data or computer systems as a result of using ‘this disc’. Imagine Publishing Limited makes every effort to ensure that ‘this disc’ is delivered to you free from viruses and spyware. We do still strongly recommend that you run a virus checker over ‘this disc’ before use and that you have an up-to-date backup of your hard drive before using ‘this disc’.

Hyperlinks:

Imagine Publishing Limited does not accept any liability for content that may appear as a result of visiting hyperlinks published in ‘this disc’. At the time of production, all hyperlinks on ‘this disc’ linked to the desired destination. Imagine Publishing Limited cannot guarantee that at the time of use these hyperlinks direct to that same intended content as Imagine Publishing Limited has no control over the content delivered on these hyperlinks.

Software Licensing

Software is licensed under different terms; please check that you know which one a program uses before you install it.

Live boot

Distros
Insert the disc into your computer and reboot. You will need to make sure that your computer is set up to boot from disc. Distros can be live-booted so that you can try a new operating system instantly without making permanent changes to your computer.

FOSS
Insert the disc into your computer and double-click on the icon or Launch Disc file to explore the contents.

Explore
Alternatively you can insert and run the disc to explore the interface and content.

• Shareware: If you continue to use the program you should register it with the author
• Freeware: You can use the program free of charge
• Trials/Demos: These are either time-limited or have some functions/features disabled
• Open source/GPL: Free to use, but for more details please visit https://opensource.org/licenses/gpl-license

Unless otherwise stated you do not have permission to duplicate and distribute ‘this disc’.



08 News & Opinion | 12 Interview | 96 FileSilo

HARDWARE

$8 Raspberry Pi challenger revealed

The NanoPi NEO is big on features, but for a wallet-friendly price

Of course the Raspberry Pi is no expensive piece of kit; in fact, it’s one of the best ways for budding builders and tinkerers to get started with computing without breaking the bank. With units of the original Pi and its numerous successors selling in their millions, it was inevitable that a whole host of competitors would try to get in on the action. We’ve encountered many of them during our time on Linux User & Developer, and it’s truly hard to beat the value for money and ease of use that the Raspberry Pi can offer. One of the newest, and also arguably one of the best, to arrive on the scene is the NanoPi NEO. Produced by the relatively well-known FriendlyARM brand, the biggest selling point of this tiny computer is its $8 price tag. But it’s not until you take a closer look at the internals of the NanoPi NEO that you realise just how much of a steal it is at that price for both beginner and advanced users alike. The NanoPi NEO is available with 256MB or 512MB of RAM, with the larger variant only slightly more expensive at $10 and worth the upgrade for the added power. It’s also remarkably smaller than other noted competitors, with a 40mm square board powered by the Allwinner H3 quad-core processor clocked at 1.2GHz. For most beginner-level projects, that’s more than enough power, and FriendlyARM itself lists a number of projects for users to try out themselves on its website.


Above The 40mm square board is one of the smaller Pi competitors on the market

The processor and RAM combination is also much faster than the Pi Zero’s offering, though it falls slightly short of the bigger Raspberry Pi models. Dig even deeper and you’ll find a 10/100 Ethernet port, three USB 2.0 ports and an additional microUSB port. Despite its size, there’s also room for a microSD slot and a debugging serial port to top it off. Expansion also plays a key role in the development of the NanoPi, with a 36-pin GPIO header also included for good measure. We should also add that the GPIO here includes UART, SPI, I2C and IO, all integral parts of a developer’s arsenal.

The NanoPi NEO comes pre-loaded with Ubuntu, but it’s also being marketed as adaptable to work with many other distributions. The number of ports also allows it to work with many other expansion sets. So expect to be able to build things like a USB camera, a GPS system, a USB Wi-Fi device and a multimedia streamer. The only real limit is your own creativity and imagination. At the time of writing, stock of the NanoPi NEO was plentiful, but given the initial press feedback, it’s expected that stock will soon dwindle. You can order your own NanoPi NEO over at www.friendlyarm.com.


NVIDIA

Nvidia unveils big changes in graphics driver update

Bug fixes are plentiful in Nvidia’s latest update

It’s been just over a month since we last saw the 367.27 video driver update from Nvidia, which first introduced support for its costly, but impressive, GeForce GTX 1080 and GTX 1070 graphics cards on the Linux kernel-based operating system. But that short timeframe hasn’t stopped it releasing the next milestone update, 367.35. In its current form, the update is available for all UNIX-like operating systems, including Solaris, FreeBSD and GNU/Linux, although wider support is expected to appear in the coming weeks. The main thrust of this update is to provide fixes and tweaks to an array of issues first highlighted in the previous version. One of the key fixes stops console corruption when users resume activity after using the suspend feature, something that would often lead to additional errors appearing on other areas of the desktop. Another big hole that’s been plugged is within the nvidia-settings configuration tool. Previously, there was a frequent crash on displays using 8- or 15-bit colour depths when using the configuration tool. Based on user responses, it seems that both of these crashes have been rectified in the update. Another small bug that’s been squashed addresses the system crash when the peer-to-peer mapping tool was enabled, but we’ve yet to see the effectiveness of this in action. As well as bug fixing, the one key inclusion of the 367.35 update comes in the form of new support for VDPAU (Video Decode and Presentation API for UNIX). For end users, it will enable their Nvidia graphics card to decode 8K encoded video streams, a massive step for media consumption for Linux users. It’s expected that the feature will be fleshed out over the coming months, as Nvidia explores the true capabilities of VDPAU. Last but not least, the overall buffer write performance has been massively improved in the update. The implementation of DRM Dumb Buffers should make the buffer write process that little bit more manageable. Users can download Nvidia 367.35 for their 32- or 64-bit system over at nvidia.com, or directly through the Nvidia desktop client installed on their machines.

TOP FIVE

Lesser-known Linux distributions

1 Scientific Linux
Scientific Linux is definitely for those who have spent years learning the ins and outs of the wonderful world of Linux distributions. It includes a variety of packages, including Cluster Suite, FUSE and Squashfs, which all work in tandem with one another to create a seamless user experience.

2 Yellow Dog
Although Yellow Dog was originally released for use on early Apple computers, under its current management it has been transformed into a high-performance distribution that doesn’t have a steep learning curve.

3 Ubuntu Studio
Whether you’re into music, video or photography, Ubuntu Studio is the ideal choice for media enthusiasts. It comes packed with an array of media-friendly apps and a host of helpful editing tools to boot.

4 Parted Magic
Parted Magic is half distro, half program. It includes all the necessary tools you could possibly need to fix broken partitions, and is especially useful when trying to diagnose something that simply won’t boot.

5 Bodhi Linux
Perhaps the best known of these is Bodhi Linux, a unique distribution in terms of both its looks and usability. It includes a fantastic window manager system, which is handy to have when organising your desktop.




SKYPE

Alpha release of Skype for Linux is finally here

An all-new Linux client brings voice calls back to your machine

It’s been a while since we’ve seen a release from Skype for its Linux client, so much so that the old client was stuck on version 4.3, whereas the Windows version is on version 7. Instead of providing a new update for the current client, Microsoft has launched an all-new client for Linux users to enjoy. Due to the client currently being in its alpha stage, there are numerous restrictions currently involved.


Above Microsoft has ditched the old Skype client and launched something new

Behind the scenes, Skype is working on a big overhaul of its network infrastructure to help improve connection performance and keep server problems to an absolute minimum. There are also restrictions on the type of calls that can be made, with only voice calls working at the time of writing. And for those hoping to use the old version of Skype in the meantime, it won’t work in tandem with the new client, so be warned. In its current state, only users of the current OS X, Windows or Android version of Skype will be able to get their hands on the alpha release of Skype for Linux, but a more widespread release will be taking place soon.

Also new from Skype is a second client for Linux users to get excited about. Skype Web Client will work both in Chrome on Linux machines and on any Chromebooks running Chrome OS. By and large, the core capabilities of the program are the same as the desktop version, and it uses Chrome’s WebRTC support to provide a plugin-free Skype experience. Users should head over to Skype.com for download links and installation guides for both of these clients.

OPEN SOURCE

Toyota and others join Open Invention Network

Established back in 2005 in a bid to defend Linux against intellectual property attacks, the Open Invention Network was spearheaded by an array of leading technology brands. While the likes of Red Hat, Sony, IBM and Philips are all credited with launching the alliance, it has grown dramatically to cover many areas outside of core technology companies. One of the last remaining areas not to get heavily involved with the Open Invention Network has been the automotive industry, but that has just changed for the better. Toyota has led a wave of leading automotive manufacturers to join the network, and their reasons for doing so are clear. With the rise of


self-driving cars, many of which are embracing the open-source ethos, patent protection is at an all-time high. Also joining Toyota are Ford Motor, Kia and Hyundai. In this case, the same rules apply as for any other company joining the network; members need to agree to share their patents under the OIN License Agreement, arguably to stop any internal battles between members.


In return, all members have a say in how the Open Invention Network handles its business and enforces its shared patent pool. Considering the potential cost of patent enforcement and litigation, the alliance has proven to save many companies a lot of money and headaches. In total, the Open Invention Network boasts over 2,000 members, covering everything from mobile, IoT and embedded technologies to, now, automotive. Following the recent rise in members, rumours are hinting that Microsoft may soon be jumping on board with the Open Invention Network, but nothing is confirmed.


ANDROID

85 million Android devices compromised

HummingBad malware infects devices with an array of malicious apps

Over 85 million Android devices have been taken over by a group of China-based cybercriminals, who have created the root-based HummingBad malicious code. The malware looks to establish a rootkit on its target device, which in turn looks to install fraudulent apps and spam ad revenue. If HummingBad is unable to implant the rootkit, it instead mass-spams an array of fraudulent apps. As expected, the device then becomes unusable. No matter which parts of the HummingBad install fail, the blend of several malicious components is so toxic that it is near-impossible for an infected Android smartphone to escape the consequences. According to a report from Check Point, the team behind HummingBad has been generating revenue of US$300,000 a month targeting Android smartphones. The number of infected devices varies drastically between countries. China and India have the biggest infection rates, with 1.6 and 1.3 million respectively. The United States is eighth on the list, with 287,000 victims. It’s believed that a further 3 million devices could end up being infected by Q4 of 2016. While there’s no known fix for the HummingBad code, all Android users should take the necessary precautions to secure their device. This means setting passwords, being wary of public Wi-Fi and only downloading apps from trusted sources.

CLOUD

IBM launches Blockchain Cloud Services

Staying connected to the cloud has never been easier

An all-new cloud environment for business-to-business networks will allow companies to test the privacy, performance and interoperability of their ecosystems, on a widespread scale that simply wasn’t available previously. Launched by IBM, Blockchain is currently in beta, but is largely expected to be made available by the end of 2016. Blockchain primarily consists of a distributed database, offering a secure way for companies to store records and other important data. For small businesses, it’s seen as a great way to manage accounts, while for larger businesses, it’s a new way to manage digital assets and stay in control of other key areas of the business. The new environment is currently undergoing work on a massive scale, aimed at ironing out key security issues before its full launch. It should also be added that Blockchain will work in tandem with IBM Cloud, one of the more popular cloud solutions for large-scale businesses throughout Europe and beyond. Interested parties should head across to IBM.com for further information.

DISTRO FEED

Top 10 (Average hits per day, 30 June – 31 July)

1. Linux Mint 3,013
2. Debian 1,937
3. Ubuntu 1,761
4. openSUSE 1,216
5. Manjaro 1,180
6. Fedora 1,139
7. Zorin 946
8. CentOS 875
9. elementary 865
10. Arch Linux 836

This month ■ Stable releases (15) ■ In development (6)
The CentOS 6.8 release has been received well by users. It features a number of fresh changes, including a combined repository for installation.

Highlights

elementary OS
The second beta of the elementary OS 0.4 update has garnered plenty of love this past month. Over 70 issues have been fixed and patched since its previous update, making it one of the more complete beta packages out there.

Linux Mint
Mint is still riding the crest of the wave from its amazing Mint 18 update. We reviewed it this issue, and found it to be one of the best releases based on Ubuntu 16.04.

Zorin
While Zorin still provides one of the better ways for users to get started with Linux, the increased competition it’s facing has seen it gradually lose monthly downloads.

Latest distros available: filesilo.co.uk




INTERVIEW BOKEN LIN

The last word in SBCs

Onion’s Omega2 is one powerful microcomputer, and best of all, it’s just $5. Co-founder Boken Lin tells us what we can expect from this amazing piece of kit

Boken Lin

Boken Lin has been one of the main players in the development of both the Omega and Omega2 microcomputers. He’s an active part of his own community, liaising with users to garner user feedback for future projects over at www.onion.io

Where did the idea for Omega2 come from? Was the development process a long one?
We really started working on the Omega2 a few months after our last campaign for the Omega1 had finished. The objective for the Omega1 was to bring over the powerful technologies used in web development to the world of hardware development. In other words, the Omega1 was a hardware development board for software developers.


That worked very well, and we introduced over 10,000 software developers to hardware development for the first time. But we didn’t want to stop there; instead we wanted to get even more people, ideally everyone, to join the maker movement. So we went out and did some market research. After a lot of research, we found two things in particular. One, cheaper development boards get huge traction; and two, many people don’t want to program (so it’s not a matter of using web technologies to program, they don’t want to program at all). So we got to work. We got the manufacturing price of the Omega2 down by partnering with more experienced manufacturing partners, and we spent a lot of time improving our development environment to make it even easier to use. This means that users who don’t want to do any programming can still make use of the Omega2. To further aid this idea, we integrated Node-RED, which is a drag-and-drop programming interface that works straight out of the box. We hope this system is intuitive enough to allow everyone, whether you are a programmer or not, to start developing hardware projects with the Omega2.


The original Omega was a great piece of kit; how much did you learn from the original Omega to help you develop the Omega2?
A lot. We have a very active community, and community members often give us plenty of advice on what we should improve on the Omega in terms of both the software and the hardware. Right before starting to work on the Omega2, we asked our users what they wanted to see on a revised version of the Omega, and most people asked for a microSD card slot, because the original Omega was pretty constrained in terms of storage space. In addition to that, we also spent a lot of time polishing up the software and how end users get to grips with it. We also spent a lot of time fixing the bugs in our Cloud development environment. After that, we added a few new features such as Cloud Compile to make it easier for users to cross-compile code for the Omega. So instead of needing to set up a cross-compile environment for the Omega, they simply upload the source code to our cloud, and the compiled binary is deployed automatically to the Omega. It’s a lot easier than how things used to be.

What do you feel are the key features of the Omega2? In your opinion, does it stand out from the Raspberry Pi?
The Omega and the Omega2 are positioned to be in-between an Arduino and a Raspberry Pi. The Arduino is great for hardware applications, it has many hardware-specific interfaces, but it is very resource-constrained, so you can’t do much other than interfacing with other hardware peripherals.

Staying in the Onion Cloud When investing in a product like the Omega2, chances are you’ll have additional connected products to hand. The problem is, managing them all simultaneously can be a difficult task. As well as hardware solutions, Onion also provide its own cloud management system, Onion Cloud. The system looks to help keep the controls of your connected devices together, enabling you to sync them up and create more projects with them. At its core is the Device Manager section, which is the central hub for all connected devices. What’s even cooler about this section is that every device will be given its own unique identification number, and users can also see the current status of said device and the users that have access to it. For developers, there’s also the highly useful Cloud Compile. With this, users can upload C code and a range of compatible binaries to any device stored through the system. It makes developing for an embedded device easier than it has ever been. Interested? Go ahead and download the Onion Cloud over at onion.io/cloud.




The best add-ons for the Omega2

While the core Omega2 board is powerful enough, users can really make it their own by implementing a range of unique add-ons and modules. There are a lot of them to explore, but here are a few of our favourites.

Bluetooth LE Expansion
This USB expansion adds Bluetooth 4.1 connectivity to the Omega2, so that it can interact with a range of Bluetooth-connected devices. It’s also compatible with most smartphones running iOS 5+ and Android 4.3+.

Mini Dock
The Mini Dock is designed to help power your Omega for apps that do not require or use its GPIO pins. Its small size means it’s a handy accessory to have when it comes to using it alongside wireless applications such as video streaming and printing.

The Raspberry Pi, on the other hand, is a full Linux machine that has a huge amount of computing power. But the Raspberry Pi isn’t designed for hardware applications. It doesn’t have many hardware interfaces (such as I2C, SPI, etc); instead, these all need to be emulated over GPIO. It’s also bulky, very power-hungry, and just overkill for most hardware projects. So we aimed to make the Omega the best of both worlds. It is small, very power-efficient, has most of the hardware interfaces, but [is] still powerful enough to run Linux. So you can program it just like a Raspberry Pi or other Linux machines (so you will be able to use all the tools and languages you are already familiar with), but it is designed specifically for hardware applications.

Just how powerful is the Omega2?
It can do pretty much everything the Raspberry Pi can do, except for video output. We have been able to run Linux applications such as vi, Emacs, Git and Apache, and use Python, Node.js and PHP, and it doesn’t really have any issues. We implore people to really explore what sort of applications they can run on the Omega2.



Ethernet Expansion
As its name suggests, this expansion adds a 100Mbps Ethernet port to the Omega2. The unit plugs directly into the Expansion Dock, which you’ll also need to buy. The Ethernet Expansion is particularly handy when it comes to quickly reflashing the firmware of an Omega unit.


We like the modular aspect of the Omega2; what sort of add-ons can people integrate into it?
We want to keep the Omega2 as an open platform. All the add-ons communicate with the Omega2 using standard communication protocols such as I2C, so there is really no limit to what kind of add-ons people can integrate into it. As long as it can communicate with the Omega2 through UART, SPI, I2C, GPIO or Wi-Fi, you can actually integrate anything with it. We’ve got a number of these listed through our site.

At the time of writing, you’ve raised a staggering $120,000 through your Kickstarter campaign; are you surprised with the backing you’ve had so far?
Yeah, we are surprised that we were able to pass the $100,000 mark so quickly. With our last campaign, things happened a lot slower. I think by fulfilling our last campaign, we established trust in the minds of our users, and they supported us as a result. Having the community behind us is a big boost to how we feel about the Omega2.


Looking forward, what are your plans for the Omega2 in the next 12 months?
The most important thing we are going to do for the Omega2 is to create a complete hardware development course with it. We will be working with high schools and community colleges to come up with a list of projects students can build using the Omega, and then turn these projects into systematic courses that introduce people to the world of hardware development. In addition, we have many people coming to us wanting to use the Omega2, as well as our cloud solution, as part of their commercial product systems, so we will be working with them to iron out their specific needs. We are hoping to see many smart products on the market that are built with the Omega2.




OPINION

The kernel column

Jon Masters summarises the latest in the Linux kernel community, as Linus Torvalds releases Linux 4.7 and work towards 4.8 continues

Jon Masters

is a Linux-kernel hacker who has been working on Linux for some 19 years, since he first attended university at the age of 13. Jon lives in Cambridge, Massachusetts, and works for a large enterprise Linux vendor, where he is driving the creation of standards for energy efficient ARM-powered servers


Linus Torvalds announced Linux 4.7 (final) roughly when expected, under a revised schedule that included a one-week slip for his travel plans, which resulted in an additional week added between rc7 and final. In his announcement email, Torvalds noted that the patch delta between 4.7-rc7 and final was mostly just trivial one-liner fixes, even given the extra week. He added that, “Judging by the linux-next contents”, Linux 4.8 was “going to be a bigger release than the current one (4.7 really was fairly calm, I blame at least partly summer in the northern hemisphere)”.

Linux 4.7 includes a number of new features, including support for parallel directory lookups for entries within the same filesystem directory (which will really improve performance when dealing with large directories filled with very many files). We’ve covered parallel directory lookup optimisations previously (see issue 168 for a detailed description). Of the new 4.7 features we have not covered previously, certain readers will likely be very interested in EFI ‘Capsule’ updates as a standardised mechanism to locate and deploy system firmware upgrades (especially at the data centre or fleet-wide level), both on x86 and 64-bit server-class ARM systems. Others (in particular laptop users) will probably enjoy tracking some of the ongoing work to provide a power-efficient scheduler able to make runtime decisions based upon power use through the new ‘schedutil’ CPU performance governor.

Meanwhile, work is still continuing toward what will be known as Linux 4.8. A number of exciting features have been merged during the merge window (the period of time – usually two weeks early in a development cycle – during which disruptive changes that have previously been vetted in the linux-next test kernels are allowed to be merged into the mainline kernel source tree) and will land in the final 4.8 release. These include a number of critical features for 64-bit ARM server systems, such as support for PCIe adaptors and NUMA-based memory (specifically, for both of these, when under an ACPI-based hardware enumeration regime), as well as the kexec infrastructure required to later support kernel crashdumps (which some distros already carry support for).

Other features being merged into Linux 4.8 include support for NVMe-over-fabrics (the ability to connect to non-volatile next generation memory technologies from a distance, such as across a data centre, which we will have quite a bit to say about in future issues), some of the work toward virtually mapped kernel stacks (as featured in last month’s edition), a new feature to allow easy updates to ACPI tables (such as to test fixes to vendor bugs in your laptop firmware, for debugging purposes), and a new kernel random number generator algorithm (which is being merged in spite of the random number code accidentally not making it into any linux-next tree prior, due to an oversight). We’ll cover a few of these in greater detail as Linux 4.8 takes shape and heads toward release.

EFI Capsule Updates

Linux 4.7 includes support for UEFI ‘Capsule’ Updates. UEFI (Unified Extensible Firmware Interface) is the contemporary firmware used by all newer x86 systems (of every type), as well as ARM server-class systems. It has heritage dating back to the days of the Intel Itanium (for which it was originally created, and was known without the U for Unified, hence we still talk of ‘EFI’ applications, which may better be called UEFI apps), but it later replaced both the PC BIOS and similar low-level system firmware on a number of different architectures. UEFI provides a defined set of interfaces for operating system software to interact with platform firmware for such operations as loading OS software, rebooting, getting and setting the time of day, and (in UEFI 2.5+ compliant firmware) providing updated system firmware. With UEFI 2.5, gone are the days of needing to use a specific vendor-provided BIOS upgrade utility, and a number of related headaches. Those system upgrade utilities used to be the bane of Linux users everywhere since they often required Microsoft Windows, or at least access to a Windows system. UEFI has made this easier – thanks to EFI-based update applications – but there has still been a lot of variance between platforms when it comes to applying firmware updates. In future, not only can Linux users avail themselves of a single standard mechanism for loading firmware updates through a simple kernel-provided sysfs (/sys) interface, but they can also leverage the new ‘fwupd’ firmware update daemon, which forms a part of future GNOME releases, to automate the process. UEFI 2.5 was developed with the support of many in the Linux and Microsoft ecosystems, including Microsoft itself, which is a very different company these days. In 2016, with a diverse ecosystem and broader scope that includes Microsoft’s Azure Cloud services, Microsoft itself actually benefits enormously from having a completely standards-based mechanism for applying firmware updates. To find out more about Capsule-based updates, read the Intel blog post on the subject here: https://blogs.intel.com/evangelists/2015/06/23/better-firmware-updates-in-linux-using-uefi-capsules/.
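To give a rough feel for what the userspace side of this can look like, here is a sketch rather than anything taken from the article: the fwupd commands require the fwupd package, and the capsule device node only exists on kernels built with the capsule loader (CONFIG_EFI_CAPSULE_LOADER); the filename is a placeholder.

# Ask fwupd which devices it can update, then fetch and apply any available updates
fwupdmgr get-devices
fwupdmgr refresh
fwupdmgr update
# Alternatively, hand a vendor-supplied capsule file straight to the kernel's loader
sudo sh -c 'cat firmware-update.cap > /dev/efi_capsule_loader'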

Schedutil Performance Governor

Linux 4.7 includes the first component of what will hopefully become a comprehensive solution to the problem of influencing runtime system performance and power use based upon the immediate scheduling needs of the running programs (tasks). Contemporary Linux systems use a variety of tricks to reduce their energy use, including runtime CPU frequency management (and DVFS – Dynamic Voltage and Frequency Scaling) through a variety of CPU governors, as well as CPU idle management (shutting cores down to a very low energy state briefly when they are in the idle loop due to lack of activity). Each of these techniques provides a benefit in terms of energy savings, but they rely upon (near-term) historical system performance and scheduling metrics, rather than the instantaneous needs of the kernel’s scheduling algorithms. The scheduler has a strong awareness of the dynamic needs of the overall system in terms of throughput, latency and overall activity. Until now this has largely gone untapped in the various performance management software contained within the kernel. Linux 4.7 begins to address this by coupling the schedutil CPU governor to real-time data provided by the scheduler as it makes its decisions. Subsequent releases will build upon this to glue all of the existing pieces into one cohesive whole. This comes at the right time, since modern SoCs (Systems-on-Chip) – which we used to simply call microprocessors – can adjust their performance characteristics near-instantaneously, allowing your laptop to dynamically adjust its power and performance in a much more fine-grained way to save battery life, while even providing a more responsive experience than under past regimes.
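If your kernel has been built with the new governor, switching over to it uses the same cpufreq sysfs interface as the older governors. A minimal sketch, assuming schedutil actually appears in your kernel’s list of available governors:

# See which governors this kernel offers for CPU 0
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
# Switch every CPU over to the schedutil governor
for gov in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    echo schedutil | sudo tee "$gov"
done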

Ongoing Development

Intel’s Cache Allocation Technology (CAT) has seen much more development over the past month, as a renewed push coupled with presentations at LinuxCon Japan aimed at getting this technology into the core Linux kernel. Briefly, CAT provides a means for system administrators, hypervisor authors and systems software developers to partition system cache resources in order to provide a level of QoS (Quality of Service) and resource isolation between virtual machines and processes (tasks). CAT requires a means to describe and control cache topology in a more fine-grained manner, and that is currently getting some pushback.

PCIe device passthrough into guest virtual machines on 64-bit ARM server systems (one of the final pieces required for a full OPNFV or optimised hardware-accelerated OpenStack experience on ARM) came a step closer to reality as the twelfth iteration of a patch series adding this support to KVM met with little pushback and positive commentary from reviewers. This comes at an interesting time, as a number of others are pushing patches aimed at providing optimised userspace IO (UIO) for DMA operations to be managed by userspace software libraries without kernel intervention. This should further accelerate performance for networking accelerators controlled by software running outside the kernel, for example in a VM.

Finally this month, a Year 2038 fix was applied to the jbd2 (journaling block device 2) code used by the ext4 filesystem. Although the filesystem itself is clean against the Y2K-style 32-bit Unix time counter wrap that will hit all 32-bit Linux systems in 2038, the journal was not safe on such systems.



Feature

Ultimate Rescue Kit

ESSENTIAL FIXES

A rescue disc should include the following tools or variants

Recover photos and data

Lost data is one of the most common problems for computer users across all platforms, but in most cases the data will persist on your hard disk drive even though it is invisible to the operating system. A successful data recovery tool should handle photos, videos and other user-generated data.

ULTIMATE RESCUE KIT

Discover 10 ways to fix your PC with the ultimate rescue kit on the cover

Whether you’ve lost data and need a means to recover it, or you have an HDD infected with a worm, or a corrupted GRUB boot loader, or some other hardware or operating system issue that cannot be resolved using an installed OS, then what you need is a rescue disc. While you might think that such a collection of tools can be tricky to squeeze onto a single CD or DVD-ROM, the truth is that you can get some very useful data recovery, diagnostic and anti-virus utilities, and more, in some of the smallest Linux distros available. They’re not just for optical media either – you can flash these distros to a USB stick for portable convenience.


In fact, the options here are considerable, from individual FOSS utilities and applications – almost all running in the command line, rather than from the desktop – to entire distros providing you with the tools to recover a system from the convenience and safety of a live environment. Rootkits can be removed, BIOS re-flashed, drives cloned for recovery… every hardware and data recovery permutation is covered. Plus, with a rescue disc to hand, you’ll be able to resolve data and security issues not just on your Linux computer, but also on those of your Windows-using friends and family – while persuading them to switch to Linux, of course!

Recover lost partitions

Lost HDD partitions can be recovered with disk-specific recovery tools, typically allowing you to make partition-level fixes. Additionally, such tools should let you recover partitions, make disks bootable, restore boot sectors, and undelete and copy files from NTFS, FAT, exFAT and ext2 file systems, enabling data recovery from any operating system.

Clone your HDD

For disks that are about to fail, time is of the essence, and your priority should be to find your vital data and archive it. If there is time after this, however, cloning the drive is the answer, creating a copy of the complete file structure onto a new device.

Repair the Master Boot Record

The MBR determines how your computer boots, and is particularly important on dual- or multi-booting setups. But if an update or disk error causes corruption of the MBR, you'll no longer be able to run any operating system on your PC. MBR repair tools can resolve such problems, often quickly.

Remove malware

While it is highly unlikely that a Linux system will be affected by malware, dual booting with Windows means that it is a possibility. Furthermore, if someone you know has issues removing malware from a Windows PC, Linux recovery suites usually include at least one malware removal tool.


Recover lost partitions

TestDisk

Most hard disk issues can be repaired with TestDisk

It’s hard to think of an HDD issue that cannot be resolved with TestDisk. While it is unsuitable for use on the disk running your operating system, TestDisk features a multitude of useful features and functions, from restoring the MBR to cloning a partition. Available in all standard repos, you can install TestDisk in Ubuntu with:

sudo apt-get install testdisk

Then launch with:

sudo testdisk /list

Once launched, this command-line utility will display a list of partitions to choose from, then the file system that has been detected. In the next screen you'll find tools to analyse the disk, which should find lost partitions. Missing partitions can then be restored to the partition table using the Write option. Plenty of other options are available following disk analysis, from restoring lost data to rebuilding the MBR. The key to successfully using TestDisk is to be patient, and observe the messages it displays.
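If you want to keep a record of a repair session, TestDisk can also be pointed at a specific device and told to write a log. A minimal sketch, assuming the failing drive shows up as /dev/sdb (check with lsblk first – the device name is a placeholder):

# Identify the failing drive first
lsblk -o NAME,SIZE,MODEL
# Run TestDisk interactively against that device, writing testdisk.log
# to the current directory for later review
sudo testdisk /log /dev/sdb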


Recover photos and data

PhotoRec

You don’t have to let those deleted photos go forever

You don’t need a live CD running a dedicated system rescue distro to find lost data on your PC. Photos and videos are particularly precious, and if you forgot to back them up, accidental deletion can be frustrating. Perhaps the best solution is PhotoRec, which comes as part of TestDisk. To run it, simply enter:

sudo photorec

Ignoring the file system on your computer, PhotoRec scans for data left outside of the file system: the deleted data – not just photos and videos, but documents and other user-generated data. Recovered data is displayed with a real-time counter, and when the scan is complete you can then restore your files to a new location, first making sure that the destination drive has the capacity! Be aware that scanning a larger drive or partition will take a long time, commensurate with the capacity of the device. Restoring data can also take a while.
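PhotoRec can also be pointed straight at a particular partition, with recovered files sent to a directory of your choosing. A minimal sketch, assuming /dev/sdc1 is the card or partition you deleted files from and that a separate, healthy drive is mounted at /mnt/rescue (both names are placeholders):

# Write a log and send recovered files to a directory on a different drive
sudo photorec /log /d /mnt/rescue/recovered /dev/sdc1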




Clone your files

GNU ddrescue

Rescue data in the face of read errors

An HDD suffering from repeated errors is only heading in one direction: the bin. But how do you recover your vital files from a failing hard disk drive? The answer is making a direct copy of your data with ddrescue (in repos as gddrescue), which can be installed in any of the usual ways, and run as follows:

sudo ddrescue [options] infile outfile [logfile]

Run ddrescue --help to see the options, and make sure you specify the name of the logfile for checking later. The following example copies data without retries or sector splits, and is the best way to get started:

sudo ddrescue --no-split /dev/hda1 imagefile logfile

Not only can ddrescue copy a file’s contents, it will do this from an HDD, flash storage or an optical disc, simply by changing the infile accordingly, from /dev/hda1 to /dev/cdrom. The resulting imagefile – named accordingly – should then be written to a fresh optical disc. Included are some very useful features, such as the ability to append to an already existing log, as well as stitch copied data together from different destination drives.

Note: The MS-DOS (VFAT) file system cannot be used with images greater than 4GB.
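A common pattern is to make a fast first pass that skips the difficult areas, then go back and retry just the bad sectors, reusing the same logfile so no work is repeated. A minimal sketch, assuming the failing partition is /dev/sdb1 and /mnt/rescue is a healthy drive with enough free space (both placeholders; note that the --no-split flag used by ddrescue 1.16 on the disc is called --no-scrape in newer releases):

# First pass: grab the easy data quickly, skipping problem areas
sudo ddrescue --no-split /dev/sdb1 /mnt/rescue/sdb1.img /mnt/rescue/sdb1.log
# Second pass: retry only the remaining bad sectors, up to three times
sudo ddrescue -r3 /dev/sdb1 /mnt/rescue/sdb1.img /mnt/rescue/sdb1.log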

Analyse disks and find lost data

Sleuth Kit and Autopsy

Analyse UNIX-based and Windows disks, on the desktop!

Troublesome disks should be immediately analysed when trouble strikes, so that you can establish what the problem is. In most cases, sector repair will resolve issues, but there remains the chance that a problem is more deep-seated, requiring you to investigate further to establish whether or not the time has come to buy a replacement drive. Both Sleuth Kit and Autopsy are available as command-line tools, the latter providing information about disk data and file structures. Autopsy is capable of recovering lost data from Windows and UNIX disks with ease. Using Autopsy means launching it from the command line, like this:

sudo autopsy -d [FILEPATH] [IP ADDRESS]

For instance, if you wish to analyse and restore data from a disk connected to your local machine, and your IP address is 192.168.0.19, use:

sudo autopsy -d /media/disk/ 192.168.0.19

Autopsy and Sleuth Kit don’t even have to use a physical disk to recover data. A complete disk image will include the deleted data as well as the active files and folders, so if you have already backed up an image of a failed disk and want to recover a file from it, you can!
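The Sleuth Kit’s own command-line tools can pull individual deleted files straight out of such an image. A minimal sketch, assuming you already have an image called disk.img and that fls reports the file you want at inode 12345 (both placeholders):

# List files in the image recursively, showing only deleted entries and their inodes
fls -r -d disk.img
# Recover the contents stored at a given inode to a new file
icat disk.img 12345 > recovered-file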


Scan for viruses

ClamAV

Viruses and malware can be dealt with on any system

Regardless of the operating system or file system installed on the hard disk drive you're attempting to rescue, using anti-virus software like ClamAV can protect your data from further infection when the drive is next booted. After installing in the usual way, use the sudo freshclam command to refresh the virus signatures, which will enable you to scan for the most up-to-date viruses and other malware. The clamscan command is employed to scan your system, and works with a wide selection of parameters. As ever, it’s a good idea to check the help (clamscan --help) before proceeding. A simple scan of a specific folder, with an audio alert to sound when a virus is found, would be:

clamscan -r --bell -i /home/atomickarma/Downloads

In addition to the command-line tool, ClamTK can be installed. This is a desktop version of ClamAV, and allows you to choose any file or directory to scan. With this installed, you can also perform quick scans, or check for viruses on a USB stick. In addition, ClamTK features a scheduler, enabling you to schedule scans and updates. The interface also enables you to submit false positives to the ClamAV team.
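clamscan can also quarantine anything it finds rather than just reporting it. A minimal sketch, assuming you want to sweep a mounted Windows partition at /mnt/windows and move suspect files into a quarantine folder in your home directory (both paths are placeholders):

# Refresh the signature database first
sudo freshclam
# Recursively scan the mounted partition, listing only infected files
# and moving anything found into a quarantine directory
mkdir -p ~/quarantine
clamscan -r -i --move="$HOME/quarantine" /mnt/windows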


Find and remove rootkits

chkrootkit

Viruses aren’t the only security threats you might encounter

Rootkits are nasty pieces of malware that are able to escape detection by many antivirus scanners, often by subverting known methods of recognition. Dealing with rootkits – which can affect Linux systems as well as Windows – means employing specific software that is designed to detect these malicious tools before they do any damage. Rarely alone, rootkits are often accompanied by other forms of malware, and are typically used to hide Trojans, worms, and even backdoor access that has been configured by hackers. Dealing with rootkits can be tricky, but with chkrootkit installed in the usual way, you will find that things are simpler. Using the -h option when launching chkrootkit will reveal the various options available, while -l displays the available tests on offer. For the best results, mount the compromised disk on a PC you trust, specifying a new root directory:

sudo chkrootkit -r /mnt

To uncover suspicious activity – typically something that will point to a rootkit – enter:

sudo chkrootkit -x | less

Or use | more for more verbose results. If a rootkit is detected, chkrootkit will report it so that you can deal with the culprit.
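It’s worth keeping a dated copy of chkrootkit’s output so you can compare results between scans. A minimal sketch, assuming the suspect disk is mounted at /mnt:

# Run the full battery of tests against the mounted disk, keeping a dated log
sudo chkrootkit -r /mnt | tee chkrootkit-$(date +%F).log
# List the individual tests available, then re-run just the ones you need
chkrootkit -l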




Clone your hard disk drive

Clonezilla

Need to clone an entire hard disk full of data?

The time will no doubt come in the lifetime of your computer when a hard disk drive starts struggling. Perhaps it’s making a clicking and whirring sound; perhaps the disk is slow to respond. Whether this is due to malware, a past life running Windows, an accidental drop or just old age, when the drive starts to get slow and noisy, you know it’s time to do something about it. That something will typically be to replace the drive, but once you’ve got something newer with a larger capacity (the same size is an option; smaller probably not), the next step is to copy your data. Various options are available here, but if you’re seriously concerned about the quality of the drive, running a live disc with Clonezilla should be your next step. With old and new hard disks connected to your computer, and Clonezilla running, you’ll be able to select a few options and leave this utility running in order to clone the data from the suffering disk to the new device. Image files can be cloned to disk or server, and most file systems that you’re likely to encounter – including Windows, Mac and Linux – are supported.

01

Download and burn Clonezilla

To clone your hard disk drive, you’ll need Clonezilla downloaded and written to CD, DVD or USB flash drive – or mount it from the cover disc! Download from clonezilla.org, then use your distro’s built-in tool to burn the CD or DVD. If you’re writing to USB, you will also need Tuxboot (from tuxboot.org) – this features a wizard-style user interface to write the data to the USB drive and make it bootable, so simply install and run the software to produce your bootable USB of Clonezilla.


You’ll need to ensure that the destination drive is equal in capacity to, or larger than, the source device

02

Choose the right mode

With your computer rebooted and the Clonezilla live CD running, you’ll be presented with a menu. Here, you can choose between device-image – which creates an image of the source disk (typically your dying HDD) – and device-device, which works on a disk-to-disk basis, cloning one disk to another, preferably new, hard drive. Choosing the right option here is important, as it will determine how you use the resulting backup. Use the arrow keys to make your choice, then select OK and hit Enter.

03

Making a disk image?

If you select the device-image option, you will then be greeted with a further menu, requesting that you select the intended destination of the image file. This might be to a locally connected device, or to one stored on your local network. This menu requires that you understand what all of the entries are referring to, and that you have a network device that matches, so be careful how you make your choice. If you want to go back, select Cancel.


04

Choose the wizard mode

Whichever option you chose, the subsequent screen will invite you to choose between Beginner and Expert mode. The former features default options, while the latter lets you choose your own. First-timers should typically choose Beginner, unless you have experience with disk cloning, and the defaults shown will relate to the internal and external hard disk drives you have connected. Note that throughout the process, you can also select Exit – this option will appear on subsequent screens, allowing you to quit the disk cloning process at any stage.

05

Select your cloning type

The next choice concerns the type of disk cloning you want to initiate, and each option is explained. For instance, disk_to_local_disk will clone the system disk to another local drive. Making the right choice here is imperative, as is ensuring that the target device has no data on it that you wish to keep. Clonezilla will overwrite the data on the target device, whether it is stored in a partition or not. If it’s a new drive, of course, then this should not be a problem.

06

Recognise your disks

With your option selected, it’s time to choose the disks you’re going to use. The first disk you choose will be the source, and its name will be clearly displayed, along with the capacity. Note the device names of the disks: connected devices will start with hda or sda, and the naming continues with hdb or sdb, with the third letter proceeding through the alphabet. So some disks might be labelled sdd, or hde. It’s unlikely that you’ll have too many physical disks connected, however. If you’re not sure which device is which, the sketch after these steps shows how to list them from a terminal first.

07

Select the source and destination disks

Remember that when selecting your source and destination devices, you’ll need to ensure that the destination drive is equal in capacity to, or larger than, the source device. Observing the displayed capacity of each drive will help you here. With the source device selected, tap OK and choose the destination drive. Again, keep an eye on the disk capacity and disk label to help avoid mistakes. Note that in the accompanying image we’re attempting to clone USB flash devices. These low-capacity devices will clone faster than, say, a 1TB HDD.

08

Set extra parameters

Before disk cloning can begin, Clonezilla offers you the chance to set some extra parameters. This is provided under the assumption that the source device has some file system errors that may need resolving before the device is cloned. Three options are available: Skip, Interactively check and repair, and Auto check and repair. Note that the third option comes with a clear caveat, a caution that indicates automatically checking and repairing the source file system without user interaction is risky. Make your choice, and proceed with the OK option.

09

Get ready to clone!

In the next step, you’re ready to clone your device to a new drive, or create the cloned image file. A summary of the computer will be displayed, along with a warning informing you that the data on the target device will be lost. When you’re happy, simply hit Y to continue, and wait for the disk cloning to complete. If you feel like you want to go back and make sure, enter N and when you’ve been dumped back to the command line, enter clonezilla to restart the wizard.
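If you want to double-check which physical disk is which before letting Clonezilla loose, it can help to list the connected devices from a terminal first (on the Clonezilla live system you can drop to a shell, or run this from any live distro). This is just a quick sketch using standard tools:

lsblk -o NAME,SIZE,MODEL,TYPE,MOUNTPOINT
sudo fdisk -l

Matching the reported sizes and model names against the sda/sdb labels shown in Clonezilla is a simple way to avoid cloning in the wrong direction.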




Rescue optical discs and backup tapes Safecopy It isn’t just hard disk drives that need recovering! If you wish to recover data from old, corrupt, scratched, or even decaying CD- or DVD-ROMs (older CDs are prone to chemical oxidation, a process known as disc rot) then the Linux rescue disc app you need to be looking at is Safecopy, a command-line tool that can read damaged optical discs and retrieve lost data. This means that not only can you retrieve individual files, you can also make an image of the entire disc, and then scan the resulting image file to find the data you’re looking for. Reading corrupted discs can be tricky, however, and unless you’re using a suitable device, you’ll have trouble recovering anything. Rather than a standard CD or DVD reader, you’ll need to employ a DVD rewriter, as these are more precise. Alternatively, you could try a multiread player, but these still aren’t as precise as the rewriters. Safecopy is a command-line tool, and as such has no graphical user interface. However, you really don’t need one to work with corrupt CDs, as the utility has enough features and options built in to effortlessly create images and even combine partial images of corrupt data to create a third, complete file. You can find Safecopy in the Software section of the cover disc.

01

Use Safecopy in the command line

With Safecopy installed onto your hard disk drive (or running from a USB rescue device – after all, you’ll need your computer’s optical drive free to load up the old disc!), run the utility from the command line with:

sudo safecopy

This will immediately display the various commands and conditions that are associated with the tool, so take some time to read through these. If you want more help from within the app, enter:

man safecopy

…to display the manual.

02

Create a CD/DVD recovery image

To create an image of the currently loaded CD or DVD-ROM, use this command:

safecopy /dev/cdrom image.iso

You can change this file path depending on the device you’re attempting to recover data from, such as a tape drive. Upon entering, this will prompt the disc to be scanned, and you’ll see a bunch of relevant information concerning the disc and the block size; all of this can be adjusted to deal with particularly troublesome discs. Creating a full disc image can take quite a while, of course, depending upon how big it is.

03

Try repeat reads for better results

Another useful option is -r, which can be employed to force multiple reads of the disc, specifically to handle bad sectors.

safecopy -r 5 /dev/cdrom

By default, sectors are read three times. It’s conceivable that additional reads will deliver better results, but this isn’t always the way. Fortunately, we have some alternative tools that we can employ with Safecopy. The tool can also be instructed to produce a log of results, with the --debug command:

safecopy -r 5 --debug /dev/cdrom temp.iso

04

Adjust the block size

Safecopy’s default setting for reading block size is 4096 bytes, but this can be adjusted for better results. For example, 4096 is eight times greater than 512, so specifying a block size of 512 bytes gives the tool eight times finer granularity around damaged areas, which can recover data that a larger block would skip.

safecopy -r 5 -b 512 /dev/cdrom discimage.iso

Once a scan has completed, a brief report of the progress will be displayed. If all went well, this should be reflected in the number of bad blocks recovered, and the number considered unrecoverable. If blocks remain unrecoverable, reduce the block size further and scan again.

05

Using the recovered disc image

With the successful recovery of your CD/DVD data (based on the report), you’ll want to take a look at the disc on your computer to see if the contents have indeed been recovered from the damaged optical disc. To do this, you’ll need to mount the ISO archive as a virtual drive. Various solutions are available for this. If you’re using Safecopy in a Terminal window on the desktop, for instance, open the file manager and right-click the ISO file, then select Open With > Disk Image Mounter. Alternatively, mount it from the terminal, as shown in the sketch after these steps.

06

Extract or burn your recovered data

With the data available to browse in your file manager while the image file is mounted, you can drill down through the directories and find the data you’ve been trying to access. With this done, you should then write the image to a new optical disc, using your distro’s disc-burning software. Alternatively, copy the contents of the ISO file to a new directory on your hard disk drive, where the data can then be copied to another device, or burned to a larger optical disc.
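For the terminal-only route mentioned in Step 5, the recovered image can be attached as a loop device – a quick sketch, using /mnt/rescued purely as an example mount point:

sudo mkdir -p /mnt/rescued
sudo mount -o loop,ro image.iso /mnt/rescued
ls /mnt/rescued
sudo umount /mnt/rescued

Mounting read-only means there is no risk of altering the freshly rescued data while you copy what you need elsewhere.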


Is the file system supported? The file systems on your hard disk’s partitions have varying levels of support in the GParted tool. To see what is compatible, open View>File System Support.

Launch with elevated permissions GParted is launched with the sudo gparted command, or from your distro’s desktop menu system. If you have several drives or partitions, it will take a while to initialise.

Selecting a disk partition There’s quite a bit of disk or partition selection when you’re using GParted, and it’s important to make sure you’re selecting the right one. If possible, remove unnecessary devices!

Edit and resize partitions GParted Graphically manage your partitions and manipulate file systems Squeezing extra space from your hard disk drive can be tough, but if you’re in a recovery situation that involves a partition with errors or unreliable data, it might be practical to offer additional space to the partition. This might be tricky while the disk is running, however, or if there is another operating system installed. One solution in this scenario is to employ a partition-editing tool. GParted – Gnome Partition Editor – is a particularly popular option, and comes included with several Linux distros, including Ubuntu. The GParted utility can also be installed in the common way for your distro or the distros on the cover disc, as well as downloaded from gparted.sourceforge.net as an ISO for burning as a live CD. GParted offers a visual representation of your hard disk device and installed partitions – more useful than the command line! Once you’ve got this running, you’ll have a variety of options at your fingertips, from shrinking or expanding your partitions, creating new partitions, and rescuing data from lost partitions. There is also support for file systems such as ext2, ext3, ext4, linux-swap, NTFS and FAT16/ FAT32. To run GParted, your system will need to have at least 256MB of RAM installed.

01

Get started with GParted

To begin the process of editing a disk partition, launch GParted from the System Tools menu, by using Unity search if you’re running Ubuntu, or by opening the Terminal and entering:

sudo gparted

Upon launch, GParted scans your system for connected drives, and displays them graphically, one partition on each row. To confirm which of the partitions is the one you wish to edit, select it, then View>Device Information. Here you’ll find information about the file system, size of the partition, the device it is on, etc.

02

Expand a partition

To expand a partition, you’ll either need to shrink another partition first, or take advantage of unallocated space, a portion of the disk that is yet to be formatted. GParted will display unallocated space, so this will give you an idea of how much you can expand. To resize a partition, select the partition, then Partition>Resize/Move. In the resulting dialog box, enter a new size, or use the arrow buttons to adjust free space settings. Set a compatible partition alignment, then Resize/Move.
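If the partition you plan to resize holds an ext2/3/4 file system, it can be worth checking it from the terminal before touching it in GParted – resizing a damaged file system is asking for trouble. A brief sketch, with /dev/sdb2 standing in for whichever partition you identified earlier:

sudo umount /dev/sdb2
sudo e2fsck -f /dev/sdb2

The -f switch forces a full check even if the file system looks clean; once it completes without errors, the resize operation is far less likely to go wrong.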

03

Recover data with GParted

While dedicated tools usually give better results, having one utility that performs multiple tasks is always a good option. GParted has a companion data recovery tool, gpart, which when installed saves time running other tools. With your device selected, open Device>Attempt Data Rescue, and click OK in the subsequent dialog box. If the ‘No file systems found’ message is displayed, then use Photorec instead. Alternatively, if a file system is found, click View to mount, and use the file manager to copy the data to a safe location.

04

Other tasks with GParted

Beyond these two features, GParted offers far more support for your disk partitions, from creating a new partition table and editing the UUID (unwise when recovering a Windows disk – this can invalidate the licence) to formatting and even copying and pasting partitions. GParted even offers a facility for restoring the GRUB 2 bootloader and the GRUB legacy bootloader, should this be an issue. You don’t have to use GParted on a live CD either – it’s found in most Linux distros and can be installed from the default repos.




Reset root password Your Linux boot disc Do you really need a recovery disc after all? In this feature, we’ve taken a look at a whole host of Linux rescue tools that you’ll find on this issue's cover disc. They will prove incredibly useful for cloning dying disk drives, removing malware and rootkits, and finding lost data. If your problem is a simple forgotten root password, however, you’ll be able to uncover this without additional software or a boot disc (although the latter may prove useful with some distros, for repairing or reinstalling the bootloader, for example). When you reboot your Linux installation (and we’re looking at this from the viewpoint of an Ubuntu 16.04 user) to display the GRUB boot menu, you’ll be presented with a menu and various additional options that you usually ignore. However, with a bit of clever editing, you’ll find that options can be used to edit the boot menu and reset the root account password. A vast proportion of Linux recovery issues are down to a lost root password, so it makes sense that the mechanism would be provided with your OS to reset it. Naturally, this information could be misused, so follow these steps only on a PC you have been given authorisation to use, and use it responsibly!

01

Reboot your PC

To get started, restart your PC and tap SHIFT to display the GRUB 2 menu. Alternatively, if you’re using a live CD, wait for the GRUB menu to appear. At this stage, you’ll need to turn your attention away from the various options (such as the Memory test, which is otherwise very useful for establishing the integrity of your computer’s RAM modules) and towards the section at the bottom of the screen. Here, note the instruction to hit E to edit commands, and follow it to edit the GRUB menu.

02

Edit the GRUB menu

With the GRUB boot menu opened up in an Emacs-like text editor, use the keyboard arrows to scroll through the file, looking for the first line that begins with ‘linux’. It should read something like this:

linux /boot/vmlinuz-4.4.0-22-generic root=UUID=fd613f7b-c527-4b4a-af61-e337e1975df9 ro quiet splash $vt_handoff

Paying close attention to the second half of the line, remove the characters following (and including) ro quiet splash, and replace them with rw init=/bin/bash, so that the line reads:

linux /boot/vmlinuz-4.4.0-22-generic root=UUID=fd613f7b-c527-4b4a-af61-e337e1975df9 rw init=/bin/bash

When this is done, press CTRL+X to exit and reboot, or simply press F10.

03

Reset the root password

With the system rebooted, Linux will drop you into a command prompt, with the root partition mounted. To confirm whether this is the case or not (you’ll need it to proceed), enter:

mount | grep -w /

With this confirmed, it’s time to reset the root password. All you need to do is enter the passwd command and enter the new password, then confirm it. As a root password, this should be something that is both memorable and tough to crack!

04

Deal with problems

Resetting your root password isn't going to work if the partition wasn’t mounted, or the password you entered was not correctly confirmed. In this situation, you’ll receive an error message, like the one pictured. A typical error message is ‘Authentication token manipulation error’ and ‘password unchanged’, but this can be overcome by entering the new root password with more care. Meanwhile, if the root partition was mounted as read-only, enter:

mount -o remount,rw /

This should resolve the problem, enabling you to change the password.

05

Avoid kernel panic

One error message that you might encounter can seem worrying, but is in fact caused by an error editing the GRUB 2 menu. If you spot this message:

[ end Kernel panic - not syncing: Attempted to kill init! exit code=0x0007f00

…ignore the presence of the word ‘panic’ and instead restart the whole procedure, this time making sure you remove the quiet splash and associated text, and replace it with the command string described in Step 2. With no more error messages, you should be able to change the password.

06

Reboot, and carry on!

With a new password set, all that is left is to boot the system, resetting the status of the root partition so that it can no longer be manipulated (at least until the next time you forget the password).

exec /sbin/init

Note that the standard reboot command won't work while the computer is in this state. Look out for a stream of text as the system reboots – this is normal. Within a few moments, the computer should have booted back into the desktop environment, awaiting your next instruction.
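As a compact recap of the in-shell part of the process – everything here comes from the steps above, so nothing new is introduced – the session at the root prompt boils down to:

mount | grep -w /
mount -o remount,rw /
passwd
exec /sbin/init

The remount line is only required if the root file system turns out to be read-only; otherwise you can go straight to passwd and then hand control back to init.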


ENJOY MORE OF YOUR FAVOURITE LINUX MAGAZINE FOR LESS WHEN YOU SUBSCRIBE!

SUBSCRIBE* & SAVE UP TO 37%

Every issue packed with…
Programming guides from Linux experts
Breaking news from enterprise vendors
Creative Raspberry Pi projects and advice
Best Linux distros and hardware reviewed

Why you should subscribe...
Save up to 37% off the single issue price
Immediate delivery to your device
Never miss an issue
Available across a wide range of digital devices

Subscribe today and take advantage of this great offer!

Download to your device now

*US Subscribers save up to 40% off the single issue price.

See more at: www.greatdigitalmags.com


Tutorial

Tam Hanna

learned to appreciate the value of Bash’s file-finding capabilities during work on a long-since discontinued file-finding application for Palm OS. After all, coding a file finder requires you to find the SDK files…

Resources Bash gnu.org/software/bash

Bash masterclass

Bash masterclass How to find files with Bash scripts

Computers are very good at storing files. Using Bash scripts makes finding them again easier

Tutorial files available: filesilo.co.uk


No matter how well designed a file structure might be, as time goes by, documents will be lost to the sands of time. The Bash shell can be combined with a bunch of useful system commands to create nifty scripts dedicated to file system traversal. In addition to that, the ever-useful grep utility can be used to analyse the contents of files. When done right, a well-designed Bash script even opens the files of interest in an editor of choice, thereby freeing up precious time which can then be used on other tasks. Using shell commands with Bash requires us to warm up an old trick: command substitution enables you to run a command from inside a shell script while redirecting its output into a local variable. One simple example for this would be the printing of the local time, which can be accomplished like this:

#!/bin/bash
now=$(date)
echo "$now"

Be aware that the syntax shown here is in no way exclusive. Developers can also opt to use backticks, leading to a declaration that looks like this:

#!/bin/bash
now=`date`
echo "$now"

Using backticks is not without risks of its own: in addition to the valid character `, most keyboards also offer convenient access to the similar-looking characters ' and ´. Using one of these leads to a badly behaved shell script – a problem best mitigated via the $() syntax.

Above Bash scripts can hunt through the file system for you, saving time when you’re looking for archived files

Let’s get searching

Fulfilling users’ wishes tends to produce new ones: the file-sending utility created in the last instalment of this tutorial (Linux User & Developer 168) could benefit a lot from gaining some additional, um… contextual insight. Recreating a new folder for each set of transcribeables is annoying – a much more satisfactory version of the utility would process only those files that were created less than ten minutes before its invocation. This is best accomplished by providing the find command with a new parameter. Passing in -cmin makes the program look for files that have had their status modified recently: its parameter’s value determines how many minutes are allowed to pass before the file is rejected. Using the syntax shown in the code block allows a ten-minute delay before rejection:

#!/bin/bash
newfiles=$(find . -name "*.mp3" -cmin -10)
echo "$newfiles"

Inquiring minds might ask themselves about the role of the dot in the beginning of the find command. It indicates the path where find should start working – passing in the full stop starts the search process in the script’s current working directory.

Adjust the behaviour in more detail

After these steps, it is time to take a look at some more advanced problems. Let’s assume that the user also has a subfolder containing some additional recently modified MP3 files. Running our current version of the file sender would grasp both the contents of the folder and the files found in the subfolder – in the command line, results similar to the ones shown in the listing are to be expected.

~/Desktop$ runner.sh
./subfolder/20160711_042912.mp3
./subfolder/20160711_044527.mp3
./subfolder/20160711_041903.mp3




./20160711_042912.mp3
./20160711_044527.mp3
./20160711_041903.mp3

This slightly odd outcome is explained by a behaviour called recursion: if find stumbles upon a folder, it then spawns a child process to inspect that folder's contents. Should you ever feel like disabling this behaviour, feel free to resort to an invocation along the following lines:

find . -maxdepth 1 -name "*.mp3" -cmin -10

Passing in a maxdepth value of one might sound nonsensical at first glance: let us reassure you that this is not the case, but is rather due to a peculiarity of the command. Passing in zero makes the program focus only on the files passed in – if it is provided with a full stop, no file system access takes place whatsoever. Should the finder limit itself to a specific depth, use the following bit of code instead:

find . -maxdepth 2 -name "*.mp3" -cmin -10

Be aware that find can throw a lot of scary errors when it is invoked on the root path – this particular problem is caused by its recursive nature, which makes the program attempt to open each and every file in its way. If find does not run as root, it will inadvertently stumble across files which it cannot access, leading to the throwing of an error.

Enter the metafinder

So far, our search script was – by and large – limited to working with criteria passed in by its developer. Let’s now expand to a new dimension of problem: instead of working with a fixed time frame, the user is instead asked about how many files they dumped into the working area. The script then proceeds to adjust the criteria passed to find automatically until the correct number of files is returned – if incrementing the counter leads to a larger-than-expected increase in files, an error is thrown instead. Implementing this behaviour requires us to add a level of indirection to our script. The code to be run is now no longer static, but is instead assembled in a string variable at runtime:

#!/bin/bash
store="echo hello"
alpha=$($store)
echo $alpha

The next step involves increasing the negative value slowly but surely while counting the results. The time span must be increased as long as the number of files found is below par – if we get above par, the aforementioned error must be thrown instead:

#!/bin/bash
countval=$((1))
filesfound=$((0))
shouldbe=$((3))

while (( filesfound < shouldbe )); do
  IFS='
'
  countval=$((countval+1))
  newfiles=($(find . -name "*.mp3" -cmin -$countval))
  echo ${#newfiles[*]}
  filesfound=${#newfiles[*]}
done

if [ $filesfound -eq $shouldbe ]; then
  echo "Can process";
fi
if [ $filesfound -gt $shouldbe ]; then
  echo "Invalid state";
fi

Executing strings is frowned upon in the Bash community: the process shown here is a more elegant way to assemble a command dynamically. Splitting up the resulting array is done via some trickery here: first of all, the shell status variable IFS is set to a return character. In production code, this change should be undone when the script has run – for our example, closing and reopening the terminal window does the same trick. The next bit of magic involves the brackets around the invocation, which transforms a string into an array by breaking it up at the delimiters found in IFS. Bash treats arrays as first-class variables. Getting their length is accomplished via the line ${#newfiles[*]} – keep it in mind, as it might see quite a bit of use in your own scripts. The last interesting detail is the somewhat cumbersome implementation of the error-handling condition. It is caused by the lack of a goto command – POSIX-compatible shells do not implement it for one reason or another. Finally, a small problem remains in the current implementation: if the script cannot fulfil the conditions set up, it will keep running endlessly.


This problem can be solved by modifying the terminator condition of the loop to look like this, with the rest of the loop body left unchanged:

#!/bin/bash
countval=$((1))
filesfound=$((0))
shouldbe=$((3))

while (( filesfound < shouldbe && countval < 1000 )); do
  IFS='
'

Alternatively, you could also address the situation by using a fixed for loop – TLDP would recommend a syntax similar to this one:

#!/bin/bash
for i in `seq 1 10`; do
  echo $i
done

Beware the sands of time

At this point, a potential problem of time-based processes rears its ugly head. Let’s assume that our script is built up like the following pseudo code snippet:

var a=fetch()
count(a)
//bang
var a=fetch()
process(a)

If the minute timer increments at the position marked with the //bang comment, the second invocation of fetch() might return different values than the ones used for the computation of the count. Developers of time-based systems must always keep in mind that time runs on mercilessly – one way to work around the problem would be the reuse of the values of a between count() and process().
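A minimal Bash sketch of that workaround – fetch the file list once, then feed the same snapshot to both the counting and the processing stage (the processing step here is just an echo placeholder) – might look like this:

#!/bin/bash
IFS='
'
# Fetch exactly once
snapshot=($(find . -name "*.mp3" -cmin -10))

# Count and process the very same snapshot, so a ticking minute
# boundary cannot change the data between the two stages
echo "Found ${#snapshot[*]} files"
for f in "${snapshot[@]}"; do
  echo "Processing $f"
done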

Go even further

find comes with numerous additional commands that can be used to adjust its acceptance parameters. Listing all of them here would exhaust the space needed for other interesting things, and is furthermore counterproductive. Most, if not all, Unix-like console commands come with an accompanying help file known as a man page. It can be accessed by entering the man command. Navigating along the content of the man page is best accomplished via the PgUp and PgDown keys of your workstation; the open man page can be exited by pressing Q. Please keep in mind that GNU-based versions of Linux come with an extended version of the find command: some of its parameters are not supported on non-GNU Unix operating systems such as Solaris.
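One practical addition worth knowing while you experiment with the parameters described in the man page: the permission errors mentioned earlier, which appear when find is run across the whole file system as a normal user, can simply be discarded by redirecting the error stream. A quick sketch:

find / -name "*.mp3" -cmin -10 2>/dev/null

The matches still arrive on standard output as usual; only the ‘Permission denied’ noise is sent to /dev/null.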

Harness the power of the GREP

So far, our scripts have been limited to the outer parts of the file system: as of this writing, we had no way to look into the contents of the actual files handled by the programs. Professional Unix-heads use a command called grep. It takes the file name of a textual file along with a so-called regular expression, and proceeds to finding matching content. grep’s text-based approach makes it ideally suited to handling source code and plain-text files: applying regular expressions to bitmaps, MP3 files or other compressed or complex file formats (DOC, ODT) is not particularly effective:

~/$ grep file advancedsearcher.sh
filesfound=$((0))
while (( filesfound < shouldbe && countval < 1000 )); do
newfiles=($(find . -name "*.mp3" -cmin -$countval))
echo ${#newfiles[*]}
filesfound=${#newfiles[*]}
if [ $filesfound -eq $shouldbe ]; then
if [ $filesfound -gt $shouldbe ]; then
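grep also combines naturally with the find invocations from earlier in this tutorial: find selects the files, grep inspects their contents. A small sketch that hunts for the filesfound variable in every shell script of the current directory, printing file names and line numbers:

find . -maxdepth 1 -name "*.sh" -exec grep -n "filesfound" {} +

The -exec … {} + form hands the whole batch of matching files to a single grep process, which is considerably faster than spawning one grep per file.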

Cut and paste

The final act in this tutorial is the introduction of two small but helpful utilities that allow you to cut out numerically described parts from files. Both head and tail take the same syntax – using the -n parameter makes them return a specified number of lines from the beginning or the end of the file:

tamhan@tamhan-thinkpad:~/$ head -n 2 advancedsearcher.sh
#!/bin/bash
tamhan@tamhan-thinkpad:~/$ tail -n 2 advancedsearcher.sh
echo "Invalid state";
fi
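The two tools can also be chained to lift a slice out of the middle of a file – handy when a grep hit tells you the line number you care about. A small sketch that prints lines 16 to 20 of the script:

head -n 20 advancedsearcher.sh | tail -n 5

head passes on the first 20 lines, and tail then keeps only the last five of those, which works out to lines 16–20.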

Conclusion

Having worked through these examples, your command of Bash and accompanying commands now enables you to create both simple and complex scripts. At this point, many a task will pose the question of whether to automate or handle manually. Even though a lot of texts have been written about this topic, one old rule holds true: most jobs that need doing once tend to need doing again in the future. As the development of Bash scripts is an acquired habit, investing a bit of time in one uneconomic job is likely to offer returns on future scripting tasks. Our trip through the world of Bash scripts does not end here. Over the following issues we will introduce you to the magical worlds of user input collection and diagramming, thereby breaking the border between textual and graphical user interface. Until that time, keep coding and code well!



Tutorial

.NET Core

.NET reboots on Linux with .NET Core

Grab your favorite Linux distro and jump to the shell prompt to explore the new open-source .NET runtime and SDK

Kevin Wittmer

Kevin Wittmer is a software technologist and IT manager at the BOSCH group. He deeply enjoys Linux and has fond memories of hacking Minix back in the early Nineties. He also enjoys crafting software, particularly with C#.

Resources GitHub CoreCLR Repository https://github.com/dotnet/coreclr

GitHub CoreFx Repository

https://github.com/dotnet/corefx

GitHub .NET CLI Tools Repository

https://github.com/dotnet/cli

Debian installation instructions

https://www.microsoft.com/net/core#debian

CentOS installation instructions

https://www.microsoft.com/net/core#centos

Right The simplified .NET Core technology stack in a Linux context


Microsoft is delivering on its new love for Linux. A new modularized form of the .NET technology base called .NET Core is now available across several Linux distros. Microsoft has partnered with Canonical and is readying support for Bash on Ubuntu on Windows that includes integrated Linux-based file system support. A new version of SQL Server that runs on Linux is on the horizon. Red Hat has joined the Technical Steering Group of the .NET Foundation, an organisation devoted to improving open-source software development and collaboration in the .NET ecosystem. Xamarin, which was acquired by Microsoft, has open-sourced its SDK software tools which leverages Mono to support iOS and Android app cross-platform development in C#. The Mono project, now transitioned to the MIT open-source license, has also been brought under the umbrella of the .NET foundation. These are just some of the highlights as the pace of advancement in the .NET space has accelerated with significant contributions from the open-source community together with Microsoft. This journey that Microsoft has embarked on into the Linux space is unprecedented. Fundamentally, it is part of a larger movement toward providing .NET developers a new level of choice, flexibility, and openness in helping to shape the future direction of .NET. In this first tutorial installment, we will focus on installing and firing up the .NET Core SDK from the shell prompt. Without further ado, let’s get started.

Introducing .NET Core and .NET Core SDK

At a high-level, the .NET Core technology stack is made up of the following:

• A modular CoreCLR runtime with features that can actually be cherry picked
• A strong subset of .NET framework libraries rolled into CoreFx
• Framework modularisation achieved by NuGet package organisation/delivery
• Shell-based, cross-platform tooling available with dotnet CLI tooling
• Cross-platform compiler support available from Roslyn and LLILC

.NET Core is truly open-source. Repositories for core components such as CoreCLR, CoreFx, and dotnet CLI are maintained at GitHub (see Resources for further details). These public repositories are collectively supported by Microsoft, partners, and the open-source community under the stewardship of the .NET Foundation. This gives developers outside of Microsoft access to daily builds of core components such as CoreCLR and CoreFx.

01

Install .NET Core SDK

Installation of .NET Core SDK can vary by Linux distro and your interest in installing developer or preview releases. To help illustrate, we will install the .NET Core SDK into CentOS. The initial step involves updating the local package repository using the yum package manager.

su -c 'yum update'

Depending on the distro, you may need to satisfy dependencies such as gettext (which offers utilities for internationalisation), stack call chain management, and support of International Components for Unicode (ICU), among others. In the case of CentOS, packages libunwind and libicu are required. The libunwind library is a portable C-based API to determine the call chain of a program at runtime. The libicu library provides support for Unicode and localisation.

sudo yum -y install curl libunwind libicu

The next step is to download the compressed tar archive file from the reference link that Microsoft has already provided for you (note this is another step that varies, as other distros such as Red Hat and Ubuntu also have .NET Core packages that can be installed via the respective package management systems).

curl -sSL -o dotnet.tar.gz https://go.microsoft.com/fwlink/?LinkID=809131


Now identify the target installation location and untar. Directory /opt is suggested as a default but a different directory location based on system admin preference or your organisation's standards can be specified.

sudo mkdir -p /opt/dotnet && sudo tar zxf dotnet.tar.gz -C /opt/dotnet

After installation of the bits is complete, create a soft link as shown. (Another option would be to add the dotnet command tooling to the environment PATH statement.)

sudo ln -s /opt/dotnet/dotnet /usr/local/bin

If all went well, issuing the command dotnet -h from the shell prompt will print the tool's usage and help text.

You can also inspect the hidden directory .dotnet in your home directory using ls or tree to explore and verify installation.

tree .dotnet

It's important to note that standalone (or embedded) application-specific installation configuration scenarios are possible with the .NET Core runtime stack.
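A quick way to confirm that the soft link works and that the expected SDK build is on the path – purely a sanity check rather than part of the official install steps – is to ask the tool for its version string:

dotnet --version

If the command prints a preview2-era version number rather than an error, the SDK is installed and reachable from your shell.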

02

Create a new C# project

Once the .NET Core SDK is installed, you can quickly begin to develop in C#. The SDK comes with dotnet, a command line tool that is focused on providing a simple, developer-friendly experience. It supports some basic arguments including new, which will generate the “Hello, World” scaffold in C#. To create the C# project, first create a subdirectory with mkdir and then simply type:

dotnet new


Have I seen this before? In one sense, the ability to develop C# code and run .NET binaries in the Linux environment is not new. Mono has been available in the repositories of most Linux distros for years now. So how does .NET Core compare to Mono? Like Mono, the .NET Core technology base is open-source. Microsoft has placed components of .NET Core under the MIT license, which is a highly permissive Open-Source Software (OSS) license type. And similar to Mono, .NET Core also offers cross platform support and can be run on different types of Linux distros such as Debian, CentOS, OpenSUSE, RedHat and Ubuntu. There are differences however, with the most significant being that .NET Core will continue to be led and officially supported by Microsoft with new feature development and regular releases coordinated and fully synchronised across Windows, Linux and Mac OS platforms. Further, you do not have the option of paid technical support from Microsoft for Mono technology in Linux. Nevertheless, Microsoft will continue to collaborate with the open-source community to advance the Mono stack, both in the Xamarin space as well as future .NET Platform Standard efforts involving Mono. Thus, you can look forward to leveraging Mono to run .NET workloads including ASP.NET Core on devices and platforms that Microsoft's Core CLR will likely never reach (such as PowerPC).

The result of dotnet new is two generated files: a C# source file Program.cs and the default project.json file. The C# source file contains a Program class with a static Main method and simple "Hello world!" console statement. The project.json file represents a break from traditional XML-based .NET project files. This JSON-based file specifies top-level project attributes, build options, framework and dependency specifications along with similar types of metadata. It is used during project builds and also at runtime. Looking ahead, anticipate that dotnet new will quickly mature and take on more advanced scaffold generation options, in particular, with increased scaffolding over ASP.NET Core. This would provide an out-of-the-box alternative to yeoman in supporting basic cases of scaffolding for ASP.NET Core and similar .NET project types that carry a high degree of boiler plate code. Ideally the open source community will take a strong hand in helping to realise more advanced scaffolding options and then enjoy eating the dog food.

03

Code logic & unit tests in C#

To move past the "Hello, World" scaffold code generated, let’s code a simple class and unit test example that calculates the surface area of a sphere. You can use the vim editor to develop the source, as syntax highlighting of C# code is supported by the editor's default configuration. The file SurfaceAreaCalculator.cs contains this source:

using System;

namespace MyMathLibrary.SurfaceArea
{
    public class SurfaceAreaCalculator
    {
        public static double CalculateAreaOfSphere(int radius)
        {
            return 4 * Math.PI * Math.Pow(radius, 2);
        }




    }
}

The Xunit test for the above class is short and relatively simple, and is captured in SurfaceAreaCalculatorTest.cs:

using Xunit;

namespace MyMathLibrary.SurfaceArea.Tests
{
    public class SurfaceAreaCalculatorTest
    {
        [Fact]
        public void CalculateSphereAreaTest()
        {
            Assert.Equal(314.16, SurfaceAreaCalculator.CalculateAreaOfSphere(5), 2);
        }
    }
}

Since we are using the Xunit testing framework, we must modify the project.json file to specify Xunit attributes and dependencies. The project.json file must be updated at the top-level section to specify the test runner.

"testRunner": "xunit",

Next, add the Xunit-related dependencies specifying the xunit and dotnet-test-xunit library dependencies.

"dependencies": {
  "xunit": "2.2.0-beta2-build3300",
  "dotnet-test-xunit": "2.2.0-preview2-build1029"
},

04

Download NuGet project dependencies

The dotnet restore command will restore project dependencies based on what is specified in the project.json file. It accomplishes this by downloading NuGet packages from specific package sources, which by default includes nuget.org, into a user-centric, centralised location.

dotnet restore

From your shell prompt residing at the home directory, you can list the contents of the hidden directory .nuget/packages to inspect the NuGet packages downloaded.

ls .nuget

05

Run the unit tests

After completion of the restore step we are ready to execute the Xunit-based unit tests. Execute the following to trigger the Xunit test runner and execute the unit tests:

dotnet test

The test runner's progress and results are dumped to the console.

06

Run the .NET program

With the unit tests passing successfully, you are ready to run the program. The command dotnet run will start execution at the static Main method entry point. The calculated surface area value is written to console based on the radius argument passed in as a command line argument.

dotnet run
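As a minimal sketch of a Main method that reads the radius from the command line for Step 6 – the namespace, the default radius of 5 and the dotnet run -- 5 invocation below are illustrative assumptions rather than generated code – the entry point could look like this:

using System;
using MyMathLibrary.SurfaceArea;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // Take the radius from the first command-line argument, falling back to 5
            int radius = args.Length > 0 ? int.Parse(args[0]) : 5;

            double area = SurfaceAreaCalculator.CalculateAreaOfSphere(radius);
            Console.WriteLine($"Surface area of a sphere with radius {radius}: {area:F2}");
        }
    }
}

Arguments for the program are typically passed after a double dash so that the CLI does not try to interpret them itself, for example: dotnet run -- 5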

07

Edit and rerun

The diagram here illustrates the command cycle involved in an edit and rerun context. As long as you don't introduce new NuGet-related dependencies, then you have the option to skip the dotnet restore command step and simply issue dotnet run after successive code editing sessions (of course, you would also be skipping over execution of unit tests which is considered poor form).


Running automated jobs The table below provides a brief overview of where to locate the compressed tar or package of the .NET Core SDK for your favourite Linux distro. If you don't find your distro listed, then some assembly might be required. In such cases, you can seek out activity among the open source community for

that distro, join the effort to realize .NET Core SDK support, and contribute back to the GitHub repositories. Exactly this type of effort was taken up for FreeBSD support and the contributions have been added back to the source base of .NET Core.

DISTRO

WHERE TO FIND?

DEPENDENCIES

HOW TO INSTALL?

UBUNTU, MINT

Source "https://apt-mo. trafficmanager.net/repos/ dotnet/ trusty main" in the APT sources.list.d package configuration.

gettext, libunwind

Install package dotnet-dev-1.0.0-preview2-003121

DEBIAN

Curl dotnet.tar.gz from https://go.microsoft.com/fwlink/?LinkID=809130

libunwind8, gettext

Extract dotnet.tar to /opt or another preferred location. Create soft link in /usr/ local/bin.

CentOS, ORACLE LINUX

Curl dotnet.tar.gz from https://go.microsoft.com/fwlink/?LinkID=809131

libunwind libicu

Extract dotnet.tar to /opt or another preferred location. Create soft link in /usr/ local/bin.

REDHAT

Enable repository via Red Hat subscription: rhel-7-server-dotnet-rpms

scl-utils

Use yum to install rh-dotnetcore10. Follow this by the command string scl enable rh-dotnetcore10 bash to complete config.

OpenSUSE

Curl dotnet.tar.gz from https://go.microsoft.com/fwlink/?LinkID=816867

libunwind libicu

Extract dotnet.tar to /opt or another preferred location. Create soft link in /usr/ local/bin.

FEDORA

Curl dotnet.tar.gz from https://go.microsoft.com/fwlink/?LinkID=816869

libunwind libicu

Extract dotnet.tar to /opt or another preferred location. Create soft link in /usr/ local/bin.

FreeBSD

Build binaries of CoreCLR, CoreFx and dotnet CLI from GitHub repositories. Look to community for latest updates and/or resolved issues.

cmake, llvm37 (includes LLVM 3.7, Clang 3.7 and LLDB 3.7), libunwind, gettext, icu & bash

Use build scripts to build .NET Core components. (See https://github.com/dotnet/coreclr/blob/master/Documentation/building/freebsd-instructions.md for details.)

NetBSD

Build binaries of CoreCLR, CoreFx and dotnet CLI from GitHub repositories. Look to community for latest updates and/or resolved issues.

See FreeBSD above, pkgsrc

Similar to FreeBSD but also involves pkgsrc setup. (See https://github.com/dotnet/coreclr/blob/master/Documentation/building/netbsd-instructions.md for details.)



Tutorial

Revitalise an old laptop

Revitalise an old laptop with lightweight Linux You can give that old laptop sitting in the cupboard a new lease of life!

Paul O’Brien

is a professional crossplatform software developer, with extensive experience of deploying and maintaining Linux systems. Android, built on top of Linux, is also one of Paul's specialist topics.

Left Elementary OS is a visually stunning distribution for low-end machines, providing much of the polish of OSX

Resources Elementary OS

https://elementary.io/

unetbootin

https://unetbootin.github.io/

EaseUS Todo Backup

http://www.todo-backup.com/

Ubuntu

http://www.ubuntu.com/

Lubuntu

http://lubuntu.net/

Ubuntu Mate

https://ubuntu-mate.org/

Puppy Linux

http://puppylinux.org/

Debian

https://www.debian.org/

VirtualBox

https://www.virtualbox.org/

Midori

http://midori-browser.org/

Vivaldi

https://vivaldi.com/

QupZilla

http://www.qupzilla.com/

AdBlock Plus

https://adblockplus.org/

LibreOffice

https://www.libreoffice.org/

OooLight

http://bit.ly/2auhV0p

Android-X86

http://bit.ly/1AQYvqj

CloudReady

http://www.neverware.com/


If you’re a keen observer of PC hardware, you may have noticed that in recent times, the increase in power of desktop and laptop machines has slowed somewhat. While companies such as Intel have continued relentlessly releasing new chips, advances have been more in the area of efficiency (and therefore battery life) than they have been raw performance. What this means is that PCs that would traditionally be nearing end of life, perhaps three to four years old, are now more viable for continued use than ever before. Of course, operating systems have inevitably got heavier during this time (especially Windows), but the same can’t be said of Linux – it’s ideally suited to running on both new and old hardware, particularly with the wide variety of distributions and desktop environments available to suit every need. In this article we’ll look at some of the different options available, the best ways to test and how to get a perfectly usable everyday system, even on the oldest of laptops.

01

Prepare your hardware for install

If you’re installing on an old machine that’s been sitting around for a while, there are a few things to do to make this process more pleasant. Aside from giving it a good clean (an air duster is a fantastic tool), it’s worth

resetting the BIOS and perhaps scanning the hard disk for errors before you start (using a Live CD or similar), to avoid issues later on. This is probably a good time to ensure you’ve backed up any valuable data too!

02

unetbootin and USB installation

Lots of older laptops have CD drives, which is a convenient way to install a new OS using a Live CD/Install CD burned from a downloaded ISO. If you’re going to be trying out a few different options however, this could soon become both time-consuming and expensive as you work through a pile of CDs. Most distributions now include instructions on


how to install to a USB stick, often using the incredibly useful unetbootin tool to create a bootable USB from an ISO.
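If you would rather skip extra tooling altogether, most hybrid ISOs can also be written straight to a stick with dd – a quick sketch, where distro.iso is your downloaded image and /dev/sdX is a placeholder you must replace with the USB device (double-check it with lsblk first, as dd will silently overwrite whatever you point it at):

sudo dd if=distro.iso of=/dev/sdX bs=4M status=progress
sync

Once the command and the final sync have finished, the stick can be booted in the same way as an unetbootin-prepared drive, provided the ISO in question is a hybrid image.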

03

Dual-boot or wipe?

Should you dual-boot your new OS or wipe the hard disk completely? Unless you have a particular abundance of disk space or a likelihood you’ll want to revert later, we’re fans of the latter. With that said, if your machine has an active Windows licence, you might want to back up the existing OS for a potential restore in the future. Check out EaseUS Todo Backup (free), which will allow you to make an image over USB, leaving you free to completely repartition during your install.

04

Full Ubuntu install

By far the most popular Linux distribution is, of course, Ubuntu. If you’re not quite sure how well your hardware config will cope with the full Linux experience, giving it a go as a starting point is probably a good idea – it is certainly more tolerant of low-spec systems than you might expect. You can boot from the Live CD without installing, but bear in mind that this isn’t always representative of the experience you will enjoy from a full install.

05

The lighter option – Lubuntu

If the full fat Ubuntu isn’t really cutting it on your system, there is a lighter option that might be just right. Lubuntu is based on the same core as its bigger brother, but it uses the minimal LXDE desktop and comes with a selection of light applications including Pcmanfm for file management, the Openbox window manager, Lightdm and Firefox. Lubuntu should run on systems with as little as 512MB RAM and any processor newer than a Pentium 4 or Pentium M.

06

Ubuntu MATE

Rounding out our Ubuntu options is Ubuntu MATE. Designed for computers that aren’t powerful enough to run a composited desktop, the OS still uses a traditional desktop paradigm to remain easy to use. The look and feel is broadly similar to the main Ubuntu distribution too, such that it should feel instantly familiar. The Ubuntu MATE team even provides ready to run images for the Raspberry Pi 2 and 3, as well as base images for users who want to run the OS on ARM systems.

07

Puppy Linux

Puppy Linux is based on Slackware and has been around for more than ten years now, bringing an impressively complete desktop experience to the most basic of hardware. What’s special about Puppy Linux? It’s small – less than 100MB in size. It runs from RAM – which as well as helping performance, means it can even be used in machines without a working hard disk. It still includes a range of useful applications including word processors, browsers, image editors and more. An Ubuntu compatible version of Puppy is also offered.

Another option: Android X86

There are some machines that are just too old to run modern distributions well, or the distros that are available for them don’t provide enough functionality. How about running Android? The Android-X86 project strives to port Google’s OS to PC hardware, providing a lightweight platform with great app support. The Remix OS effort provides a more desktop-like UI paradigm.

Another option: CloudReady

The real champion of low power, high productivity computing is probably Chrome OS. Chromebooks, based on Linux of course, have taken the world by storm by virtue of their small price tag but great experience. Courtesy of CloudReady, you can get a Chromebook experience on your own hardware. It actually works surprisingly well, subject to the usual Chrome OS ‘always on network’ limitations.

08

Test distributions with VirtualBox

With such a large number of distributions available, it’s tricky to know which one to choose, and time-consuming to try them all out to see which one you like. An alternative approach is to use the VirtualBox virtualisation software on another machine to boot up the images, allowing you to have a little bit of a play around and see if you then want to try the OS on your real hardware. This avoids the need to burn every single image to either a CD or USB stick. Vmware workstation is another suitable virtualisation tool.

09

The pure Debian option

Often overlooked for light desktops is Debian itself. Installing a minimal version of the OS gives the option to then install any of a range of desktop environments to suit the host system, such as XFCE, LXDE or MATE. It’s definitely worth noting that we’ve seen systems happily running Debian where Ubuntu failed to work well (or indeed at all), so don’t discount Debian if you’ve had no luck with Canonical’s offering. Of course, with Debian you get vast community support, so it’s a solid option.

10

Elementary OS

One downside of choosing a lightweight Linux distribution is that often the first things to get cut are attractive visuals. Elementary OS takes a different approach, ensuring that a class-leading (for Linux as a whole) graphical interface is key to the distribution, while still maintaining impressive performance on low-end systems. This extends beyond the OS itself to the core app set too, with photo, music, video and the Midori web browser included. There is a real air of OS X-level polish about the product.

11

Remove unnecessary packages

If you’re short of disk space, after installing your distribution it’s a good idea to then look at which packages have been installed and remove anything that’s surplus to requirements. If you have a graphical package manager installed then that’s a good place to start or, if using Debian / Ubuntu, you can simply use the dpkg -l command in the terminal to show installed packages, which you can then remove as required. You can also achieve the same with apt, using apt list --installed.
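A short sketch of the removal side on a Debian or Ubuntu base – firefox-esr is only an example package name, so substitute whatever you have decided you can live without:

dpkg -l | less
sudo apt-get purge firefox-esr
sudo apt-get autoremove

purge removes the package together with its system-wide configuration files, and autoremove then sweeps up any dependencies that nothing else still needs.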

12

Check memory usage

After you’ve sorted out your disk space, it’s worth seeing if you can do the same with RAM (this might also give you hints as to packages that you can remove and see a performance benefit). The top command lists the processes that are currently using the most CPU and RAM on your machine. After typing top in your terminal, press Shift + F, select MEM then press s to sort by that field. You can also use top with sudo to see root processes.
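For a quicker, non-interactive look at the same information, two standard commands are enough – a small sketch:

free -h
ps aux --sort=-%mem | head -n 10

free -h summarises total, used and available memory in human-readable units, while the ps line lists the ten processes consuming the most RAM, which is often all you need to decide what to remove or disable.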

When Linux boots, just like other OSs, lots of processes may be started and then run in the background. If you have a reasonable amount of Linux knowledge, it’s worth going through these to see which ones you can disable. Out of the box, there are often lots of services running that are only really useful on a server, like sendmail, httpd etc. If you’re on an Ubuntu distro, you can use ‘boot-up manager’, better known as bum (yes really!). Run sudo apt-get install bum then gksudo bum and uncheck as required, then hit Apply.



14 Battery optimisation

An old laptop means an old battery. Typically the battery is the first thing to degrade or fail on a laptop, which, of course, severely compromises its usefulness as a mobile workstation. A great way to ensure Linux is as power-efficient as possible is using the TLP tool, which is available for most distributions. TLP itself is highly customisable, but it includes sensible defaults that mean you can just install and forget. Simply run sudo apt-get install tlp tlp-rdw from the terminal, followed by sudo tlp start, and you’re set. Easy!

15 Lightweight browsing

If you are looking to minimise CPU, RAM and battery usage on a laptop, try to avoid Chrome. It might be the best web browser for many things, but efficient it is not. A lot of lightweight distributions ship with Firefox by default, but if you need something still lighter you should try Midori or, if you want to go super-slim, QupZilla. Also worth a look is Vivaldi, developed by former Opera engineers and based on the Chromium engine.

16 Using Adblock

While certainly controversial and by no means compatible with everyone ethically, there is no denying that using Adblock in a compatible browser makes the internet perform better, something that is likely to be a consideration on an older laptop. Adblock Plus, the best-known solution, is available on a range of browsers including Chrome, Firefox and Vivaldi. If you are using Midori, an advertisement blocker is preinstalled, which uses the same lists as Adblock Plus.

17 Low-spec Office

The go-to Microsoft Office-compatible suite for Linux today is LibreOffice. Usefully, LibreOffice is also fairly light on resources and runs well on a range of older hardware. WPS Office is also growing in popularity. If you need something even lighter, however, it’s worth checking out OOoLight, a cut-down version of OpenOffice. Finally, don’t discount web-based offerings such as Google Docs or Office Online – if your system is limited to running a web browser well and you have a stable internet connection, these might fit the bill.

18 Cheap and easy hardware boosts

While this article is about revitalising your old laptop hardware, there are certain hardware upgrades that might be worth considering just to give you that extra little kick of performance. The two main upgrades that will provide the most noticeable boost are more memory and swapping out a hard disk for an SSD. Both have fallen in price hugely in recent months (SSDs in particular). The latter is likely a more sensible option given that it can be easily transplanted to another machine later if required, something that is a less viable option with RAM.



Tutorial

Ansible automation

Automate your project tasks with Ansible

Learn how to install Ansible and Ansible Tower to automate the various tasks in your projects

Nitish Tiwari

is a software developer by profession, with a huge interest in free and open source software. As well as serving as a community moderator and author for leading FOSS publications, he helps organisations adopt open source software for their business needs.

Resources
Ansible: ansible.com

Right The Ansible Tower homepage showing the main dashboard screen summary. This dashboard is the central interface to Tower


Software systems these days are significantly large, complex and critical. With the cost of development going up, automation of activities like testing, monitoring and building software offers a great deal of cost and simplification benefits. Automation can also help speed up the development of software systems. The goal is to partially or fully automate activities, thereby significantly increasing both quality and productivity.

In line with these developments, several tools have been launched to help you automate your software processes. One of the best known of them is Ansible, by Red Hat. Ansible is an open source IT configuration management, deployment and orchestration tool. Designed to be minimal, secure and highly reliable, Ansible has an extremely low learning curve for software development stakeholders.

Ansible Tower is an add-on for Ansible. Tower builds on top of Ansible to bring extra features like secure credential storage, delegation, job customisation, role-based access control and centralised logging for all your automation tasks. Tower includes both a web-based console for end-user use and a fully featured REST API for embedding Ansible Tower into your existing tools and workflows.

In this guide, we’ll take a look at how to install and use Ansible and Ansible Tower to automate project tasks. Let’s start with the installation process.

01 Installation preconditions

Ansible can be installed on all Linux distros – Red Hat, Debian, CentOS, any of the BSDs and even OS X. Windows is not supported, though. The current version of Ansible requires Python 2.6 or 2.7. Python 3.0 is slightly different from its other versions and Ansible has not yet switched to it. Some Linux distributions like Gentoo and Arch may not have a Python 2.x interpreter installed by default. So, if you are using these distributions, you’ll need to install Python 2.x and set the ‘ansible_python_interpreter’ variable in inventory to point at your Python 2.x. Nearly all Linux distributions have a 2.x interpreter installed by default, however, and this does not apply to those distributions. If you are running less than Python 2.5 on remote machines, you will also need python-simplejson. In this tutorial, we will use Ubuntu 12.04 to install and demonstrate Ansible. Also, we will use Ansible’s default database, PostgreSQL. If you want to use any other database, install it first and keep the connection details with you.

02 Ansible installation

Ansible can be installed from its source code, but if you are installing its latest version on any Linux machine, then using the OS package manager is recommended. We are using the package manager to install it for this tutorial. Ubuntu builds are available from the Launchpad site: bit.ly/2bfXZek. To configure the PPA on your machine and install Ansible, run the following commands:

$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible

As mentioned earlier, we have used Ansible Tower as the web GUI for Ansible in this tutorial. To install Ansible Tower, download it from the Ansible website: ansible.com/tower-trial. Once you obtain the tar file, execute these commands:

$ tar xvzf ansible-tower-setup-latest.tar.gz
$ cd ansible-tower-setup-2.4.5/
$ ./configure

When you execute configure, it prompts for host name, database details and admin password. Specify the host as localhost. Now, since we are going to use an internal database on the same machine as Tower, enter i when prompted for the database. If you have a PostgreSQL database installed on some other machine, enter e. Enter the password for admin and retype it for confirmation. Now, once the configurations are created, execute:

$ ./setup.sh

This installs Ansible Tower on the localhost. You can then access Tower from localhost via a browser and log in with the credentials provided.

Ansible playbooks
Ansible playbooks are a way to manage configurations. Playbooks are expressed in YAML and have a minimum of syntax. Each playbook is composed of one or more ‘plays’ in a list. The goal of a play is to map a group of hosts to some well-defined roles, represented by Ansible tasks.

03 How to get started

After you have logged in to Tower, the next step is to obtain the licence, which is required for Tower to run. Go to ansible.com/license for free or paid licence options. Paste in the licence you receive from Ansible, agree to the End User License Agreement, then click Submit. To view this licence later, click the setup menu’s ‘View License’ link. The homepage has a main dashboard screen summary which lists current hosts, inventories and projects. Charts and graphs for job status and host status can be viewed by clicking on their tabs. Summaries of recently used job templates and recently run jobs are also available for review in this section. You can start, stop or restart Tower with an admin script that’s in the /usr/bin/ansible-tower-service folder and can be executed as:

$ ansible-tower-service stop
$ ansible-tower-service start
$ ansible-tower-service restart

04 Tower configuration

The setup menu in the top-right corner of the Tower dashboard provides configuration of organisations, users, groups and permissions. To add or edit users, click on the Users link. You will see the admin user already present. To create a new user, click the plus symbol on the right-hand side. Enter user details as requested on the page and save. If you want to give the newly created user admin privileges, check the checkbox below for creating the user as a superuser. After saving the user, you will be given the option to add credentials and permissions to them. Select or create permissions in the respective sections. Add the user to an Organization and a Team from their respective tabs. To create a team, follow the same steps in the Team link of the setup menu. To create or edit an organisation, navigate to the Organization link. Add users or admin users to an organisation from here, too. Click the organisation you want to edit. In the Users section, search for users and add them.

05 Project creation

To create a new project, navigate to Projects from the menu and click the plus symbol. Enter a name and description for the project. Search and add an organisation to the project. Select the appropriate SCM type – here we will be using Git. After selecting the SCM type, specify the SCM URL, SCM branch and SCM credentials. Select the appropriate SCM update options. If you want to have a manual SCM type, then you will need to create an Ansible playbook.




Create a subdirectory with its name as the Project Name on the Tower server file system using the command line. Ansible playbooks for the project will be stored here. The directory’s location will be under the default project directory under Tower, which is by default /var/lib/awx/projects/.

$ mkdir /var/lib/awx/projects/helloworld
$ cd /var/lib/awx/projects/helloworld

Create a YML file inside this folder, helloworld.yml:

$ sudo vi /var/lib/awx/projects/helloworld/helloworld.yml

Code the functionality you want this playbook to execute. For example, write this code to test Tower running a playbook against the host in the inventory:

---
- name: Hello World!
  hosts: all
  tasks:
    - name: Hello World!
      shell: echo "Tower working fine"

The indentation above should be maintained, as it plays an important role. Now, while specifying the SCM type, specify Manual and enter the playbook directory name.

06 Inventories

An inventory in Ansible Tower is the same as an Ansible inventory file. It is a collection of hosts against which jobs may be launched. Navigate to Inventories to create or view inventories. To create one, click the plus symbol and specify the name and description. Select an organisation from the available ones. In the Variables section, specify variable definitions and values to be applied to all hosts in this inventory using JSON or YAML syntax. Click on Save and you will then be shown the Scan Job Templates section. Expand it to view current scan jobs. To create a new job template, click the plus symbol, fill in the required fields and click Save. Now click the launch symbol to launch the job.

After creating an inventory, create a group within it. Click the plus symbol under the Group section and specify the group name and description. Specify the Variables in a similar way as we did while creating an inventory.

07 Job templates

You can create and view job templates by clicking the link at the top. You can also view the job template we created previously in the inventory. To create a new job template, click the plus symbol and specify the details as we did earlier. Jobs can be of type run, check or scan. Select the job type as required. Once you select a project, the playbook is automatically populated. Choose the inventory, project and credentials from those we created earlier and leave the rest of the fields as the default. Select the verbosity level as required and save. To launch the playbook, click the launch symbol under Actions. You will be prompted for a password; enter it and click launch. You will be redirected to the Jobs link, where you can see the status of this job and monitor it as it runs. The job can also be scheduled. Click the calendar symbol and then the plus symbol. Specify name, start date, start time, local time zone, UTC start time and repeat frequency, then click Save.

08 Jobs

Create and schedule jobs in the Jobs section by clicking in the menu. The Jobs page lists all created job details, displaying their ID, name, status, finished date time and job type. Under the Actions column, you have the option to launch or delete the job. For the Playbook Run job type, an additional action to view the standard output of a particular job is available. Click the View Standard Output button; you can see the output of the script executed that you wrote in the playbook YML file. When you click on a job using types SCM Update and Inventory Sync, the Job Results screen opens up. When a job with the type Playbook Run is clicked, the Job Status screen opens up. This page also opens when the Launch button is clicked from Job Templates, as we did earlier. This page uses Tower’s Live Event feature to automatically refresh until the job is completed.

Ansible modules
Ansible modules are reusable standalone scripts that can be used by the Ansible API, the Ansible CLI or Ansible Playbooks. They return information to Ansible by printing a JSON string to standard output before exiting. Ansible ships with a number of modules that can be executed directly on remote hosts or through Playbooks. Take a look at core Ansible modules at the project’s GitHub repository: bit.ly/2baCs7L. You can also write your own modules – read more details here: bit.ly/1QuFkM5.

09 Job results

The Job Results window displays information about jobs of type Inventory Sync and SCM Update in three different tabs – Status, Standard Out and Options. The Status tab shows the name of the job template that launched the job, the status of which may be Pending, Running, Successful or Failed. The Status tab also displays Started and Finished, which are timestamps of when the job was initiated and completed respectively. You can also see the launch type – Manual or Scheduled – along with the total time the job took in the Elapsed field. The Standard Out tab shows the full results of running the SCM Update or Inventory Sync playbook. The Options tab for SCM Update jobs has Projects associated with the job. For Inventory Sync jobs, it consists of credentials for the job, the group that is being synced, source for cloud inventory, regions, overwrite value and overwrite vars value.

10 Job status

The Job Status page for Playbook Run jobs displays all of the task details and events. It comprises several different sections – Status, Plays, Tasks, Host Events, Event Summary and Hosts Summary. The Status section shows the status of the job. Click More to view basic settings for this job, such as template, job type, launched by, inventory and project associated, running playbook, credential, verbosity settings and any extra variables used by this job.

The Plays section shows details of the plays that were run in this playbook. Click on a specific play to filter the Tasks and the Host Events area to just display tasks and hosts relative to that play. The Tasks area shows the task details running as part of plays. The Host Status here displays a summary of the host status for all hosts associated with this task. Click on a specific task to filter the Host Events area to only display hosts relative to that task. The Events Summary area shows a summary of events for all hosts affected by this playbook. The Host Summary area shows a graph summarising the status of all hosts affected by this playbook run.



Tutorial

Ubuntu system management

Manage the system on Ubuntu Linux

Take a closer look at how you can manage the system in the Ubuntu Linux environment

Swayam Prakasha

has a master’s degree in computer engineering. He has been working in information technology for several years, concentrating on areas such as operating systems, networking, network security, electronic commerce, internet services, LDAP and web servers. Swayam has authored a number of articles for trade publications, and he presents his own papers at industry conferences. He can be reached at swayam.prakasha@gmail.com

Resources
Ubuntu Linux server monitoring and management: bit.ly/2agfqcK

Linux system administration and configuration bit.ly/1N2N1tt

Server management bit.ly/1jKdVmc


Without careful management, the demands on your Ubuntu Linux system can exceed the resources that are available. There is a real requirement to monitor system resources such as memory, CPU and device usage over time, as this will ensure that the system has enough resources to do what you need it to. In a similar way, it is also important to manage the other aspects of your system, such as the various device drivers it uses and how the boot process works. This will help in avoiding performance problems and system failures on your Ubuntu system.

Let’s take a deeper look at how you can monitor the various resources. It is important to note here that Ubuntu systems do a good job of keeping track of what they do. If you are really interested, then you’re in a position to find out a lot of information about how CPU, hard disks, virtual memory and other resources are being used. This can be done in two ways. First, you can take a look at the contents of the files in the /proc directory – this is the place where the Linux kernel stores information about the system. Second, you can also make use of various commands that will help you to view information about how your computer’s resources are being used. There are commands specific to virtual memory, processor, storage devices, network interfaces and so on. These commands are very useful in monitoring the different aspects of a system’s resources.

When it comes to monitoring memory usage, we all know that system performance will decline when we run out of memory. There are a few basic commands, such as free and top, that enable you to see basic information about how a system’s RAM and swap area are used. Another command, vmstat, gives more detailed information about memory use and can run continuously. If we need to know how much memory the kernel is consuming, we can use the slabtop command.

Of all the commands that are available, the free command is widely used and it provides the quickest way to see how much memory is being used on your machine. It shows various parameters such as RAM, swap space and the amount of memory currently being used.



It is important to note here that, like the free command, top also shows the total memory usage for RAM and swap space. The top command is screen-oriented and provides ongoing monitoring. With this feature, you will be able to watch memory usage change once every three seconds. When top is running, if you press Shift+M, the running processes will be displayed in memory-use order – with this you can see which of the processes are consuming the most memory. The most important column that needs to be analysed from a memory usage perspective is RES – the resident size – since this typically shows the actual physical RAM usage of a process.

For a detailed view of your virtual memory statistics, the preferred command to use is vmstat. By using the vmstat command, you will be able to view memory usage over a given period of time.

Some of the options that can be used with the free command are shown in the above image. The following command can be used to list the memory usage in bytes:

~$ free -b

Sometimes, we may need to list memory usage in megabytes, together with a total line for the memory and swap areas. In such cases, use the -mt option, as shown below:

~$ free -mt

Another common use of the free command is to display the memory usage at regular intervals. This can be achieved by using the following command (where we have set the interval as five seconds):

~$ free -s 5

One way to guess how much memory is needed on a system is to go to another machine running Ubuntu and then open every application you think you will be running at once. Now you are good to run the free command with the total option (free -t) to see how much memory is being used. With this, you now need to make sure that your new system has at least that much total memory.

Another useful command, top, provides a means of watching the currently running processes. You can use this command to watch your memory usage in a screen-oriented way. The following screenshot illustrates this.

The following are some more examples of using vmstat with various options.

~$ vmstat -S m
This displays the output in megabytes (1,000,000-byte units).

~$ vmstat -n 2 10
Output every two seconds, repeated ten times.

~$ vmstat -s | less
Displays event counters and memory stats.

If you are interested in seeing how much memory each application is consuming on the system, then commands such as ps and top will be very handy. The kernel has its own memory cache to keep track of its resources, called the kernel slab. You can use the vmstat command to display the kernel slab memory statistics:

~$ sudo vmstat -m | less

Master the time on the Ubuntu system
It is essential to keep the correct time on your Ubuntu machine so that the system can function properly. Generally, a machine running Ubuntu Linux maintains time in two different ways: a system clock and a hardware clock. The system clock is used by Linux to keep track of time, whereas the hardware clock will set the system time when Linux boots up. A command – uptime – is normally used to determine how long your Ubuntu system has been up.




Let’s look at the output of the command in more detail. As seen from the screenshot, the kernel slab memory information shows each cache name, the number of objects that are active for that cache type, the size of the cache and so on. If you are interested in displaying the cache information in a screen-oriented way, then the command to use is slabtop. It needs to be noted here that the slabtop output is updated every three seconds.

The Ubuntu Linux system’s time zone
The Ubuntu Linux system’s time zone can be set based on the contents of the /etc/localtime file. In order to set a new time zone, all you need to do is to copy the file representing your time zone from a subdirectory of /usr/share/zoneinfo. For example, to change the current time zone to that of Asia/Kolkata, use the following command:

~$ sudo cp /usr/share/zoneinfo/Asia/Kolkata /etc/localtime

Ubuntu Linux makes it easier for end-users to watch and sometimes modify several aspects of their system so that it always operates at manageable performance levels. You can use various commands to see how the system is using memory, CPU and storage devices.

Next, let’s look at how we can monitor CPU usage. You will definitely face performance issues if the CPU is overloaded. As noted earlier, the vmstat command can help with getting basic information about the CPU usage. Another command, iostat, can provide us with more detailed information. Please note that iostat is not installed by default in Ubuntu, so you’ll need to install the sysstat package:

~$ sudo apt-get install sysstat

Let’s take a closer look at the iostat command with some examples:

~$ iostat -c 3
This command displays the CPU stats every three seconds.

The iostat command can also be used to print CPU utilisation reports with timestamps. This can be done by using the following command:

~$ iostat -c -t
~$ iostat -c -t 2 10
The second form will be repeated every two seconds, ten times.

Another popular command – dstat – is also available on the Ubuntu system, and this is an alternative to iostat. The dstat command also displays information about the CPU usage on your system; it has a clear advantage in that it clearly shows exactly the units of measurement it is displaying. You can install dstat using the following command:

~$ sudo apt-get install dstat

The following command displays the CPU usage information continuously with timestamps:

~$ dstat -t -c 3
You can see from this output that we have a report based on date/time values, and this report runs continuously until we stop it by pressing Ctrl+C.

Sometimes you may be interested in looking at the processes that are consuming the most processing time. The top command can be used in such scenarios. You can type the top command at the Ubuntu prompt and then press Shift+P so that the output will be sorted based on CPU usage. Note here that the output shows several processes and they are sorted as per the current CPU usage. If you need information about the processor itself, the best way is to go directly into the file /proc/cpuinfo and take a look.

Let’s now turn our attention to monitoring the storage devices on the Ubuntu system. If you are interested in only very basic information about the storage space available on the system, then commands such as du and df can be used. More details about how the storage devices are performing can be obtained with the help of commands such as vmstat and iostat. The parameter to notice in the above output is iowait. A high value for this parameter indicates that disk input/output is the major hurdle on the Ubuntu system. The vmstat command can also be used to display the statistics about the disks. The screenshot here illustrates this command with an example. The vmstat command can also be used to list the read/write information for a selected disk partition, as shown in the following example.

~$ vmstat -p sda1

Another popular command that will be of great help in the Ubuntu environment is lsof. System administrators can use this command to get a list of files and directories that are currently open on storage devices. When we are analysing the output of the lsof command, we will be looking at the file or the directory that is open, the command that has it open, and the process ID of that running process.

Next, let’s take a look at the GRUB bootloader on the Ubuntu environment. When we first installed Ubuntu, if GRUB was set up correctly, then the file /boot/grub/grub.cfg will have all the settings for the bootloader. It is important to note here that even though grub.cfg is a configuration file, you should never edit this file. This is basically because grub.cfg will get overwritten by the contents of the files in the /etc/grub.d directory. The configuration file grub.cfg has settings to control how GRUB behaves. It also helps in instructing how the various modules must be loaded. Though we cannot directly edit this configuration file, we can use the following two options instead.
• Use a custom file – this can be done by adding entries to the file /etc/grub.d/40_custom
• Use a default file – you can use the default file located at /etc/default/grub and its contents, as shown in the screen below.

Once you’ve changed the custom or default file contents, in order to make these changes effective, you’ll need to install a new kernel or just run the update-grub command.

Once the kernel has started up, it hands over control to a process called init. This init process has a PID of 1 and it will be the first running process on the system. Based on the contents of /etc/init, this initial process starts up the other processes. It also sets up the default run level. By using the sudo command, you can view the current run level:

~$ sudo runlevel

In order to change the current run level, use the init command. This is shown in the following command:

~$ sudo init 1

The above command will change the run level to 1 – that is, to single-user mode. Sometimes, we will come across situations where we need to manage services – this can be done by using the service command or, equivalently, the service’s init script. It is important to note that each service comes with a shell script, located in /etc/init.d; we just need to pass the start or stop option in order to start or stop a specific service. For example, in order to start the rsyslog service, you can type the following at the command prompt:

~$ sudo /etc/init.d/rsyslog start

In addition to start and stop, some scripts support a few more options. In order to understand the various options supported by a specific script, use a command similar to the one shown below:

~$ sudo /etc/init.d/rsyslog

As noted earlier, you can use the init command to change to any run level – this includes a run level of 0 (indicating a shutdown) and a run level of 6 (indicating a reboot). You can also find some specific commands in Ubuntu for stopping Linux. Some of the very useful commands in this category are reboot, poweroff, shutdown and halt. Let’s have a quick look at some of the examples.

~$ sudo reboot
This reboots your system.

~$ sudo shutdown 10
This will shut down the system in ten minutes; a warning message will be displayed to the users well in advance.

~$ sudo shutdown -r 20
This will reboot the system in 20 minutes, after first warning the users.



Tutorial

Mihalis Tsoukalos

Mihalis Tsoukalos is a UNIX administrator, a programmer (UNIX & iOS), a DBA and a mathematician. He has been using Linux since 1993. You can reach him at @mactsouk (Twitter) and his website: mtsoukalos.eu

Resources
A text editor such as Emacs or vi
The Go compiler

Go functions

Develop Go programs that read and write to files

Discover how to perform various file I/O operations in Go

This tutorial will talk about how to perform a variety of file I/O operations in Go. After discussing the Go packages and functions needed for reading, writing and deleting files, it will show you how to deal with records, binary data and sparse files in Go. You will also learn about byte slices and how they can help you execute file I/O operations.

About File I/O

Figure 1 shows the Go code of naiveCP.go. The program is a naïve version of the cp command-line utility; its core functionality is based on the next two lines of Go code:

input, err := ioutil.ReadFile(in)
err = ioutil.WriteFile(out, input, 0644)

Although naiveCP.go does its job, it does not implement cp in an efficient way because it reads its input file all at once using ioutil.ReadFile and then copies it using ioutil.WriteFile. This might work well for relatively small files, but it is not the most professional way of copying large or huge files, especially on busy Linux systems with relatively small amounts of RAM.
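Figure 1 itself is not reproduced in the text, so as a rough guide, a naiveCP.go-style program might look like the sketch below; the argument handling and messages are assumptions of this sketch rather than the exact listing.

package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

func main() {
	// Expect exactly two arguments: the input file and the output file.
	if len(os.Args) != 3 {
		fmt.Println("Usage: naiveCP input output")
		os.Exit(1)
	}
	in, out := os.Args[1], os.Args[2]

	// Read the whole input file into memory in one go...
	input, err := ioutil.ReadFile(in)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	// ...and write it back out in one go. Fine for small files,
	// wasteful for very large ones.
	err = ioutil.WriteFile(out, input, 0644)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}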

Tutorial files available: filesilo.co.uk


Standard Go packages related to file operations

As Go comes with a rich set of libraries, it is good to know the libraries that are related to file input and output operations.

The os package is used for interacting with the operating system, the io package is for performing primitive file I/O operations, while the bufio package is for executing buffered I/O. The functionality of the os package will help you to create, open and delete files, as well as acquire the command-line arguments of your programs. Among its most important names are os.Chdir, os.Exit, os.Remove and os.Open, along with the Read, Readdir and Close methods of os.File. The os package also contains the os.Args array that holds the command-line arguments given to a program, including the program name. The main advantage of os is that it provides a platform-independent interface to the operating system.

The io package helps you read from or write to a file byte by byte, and perform other similar operations; its most important names are the Read, Write and Close methods defined by the io.Reader, io.Writer and io.Closer interfaces. Last, the functions of bufio help you perform buffered operations, which means that although its operations look similar to the ones found in io, they work in a slightly different way. What bufio actually does is wrap an io.Reader or io.Writer object into a new object that implements the required interface while providing buffering to the new object. The bufio package allows you to read a text file line by line.

Should you wish to get additional help for any of the aforementioned packages, you can execute one of the following commands:

$ godoc os
$ godoc io
$ godoc bufio

Similarly, you can learn more about the io.Reader and io.Writer interfaces:

$ godoc io Reader
$ godoc io Writer

Each one of the previous two documentation pages shows the functions that have to be implemented for each one of the two interfaces.

About ZIP files
Go can help you create ZIP files using the zip package. For each file you want to put in the archive, you must first read it and then write its contents using an io.Writer. The process is not particularly difficult but you need to be extra careful and avoid bugs because you might fill up your hard disk space by accident! The example code can be found in zipFiles.go. Executing it produces the following output:

$ go run zipFiles.go test test1
Adding test
Adding test1
$ ls -l ZIP.zip
-rw-r--r-- 1 mtsouk staff 1066 Jul 8 21:58 ZIP.zip

Figure 1 (above): This is the Go code of naiveCP.go, which presents an unorthodox, yet fully working, way of making a copy of an existing file.

Basic file I/O information

The best and easiest way to open an existing file for reading is the following:

f, err := os.Open("aFile")

After a successful call to os.Open, you can start reading aFile using the f variable and its Read method, which returns the number of bytes read, which you can either use or ignore:

n, err := f.Read(data)

Alternatively, you can create a new scanner from an already open file, as follows:

scanner := bufio.NewScanner(f)

You can then use the scanner variable to iterate over the file contents using Scan:

for scanner.Scan() {
	line := scanner.Text()
}

Or, you can put the entire contents of a file into a variable without the need for a scanner or a call to os.Open:

input, err := ioutil.ReadFile(in)

You can create a new file for writing, as follows:

newF, err := os.Create(dst)

Then, you can start writing to it using its Write method:

_, err = newF.Write([]byte{0})
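To tie those fragments together, here is a small, self-contained sketch that opens a file, reads it line by line with a bufio.Scanner and then creates and writes to a new file. It is an illustration of the calls described above rather than one of the tutorial’s own listings, and the file names are made up for the example.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Open an existing file for reading.
	f, err := os.Open("aFile")
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	defer f.Close()

	// Iterate over the file line by line with a scanner.
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}

	// Create a new file and write a single zero byte to it.
	newF, err := os.Create("aNewFile")
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	defer newF.Close()

	_, err = newF.Write([]byte{0})
	if err != nil {
		fmt.Println(err)
	}
}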




Figure 2 (right): The Go code in errorHandle.go shows you how to handle file I/O errors.

Figure 3 (across): The wc.go program implements the basic functionality of the wc(1) UNIX command-line utility. This figure shows the Go code of the countLines function.

In case you previously used the ioutil.ReadFile function, you can do the following:

err = ioutil.WriteFile(out, input, 0644)

After finishing with a file that is open for reading or writing, you should call its Close method to close the file. The previous Go code does not deal with errors and error handling, which is considered very bad practice. The next section shows how to add error-handling code to your programs.

Error handling

The error-checking code is very important and should never be omitted just to save yourself from writing a few more lines of Go code. Usually, functions either return a single value or return no values – Go functions also return error-related information. The next function returns no value apart from an error variable:

err := os.Remove(filename)

Similarly, the following function returns a variable as well as an error variable:

f, err := os.Open(filename)

However, in both cases the code for error handling is the same:

if err != nil {
	fmt.Println("There was an error!")
	fmt.Println(err)
	os.Exit(1)
} else {
	fmt.Println("Everything is OK!")
}

Depending on your current policy, a program may exit in case of error or handle the error and continue. Both approaches are good as long as you are able to detect the error! The above code makes the program exit using the os.Exit function. You can see the complete Go code of errorHandle.go in Figure 2.

A simple program

So, it’s now time to combine all the previous knowledge to develop a real program in Go. The simple.go program uses io.Copy to produce the actual copy of a file. The Copy function copies the source file to the destination file and returns the total number of bytes copied and the earliest error encountered while copying. The relevant code is as follows:

source, err := os.Open(src)
destination, err := os.Create(dst)
nBytes, err := io.Copy(destination, source)

Building and executing it produces the following output:

$ go build simple.go
$ ./simple simple anotherCopy
Copying simple to anotherCopy
Copied 1835016 bytes!
$ ls -l simple anotherCopy
-rw-r--r-- 1 mtsouk mtsouk 1835016 Jul 8 15:27 anotherCopy
-rwxr-xr-x 1 mtsouk mtsouk 1835016 Jul 8 15:27 simple

The only thing that’s different between anotherCopy and simple is their file permissions.


The program also uses the handy defer command that defers the execution of a function until the surrounding function returns – defer is used very frequently in file I/O operations because it saves you from having to remember to execute the Close() call after you are done with a file that is open either for reading or writing. Additionally, it prohibits you from accidentally closing a file descriptor that is in use and, therefore, still needed by the program. The full Go code of this section is saved as simple.go and is a much-improved version of naiveCP.go that also includes error-handling code. Please study simple.go and experiment with it before continuing with the rest of the tutorial!
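The full simple.go listing is among the tutorial files on FileSilo rather than in the text, so here is a rough sketch of how such an io.Copy-based copier could be put together, with error handling and defer as described; the argument checking and messages are this sketch’s own additions, not necessarily those of the original program.

package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	if len(os.Args) != 3 {
		fmt.Println("Usage: simple source destination")
		os.Exit(1)
	}
	src, dst := os.Args[1], os.Args[2]
	fmt.Println("Copying", src, "to", dst)

	source, err := os.Open(src)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	// defer makes sure the file is closed when main returns.
	defer source.Close()

	destination, err := os.Create(dst)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	defer destination.Close()

	// io.Copy reads from source and writes to destination in chunks,
	// so no large buffer holding the whole file is needed.
	nBytes, err := io.Copy(destination, source)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("Copied", nBytes, "bytes!")
}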

Figure 4 (left): This is the Go code of records.go, which illustrates how to read and write records in Go.

Figure 5 (left): This is the source code of binary.go, which shows how to read and write data to binary files.

Deleting a file

This section will show you how to delete a file using Go code. The central Go code of deleteFile.go is as follows:

err := os.Remove(filename)
if err != nil {
	fmt.Println(err)
	return
}

As you can see, you only need to call os.Remove and, as expected, verify that the function was executed without any errors. If you do not have the required permissions to delete a file, or the file does not exist, deleteFile.go will generate the following error messages, since the call to os.Remove was not successful:

$ go run deleteFile.go toBeDeleted remove toBeDeleted: no such file or directory $ go run deleteFile.go /usr/bin/perl remove /usr/bin/perl: operation not permitted As you can see, Go error messages are very expressive and usually reveal the reason a function call has failed.
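As a rough guide to what deleteFile.go probably looks like, here is a minimal sketch that takes the file name from the command line and removes it; the usage message and argument handling are assumptions of this sketch.

package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Println("Usage: deleteFile filename")
		os.Exit(1)
	}
	filename := os.Args[1]

	// os.Remove deletes the named file and returns an error
	// describing why it failed, if it did.
	err := os.Remove(filename)
	if err != nil {
		fmt.Println(err)
		return
	}
}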

Developing a Go version of wc

This section will teach you how to develop a simple version of the wc command-line tool in Go. The principal idea behind wc.go is that you read a text file line by line until you reach its end. For each line you read, you find out the number of characters and the number of words it has. As you need to read your input line by line, the use of bufio is preferred instead of the plain io because it simplifies the code. However, trying to implement wc.go on your own using io would be a good exercise. Reading a text file line by line is done with the help of the ReadString method of bufio.Reader:

r := bufio.NewReader(f)
for {
	line, err := r.ReadString('\n')
	...
}

If you have many files to process, you will get a line of output for each file but no summary information – this is left as an exercise for the reader. Executing wc.go produces the following kind of output:

$ ./wc naiveCP.go deleteFile.go binary.go
naiveCP.go: 27 lines 61 words 418 characters
deleteFile.go: 22 lines 37 words 278 characters
binary.go: 37 lines 78 words 631 characters

Executing wc(1) produces the following output:

$ wc naiveCP.go deleteFile.go binary.go
27 61 418 naiveCP.go
22 37 278 deleteFile.go
37 78 631 binary.go
86 176 1327 total




You can see the source code of the countLines() function in Figure 3, where you can understand that both line- and byte-counting tasks are pretty easy. What is tricky is counting the number of words in a line, which is implemented with the help of regular expressions:

r := regexp.MustCompile("[^\\s]+")
for range r.FindAllString(line, -1) {
	numberOfWords++
}

The next two Go tutorials will talk about the concurrent capabilities of Go, where you will also learn how to turn the current Go implementation of wc into a concurrent program, so stay tuned!

Figure 6 (right): This is the output of the strace utility when tracing the executable of naiveCP.go, which shows what an executable does behind the scenes.
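The full countLines function only appears in Figure 3, so the following is a speculative reconstruction of how the line, word and character counting might fit together; the function name matches the article, but the exact structure and variable names are guesses of this sketch.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// countLines prints the number of lines, words and characters in a file.
func countLines(filename string) {
	f, err := os.Open(filename)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	numberOfLines := 0
	numberOfWords := 0
	numberOfChars := 0
	wordRE := regexp.MustCompile("[^\\s]+")

	r := bufio.NewReader(f)
	for {
		line, err := r.ReadString('\n')
		numberOfChars += len(line)
		// Count the words on this line using a regular expression.
		for range wordRE.FindAllString(line, -1) {
			numberOfWords++
		}
		if err != nil {
			// End of file: any final text without a trailing newline
			// has already been counted above.
			break
		}
		numberOfLines++
	}
	fmt.Printf("%s: %d lines %d words %d characters\n",
		filename, numberOfLines, numberOfWords, numberOfChars)
}

func main() {
	for _, filename := range os.Args[1:] {
		countLines(filename)
	}
}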

Byte slices

A very handy Go feature is called byte slices; byte slices are a special kind of slice used for file reading and writing operations. The example code is saved in byteSlice.go, which you can see in Figure 9. The program declares two byte slices, named aByteSlice and anotherByteSlice. Executing byteSlice.go generates the following output:

$ go run byteSlice.go
Read 11 bytes: Tsoukalos!

Note that the reason it says that it read 11 bytes instead of 10 bytes is the existence of the newline character at the end of the string.
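The byteSlice.go listing itself is in Figure 9; as a hedged approximation, a program along these lines would produce that kind of output, although the file name used here is invented for the sketch. The two slice names come from the article’s description.

package main

import (
	"fmt"
	"os"
)

func main() {
	// A byte slice holding the data we want to write, newline included.
	aByteSlice := []byte("Tsoukalos!\n")

	f, err := os.Create("aFile")
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	_, err = f.Write(aByteSlice)
	if err != nil {
		fmt.Println(err)
	}
	f.Close()

	// Read the data back into another byte slice.
	f, err = os.Open("aFile")
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	defer f.Close()

	anotherByteSlice := make([]byte, 100)
	n, err := f.Read(anotherByteSlice)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Printf("Read %d bytes: %s", n, anotherByteSlice[:n])
}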

Reading and writing records

The strace tool
The strace command-line utility allows you to trace the system calls and signals of an executable program. The program that is going to get traced will be the executable of naiveCP.go:

$ strace ./naiveCP naiveCP copy
...
read(3, "\177ELF\2\1\1\0\0\0\0\0\0"..., 2041152) = 2040640
read(3, "", 512) = 0
close(3) = 0
open("copy", O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0644) = 3
write(3, "\177ELF\2\1\1\0\0\0\0\0\0"..., 2040640) = 2040640
...

What its output reveals is that everything is translated to C functions and system calls, because this is the appropriate way to communicate with the kernel. Nevertheless, writing the same program in C would have required many more lines of C code. What you can also understand from the strace output is that there is a single call to read(2) and then a single call to the write(2) system call, which means that a large buffer is needed for keeping the data read from the read(2) call, which verifies the inefficiency of the naiveCP.go program. You can see a part of the large strace output in Figure 6. If you trace the executable from simple.go using strace, you will see that there are multiple calls to read(2) and write(2), which verifies that simple.go is more efficient because it does not need a big buffer.

This section of the tutorial will show you how to read and write data using records. A record can have many types, including JSON data, XML data and plain text data. What differentiates a record from other kinds of text data is that a record has a given structure with a specific number of fields – think of it as a row of a table from a relational database. Actually, records can be very useful for storing data in tables in case you want to develop your own database server in Go. You can see the source code of records.go in Figure 4. Executing it produces the following output:

$ go run records.go aCSVfile
1:a:A
2:b:B
3:c:C
4:d:D
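records.go itself is only shown in Figure 4, but the colon-separated output above suggests something along the following lines. Treat this as a sketch of one possible approach (writing records with a ':' separator and reading them back), not the article’s actual listing; the sample data and argument handling are assumptions.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Println("Usage: records filename")
		os.Exit(1)
	}
	filename := os.Args[1]

	// Each record is a fixed number of fields joined with a ':' separator.
	records := [][]string{
		{"1", "a", "A"},
		{"2", "b", "B"},
		{"3", "c", "C"},
		{"4", "d", "D"},
	}

	// Write the records to the file, one per line.
	f, err := os.Create(filename)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	for _, record := range records {
		fmt.Fprintln(f, strings.Join(record, ":"))
	}
	f.Close()

	// Read the records back and print them field by field.
	f, err = os.Open(filename)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Split(scanner.Text(), ":")
		fmt.Println(strings.Join(fields, ":"))
	}
}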

Reading binary files

This section will talk about how you can read data from a binary file in Go. The Go code for reading the binary data file is as follows:

data = data[:cap(data)]
n, err := f.Read(data)

The following code allows you to process a binary file, byte by byte:

for _, b := range data {
	fmt.Printf("% x ", b)
}

The source code of binary.go can be seen in Figure 5.
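Again, the complete binary.go is in Figure 5; the sketch below is a hedged guess at how those two fragments might be combined to hex-dump a file passed on the command line, with the buffer size chosen arbitrarily for the example.

package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Println("Usage: binary filename")
		os.Exit(1)
	}

	f, err := os.Open(os.Args[1])
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	defer f.Close()

	// A fixed-size buffer; Read fills as much of it as it can.
	data := make([]byte, 100)
	data = data[:cap(data)] // mirrors the article's fragment
	n, err := f.Read(data)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	// Print each byte that was read as a hexadecimal value.
	for _, b := range data[:n] {
		fmt.Printf("% x ", b)
	}
	fmt.Println()
}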



Figure 7 (left): This is the source code of find.go, which shows you a way of recursively visiting all files and directories of a directory tree.

Figure 8 (across): This is the source code of sparse.go, which illustrates how to create sparse files.

Developing a version of find in Go

This part will briefly discuss how to start developing a version of the find command-line utility in Go. Particularly, it will show you how to visit all files and directories of a directory tree given a root directory, which is the most difficult and important task of find. Although find does not have to deal with file I/O per se, it is useful to know how to search a directory tree in Go. The crucial Go code is the following:

err := filepath.Walk(searchDir, func(path string, f os.FileInfo, err error) error {
	fileList = append(fileList, path)
	return nil
})

Sparse files
Big files that are created with os.Seek may have holes in them and occupy fewer disk blocks than files with the same size but without holes in them; such files are called sparse files. Go uses the traditional C approach in order to create sparse files. The most important part of sparse.go is the following, where you can see the use of os.Seek:

_, err = fd.Seek(SIZE-1, 0)

You can see the full source code of sparse.go in Figure 8. Executing it produces the following:

$ go run sparse.go 1000000 aSparseFile
$ ls -l aSparseFile
-rw-r--r-- 1 mtsouk mtsouk 1000000 Jul 7 12:10 aSparseFile
$ ls -alsh aSparseFile
8.0K -rw-r--r-- 1 mtsouk mtsouk 977K Jul 7 12:10 aSparseFile

So, although the size of aSparseFile is about 1MB, its size on the hard disk is only 8KB. Please note the use of strconv.ParseInt in sparse.go for converting a string value into an integer value.
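For reference, a sparse.go-style program might look roughly like this: it seeks SIZE-1 bytes into a freshly created file and writes a single byte there, so everything before it becomes a hole. The argument parsing shown here is an assumption based on the strconv.ParseInt mention, not the article’s exact code.

package main

import (
	"fmt"
	"os"
	"strconv"
)

func main() {
	if len(os.Args) != 3 {
		fmt.Println("Usage: sparse SIZE filename")
		os.Exit(1)
	}

	// Convert the requested size from a string into an integer.
	SIZE, err := strconv.ParseInt(os.Args[1], 10, 64)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	fd, err := os.Create(os.Args[2])
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	defer fd.Close()

	// Jump to the last byte of the requested size...
	_, err = fd.Seek(SIZE-1, 0)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	// ...and write a single zero byte, leaving a hole behind it.
	_, err = fd.Write([]byte{0})
	if err != nil {
		fmt.Println(err)
	}
}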

fileList = append(fileList, path)
return nil

This is a pretty ingenious way to walk a directory tree – if you are more interested in this functionality, you should further explore walking functions. You can see the source code of find.go in Figure 7. Executing it produces the following kind of output:

$ go run find.go .
<nil>
.
find.go
naiveCP.go
wc.go
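A bare-bones find.go built around that call might look like the sketch below; printing the error and then each collected path would explain the kind of output shown above, though the real listing in Figure 7 may differ in its details.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Default to the current directory if no root is given.
	searchDir := "."
	if len(os.Args) == 2 {
		searchDir = os.Args[1]
	}

	fileList := []string{}

	// Walk the directory tree rooted at searchDir, collecting every path.
	err := filepath.Walk(searchDir, func(path string, f os.FileInfo, err error) error {
		fileList = append(fileList, path)
		return nil
	})
	fmt.Println(err)

	for _, file := range fileList {
		fmt.Println(file)
	}
}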

Figure 9

Left This is the source code of byteSlice.go, which illustrates the use of byte slices for writing and reading files

Executing binary.go produces the following kind of output:

$ file aBinary
aBinary: data
$ go run binary.go aBinary
3a ec 5f a7 31 a0 2d 8a 22 63 49 …



BUILD A BETTER WEB www.webdesignermag.co.uk

Available from all good newsagents and supermarkets

FREE RESOURCE DOWNLOADS EVERY ISSUE

ON SALE NOW
jQuery 3.0: What's new? | JavaScript core coding techniques | Build HTML5 games

DESIGN INSPIRATION

PRACTICAL TIPS

BEHIND THE SCENES

STEP-BY-STEP ADVICE

INDUSTRY OPINION

BUY YOUR ISSUE TODAY Print edition available at www.imagineshop.co.uk Digital edition available at www.greatdigitalmags.com Available on the following platforms

facebook.com/webdesignermag

twitter.com/webdesignermag


THE ESSENTIAL GUIDE FOR CODERS & MAKERS

PRACTICAL

Raspberry Pi

Contents

58 Electronics for Pi Hackers
“Perhaps you can easily churn out code to run on your Raspberry Pi, but sooner or later you’ll probably have a hankering to interface it to LEDs, sensors and other real-world devices...”

70 Pi-controlled music box
72 Check your mail with Python
74 Hack a toy with Pi: Part two
78 Run RISC OS on your Pi
80 Build an Explorer robot: Part three



Feature

Electronics for Pi Hackers

Learn to build electronic circuits to interface your Raspberry Pi to the real world

Perhaps you can easily churn out code to run on your Raspberry Pi but, sooner or later, you’ll probably have a hankering to interface it to LEDs, sensors and other real-world devices. Sure, you can easily attach a ready-made board, perhaps a HAT, to the Pi’s GPIO connector, but if you want to go further your C++ or Python programming skills aren’t going to help. The web is awash with circuits for the Pi, ranging from simple ones to light up LEDs, through PIC programmers, to ambitious home automation projects. But to turn these from a circuit diagram into a working circuit requires some electronic skills. Here we’ll teach you how to build an electronic circuit from a circuit diagram. In addition to helping you build circuits that others have designed, these basic skills are a good foundation to further develop your electronics skills and, in time, even start designing your own circuits.



ESSENTIAL COMPONENTS

Discover some of the components you’ll be working with as you build electronic circuits

Where to buy
Maplin is about the only big-name high-street shop to sell electronic components and tools, and you might feel more comfortable selecting components from the shelves when you’re just starting out or if you need something in a hurry. You’ll have more choice ordering online, though, with large suppliers including Maplin (www.maplin.co.uk), Rapid Electronics (www.rapidonline.com), Digi-Key (www.digikey.co.uk), Farnell element14 (http://uk.farnell.com), and RS Components (http://uk.rs-online.com).

Resistors: Resistor values are measured in Ohms (Ω), kΩ (10³ Ohms) and MΩ (10⁶ Ohms). Commonly on circuit diagrams, the Ω is omitted and k, M or R (for Ω) replaces the decimal point, e.g. 3R3, 47k and 2M2. Most resistors have a power rating of 0.25W but, if a greater value is specified, do use it. Resistors are marked with coloured bands to represent their value. The last band, separated by a larger gap, represents the tolerance, usually silver (10%) or gold (5%). If there are four bands, the first two represent digits and the third is a multiplier (i.e. number of zeros); with five bands, the first three are digits and the fourth is the multiplier. Black, brown, red, orange, yellow, green, blue, violet, grey and white represent 0-9, so red, violet, orange, gold means 27k, 5% tolerance. Resistors have two leads and can be connected either way round.

Capacitors: Capacitor values are measured in pF (10⁻¹² Farads), nF (10⁻⁹ Farads) or µF (10⁻⁶ Farads). Commonly on circuit diagrams, the F is omitted and the multiplier replaces the decimal point, e.g. 33p, 2n7 and 10µ. Capacitors also differ in their voltage rating. Sometimes a capacitor is also specified by its construction (e.g. ceramic, polypropylene) and you should always use the type specified. The means of marking the value varies. It could be obvious (e.g. 10nF) but it might also be two digits plus a multiplier, with numbers rather than coloured bands, e.g. 103 for 10,000pF, i.e. 10nF. Capacitors have two leads and most can be connected either way round, although some are polarised (having positive and negative terminals) and must be connected the right way round.

Transistors: Transistors have part numbers (e.g. BC547B) which are printed on them. They have three leads: base, emitter and collector for bipolar transistors, or gate, source and drain for FETs, and must be connected the right way round. Identify the leads from the specification sheet.

Diodes: Diodes have part numbers (e.g. 1N4002) which are printed on them. They have an anode and a cathode and must be connected the right way round. A band on the body identifies the cathode.

Light Emitting Diodes (LEDs): LEDs have an anode and a cathode and they must be connected the right way round. A flat on its body identifies the cathode.

Integrated Circuits (ICs): ICs have part numbers that are printed on them. They have lots of pins which are shown on the circuit diagram as numbers and must be connected correctly. For DIL ICs, pin 1 is at the top-left when viewed from above (the top being the edge with the notch) and the pins then progress anti-clockwise.

Jumper Wire: Use insulated wire, single conductor (1/0.6mm) for connections on a breadboard or stripboard, but stranded wire (16/0.2mm) for connections to off-board components.

Data sheets

Datasheets provide detailed information – both mechanical and electronic – on components and, in the case of more advanced components such as ICs, they sometimes provide application information including diagrams of typical circuits. If you’re building a circuit that someone else has designed you won’t need to consult a datasheet to learn about a component’s electronic characteristics, but you might need to find out about its mechanical details in order to wire it up correctly. Generally this won’t apply to simple components like resistors and capacitors, since you can connect them either way round, nor to diodes and LEDs, because the marking of the cathode is standardised. However, you’ll often need to look at the datasheet to identify the leads on transistors. This is because transistors have three leads which must be correctly identified, but the positioning of the base, emitter and collector (or gate, source and drain) is not standardised and usually differs from one type of transistor to another.

In time, if you advance to designing your own circuits, datasheets will be useful for so much more than just identifying component leads, and you’ll find the application notes to be invaluable. Datasheets are widely available online. A Google search for datasheet and the part number of the component will usually lead you straight to it. In some cases a particular type of component (e.g. a transistor or IC with a particular part number) is manufactured by several manufacturers. In this case any datasheet will do; you don’t necessarily have to consult the one from the manufacturer of the particular device you bought.




TOOLS YOU’LL NEED

If you’re a DIY enthusiast you’ll probably have a collection of tools such as screwdrivers and pliers. You might be able to press some of these into service for electronic construction but that will make life harder and you’ll risk damaging components. What’s more, tools such as a soldering iron are unlikely to be lurking in your toolbox. Exactly what you need depends on your method of construction but here are details of what you might need.

Solder
More a consumable than a tool, but necessary for all methods of construction except breadboarding and terminal strip, is solder. This is a low melting point alloy which, when melted and allowed to solidify, makes a good physical and electrical connection between metal parts such as a component’s leads and the copper tracks on stripboards or PCBs. The type of solder used in electronics comes as a fine wire, to ease melting, and has a rosin core so that separate flux isn’t needed.

De-soldering Tool
A de-soldering tool, known colloquially as a solder sucker, is used to remove a component that has been soldered onto a stripboard or PCB. You’d use it if you’d accidentally soldered a wrong component in place or if a component failed. This tool takes the form of a spring-operated suction pump. To use it, prime the pump by pushing in the plunger, apply your soldering iron to the joint and, with the soldering iron still in place, place the tip of the de-soldering tool against the joint and press the button, thereby causing it to suck up the molten solder.

Screwdrivers

You’ll need a screwdriver if you’re constructing on terminal strip but, because they’re so cheap, you’d be advised to invest in some screwdrivers because they’re also useful for so many other jobs. For example, if you want to house your circuit in a small plastic case, you’ll need a screwdriver to remove and re-attach the lid and possibly also to fix the stripboard or PCB inside the case. Screwdrivers you might have for general DIY jobs will probably be too large so get hold of a small-bladed screwdriver, commonly known as a terminal ’driver, and a similarly sized cross-head screwdriver.

Pliers

Ordinary pliers are often multipurpose tools that can also be pressed into service as wire cutters. For electronic construction, though, they are far from ideal for either use. Electronics pliers, also known as snipe nose pliers, are much smaller than general purpose pliers, commonly about 120mm long, and they have fine jaws, thereby allowing more control in accurately bending wires such as component leads. In addition, some are spring-loaded so that they'll open automatically when you release the pressure on the handles. This latter feature might not sound too significant but it can make you much more productive when carrying out the same sort of operation over and again.

Soldering Iron

A soldering iron is the tool used to melt solder to make an electrical connection. The type of soldering iron intended for electronic construction has a fine tip so you can easily make a joint without accidentally bridging it to another component lead that might be separated by only 0.1”. Electronics soldering irons also come with a stand so you can put them down, while still switched on, without risking damage to your workbench or items on it.


Wire Cutters

Of similar size to electronics pliers and often with the same spring-loaded mechanism, wire cutters offer the same sort of advantages over ordinary DIY tools. Of particular importance is the fact that they allow you to cut a wire or component lead more accurately and also let you cut much closer to a solid surface such as a stripboard or PCB.

Multimeter

Sooner or later you'll build a circuit that doesn't behave as you'd hoped and intended. To help you debug an electronic circuit a multimeter is especially useful. Depending on how you set the dials it can measure AC or DC voltage or current in a circuit and you can measure the values of components like resistors (although not if they're wired into a circuit).

Spot Face Cutter

Used exclusively for construction on stripboard, a spot face cutter is used to make a break in a copper strip where electrical continuity isn't required. At a pinch you could use an appropriately sized drill bit, but if you use it as a hand tool you risk cutting your fingers and if you use it in a drill you might accidentally drill a hole through the board. To use a spot face cutter, place the pointed end in the hole at the point you want a gap in the copper strip and twist it back and forth a few times.

Logic Probe

Another useful piece of test equipment for logic circuits, which is primarily what you'll be building to interface to a Raspberry Pi, is the logic probe. In a logic circuit, one voltage is used to represent a binary 0 while another represents a binary 1. A logic probe lets you easily identify whether points in the circuit are ones or zeros, a job that a multimeter isn't particularly good at. Simply clip the probe's supply lead across the circuit's power supply and apply the point of the probe to any point in the circuit. The green LED lights to indicate 0, the red LED lights to indicate 1, and if both appear to be lit the point is oscillating rapidly between 0 and 1.

Stay safe

The main risk in electronic construction is heat from the soldering iron. Wires and components such as resistors conduct heat very effectively so, if you're soldering to something that needs holding in place, hold it with pliers rather than your fingers. Molten solder is particularly hazardous so be careful about actions like shaking the soldering iron to get rid of excess solder. In fact it might be a good idea to wear safety glasses while soldering.




CONSTRUCTION METHODS

Pick a method of construction that suits the complexity of the circuit and its longevity

Breadboarding

Our first method of construction is useful when your circuit is temporary. It’s often used by electronics engineers to try ideas out while designing a circuit before building it using a more permanent method. However, you might also find it useful for a circuit that you only want to use for a short time before dismantling it and building something else. A breadboard is a plastic board with an array of holes, usually on a 0.1” pitch, through which you can push the leads of components which are then “grabbed” by the breadboard. The leads of the components automatically connect together in rows as defined by the arrangement of the breadboard. Connections perpendicular to these rows are made using jumper wires (patch leads) that are fitted in the same way as the components. Breadboards with around 400 holes are common and can be picked up for £3 or less. As a bit of practical advice, try to hold the leads using pliers while you’re inserting them to prevent them from bending. Note also that some components such as resistors have leads sticking straight out so you’ll have to bend them at right angles, at an appropriate pitch, before you can insert them into the breadboard. Always bend using pliers and avoid bending too close to the body of the component as this could cause damage.

Rat’s Nest

Very simple circuits comprising, say, fewer than ten components can be soldered together in what's often referred to as a rat's nest. It won't look pretty but it'll work, and that's the main idea. Unlike all the other methods of construction discussed within this feature, there is no means of physically securing the components other than the solder connection. Soldering together components that are not otherwise attached can be tricky so it's a good idea to make a loop in the end of the lead of one component using a pair of pliers, pass the lead of the other component through it, and then squeeze the loop, again with the pliers, to grip the other lead. With the leads therefore held together temporarily, the joint can be soldered to make it permanent. Because you need both hands to make a solder joint, you need some way of holding the components while you solder them. A small and cheap device often known as a “helping hands”, which can be thought of as the equivalent of a tiny vice, is very useful for this purpose.

Terminal Strip

This next method could be used to build up moderately small circuits for permanent use but, because it doesn't involve soldering, it's also suitable for temporary circuits. It uses terminal strips (otherwise known as terminal blocks) of the type you might have used in joint boxes for household electrical wiring and which can be bought from DIY stores. You should choose a strip with a small current rating such as 5A or less because it will be much smaller than a 20A terminal strip. Terminal strips have two rows of terminals, the adjacent terminals in each row being connected together. In itself this internal connection doesn't offer much scope for making the required connections in a circuit, but it's possible to insert more than one component lead into any of the terminals and jumper wires can be used to connect together non-adjacent terminals. Little in the way of practical guidance is needed since the means of connecting leads to the terminal strip is fairly obvious. Just make sure the terminal is unscrewed, insert the component lead(s) or jumper wire(s), and screw the terminal up. If you're inserting several leads or wires into the same terminal, it might be a good idea to twist them together first. The instructions about bending the leads of components that were given for breadboard construction apply here too.

Above Diodes are in the middle; clockwise from top-left: resistors, capacitors, transistors and ICs


PCBs

Consumer electronic gear uses printed circuit boards (PCBs) to connect the components together. With a single-sided PCB, components are fitted to one side of the board with their leads passing through holes, and these are soldered to copper tracks on the other side of the board to make up the circuit. These are particularly suited to volume manufacturing or to complicated circuits, but much less so for the average simple circuits that are going to be built individually.

Because designing a PCB and getting it manufactured requires additional skills, we don't recommend you design your own PCB. However, if you find a circuit for which the designer has made a PCB available, this is an easy method of construction. Having bought the PCB, construction is similar to that of construction on a stripboard but without the need to make gaps in the copper strips. However, read our construction advice about stripboards before fitting any components to the PCB.


Understanding circuit diagrams

Symbols shown: AND gate, NAND gate; buzzer; capacitor (non-polarised); capacitor (polarised); cell, battery; crystal; diode; inverter; LED (light emitting diode); op-amp; OR gate, NOR gate; resistor; switch (push button, on/off); transistor (bipolar, NPN); transistor (bipolar, PNP); transistor (FET, n-channel); transistor (FET, p-channel); wires (no connection); wires (connecting); XOR (exclusive OR) gate.

How diagrams work

A circuit diagram is a logical representation of how components connect together to form a circuit. It does not convey information about how they are arranged physically. Each component is represented by a symbol and lines indicate the connections between them. Some common symbols are shown above (with alternatives in some cases). We've shown labels on some symbols (e.g. the plus sign on diodes, LEDs, cells, batteries and polarised capacitors, and base/emitter/collector or gate/source/drain on transistors) but this is just to help you identify the leads of these symbols and they won't normally appear on circuit diagrams. Op-amp symbols will show plus and minus signs, though, as there is no other way of identifying them. Symbols have a value or part number next to them and often an identifying number, for example R1 for resistor number 1.

A word on ICs... Although the table above has symbols for AND, NAND, OR, NOR and XOR gates and inverters, these are not individual components; instead they are part of an IC. For example, an IC may contain four NAND gates. If the circuit diagram doesn't give IC pin numbers, you'll have to consult a datasheet to identify the pin numbers for the inputs and outputs. Pretty much all other ICs will appear on the circuit diagram just as boxes with the pins numbered and named.

Stripboard

Stripboard, often referred to by the trade name Veroboard, provides a versatile method of construction and is suitable for medium complexity circuits. It’s an insulating board drilled with holes on a 0.1” pitch and with parallel copper strips on one side, running the length of the board. Components are mounted on the opposite side to the copper strips, with their leads passing through the holes, and soldered onto one of the copper strips. This way, connections are made between any component leads soldered onto the same copper strip. The copper strips can be broken where a connection isn’t required and lengths of wire, on the same side as the components, are used to make connections perpendicular to the copper strips. With a simple circuit, you might be able to move straight to constructing it on a piece of stripboard. In most cases, though, to avoid errors, it would be wise to plan out the positions of components and patch leads and of gaps in the copper strips.

As an alternative to carrying out this exercise using a pencil and paper, you could use a vector graphics package and there are also a few software packages dedicated to this task. Most are only for Windows although VeeCAD reportedly works well under Linux/Wine, and it’s free. Construction on stripboard involves the following steps. First bend the leads of linear components such as resistors at the necessary separation to pass through the relevant holes. Next, put the components in place on the opposite side to the copper strips by passing their leads through the holes. When all the components have been fitted, hold them in place with a piece of foam before turning the board over. Now solder all the component leads to the copper strips before removing any excess length using wire cutters. Finally, make any necessary gaps in the copper using a tool called a spot face cutter.




HOW TO USE A SOLDERING IRON

Learn the basic skills of soldering with our step-by-step guide

Soldering is an essential skill in electronic construction, and while it's not difficult, it's unlikely you'll have success without some guidance. In particular, unless you know what you're doing, you could fail to make soldered joints at all, you could solder a larger area than intended thereby making unintended electrical connections, or you could make a dry joint. Here we guide you through the basic steps and we suggest you try this out to get some practice before building a real circuit.

01

Warm up your soldering iron

Before you can do anything with your soldering iron it needs to be warmed up to its correct operating temperature so turn it on and wait. Resist the temptation to check whether it's hot enough with your fingers (it'll hurt) and you'll find it's at the right temperature in two or three minutes. If it isn't hot enough you'll soon find out because the solder won't melt when you try to tin the tip in Step 3.

02

Clean the tip

Usually, if your soldering iron has been unused for a while – even a very short time – the tip will be tarnished and dirty. In this state it won't work very well. So, first of all you need to remove any dirt. First, make sure the sponge on the base of the stand is wet. Now, with the soldering iron hot, draw the tip over the wet sponge, being careful not to leave it in one place long enough to dry and burn the sponge.

03

Tin the tip

With all the dirt and tarnished excess solder removed, you can carry out the important step of tinning the tip of the soldering iron. To do this, touch some solder onto the flat face of the tip. All being well the tip will become covered in molten solder and look silver. If there's excess solder wipe it off on the sponge. If this doesn't work the first time go back to Step 2 and start again. You might also need to carry out Steps 2 and 3 periodically as you're working.

04

Bring the surfaces together

With the tip of the soldering iron clean and tinned you're ready to make your first soldered joint. To do this, you first need to bring the two metallic surfaces that you want to solder together into close proximity with each other. In the photograph, you'll notice that a component has been fitted to a piece of stripboard with its leads passing through, ready to be soldered to the copper strips. The screwdriver is pointing out the newly fitted component leads.

05

Solder the joint

With the two metal surfaces in close proximity, touch both the surfaces with the soldering iron simultaneously so they're both heated. This should only take a second or so. Try to avoid over-heating as this can damage some components. Now, with the soldering iron still in place, touch the solder to the joint between the two surfaces, ideally at the other side from the soldering iron, until solder melts onto both surfaces. At this point remove both the soldering iron and the solder.

06

Make sure it's okay

When you're first starting out it's a good idea to carry out a visual inspection of the joint. First of all make sure that the solder has genuinely flowed and solidified onto both surfaces. Second, make sure the solder joint looks shiny and silver. If it doesn't look shiny then you have a so-called dry joint, due to over-heating the solder, and this results in a poor electrical connection. If the joint isn't perfect, remove the solder with a de-soldering tool and try again.


SIX PROJECTS TO TRY

Ideal ways to try out your new hardware and electronics skills

Once you've started to hone your hardware skills, then the sky (or the Pi) is the limit. From building your own toys, like remote-controlled cars or your own games machine, to media projects like a Raspberry Pi-powered picture frame or your own networked music player, to practical electronics like creating circuits and building a Bluetooth iBeacon, these are just some of the projects you can try to get started. You can find detailed guides to all of them in Raspberry Pi: The Complete Manual (http://bit.ly/2aHhSJz).

Paint your own circuits

Bare Conductive (www.bareconductive.com) has taken the joy of electronics and made it far safer, easier and more versatile with its conductive paint. You can literally draw wires on paper with a paintbrush; use it for cold soldering, as a conductive adhesive, and much more. There are very few limits to what you can do. Pair this paint with a microcontroller board and you could be creating interactive art, clothing and projects in no time.

Make a networked media player

Use Logitech's Squeezebox Media Server and the squeezelite client to create a multi-room audio system. The media server can stream audio to each squeezelite client and also synchronise the playback between multiple clients if desired. Logitech open-sourced the Squeezebox software after officially discontinuing the hardware line. The server can be pointed at a music directory on the Pi. You can either put music directly onto the SD card or connect up a USB flash drive or hard drive.

Make a Pi-powered photo frame

Digital picture frames that displayed a selection of your favourite photos were quite popular for a time. These tablet-like devices often made for interesting talking points, but were often let down by low memory, a poor user interface, or both. You don't have to worry about either of those problems with this project. Set up a Raspberry Pi with some photo-displaying software, connect a touchscreen display, place it in a suitable stand, and sit back to enjoy the results.

Build an iBeacon with Bluetooth

Though it once seemed futuristic, targeted advertising has now become commonplace. While it doesn't rely on retinal identification (yet), iBeacon from Apple uses Bluetooth and an iPhone 4S or later in close proximity to trigger an app or message. Google, meanwhile, has its own version, known as Eddystone. Raspberry Pi owners can investigate this technology in more detail by setting their devices up as ‘PiBeacons’ with the addition of a low-cost Bluetooth Low-Energy (BLE) USB module, available from ModMyPi at bit.ly/1MtDbJC.

Build a Pi-powered car

Grab an old remote-control car, rip off its radio receiver and replace it with the Raspberry Pi, hook it up on the network, fire up a bleeding-edge web server and then get your smartphone or tablet to control it by tilting the device. By the end of this project, not only will you have a fun toy, you will have learnt about the basic technologies that are starting to power the world's newest and biggest economy for the foreseeable future.

Make an Xbox Zero arcade machine

Gut an old videogames controller, replace its innards with a Raspberry Pi Zero, and then load it up with a treasure trove of retro games. While the Zero doesn't take up much space, videogame controllers are often stuffed full of delicate electronics. The trick here is to find a games controller that has enough space inside for the Zero. The original Xbox controller, nicknamed The Duke, is perfect. Then use the RetroPie emulator to play the games that you legally own.



YOUR FIRST HARDWARE HACK

Start simple with the GPIO pins before moving on to your own projects

What You'll Need
BREADBOARD – www.proto-pic.co.uk/half-size-breadboard
3MM LED LIGHT – www.ultraleds.co.uk/led-product-catalogue/basic-leds-3-5-810mm.html
WIRES – www.picomake.com/product/breadboardwires
270-OHM RESISTOR – http://goo.gl/ox4FTp5ntj0091

The Raspberry Pi features a single PWM (pulse width modulation) output pin, along with a series of GPIO (General Purpose Input/Output) pins. These enable electronic hardware such as buzzers, lights and switches to be controlled via the Pi. For people who are used to either just ‘using’ a computer, or just programming software that only acts on the machine itself, controlling a real physical item such as a light can be a revelation. This tutorial will assume no prior knowledge of electronics or programming, and will take you through the steps needed to control an LED using the Raspberry Pi, from setting it up to coding a simple application.

Coloured LED
Different coloured LEDs are a great way to physically see which part of the software is running and help you understand the program flow

Breadboard
The breadboard, or prototype board, provides an easy-to-use and solderless environment for creating and changing your development circuits

Breadboard wire
A must for any budding electrical engineer, these male-to-male and male-to-female wires make for fast and easy circuit building

GPIO header
This provides a mechanism for both gathering input and providing output to electrical circuits, enabling reactive program design

All the items you will need to get going with adjusting an LED using PWM. The wires should have male and female ends. Place the female end onto the Pi, noting pin number 1 being identified by the small ‘P1’. The blue wire is ground. Once everything is connected up, plug in your USB power cable. Switch the power on. The LED will light up. If it's dim, use a lower-rated resistor.

01

The Pi’s pins

Before we dive into writing code, let’s take a look at the layout of the pins on the Pi. If you have your Pi in a case, take it out and place it in front of you with the USB ports on the right. Over the next few steps we’ll look at some of the issues you’ll encounter when using the GPIO port.

02

Pi revision 1 or 2?

Depending on when you purchased your Pi, you may have a ‘revision 1’ or ‘revision 2’ model. The GPIO layout is slightly different for each, although they do have the same functionality. Here we have a revision 1; revision 2s became available towards the end of 2012.

03

Pin numbers

If you take a look at the top left of the board you will see a small white label, ‘P1’. This is pin 1 and above it is pin 2. To the right of pin 1 is pin 3, and above 3 is 4. This pattern continues until you get to pin 26 at the end. As you'll see in the next step, some pins have important uses.

04

Pin uses

Pin 1 is 3V3, or 3.3 volts. This is the main pin we will be using in this guide to provide power to our LED. Pins 2 and 4 are 5V. Pin 6 is the other pin we will use here, which is ground. Other ground pins are 9, 14, 20 and 25. You should always ensure your project is properly grounded.

05

GPIO pins

The other pins on the board are GPIO (General Purpose Input/Output). These are used for other tasks that you need to do as your projects become more complex and challenging. Be aware that the USB power supply doesn’t offer much scope for powering large items.

06

Basic LED lighting

Okay, so let’s get down to business and start making something. First, get your breadboard, two wires, a 270Ω resistor and an LED. Note the slightly bent leg on one side of the LED; this is important for later on. Make sure your Pi is unplugged from the mains supply.


07

Wire the board

Plug one wire into the number 1 pin, and the other end into the breadboard. Note that it doesn't matter where on the breadboard you plug it in, but make sure there are enough empty slots around it to add the LED and resistor to. Now get another wire ready.

08

Add another wire

Place the female end of the wire into pin number 6 (ground) and the other end into the breadboard, making sure to leave room for the resistor, depending on how large it is. Next, get your resistor ready. You can use a little higher or lower than 270 ohms, but not using a resistor at all will likely blow the LED.

09

Add the resistor

Next we need to add our resistor. Place one end next to the ground wire on the breadboard, and the other one slot below the 3V3 wire connection. This will sit next to the LED when we add it in a second. Note that there is no correct or incorrect way to add a resistor.

10

Add the LED

Grab your LED and place the ‘bent’ leg end next to the 3V3 wire in the breadboard. Place the other leg next to the resistor leg opposite the ground wire. This now completes the circuit and we are ready to test out our little task.

11

Power it up

Now get your micro-USB socket and either plug the mains end into the wall, or plug it into a computer or laptop port (and powered on!). You should see the LED light up. If not, then check your connections on the breadboard or Pi, or try a different LED.

12

Set up programming environment

Now, we need to be able to do a little bit more than just turn a light on – we want to be able to control it via code. Set up a new Raspbian installation (unless it's already installed). You don't need a GUI for this – it can all be done via the terminal if you so wish. Before starting, it's best to check everything is up to date with:

sudo apt-get dist-upgrade

13

Open up terminal

Assuming we want to use the GUI, rather than SSH into the Pi, open up a new terminal window by double-clicking on the LXTerminal icon. We need root access to control the LEDs, so either enter su now, or remember to prefix any commands with sudo.

su followed by password, or add sudo to the start of each command.

GPIO pins

There are 26 GPIO pins on the Raspberry Pi and you can use the vast majority of them in any way you want. There are a few pins that have special purposes. The very top row of pins is designed to offer power to external devices like buttons and lights. Since an earth line (often called ‘ground’) is needed to safely create a circuit, you'll also find several ground pins located in the GPIO port. To exploit the power of the GPIO port you'll need a few essential components, the most important of which are jumper leads. Since the pins on the port are ‘male’, you'll need to purchase either ‘female to male’ or ‘female to female’ cables, depending on what hardware you intend to connect to your Pi. Once you're ready to connect your device, the next task is to find the right pin for the job. While it's true that all GPIO ports are multipurpose, some are more multipurpose than others! Some pins are reserved for 5V, 3.3V and ground. Others also have special capabilities, but they can also be called different things. For example, GPIO 18 is also known as pin 12 and PCM_CLK. This particular pin (around halfway down the right side of the GPIO port) is capable of hardware pulse-width modulation (PWM), and is useful for controlling LED lights and motors among other things.
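As a quick illustration of the two naming conventions mentioned above, the snippet below (a minimal sketch, not one of the tutorial steps) drives the same physical pin – physical pin 12, also known as GPIO 18 – using either convention. Whichever one you pick, stick to it throughout a script.

import RPi.GPIO as GPIO

# BOARD mode: use the physical pin numbers printed on the header
GPIO.setmode(GPIO.BOARD)
GPIO.setup(12, GPIO.OUT)    # physical pin 12
GPIO.output(12, GPIO.HIGH)
GPIO.cleanup()

# BCM mode: use the Broadcom SOC channel numbers instead
GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)    # BCM GPIO 18 is the same physical pin 12
GPIO.output(18, GPIO.HIGH)
GPIO.cleanup()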

14

Download GPIO library

There is a handy GPIO Python library that makes manipulating the GPIO pins a breeze. From within your terminal window, use wget to download the tarball file, then extract it using tar.

wget https://pypi.python.org/packages/source/R/RPi.GPIO/RPi.GPIO-0.5.2a.tar.gz
tar zxf RPi.GPIO-0.5.2a.tar.gz
cd RPi.GPIO-0.5.2a

15

Install the library

Now we need to install the library. This is simply a case of using Python’s install method; so we need the dev version of Python. Make sure you are in the directory of the library before running this command.

sudo apt-get install python-dev
sudo python setup.py install
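If you'd rather not build from the tarball, recent Raspbian images also package RPi.GPIO, so (as an alternative to the steps above) installing it via apt achieves the same result:

sudo apt-get install python-rpi.gpio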

16

Import the library in a script

Create a new Python script. Next import the main GPIO library and we’ll put it in a try-except block. Save the file using Ctrl+X and choosing ‘yes’.

cd /home/pi/Desktop
sudo nano gpio.py

try:
    import RPi.GPIO as GPIO
except RuntimeError:
    print("Error importing GPIO lib")


Hands on with a Pi hacker

Alex Ellis is a software engineer, Docker Captain and maker of Pi projects. He explains how he got into Raspberry Pi hacks and learned the skills he uses to create Pi-powered robots.

17

Test the script

Now to make sure that the script imported okay, we just need to run the Python command and then tell it the name of the script that we just created. If all goes well, you shouldn’t see any error messages. Don’t worry if you do, though. Just go back through the previous steps to check everything is as it should be.

sudo python gpio.py

18

Set GPIO mode

Reload the script in nano again. We will then set the GPIO mode to BOARD. This method is the safest for a beginner to adopt and will work whichever revision of the Pi you are using. It’s best to pick a GPIO convention and stick to it because this will save confusion later on.

sudo nano gpio.py

GPIO.setmode(GPIO.BOARD)

19

Set pin mode

A pin has to be defined as either an input or an output before it can work. This is simplified in the GPIO library by calling the GPIO.setup method. You then pass in the pin number, and then GPIO.OUT or GPIO.IN. As we want to use an LED, it's an output. You'll be using these conventions frequently, so learn them as best you can so they soak in!

GPIO.setup(12, GPIO.OUT)

20

Using PWM

The next step is to tell the pin to output and then set a way of escaping our program. Here we call the GPIO class again and then the PWM method, passing in the pin number; the second value is the frequency in hertz – in this case, 0.5.

p = GPIO.PWM(12, 0.5)
p.start(1)
input('Press return to stop:')
p.stop()
GPIO.cleanup()

21

Adjust PWM

To add a timer to the LED so it fades out, we first need to import the time library and then set pin 12 to a 50Hz frequency to start off with.

import time
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BOARD)
GPIO.setup(12, GPIO.OUT)
p = GPIO.PWM(12, 50)  # channel=12 frequency=50Hz
p.start(0)

22

Add the fade

Then we add in another try-except block, this time checking what power the LED is at – and once it reaches a certain level, we reverse the process. To run this code, simply save it from nano and then sudo python gpio.py.

try:
    while 1:
        for dc in range(0, 101, 5):
            p.ChangeDutyCycle(dc)
            time.sleep(0.1)
        for dc in range(100, -1, -5):
            p.ChangeDutyCycle(dc)
            time.sleep(0.1)
except KeyboardInterrupt:
    pass
p.stop()
GPIO.cleanup()
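For reference, here is how the fragments from Steps 18-22 fit together into a single file. This is a consolidated sketch assembled from the snippets above rather than a separate listing from the magazine:

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)      # physical pin numbering, as in Step 18
GPIO.setup(12, GPIO.OUT)      # physical pin 12 drives the LED

p = GPIO.PWM(12, 50)          # 50Hz PWM on pin 12
p.start(0)                    # start at 0% duty cycle (LED off)

try:
    while True:
        # fade up, then back down, in 5% steps
        for dc in range(0, 101, 5):
            p.ChangeDutyCycle(dc)
            time.sleep(0.1)
        for dc in range(100, -1, -5):
            p.ChangeDutyCycle(dc)
            time.sleep(0.1)
except KeyboardInterrupt:
    pass

p.stop()
GPIO.cleanup()

Run it with sudo python gpio.py and press Ctrl+C to stop; the cleanup() call releases the pin when you exit.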

When did you start making projects with the Raspberry Pi? I started with an original Pi Model B, by keeping it on a shelf for over a year. One day I saw an Adafruit tutorial about turning an LED on and off in code and had to try it out. After an initial success I started buying every sensor I could find on eBay and trying them out. How experienced with hardware were you before you started using the Pi? I had zero experience of micro-controllers. I had built PCs from parts but that was all. I didn’t even know how to use a breadboard. What electronics skills do you think people need to know to get the most from the Pi? For me it was learning to use a breadboard, how to solder and how to work a multimeter. I think these were the most important for me. What has been your most challenging hardware-based project to date? I built a robot around the Dagu Thumper chassis for Pi Wars v2. I bought cheap motor controllers and spent weeks trying to work out why things didn’t work properly. Finally after splashing out I was able to put together a gripping robot that could pull chairs across a floor using a PS3 remote. What skills did you learn? I learned to test hardware very thoroughly as I went through half a dozen cheap motor controllers. I’m really a software guy so I rely on pre-built components or hot glue. I would love to learn more about CNC milling or laser cutting. What’s your advice for makers looking to take their hardware skills further? Collaborate with other makers. When I needed to create an impressive hardware project for Dockercon 2016 in Seattle I teamed up with Pimoroni and they helped me more than I could have imagined, and through our conversations the Blinkt! add-on came into being. You may not be able to start a project with a hardware manufacturer but there are plenty of maker faires where like-minded makers hang out.

Alex Ellis’ Dagu Thumper-based robot for Pi Wars v2 was his most challenging Pi electronics project to date


Pi Marketplace

IQaudIO Audiophile accessories for the Raspberry Pi
Made in the UK ☎ 01202 586442

Pi-DAC+

• Raspberry Pi HAT, no soldering required • Full-HD Audio (up to 24bit/192kHz) • Texas Instruments PCM5122 • Variable output to 2.1v RMS • Headphone Amplifier / 3.5mm socket • Out-of-the-box Raspbian support • Integrated hardware volume control • Access to Raspberry Pi GPIO • Connect to your own Hi-Fi's line-in/aux • Industry standard Phono (RCA) sockets • Supports the Pi-AMP+

Pi-AMP+

• Pi-DAC+ accessory, no soldering required • Full-HD Audio (up to 24bit/192kHz) • Texas Instruments TPA3118 • Up to 2x35w of stereo amplification • Provides power to the Raspberry Pi • Software mute on GPIO22 • Auto-Mute when using Pi-DAC+ headphones • Input voltage 12-19v • Supports speakers from 4-8ohm

Pi-DigiAMP+

• Raspberry Pi HAT, no soldering required • Full-HD Audio (up to 24bit/192kHz) • Texas Instruments TAS5756M • Up to 2x35w of stereo amplification • Out-of-the-box Raspbian support • Integrated hardware volume control • Provides power to the Raspberry Pi • Software mute on GPIO22 • I/O (i2c, 3v, 5v, 0v, GPIO22/23/24/25) • Just add speakers for a complete Hi-Fi • Input voltage 12-19v • Supports speakers from 4-8ohm

PiMusicBox

Twitter: @IQ_audio Email: info@iqaudio.com

WWW.IQAUDIO.COM

IQaudio Limited, Swindon, Wiltshire. Company No.: 9461908


Playback control

The three potentiometers control all the basic playback features on the music box. The red pot controls volume, the green pot controls the sound fonts used and the blue pot controls the notes that are played and the pitch they play at

Playing chords

One of the better things about using the GPIO Zero interface is that it's multi-threaded. For Mike, this means he's able to press more than one button simultaneously and form music chords

Quick shutdown

On either side of the wooden music box are a pair of buttons that can help shut down or reboot the Pi. It’s common for the Pi to develop a small hiccup, or crash, in larger projects, so these buttons are a quick and easy way to reset them

Using SoundFonts

Mike has preloaded 32 different SoundFonts into the music box, enabling you to create some fantastic sounds. However, by downloading your own and adding them to the associated project folder, you'll be able to extend the selection with ease if you recreate the project

Components list ■ Wooden box ■ 10K potentiometers ■ LED panel ■ Raspberry Pi 2 ■ Adafruit Proto Pi HAT ■ MCP3008 analogue-todigital converter chip ■ Power circuitry ■ Speakers with an amp ■ LIPO battery


Right Mike first used Google Sketchup to get a realistic render of his music box and to help judge the size and button layout of the unit Below As well as the Pi, the inside of the music box contains an array of circuitry, an amplifier, two speakers and a LIPO battery


My Pi project

Music box

Create some tunes with the help of Michael's amazing Pi-controlled music box

Where did the idea for the music box come from?
I've always had an interest in music. I perform in amateur musicals and have directed a few along the way, and I have a healthy obsession with film music in particular. Since the Raspberry Pi launched in 2012, I've become more and more interested in electronics and have completed a few projects such as the Picorder, which was a box full of sensors and an LCD display. I wanted to do a project that combined those two passions and so the idea for the Music Box was born. I took some inspiration from David Sharples' Joytone (http://www.recantha.co.uk/blog/?p=9966). My recent experience with Pi Wars nudged me along the way towards using a wooden box of some kind, rather than a plastic one, and the idea fell out of all that!

Could you take us through the building process? Did you face any issues along the way?
First of all, I searched eBay to find a suitable box. It needed to be roughly the width of my hand with my fingers stretched out. A bit of measuring, and a fair amount of guesswork, and I ended up with a £10 box. Next, I designed the Music Box in Google Sketchup. My first time though I managed to design it the wrong way round! Back to Sketchup I went, and put the buttons in the right place and made sure that everything fitted. I knew what I wanted: four buttons (one for each finger) plus a cluster of three buttons for the thumb, giving almost a full playable octave. I also wanted a button to reboot the Pi when the code (inevitably) crashed, and another to shut it down neatly. I started out by drilling holes for all the main (illuminated) buttons using a Dremel-clone crafting drill. A lot of sawdust, smoke and singed wood and it looked about right. There was a lot of trial and error involved getting the holes the right size and shape. I went with a Raspberry Pi 2 – I had one spare and I knew that I'd probably need the multi-core CPU to allow multi-threading to work properly in the code. I knew I would need some kind of proto board to solder everything to. I was given a board by Average Man (Rich Saville) but messed it up the first time and ended up with an Adafruit HAT proto board. I next added the potentiometers and hooked them up to an MCP3008 analogue-to-digital converter chip. I came a bit unstuck here because I'd gone for 1K potentiometers, thinking they would give me more precision. After a lot of swearing and burnt fingers, along with a lot of help from friends, I realised I actually needed to use 10K pots. If anyone knows why, we would love to know as we're still none the wiser! I needed some way of making it fully portable. I also needed to add speakers. I looked up various Raspberry Pi/Gameboy tutorials and came across the one on the Adafruit site which showed you how to create a power circuit using a LIPO and use it to power both the Pi and the speakers. This was my first time using that particular type of battery and I knew to be cautious. I shopped over at Makersify (http://makersify.com), a UK maker store that stocks loads of Adafruit stuff, and ended up with a bunch of components to join together. I also realised I needed to drill another hole for a power button – strange the things you forget to add!

What are the musical capabilities of the box? Is it possible to use sound effects?
The Music Box is polyphonic – you can play multiple sounds at once by pressing the buttons at the same time. There are three potentiometers; pot A controls the volume, pot B controls which instrument (out of 32) is played and pot C controls what group of tones are played (turn it to the right to go higher and to the left to go lower). This means you should be able to play any note on a variety of different instruments. You just need to find the right settings. Providing you can find the right SoundFont, you can add pretty much any instrument.

How complex is the software behind the music box?
The software behind the music box is pretty simple. It uses Ben Nuttall and Dave Jones' GPIO Zero to control the various inputs and outputs and also to read the MCP3008. Different threads are started using GPIO Zero's in-built event handlers and as each button is pressed, different functions are called. SoundFonts are pre-loaded into the system and played back using a Python plugin for FluidSynth. The main control loop reads the settings from the potentiometers and sets global variables each time round the loop and keeps the whole program from exiting before it should. There's just under 300 lines of code. If you'd like to see the code, see the Github repository here: https://github.com/recantha/musicbox

What would you say to those wanting to make their own music box? Anything in particular they need to pay close attention to?
If you want to create your own music box, I would simply say: go for it! I found my soldering skills were tested, so it's worthwhile getting some practice in before you start – see if your local Pi Jam will give you a lesson or two if you don't know how. Plan out what you want the thing to look like, and plan how you are going to wire it up at an early stage. I wish I had planned a little bit more, but my enthusiasm got the better of me and I ended up doing it rather than intricately working out what I was going to do! I would also say get used to using the analogue-to-digital converter you've chosen before you solder it in – breadboard it up beforehand if you've never used an A2D, and look at GPIO Zero, as it's the easiest way to read in the inputs.
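Michael's actual code is in the GitHub repository above; purely to illustrate the GPIO Zero event-handler approach he describes, a cut-down sketch might look something like the following. The pin numbers and the play_note function are invented for the example and are not taken from his project:

from gpiozero import Button, MCP3008
from signal import pause

volume_pot = MCP3008(channel=0)   # read the volume potentiometer through the MCP3008

def play_note(note):
    # placeholder: hand the note and current volume reading to the synth layer
    print("note", note, "volume", round(volume_pot.value, 2))

# one Button per GPIO pin; when_pressed callbacks run in their own threads,
# which is what allows chords when several buttons go down together
buttons = {17: "C", 27: "D", 22: "E", 5: "F"}
held = []
for pin, note in buttons.items():
    btn = Button(pin)
    btn.when_pressed = lambda note=note: play_note(note)
    held.append(btn)          # keep references so the buttons stay alive

pause()   # keep the script running and let the callbacks do the work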

Michael Horne

is a ColdFusion/ SQL web developer residing in the UK. As well as being a Raspberry Pi enthusiast, he’s also an avid part of his local amateur theatre group.

Like it?

Standalone music players that make use of the Pi’s great features are available from several outlets. If you want your Pi to act as a hub for streaming your favourite tunes through, check out pimusicbox.com.

Further reading

Michael has been kind enough to upload detailed instructions on his music box build over on his site: www. recantha.co.uk. Make sure to check out his numerous other projects also listed on his site, you are bound to be inspired!



Python column

Check your mail

With Python, you can have your Raspberry Pi act as a mail checker, giving you a running list of incoming email

Joey Bernard

Joey Bernard is a true Renaissance man, splitting his time between building furniture, helping researchers with scientific computing problems and writing Android apps

Why Python? It’s the official language of the Raspberry Pi. Read the docs at python.org/doc

Since the Raspberry Pi is such a small computer, it gets used in a lot of projects where you want to monitor a source of data. One such monitor you might want to create is a mail-checker that can display your current unread emails. This issue, we’ll look at how to use Python to create your own mailchecking monitor to run on your Pi. We’ll focus on the communications between the Pi and the mail server and not worry too much about how it might be displayed. That will be left as a further exercise. To start with, most email servers use one of two different communication protocols. The older, simpler one was called POP (Post Office Protocol), and the newer one is called IMAP (Internet Message Access Protocol). We will cover both protocols to cover all of the situations that you might run into. We’ll start with the older POP communications protocol. Luckily, there is support for this protocol as part of the standard library. In order to start using it, you will need to import the poplib module, and then create a new POP3 object. For example, the following

import poplib
my_pop = poplib.POP3_SSL(host='pop.gmail.com')

will create a connection to the POP server available through Gmail. You need to use the POP3_SSL class when connecting to Gmail because Google uses SSL for its connections. If connecting to a different email server, you can use POP3 to make an unencrypted connection. The POP communication protocol involves the client sending a series of commands to the server to interact with it. For example, you can get the welcome message from the server with the getwelcome() method:

my_pop.getwelcome()

The first things that you will want to communicate to the server are the username and password for the email account that you are interested in. Having the username in your code is not too much of a security issue, but the password is another matter. Unless you have a good reason to have it written out in your code, you should probably ask the end-user for it. Included within the standard library is the getpass module, which you can use to ask the end-user for their password in a safer fashion. You could use the following code, for example.

import getpass
my_pop.user('my_name@gmail.com')
my_pop.pass_(getpass.getpass())

You should now be fully logged in to your email account. Under POP, your account will be locked until you execute the quit() method of the connection. If you need a quick summary of what is on the server you can execute the stat() method:

my_pop.stat()

This method returns a tuple consisting of the message count and the mailbox size. You can get an explicit list of messages with the list() method. You have two options for looking at the actual contents of these emails, depending on whether you want to leave the messages untouched or not. If you want to simply look at the first chunk of the messages, you can use the top() method. The following code will grab the headers and the first five lines of the first message in the list.

email_top = my_pop.top(1, 5)

This method will return a tuple consisting of the response text from the email server, a list of the headers and the number of requested lines, and the octet count for the message. The one problem with the top() method is that it is not always well implemented on every email server. In those cases, you can use the retr() method. It will return the entire requested message in the same form as that returned from top(). Once you have your message contents, you need to decide what you actually want to display. As an example, you might want to simply print out the subject lines for each message. You could do that with the following code.

for line in email_top[1]:
    if b'Subject' in line:
        print(line)

You need to explicitly do the search because the number of lines included in the headers varies from message to message. Once you are done, don't forget to execute the quit() method to close down your connection to the email server. One last thing to keep in mind is how long your email server will keep the connection alive. While running test code for this article, it would frequently time out. If you need to, you can use the noop() method as a keepalive for the connection. As mentioned previously, the second, newer, protocol for talking to email servers is IMAP. Luckily, there is a module included in the standard library that you can use, similar to the poplib module we looked at above, called imaplib. Also, as above, it contains two main classes to encapsulate the connection details. If you need an SSL connection, you can use IMAP4_SSL. Otherwise, you can use IMAP4 for unencrypted connections. Using Gmail as an example, you can create an SSL connection with the following code.

import imaplib
import getpass
my_imap = imaplib.IMAP4_SSL('imap.gmail.com')

As opposed to poplib, imaplib has a single method to handle authentication. You can use the getpass module to ask for the password.



my_imap.login('my_username@gmail.com', getpass.getpass())

IMAP contains the concept of a tree of mailboxes where all of your emails are organised. Before you can start to look at the emails, you need to select which mailbox you want to work with. If you don't give a mailbox name, the default is the inbox. This is fine since we only want to display the newest emails which have come in. Most of the interaction methods return a tuple that contains a status flag (either ‘OK’ or ‘NO’) and a list containing the actual data. The first thing we need to do after selecting the inbox is to search for all of the messages available, as in the following example.

my_imap.select()
typ, email_list = my_imap.search(None, 'ALL')

The email_list variable contains a list of binary strings that you can use to fetch individual messages. You should check the value stored in the variable typ to be sure that it contains ‘OK’. To loop through the list and select a given email, you can use the following code:

for num in email_list[0].split():
    typ, email_raw = my_imap.fetch(num, '(RFC822)')

The variable email_raw contains the entire email body as a single escaped string. While you could parse it to pull out the pieces that you want to display in your email monitor, that kind of defeats the power of Python. Again, available in the standard library is a module called email that can handle all of those parsing issues. You will need to import the module in order to use it, as in the example here.

import email
email_mesg = email.message_from_bytes(email_raw[0][1])

All of the sections of your email are now broken down into sections that you can pull out much more easily. Again, to pull out the subject line for a quick display, you can use the code:

subject_line = email_mesg.get('Subject')

There are many different potential items that you could select out. To get the full list of available header items, you can use the keys method, as below:

email_mesg.keys()

Many times, the emails you get will come as multi-part messages. In these cases, you will need to use the get_payload() method to extract any attached parts. It will come back as a list of further email objects. You then need to use the get_payload() method on those returned email objects to get the main body. The code might look like:

payload1 = email_mesg.get_payload()[0]
body1 = payload1.get_payload()

As with POP email connections, you may need to do something to keep the connection from timing out. If you do, you can use the noop() method of the IMAP connection object. This method acts as a keep-alive function. When you are all done, you need to be sure to clean up after yourself before shutting down. The correct way to do this is to close the mailbox you have been using first, and then log out from the server. An example is given here:

my_imap.close()
my_imap.logout()

You now should have enough information to be able to connect to an email server, get a list of messages, and then pull out the sections that you might want to display as part of your email monitor. For example, if you are displaying the information on an LCD, you might just want to have the subject lines scrolling past. If you are using a larger screen display, you might want to grab a section of the body, or the date and time, to include as part of the information.
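Pulling those pieces together, here is one way the check could look as a function your monitor's display loop might call. This is a sketch rather than code from the column: it assumes Gmail-style IMAP over SSL as above, and it uses the 'UNSEEN' search criterion (instead of 'ALL') so that only unread messages are listed.

import imaplib
import email
import getpass

def unread_subjects(username, password, server='imap.gmail.com'):
    """Return the subject lines of unread messages in the inbox."""
    conn = imaplib.IMAP4_SSL(server)
    conn.login(username, password)
    conn.select()                        # default mailbox is the inbox
    subjects = []
    typ, email_list = conn.search(None, 'UNSEEN')
    if typ == 'OK':
        for num in email_list[0].split():
            # note: fetching the full message marks it as read on most servers
            typ, email_raw = conn.fetch(num, '(RFC822)')
            if typ != 'OK':
                continue
            mesg = email.message_from_bytes(email_raw[0][1])
            subjects.append(mesg.get('Subject'))
    conn.close()
    conn.logout()
    return subjects

if __name__ == '__main__':
    user = 'my_username@gmail.com'
    for subject in unread_subjects(user, getpass.getpass()):
        print(subject)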

What about sending emails? In the main body of the article, we have only looked at how to connect to an email server and how to read from it. But what if you need to be able to also send emails off using some code? Similar to poplib and imaplib, the Python standard library includes a module called smtplib. Again, similar to poplib and imaplib, you need to create an SMTP object for the connection, and then log in to the server. If you are using the GMail SMTP server, you could use the code

import smtplib
import getpass
my_smtp = smtplib.SMTP_SSL('smtp.gmail.com')
my_smtp.login('my_email@gmail.com', getpass.getpass())

my_smtp.sendmail('my_email@gmail.com', ['friend1@email.com', 'friend2@email.com'], 'This email\r\nsays\r\nHello World') The first parameter is the ‘from’ email address. The second parameter is a list of ‘to’ email addresses. If you have only a single ‘to’ address, you can put it as a single string rather than a list. The last parameter is a string containing the body of the email you are trying to send. One thing to be aware of is that you will only get an exception if the email can’t be sent to any of the ‘to’ email addresses. As long as the message can be sent to at least one of the given addresses, it will return as completed. Once you have finished sending your emails, you can clean up with the code:

my_smtp.quit() This cleans everything up and shuts down all active connections. So now your project can reply to incoming emails, too.

www.linuxuser.co.uk

73


Tutorial

Hack a toy with the Raspberry Pi: Part two

Embed your hacks into your toy and create the code to bring it to life

Dan Aldred

Dan is a Raspberry Pi Certified Educator and a lead school teacher for CAS. He is passionate about creating projects and uses projects like this to engage the students that he teaches. He led the winning team of the Astro Pi Secondary School Contest and his students’ code is currently being run aboard the ISS. Recently he appeared in the DfE’s ‘inspiring teacher’ TV advert.

In part one of this tutorial (Linux User & Developer issue 168) you created four hacks that were originally used to augment a £3 R2D2 sweet dispenser making it light up, vibrate, play music and stream a live web feed to a mobile device. You may have been working on your own features to use and customise your toy. Part two of this tutorial begins by showing two different ways to set up and use a button to trigger your hacks. One method is to add and code your own button, the second method is to utilise the toy’s own built-in button. The next part walks you through how to wire up, code and test each of the individual features before combining them into a single program which will bring your toy to life.

What you'll need
■ An old/new toy ■ Resistors ■ Small disc motor ■ LED ■ A radio ■ Small webcam ■ Female-to-female jumper jerky wire ■ Tactile button

R2D2 is © LucasFilm

01

Prepare a button

A button is a simple and effective way of triggering the hacks that you created in part one of this tutorial. Take a 4 x 6mm tactile button or similar and solder/attach a wire to each of its sides. Take each wire and connect it to one end of a female-to-female jumper wire. You can solder these into place or remove the plastic coating and wrap them around each metal contact.

02

Set up the button

Next set up and test the button to ensure that it is working correctly. Take one of the wires and slot it onto GPIO pin 17; this is physical pin number 11 on the board. The second wire connects to a ground pin, indicated by a minus sign or the letters GND. The pin directly above GPIO pin 17 is a ground pin, physical pin number nine. This will complete the circuit and make the button functional.

03

Test the button

Open your Python editor and start a new file. Use the test program below to check that the button is functioning correctly. To ensure the buttons are responsive, use a pull up resistor code GPIO.PUD_UP, line 4. This removes multiple touches and means that only one ‘press’ is registered each time the button is pressed. Save and run the program. If it is working correctly it will return the message “Button works”.

import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
GPIO.cleanup()
GPIO.setup(17, GPIO.IN, GPIO.PUD_UP)
while True:
    if GPIO.input(17) == 0:
        print "Button works"

04

Use the button on the toy

Instead of adding your own button you can utilise an existing button on the toy to trigger the events. On the R2D2 example this is the button at the front which releases the sweets and plays the classic R2D2 beep sound. Using a screwdriver or suitable tool, open the casing of your toy and then locate the electrics of the button. Locate the negative wire and cut this in two. Now attach a jumper wire to each of the ends. You can test that the connection is still working by joining the two jumper wires together and pressing the button.

05

Wire up your button

If your toy has a button to trigger a sound or movement then it will use batteries. These will still be used to power the toy but the extra circuit you added in Step 4 creates a secondary circuit. When you press the button you join the two contacts and in turn complete the circuit; this sends a small current around the circuit which can be detected by the GPIO pin on the Raspberry Pi. Take one of the wires and attach it to physical pin 1, the 3.3 volts pin, which will provide the current. Attach the other wire to GPIO pin 15, physical pin number 10. Pin 15 checks for a change in current. When the button is in its normal state, i.e. it has not been pressed, no current flows from the 3.3v pin as the circuit is broken. When you press the button it joins the contacts, the circuit completes and the current flows. Pin 15 registers a change in state which is used to trigger an event.

06

Test the button

Open a new Python file and enter the test code below. On line four a Pull Down is used to check for the change in state. The toy’s button completes the circuit and GPIO pin 15 receives a little current. Its state becomes True or 1, line six, and it triggers the display of a confirmation message. The final line prints out a confirmation message each time the button is pressed.

import time
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
GPIO.setup(15, GPIO.IN, GPIO.PUD_DOWN)  # checks for a change on GPIO pin 15
while True:
    if GPIO.input(15) == 1:
        print ("You touched R2D2!")

07

Wire up the LED

08

Wire up the motor

BCM Number

GPIO pins are a physical interface between the Pi and the outside world. At the simplest level, you can think of them as switches that you can turn on or off (input), or that you can program your Raspberry Pi to turn on or off (output). The GPIO.BCM option means that you are referring to the pins by their ‘Broadcom SOC channel’ number; these do not match the positions on the header. The GPIO.BOARD option instead refers to the pins by their physical position on the board, i.e. the number you get by counting along the printed header.
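To see the difference in practice, here is a minimal sketch of our own (not one of the printed steps) that reads the same button both ways; it assumes the button from Step 2, which sits on BCM 17, also known as physical pin 11.

import RPi.GPIO as GPIO

# Option 1: Broadcom (BCM) channel numbering
GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.IN, pull_up_down=GPIO.PUD_UP)   # BCM channel 17
print(GPIO.input(17))
GPIO.cleanup()

# Option 2: physical BOARD numbering - the very same pin, addressed differently
GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.IN, pull_up_down=GPIO.PUD_UP)   # physical pin 11
print(GPIO.input(11))
GPIO.cleanup()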

Now to connect the individual hacks to your toy. These examples are based on Part One of the tutorial but can be replaced with your own. Shut your Pi down and unplug the power. Assuming that the LED is connected to the jumper jerky wires, take the positive wire of the LED, the one with the resistor, and attach it to GPIO pin 21. This is physical pin number 40, the one at the very bottom-right of the pins. The black wire connects to a ground pin. The nearest one is physical pin 39 just to the left of pin 40; attach it here.
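As a quick sanity check of our own (not part of the printed steps), you can blink the LED once before moving on; this assumes the wiring described above, with the LED’s resistor leg on GPIO 21 and the black wire on ground.

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(21, GPIO.OUT)      # LED on GPIO 21 (physical pin 40)
GPIO.output(21, GPIO.HIGH)    # LED on
time.sleep(1)
GPIO.output(21, GPIO.LOW)     # LED off
GPIO.cleanup()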

Next take the positive wire from the motor, usually coloured red, and attach it to GPIO 09, which is physical pin



Check out R2D2 in action

plugged it in use the command sudo lsusb to list the connections to the port. If it displays the name of your web camera, then it has been recognised and is ready to go. Consider stripping away the plastic shell so you are left with just the board and lens. Adjust the casing so that it fits neatly and can be hidden within your toy.
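If you would rather check from Python than from the shell, a rough equivalent of our own is below; it assumes the camera appears as /dev/video0 and that the motion service from Part One is installed, so adjust the device path if yours differs.

import os

# Most USB webcams appear as /dev/video0 once detected (an assumption - verify with lsusb)
if os.path.exists('/dev/video0'):
    print("Webcam detected - starting the motion stream")
    os.system('service motion start')   # the same command the main program uses
else:
    print("No webcam found - check the USB connection with 'sudo lsusb'")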

Check out the video of the completed R2D2 toy hack and see the features in action. This may give you some ideas for your own toy hack. https://www.youtube.com/watch?v=VnOsUaS5jSY

11

Setting up the program

Assuming that you have installed all the required software modules and libraries for your hacks, you are now ready to create the program to control your toy’s extra features. Open a file in your Python editor and import os, this will control the PiFM and web camera, line 1. Next import the PiFM library; the webcam runs automatically when you boot up your Pi, see Part One in issue 168, Step 12. Set the GPIO pins to BCM on line 6 and then define the PUD for the button setup you are using. The first option, line 7, is for a button that is already connected to your toy, the second is to be used if you have added your own button.

number 21. You will then need to connect the other wire to any of the ground pins, (GND) 6, 9, 14, 20, 39. You may need to relocate this wire later on as you add more wires for the other components.
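Before moving on, it is worth pulsing the motor from a throwaway script; this is our own sketch rather than part of the printed listing, and it assumes the wiring above (motor on GPIO 9) plus the same convention as the main program, where LOW switches the motor on and HIGH switches it off.

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(9, GPIO.OUT)
GPIO.output(9, GPIO.HIGH)   # start with the motor off
GPIO.output(9, GPIO.LOW)    # motor on
time.sleep(2)
GPIO.output(9, GPIO.HIGH)   # motor off again
GPIO.cleanup()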

import os
import sys
import time
import RPi.GPIO as GPIO
import PiFm
GPIO.setmode(GPIO.BCM)
GPIO.setup(15, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)  # toy's own button on GPIO 15
GPIO.setup(17, GPIO.IN, pull_up_down=GPIO.PUD_UP)    # added tactile button on GPIO 17

12

Set up the other outputs

Next prepare the two other GPIO outputs, in this example the LED and the motor. Set these as outputs using the code GPIO.setup(9, GPIO.OUT), line 1, hold the motor off with GPIO.output(9, GPIO.HIGH) and then turn the LED off using the code GPIO.output(21, GPIO.LOW). This ensures that when you start your program running, the LED and the motor do not run until the trigger button is pressed.

GPIO.setup(9, GPIO.OUT)
GPIO.output(9, GPIO.HIGH)
GPIO.setup(21, GPIO.OUT)
GPIO.output(21, GPIO.LOW)

09

Add the PiFM aerial

The Raspberry Pi can broadcast to your radio directly from physical pin 7 without the need to alter anything. However, you will probably want to extend the range of the broadcast by adding a wire to GPIO 04, physical pin number 7. Unbelievably, this can extend the range of the broadcast up to 100 metres. You must use physical pin number 7 to broadcast, so move around any other wires that may be attached from your own hacks.
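To confirm the aerial is doing its job before you wire anything else up, you can trigger a short broadcast on its own; this assumes the PiFM library set up in Part One is importable and that sound.wav sits in the same directory as the script.

import PiFm

# Broadcast a short clip - tune a nearby radio to the frequency used in Part One
PiFm.play_sound("sound.wav")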

10

Add the web camera

The web camera is the simplest component to connect as it uses one of the USB ports. Once you have

13

Set up the LED

Create a function to store the code that will control the LED, making it turn on for five seconds and then turn off again, lines 2, 3 and 4. Then set up a simple message to display at the start of the program to let you know that the toy is ready and the pins are prepared. This is useful for debugging your program.

def LED_Eye():
    GPIO.output(21, GPIO.HIGH)
    time.sleep(5)
    GPIO.output(21, GPIO.LOW)

print("Welcome to R2D2")

14

Trigger the events

Set up a while loop to continually check if the button has been pressed, line 1. Use an IF statement to check when the button has been pressed and that the input is


HIGH. This uses the line GPIO.input(15) == 1. In this case the 1 refers to an equivalent value of True or On; this relates to the circuit being completed and a current flowing through, as discussed in Step 6. Then trigger the motor to turn on using GPIO.output(9, GPIO.LOW), line 5, and call the function LED_Eye() to execute, lighting up the LED, line 6.

while True:
    if GPIO.input(15) == 1:
        print("You touched R2D2!")
        # Enable LED and motor
        GPIO.output(9, GPIO.LOW)
        LED_Eye()

15

Trigger the webcam and radio broadcast

Finally, start the web camera streaming using the line os.system('service motion start') and check out the feed on your viewing device. While the webcam is running, start the radio broadcast with PiFm.play_sound("sound.wav"). The default FM station is set to 100FM, so tune your radio in to hear your sound being played. Then stop the web feed after the sound has finished using the line os.system('service motion stop'). Note that these lines are indented at the same level as the previous lines inside the loop – see the full code listing.

gpiozero

This is a smart Python library that makes interaction with the GPIO pins really simple; for example, you can control an LED with the code led.on(). It covers a massive range of components, hardware and add-on boards. You can find out more at https://gpiozero.readthedocs.io/en/v1.2.0/#

Full code listing

import os
import sys
import time
import RPi.GPIO as GPIO
import PiFm
GPIO.setmode(GPIO.BCM)
GPIO.setup(15, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)  # checks for a change on GPIO 15
### Reset Motor
GPIO.setup(9, GPIO.OUT)
GPIO.output(9, GPIO.HIGH)
### Reset LED
GPIO.setup(21, GPIO.OUT)
GPIO.output(21, GPIO.LOW)

### Controls the LED eye
def LED_Eye():
    GPIO.output(21, GPIO.HIGH)
    time.sleep(5)
    GPIO.output(21, GPIO.LOW)

print("Welcome to R2D2")


16

Embed the hack into the toy

Once you have all your hacks triggering within the code, embed the wires and your Pi within your toy. You may choose to display the hardware and create a more ‘augmented’ style toy or discreetly hide it all away, surprising potential users when they interact with it.

while True:
    if GPIO.input(15) == 1:
        print("You touched R2D2!")
        # Enable the LED eye and the haptic motor
        GPIO.output(9, GPIO.LOW)
        LED_Eye()
        # Stop the haptic motor
        GPIO.output(9, GPIO.HIGH)
        # Start the webcam
        os.system('service motion start')
        # Play the Star Wars theme
        PiFm.play_sound("sound.wav")
        # Stop the webcam
        os.system('service motion stop')






Run RISC OS on your Raspberry Pi Forget Raspbian – install the quintessentially British operating system onto your Raspberry Pi and take it to the next level! Christian Cawley

I’m a former IT and software support engineer and since 2010 I have provided advice and inspiration to computer and mobile users online and in print. I cover the Raspberry Pi as well as Linux at makeuseof.com.

You’re happy using Raspbian to get the most from your Raspberry Pi. You might have flirted with the idea of Ubuntu or Arch Linux, but never seen the point. After all, when it comes to maximising what you can do with the Pi, the official distro has everything you need, right? But, what about trying a non-Linux operating system? The Cambridge-developed RISC OS (RISC being an acronym for ‘reduced instruction set computing’) was the first operating system for ARM processors, and although older British readers will recall it from the classic Acorn Archimedes computers, RISC OS remains relevant and easy to get started with. Just make sure you’ve got a monitor, mouse and keyboard to hand before you boot it up!

What you’ll need ■ RISC OS http://bit.ly/2auOpYL

■ Monitor ■ Keyboard ■ Three buttoned mouse (a standard clickable scrollwheel should suffice)

specifying the downloaded file’s directory path and filename:

unzip Downloads/riscos-2015-02-17.14.zip

Get started with BASIC

Whether you remember BBC BASIC or it’s completely new to you, you’ll have it at your fingertips in RISC OS. To open a command line, press Ctrl+F12, then type BASIC and press Enter. You’re then ready to enter a simple program:

10 WHILE TRUE
20 PRINT "Hello world!"
30 ENDWHILE
RUN

Esc will stop this routine. Dedicated programming editors are available for BBC BASIC, although you can develop code with WIMP and C too.


The .IMG file is in the Home directory, ready for the SD card.

Above Choose the version of RISC OS to suit your needs

01

Download RISC OS

Get started by downloading the correct version of RISC OS (https://www.riscosopen.org/content/downloads/raspberry-pi), and save the 99.9MB ZIP file to your computer. Several versions are available, including an ultra-light Pico version, but we’d recommend getting started with the first option, the SD card image.

Above SD cards are often labelled with the letters “mmc” or “sdd” for easy identification

03

Find your SD card

Next, it’s time to write the OS to a formatted SD card. With the card inserted into the card reader, switch to the Terminal window, and check your mounted devices:

df -h

Above Before you can write the disk image, it must be extracted from the archive

02

Unzip the download

Before writing to SD, you’ll need to unzip the RISC OS disk image. In the Terminal, use the unzip command,

In this list, you’ll find the SD card listed, which you’ll identify by its filepath, name, and size.

04

Install RISC OS on the Raspberry Pi

Next, unmount the SD card:


RISC OS is unfortunately not currently compatible with wireless networking.

Above Ensure you enter the correct filepath and destination device name when writing to SD cards

umount /dev/mmcblk0p1

Next, use the dd command to write the image file. Take extra care to enter the correct destination – a mistake can delete your hard drive!

sudo dd bs=4M if=riscos-2015-02-17-RC14.img of=/dev/mmcblk0

Take note here how the “p1” section is omitted, as this refers to a partition.

Above Use a three buttoned mouse or your mouse scrollwheel to open the menu

07

Is your mouse compatible?

Anyone who recalls the Acorn Archimedes will know that RISC OS requires a three-button mouse. The middle button is a dedicated context menu (like right-clicking in Linux, OS X and Windows), but as long as you have a clickable scrollwheel on your mouse, this shouldn’t be a problem.

Installing software on RISC OS Two package managers are included in RISC OS. Packman is designed to install and upgrade software, while !Store offers commercial software. Software can also be installed manually, by copying ZIP archives from your desktop computer to the SD card, and copying the files from the archive into the RISC OS file system. Note that old software you may have used at school probably won’t run, due to differences in the architecture of early ARM CPUs and modern iterations.

Above You’ll need an Ethernet cable connected to your router, or an Ethernet Wi-Fi adaptor

Above NOOBS offers an alternative, Terminal-free method of installing RISC OS on your Raspberry Pi

05

Alternative installation

If you would rather avoid the Terminal-focused installation of RISC OS, you can alternatively download the NOOBS software and copy it to your SD card before booting the Pi and selecting RISC OS as your operating system. Once installed, proceed with the steps below to get familiar with RISC OS.

Above You’ll need your keyboard and mouse connected to use RISC OS

06

Boot RISC OS

After a few minutes, the SD card should be ready, so unmount it and insert it into your Pi before switching it on. Make sure you’ve got a mouse and keyboard connected to your Raspberry Pi first, as well as an Ethernet cable, as RISC OS does not currently support wireless networking.

08

Enable Ethernet on RISC OS

By default, Ethernet is disabled. To fix this, follow the instructions in the welcome/html file on the desktop. If this is missing, double-click on !Configure, then Network>Internet>Enable TCP/IP Protocol Suite, followed by Close>Save. Finally, you can select Reboot now to restart the system with Ethernet enabled.

Above Spend a few minutes familiarising yourself with the RISC OS desktop

09

Get to grips with RISC OS

If you have previous experience with RISC OS, much of what you see on the desktop will be familiar. Otherwise, don’t worry, it’s pretty straightforward. Applications are essentially directories with ! (known in RISC OS as pling) as a prefix, and are launched by double-clicking the folder.




Build an explorer pHAT robot Part three Give your explorer robot its legs with our robot-coding tutorial

Alex Ellis

@alexellisuk is a senior software engineer at ADP, a Docker Captain and an enthusiast for all things Linux. He is never far from a Raspberry Pi and regularly posts tutorials on his blog.

This tutorial builds the software to control your explorer robot so you can let it loose in the living room, the office or even in the garden. We will put together a program to drive the robot from a console, letting you type in WASD as if in a computer game. This gives you a chance to get comfortable with how the robot moves and how the code is put together. We will then go on to integrate a Nintendo Wiimote controller, which makes for a much more natural controller than a keyboard for a more enjoyable experience. The code will be written in Python, which is pre-installed on Raspbian, and wherever possible will be compatible with both Python 2 and 3. The entire source will be open source, so if you want to suggest an enhancement or to correct a bug you can submit a pull request or raise an issue on Github.

What you’ll need ■ Article’s Github repository (https://github.com/ alexellis/zumopi) ■ Explorer Robot as built in Part two ■ Genuine Wiimote (optional)


01

Going off-grid

02

Time for a pit-stop

Our robot chassis was originally intended to be run by an Arduino with a much lower current consumption than the Pi Zero. While a range between 5-36V can power some Arduino boards, our Pi needs a clean 5V from the UBEC. Alkaline batteries when brand new and unused have a voltage of around 1.65-1.7v, meaning that the input to the UBEC will be around 6-7v. Some of that will be wasted during the conversion to 5v and the rest will drive the motors and the Raspberry Pi at the same time.

While batteries are needed to go wireless and to roam around we suggest you set up a mini-workshop for your


explorer robot – somewhere you can bring it back to in order to download new code and debug problems when things aren’t going to plan. Here you will have access to a USB power supply, a powered USB hub, an HDMI TV or monitor, a Bluetooth dongle, a Wi-Fi dongle and a keyboard. Make absolutely sure that you disconnect your batteries before plugging in for a pit-stop.

03

Calibrate your motors

We will now calibrate the motors’ direction so that it matches up with the software. Dock your robot and put it on an empty matchbox so that its tracks can run free without taking the leads with it. Now use Python to check that each motor moves the tracks forward when invoked from code. Either type the code into an interactive Python prompt or use the file `motor_tests/forwards.py`.

```
import explorerhat
import time

explorerhat.motor.one.forwards()
time.sleep(2)
explorerhat.motor.one.stop()
explorerhat.motor.two.forwards()
time.sleep(2)
explorerhat.motor.two.stop()
```

04

Turn left and right

In order to turn left or right we will simply ask one motor to move forwards and the other to move backwards. The motors you need to turn will depend on how you have plugged in motors one and two. We have designated motor one as the left and motor two as the right. Therefore to test turning, let’s try this to turn right.

```
import explorerhat
import time

explorerhat.motor.one.forwards()
explorerhat.motor.two.backwards()
time.sleep(1)
explorerhat.motor.one.stop()
explorerhat.motor.two.stop()
```

This file also exists as motor_tests/turn_right.py. To turn left you do the opposite, meaning motor one goes backwards and motor two goes forwards.

05

Work with abstractions

If you look over the code so far there is a lot of repetition, so let’s create an abstraction where we can hide away the details of how the motors work and which motors need to go backwards or forwards in order to move in any direction. To try out the Motors class, type in the following or use motor_tests/motors_v1_test.py.

```
import time
from motors_v1 import Motors

motors = Motors()

motors.forwards()
time.sleep(1)
motors.stop()
time.sleep(1)
motors.backwards()
time.sleep(1)
motors.stop()
```

```
import time
import explorerhat

class Motors:
    def forwards(self):
        explorerhat.motor.one.forwards()
        explorerhat.motor.two.forwards()

    def stop(self):
        explorerhat.motor.one.stop()
        explorerhat.motor.two.stop()

    def backwards(self):
        explorerhat.motor.one.backwards()
        explorerhat.motor.two.backwards()

    def left(self):
        explorerhat.motor.one.backwards()
        explorerhat.motor.two.forwards()

    def right(self):
        explorerhat.motor.one.forwards()
        explorerhat.motor.two.backwards()
```

RECHARGEABLE VS ALKALINE BATTERIES For our UBEC to function correctly we need our four AA batteries to provide over 5V collectively. Rechargeable AA cells generally give a nominal 1.2v when fully charged, meaning 4.8v, which is much less than we need. Alkaline batteries often start at a nominal value of 1.6-1.7v, meaning we will have over 6V and the UBEC can run properly. An alternative to using AA batteries is to strap a USB power bank to the flat side of the robot with cable ties.





06

Create a REPL

Now that we have a Motors class we can make our own REPL (read-evaluate-print loop) – this is similar to what you get if you type in Python without specifying a file. First we read input from the terminal, we then translate that to a motor direction and move the robot, then we print back to the user what we understood and start over again. Python provides us with a convenient method for this called raw_input(). We can call it in a loop and then use sys.stdout.write instead of print so that the cursor stays on the same line. We ignore all input other than ‘q’, which we use as our signal to exit the program.

```
import sys

last = None
while(last != 'q'):
    sys.stdout.write("Command: ")
    sys.stdout.flush()
    last = raw_input()
    print("You said: " + last)
```

07

Combine the REPL and Motors class

Let’s now start to interpret what we read into the inputs for the motors. Now you can choose whether to let the robot move until a new command is entered, or for a short period followed by an immediate stop. We prefer the second option because it is easier to control; see motor_repl_v1.py in the repository.
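That listing isn’t reproduced on the page, but a minimal sketch of the idea behind motor_repl_v1.py could look like the following; the WASD mapping and the delay values are our own illustration rather than the exact file from the repository.

```
import time
from motors_v1 import Motors

motors = Motors()
move_delay = 0.3   # seconds to drive for each key press
turn_delay = 0.2   # seconds to turn for each key press

last = None
while last != 'q':
    last = raw_input("Command (w/a/s/d, q to quit): ")
    if last == 'w':
        motors.forwards()
        time.sleep(move_delay)
    elif last == 's':
        motors.backwards()
        time.sleep(move_delay)
    elif last == 'a':
        motors.left()
        time.sleep(turn_delay)
    elif last == 'd':
        motors.right()
        time.sleep(turn_delay)
    motors.stop()   # stop after each short burst - the second option described above
```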

08

Abstract the REPL

The REPL can be enhanced in many ways, including allowing the move/turn time variables to be edited. Let’s improve the structure of the code by introducing some more classes. Classes that each do one thing really well help us swap out functionality later on, and even write automated tests. This step would allow us to re-use the same program later on with a gamepad or Twitter feed. You can check out the enhancements in motor_tests/motor_repl_v2.py

09

Explanation of the classes

We have introduced three types of classes: one class to read terminal input and convert that to a Command; several classes to represent each command, such as Forwards, Left, Right and Quit; and another class called Robot to run the whole example. Our new loop is cleaner to read and allows us to update only one thing at a time, meaning changes are less likely to create unexpected problems. We have also named our code so that it is clear what its job is; that means we can save on messy or confusing comments.

```
cmd = terminalReader.read()
cmd.Execute(motors, move_delay, turn_delay)
```
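To make that loop concrete, here is a rough sketch of what a pair of command classes could look like; the Execute(motors, move_delay, turn_delay) signature follows the article’s description, but the bodies are our own illustration rather than the repository code.

```
import time

class Forwards:
    def Execute(self, motors, move_delay, turn_delay):
        motors.forwards()
        time.sleep(move_delay)
        motors.stop()
        return True          # keep the main loop running

class Quit:
    def Execute(self, motors, move_delay, turn_delay):
        motors.stop()
        return False         # tell the main loop to exit
```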

10

Repurpose a Bluetooth gamepad

In this step we begin introducing a gamepad – the Nintendo Wiimote. You may have one gathering dust or packed away in the attic, but if you decide to buy one online make sure that it is a genuine original item otherwise it will not work with the official Python library. It can also be difficult to find the perfect match in a


UPGRADING YOUR CONTROLLER The Wiimote is an easy controller to work with, but it has a limited amount of buttons. If you want more options then we suggest getting a genuine PS3 controller. This gives you many more buttons and two separate axes, which can be used to steer the robot, switch between modes or command the robot in other ways. An example would be assigning the select button to take a photo and upload it to Twitter.

Bluetooth dongle for your gamepad. We found that a genuine PS3 controller would not operate with a £1 dongle, but worked fine with a premium dongle. We tried reproduction PS3 gamepads and Wiimotes but neither worked.

11

Configure Bluetooth

Raspbian Jessie comes pre-loaded with the requisite system utilities to access Bluetooth accessories. If you are using a different system then look for the Bluez tools. We will use the cwiid library to provide our Wiimote interaction. There are other libraries available and you may want to try them out too. Now install cwiid with apt-get:

```
sudo apt-get install python-cwiid
```

The following code (also in wiimote_tests/pair_v1.py) will pair to the gamepad and then vibrate or ‘rumble’ it for half a second. Before running the code, hold down A and B or the button under the battery cover to enter pairing mode.

```
import cwiid, time

wii = cwiid.Wiimote()
wii.rumble = True
time.sleep(0.5)
wii.rumble = False
```

12

Read the buttons

Reading buttons is not event-driven; it’s something that we need to do in a loop using polling (continual checking). After pairing in the line `wii = cwiid.Wiimote()` we start interrogating a button’s mapping. The code below in wiimote_tests/buttons_v1.py will print out on the terminal when you press either A or B. Notice that the message is repeated for as long as the buttons are pressed – we’ve added a pause to stop us from overloading the Pi. When finished, hit Ctrl + C to quit.

```
import cwiid, time

wii = cwiid.Wiimote()
wii.rpt_mode = cwiid.RPT_BTN
while(True):
    buttons = wii.state['buttons']
    if buttons & cwiid.BTN_A:
        print("A pressed, go forwards")
    if buttons & cwiid.BTN_B:
        print("B pressed, go backwards")
    time.sleep(0.02)
```


13

Create a WiimoteReader class

Now that we can read inputs from our gamepad, let’s create something to convert that into commands that our robot can understand. Since we will be doing this in its own class, we can then test the code without needing to move the robot. Below is the code from drive_v1/wiimote_test.py, which will print out the command as we press buttons on the gamepad. Use Ctrl + C to stop the example.

```
import time
from wiimotereader import WiimoteReader

reader = WiimoteReader()
cmd = None
while(True):
    cmd = reader.read()
    print(cmd)
    time.sleep(0.25)
```
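The WiimoteReader class itself isn’t printed in the magazine; as a sketch of the approach – combining the cwiid polling from Step 12 with the command classes from Step 9, with the module name and button mapping being our own assumptions – it could look like this:

```
import time
import cwiid
from commands import Forwards, Backwards, Shutdown   # illustrative module name

class WiimoteReader:
    def __init__(self):
        # Hold A+B (or the sync button) on the Wiimote while this runs
        self.wii = cwiid.Wiimote()
        self.wii.rpt_mode = cwiid.RPT_BTN

    def read(self):
        # Poll until a recognised button is pressed, then return a command object
        while True:
            buttons = self.wii.state['buttons']
            if buttons & cwiid.BTN_A:
                return Forwards()
            if buttons & cwiid.BTN_B:
                return Backwards()
            if buttons & cwiid.BTN_HOME:
                return Shutdown()
            time.sleep(0.02)
```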

14

Finish the control software

We can now control the robot through individual command classes and have them created either by a terminal and keyboard or by a Wiimote gamepad. If you were to download the Python library for a PS3 controller, all you would need to do is convert the inputs into the command classes that we are already working with. Taking the example we had in Step 13, let’s go one step further and start executing the commands as they are interpreted by the wiimotereader.py code.
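Pulling the pieces together, the top-level drive program might then read along the following lines; this is again a sketch under the same assumptions as above, not the file from the repository.

```
from wiimotereader import WiimoteReader
from motors_v1 import Motors

reader = WiimoteReader()
motors = Motors()
move_delay = 0.3
turn_delay = 0.2

running = True
while running:
    cmd = reader.read()
    # Each command drives the motors and reports whether the loop should carry on
    running = cmd.Execute(motors, move_delay, turn_delay)
```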

15

We could stop here

We could stop the tutorial at this point because we have fully functioning manual control of our robot with a gamepad, but there are still a few important steps we want to take you through. We need to make our program auto-start so that when it’s powered by batteries and our sole USB port is populated with a Bluetooth dongle we can still load the control software. We can create a simple startup task with Cron, a tried and tested Unix scheduler. Add an @reboot directive to your crontab file. Type in `chmod +x start.sh` (hit enter) then `crontab -e`:

```
@reboot /home/pi/explorerrobot/part3/drive_v1/start.sh
```

Systemd could also be used to create a configuration and start-up file; it has many advantages but those layers also add complexity.

SHOULD YOU CODE ON PI DIRECTLY? With modern, open-source code editors available such as Visual Studio code (https://code.visualstudio.com/) and Atom (https://atom.io) there is every reason to write the majority of code on your laptop or desktop PC before using scp/sftp or git push/pull to transfer the code across to the device. We use a mixture of coding on the device and then consolidating that into our Github repository or code folder on a PC or laptop. This also means you always have a second copy of your work if the Pi’s SD card suffers corruption.

16

Run other commands

In addition to auto-starting we need a safe shutdown mechanism, so that the Pi will perform a clean power-down, leaving our SD card in good condition. Linux’s halt command is a good candidate for a safe shutdown. Import Python’s os module and use it to execute a command; here’s an example with uptime:

```
>>> import os
>>> os.system('uptime')
21:34:11 up 16 min, 1 user, load average: 0.00, 0.03, 0.07
```

All we need to do is to add a Shutdown class and then have it parsed by the WiimoteReader class. We’ve picked the Home button for shutdown because it’s hard to press by accident.

17

Implement a Shutdown class The shutdown class looks like this:

```
import os

class Shutdown:
    def Execute(self, motors, move_delay, turn_delay):
        os.system("sudo halt")
        return False
```

The updated parsing code can be found in WiimoteReader. You are now ready to unplug the USB hub and PSU – replace the batteries and plug in 5V/GND from the UBEC. This will immediately start up the Pi. Start holding the pairing button down when the Bluetooth dongle starts to flash.

18

Wrapping up

We’re now three parts into the tutorial and have built up a fully functioning explorer robot controlled by a Wiimote gamepad. Don’t miss part four next issue, where we design the code to drive our robot autonomously using sensors.





Group test | Linux Mint 18 | Free software

Spotify

Audacious

TOMAHAWK

Clementine

GROUP TEST

Music players and organisers

There’s a host of choices out there for people who like to keep their music organised, but which one is best for Linux users?

Spotify

Audacious

Despite an ever-increasing user base, Spotify for Linux has had a somewhat rocky development. Slow updates and frequent bugs have left users wanting more. However, with some new features and critical fixes, it is becoming one of the better players on the market. www.spotify.com

For those seeking an audio player that’s easy on your computer resources, Audacious is the one. All the core organisational features are present, but it does lack the playback options we would have expected. It is, however, one of the better choices for those who prefer frequent updates. http://audacious-media-player.org/

TOMAHAWK Connectivity is key in TOMAHAWK, and it’s one of the key reasons why it has gained a positive reputation. It’s easy to integrate various social media feeds, add and make new friends to share music with and also add the details of any cloud accounts that have music stored within. www.tomahawk-player.org/

Clementine Inspired by the equally popular Amarok 1.4 media player, Clementine is a functional music player. It boasts an impressive set of features, many of which make it stand out well from the crowd. We especially like being able to control Clementine with our smartphone and even a Wii remote! www.clementine-player.org




Spotify for Linux

Audacious

Have recent updates finally made Spotify a winner for Linux users?

Choice is everything when it comes to the Audacious music player

Above Discovering new music is one of the best things about Spotify, especially when it comes to finding obscure artists and tracks

Above Users can download and integrate lyrics to sit alongside their favourite tracks, ideal if you want to have a sing-along session!

Design

Design

Despite its plethora of features, Spotify does a great job of keeping things from getting overly cluttered. The UI is especially easy to navigate, with sections listed both at the top and side of the interface. During our time with it, we did notice some slowdown when using the Browse section, which seemed to stem from the need to load a high amount of album artwork.

While the default music view is a bit too basic for our liking, users are able to switch between different views through the program. Many elements of its UI can also be customised, so users are in complete control when it comes to arranging tabs and implementing album artwork. There’s a help section on hand if the customisation process becomes overwhelming.

Playback controls

Playback controls

While all the basics are easy to find through the bottom of the display, Spotify also benefits from a series of programmable keyboard shortcuts. We also especially like being able to control our music feed directly through the iOS and Android apps respectively. A comprehensive list of controls is available through the app, helpful if you’re using Spotify for the first time.

Organising music library

Perhaps the weakest link in Spotify’s arsenal is its way of organising your music library. While menus are easy to find, they’re often hard to navigate, with artwork occasionally missing and details sometimes wrong. Users can, however, change the way the storage menus function and work, depending on their exact needs for Spotify.

Extra features

When compared to the other music players in this group, Audacious is a little barebones in this department. All the basics you’d expect are included, but we’d have liked to have had greater control of how our music is played. We did like using the gapless playback, even if most players also include it and it’s slightly more impressive elsewhere.

Organising music library

Audacious well and truly puts the control in the users’ hands when it comes to storing and managing your music. Every inch of metadata can be picked apart and edited, and the program will automatically scan for invalid and duplicate files after every upload. Uploading in general is a little on the slow side, however, so adding multiple songs simultaneously isn’t advised.

Extra features

One reason why many choose to use Spotify is that it enables users to discover music they wouldn’t normally listen to. This is done through an array of cleverly mixed playlists, all of which are tailored to you, based on your previous listening experiences. Of course, being able to use Spotify across a range of devices is also remarkably handy at times.

There’s a myriad of external plugins that can be implemented into Audacious, dramatically increasing its feature set. One of the better ones enables users to import their Last.fm account into the program, while the Effects plugin can be used to take the program’s song processing to the next level. By far the best thing about Audacious.

Overall

Overall

There are still a few things to be ironed out within Spotify for Linux, but for an easy-to-use audio player that has some genuinely impressive features, make sure to check it out.


8

The choice of design will be an instant hit for both new and advanced users alike, but playback controls are lacking for the most part. It’s certainly worth considering, but there are better alternatives.

7


TOMAHAWK

The multi-platform audio player that certainly looks the part

Clementine

Can Clementine keep up with the big hitters in this audio player group test?

Above Playlists can be instantly created by using the drag-and-drop system that can be found on the left side of TOMAHAWK

Above The clunky menu system means it can be hard to navigate between your music library and other tabs

Design

Design

This is without doubt one of the best-looking audio players out there today. While it relies heavily on elements taken from Apple’s iTunes program, the minimal approach makes it easy for users to navigate. There’s little in terms of actual customisation, but we’d argue that there’s no real reason why you’d want to change anything here.

Playback controls

There are no standout features when it comes to playback, but everything that has been added works well. Queueing up songs in particular works a treat and the playback function can be controlled via different menus. Android users can also control their playback feed through the corresponding app, although the app isn’t half as good as the desktop program.

Organising music library

Using Clementine for the first time will almost certainly leave you scratching your head. Menus are hard to distinguish between and the amount of submenus and additional tabs just adds to the problem. Once you’ve found your footing, you’ll realise the complexity of the feature set it includes, which is impressive for the most part.

Playback controls

A small panel of controls opens up when you play any song, and it includes everything the average listener would need. Playlist management options are also listed here, so users can make on-the-spot changes to them without having to go through the whole menu system. The Android app is also usable, but similarly to TOMAHAWK, lacks in some areas.

Organising music library

Integration plays a big role in TOMAHAWK and it’s one of the very few that enables users to add music sources from all over. Whether it’s stored on your desktop, in the cloud, in other audio players, or even in subscription services, syncing it with TOMAHAWK is quick and easy. The program will automatically put this music in different folders, saving you time and effort.

The complexity of Clementine’s audio player means that organising and storing your music library isn’t as easy as we’d like. However, we should applaud the level of detail users can go into when importing their tracks. Metadata can be implemented at any point and it’s easy enough to share certain metadata elements between playlists and albums.

Extra features

Extra features

Social features are another positive in TOMAHAWK. You can connect with friends and invite them to TOMAHAWK, but the user base is small in its current state. You can follow your friend’s playlists and even listen to them on your desktop. There can be some lag when loading curated playlists, but it largely depends on the number of songs within it.

One of the key things that the Linux version of Clementine boasts is a native notification system. This enables users to stay up to date with the progress of the track imports and playback without needing to enter the actual program. Remote control is also possible via the command line, although it’s by far the hardest way of controlling Clementine.

Overall

Overall

Although it’ll be relatively unknown to many, TOMAHAWK is one of the standout audio players for Linux users. While it does rely on elements from Spotify and iTunes, there are plenty of its own creations that should be applauded.

9

There’s an active community behind Clementine, so we expect lots of changes over time. However, in its current state it feels a little overcomplicated when compared to some of the other audio players listed here.

7



In brief: compare and contrast our verdicts

Design
Spotify for Linux: An easily accessible UI that boasts slick design with plenty of attractive elements on offer – 9
Audacious: Simplistic in parts, but plenty of customisable elements for users to explore – 7
TOMAHAWK: A modern player that incorporates several familiar elements from Apple’s iTunes – 9
Clementine: Over-complicated for the most part, with an abundance of menus to work through – 6

Playback controls
Spotify for Linux: Multitude of ways to control your music without having to open up the program – 8
Audacious: Far too basic for our liking and lacking in core options that the competition offers – 5
TOMAHAWK: All the basics are well implemented and the Android app can also be used – 8
Clementine: A separate control panel is available that includes all the basic controls you need – 7

Organising music
Spotify for Linux: Import data is usually missing and album artwork is pretty hit and miss at times – 6
Audacious: Each and every piece of metadata can be edited and shared between playlists – 7
TOMAHAWK: Plenty of import options are available, which help keep your library well organised – 9
Clementine: The complexity of the menu system hinders the user when importing tracks – 6

Extra features
Spotify for Linux: Discovering new music you’ll like is a particular highlight here, thanks to curated playlists – 8
Audacious: A wide array of plugins can be added according to your taste, including some great effects – 8
TOMAHAWK: An array of social features are available, just be warned that the user base is small – 8
Clementine: Linux users can take advantage of some particularly helpful desktop notifications – 8

Overall
Spotify for Linux: Music discovery is the key thing in Spotify and here it excels – but alas not in other areas – 8
Audacious: A good range of plugins and excellent metadata handling, let down by its simplicity – 7
TOMAHAWK: Great looking and with a familiar UI, it has great features but a small user base – 9
Clementine: Overly complicated for a media player, even by FOSS standards, but useful on the desktop – 7

AND THE WINNER IS… TOMAHAWK

Compared to other desktop operating systems, the choice of audio players for Linux users doesn’t quite cut it in terms of overall quality. However, if we proved anything with this group test, it’s that there are a couple of solid choices for users to check out. Spotify’s Linux client has certainly come on leaps and bounds in the past 12 months, and while improvements are numerous, there are still some key issues that need to be ironed out.

TOMAHAWK, on the other hand, just feels like a complete and highly usable product. While its design lends itself well to the likes of iTunes, it provides some modern twists on Apple’s winning formula, and they do say that imitation is the best form of flattery. Navigating between menus is simple enough and there’s enough scope for people to really make it their own. Importing options are also plentiful, and it’s one of the few audio players we looked at that really puts the control in the users’ hands.

Of course, there are still things that need to be looked into and improved. The current user base is a little too small to really take advantage of the plethora of social features that TOMAHAWK includes, but over time we expect them to become vastly more useful – especially when it comes to connecting to curated playlists. We would also like to see the Android app undergo a bit of a transformation, as in its current state it can be used to control your music feed, but not very much more.

Above Displaying your album artwork looks fantastic in the TOMAHAWK audio player

If you can look past the few issues that TOMAHAWK does have, it’s by far and away the premier audio player available for Linux users. Download it right now, upload your favourite albums and start rocking out. Oliver Hill





DISTRO

Linux Mint 18

Linux Mint continues to establish dominance over the competition in its latest update

Specs

RAM: 512MB
Storage: 9GB (20GB recommended)
Display: 1024 x 768 resolution
64-bit and 32-bit versions


If you head over to every Linux enthusiast’s friend, DistroWatch, chances are you’ll have seen Linux Mint sitting pretty at the top of its download list for some time now. While it may not be the first distro on everyone’s lips, it’s carved out a solid reputation for being one of the premier routes for Linux newcomers seeking an entry point when changing up their desktop experience. As with most Linux Mint updates we’ve seen previously, changes are plentiful and there’s continued support for both Cinnamon 3 and

MATE. No matter your choice, bundled software is minimal but effective. Both LibreOffice and VLC have become staple Linux products, and the addition of the Banshee audio player is a nice touch, especially as it’s considered one of the better music players on the market currently. After a simple install, something that’s becoming more commonplace in most entry-level distributions, the most obvious addition to Linux Mint is the introduction of X-Apps. Essentially these apps are


forks of traditional GNOME applications, which have been modified to work across a wide array of desktop environments. While some will see this as an unnecessary change, it does help solve the issue of GNOME applications not working properly when used on non-GNOME environments. It’s a small, subtle fix that really gives users free rein over how they want to use Linux Mint from the beginning, which is something that all users can get behind.

Also noticeable is the string of updates within Mint’s update manager. Update policies are much clearer to understand than in previous versions, with Mint now on hand to assist newcomers with the updates it recommends they should install for their system. It’s a far cry from the annoying necessity of installing batch updates simultaneously.

Ease of access plays a big role in Mint’s targeting of new Linux users, and the fundamental System Settings menu has been revamped to make it a more welcoming sight for all. Everything from arranging hot corners to windows tiling has been made available from the start, with no complicated menus to navigate. It’s styled very similarly to that of Windows, which won’t please some, but it’s effective in what it does. We recommend venturing into the driver manager menu here, which includes a step-by-step recommendation guide on the drivers that Mint thinks would benefit your system. Advanced users will scoff at such handholding, but it’s another nod to new users for sure.

Digging deeper into Linux Mint reveals the use of two very distinct graphical package managers, targeted at very different audiences. The first is called Synaptic, which relies on a plain-text list to provide you with all the software items in your library. It’s possible to queue up packages, and to install and remove them simultaneously. It’s a little slow at times and fairly clunky to use for the most part. On the other side you’ll find Software Manager, a vastly more clean-cut manager that organises software into categories and icons. Its design is far more pleasing to the eye, but it’s fairly superficial in its approach. Neither package manager really hit the mark for us, but some users will certainly feel at home with one of these.

Thanks to its library of updates, we’d have been shocked if we had found any major faults during our time with Linux Mint 18. Everything from the installer to the configuration tools works flawlessly, and everything a new user could possibly need to get equipped with their first Linux distribution is included. Of course, its ease of use will also appeal to those advanced users who don’t want the hassle that more demanding distros offer, but it certainly won’t be for everyone. While we’d like to see some improvements to the graphical package managers used within, it’s only a small blip in what is otherwise another stellar update from the Mint team. Oliver Hill

Pros Everything from setup, customising and driver management works how it should. There are no big surprises, but Linux Mint 18 gets top marks for usability.

Cons We’d like to see some improvements to the graphical package managers used here. Synaptic in particular is a little on the sluggish side.

Summary We’re not surprised with how good Linux Mint 18 is; in fact we were expecting it. It just goes to show that if you provide regular updates, listen to and embrace user feedback, and continue to push and develop your distribution, good things can happen

9




Free software

PROGRAMMING ENVIRONMENT

Pharo 5.0

You can’t keep a good language down – Smalltalk is back! It may come as a surprise to some that before Java’s dominance of the Enterprise scene and its near-synonymity with Object Orientation (OO), OO was once a revolutionary approach, and its implementation meant Smalltalk. Despite gaining a strong foothold in financial software and other niches, it was edged out by Sun’s 500lb gorilla, and clung on in fragmented implementations, notably Squeak and Pharo. Squeak was the basis for the Etoys programming environment for OLPC (One Laptop Per Child), as well as previous versions of Scratch, but its development balances the need for backward compatibility, so for a more modern Smalltalk we turn to Pharo. Modern is a relative term, and installation was straightforward on an old 32-bit Ubuntu Trusty PC, but installing the 32-bit compatibility libraries upset our Debian Unstable box (although Debian Stable was fine). Documentation for Pharo (and Smalltalk in general) is extensive, and we were soon diving into web development with the Seaside web framework, and enjoying a language we’d scarcely used since the 1990s. No room here to persuade you of the joys of this old OO language, only to say it still has much to offer, and Pharo’s a good place to try it from.

Above Smalltalk pioneered user-friendly IDEs and object inspection decades ago, and continues to make coding easier

Pros Powerful OO language with

good web app potential, and a live programming environment. Well documented.

Cons Pharo is quite an immersive live environment, and it will take time to adjust to its perspective.

Great for…

Rediscovering why OO is actually quite a good idea http://pharo.org/

IM CLIENT

Converse.js 1.0.5

Far from retro, a chat client is becoming a web app essential Thanks to the Slack-driven resurgence in the popularity of in-browser text chat, as well as mobile-friendly “chat to an advisor” options on many websites, you may feel you need to offer a chat client integrated into your own web application – for your own XMPP server or a public one. In which case, Converse.js has to be on your shortlist, for its features and easy integration with popular web platforms from Rails and Django through to Alfresco and WordPress. You can also use it as a standalone client, setting it up on a page of your own, to add Jabber everywhere from your Raspberry Pi to your laptop. With “single and multiuser chats, invitations, service discovery, direct registration, contact lists, roosters and vCard


exchange, status changes and messages, typing and state notification, and OTR encryption”, Converse.js provides everything you need apart from the Jabber server, and something to manage connections (as HTTP is unsuitable for such long-lived, bidirectional communication) if your XMPP server does not. Set-up tutorials cover both Bidirectional-streams Over Synchronous HTTP (BOSH) and websockets for this purpose. The documentation is excellent, and will quickly get you set up, or help you along the way with tweaking for your needs – for example, serverside authentication (prebinding), for logged in users of your site to be automatically logged into existing XMPP sessions.

Pros Integrates well, with good

documentation and easy instructions. Flexible and powerful.

Cons Needs a separate Jabber/XMPP

daemon set up if you aren’t using a public one.

Great for…

Nearly painless chat integration for any website https://conversejs.org/


PYTHON PLOTTING PACKAGE

matplotlib 2.0.0b3

Plot like a Pythonista as 2.0 approaches release matplotlib is one of the jewels in Python’s crown. Together with NumPy, it gives users a mathematical teaching tool with some advantages over the proprietary MATLAB, but provides a similar interface for those who’ve used MATLAB in education. Use within IPython and you’ll see one of the best interactive interfaces. Its only disadvantage is it lacks the multi-language accessibility of Gnuplot, but with Python (and Julia, which can access matplotlib via the PyPlot package) in the ascendency in scientific computing, finance, and Big Data, this is not such a handicap. We looked at the third beta release: it’s an easy install, with many Python users and coders likely to have most

of the dependencies, but publication lag being what it is, there may be a full release and package for your distro by the time you read this. If you do build it yourself, test with python tests.py – or use the Python interpreter:

import matplotlib
matplotlib.test()

The ease with which you can generate plots in 2D and, with the appropriate library, 3D is a joy. Work through some of the common tutorial examples to begin to appreciate the possibilities, then try them out with your project’s data – not just in scripts and with the GUI toolkits, but in your web applications. The available documentation is excellent – reading like it was produced by people who actually use the software to get things done.
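As a flavour of how little code a basic figure takes, here’s a minimal pyplot sketch of our own (not taken from the review); it renders off-screen and saves to a PNG, so it also works on a headless box.

import numpy as np
import matplotlib
matplotlib.use('Agg')            # render off-screen so no display is needed
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
plt.plot(x, np.sin(x), label='sin(x)')
plt.plot(x, np.cos(x), label='cos(x)')
plt.legend()
plt.title('A first matplotlib plot')
plt.savefig('first_plot.png', dpi=150)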

Pros For Pythonistas, a natural choice. For newbies, the MATLAB-style interface is an easy start.

Cons Python and Julia only, NumPy

dependency, and lacks the exceptional speed of Gnuplot.

Great for…

Beautiful plots for presentations and publications http://matplotlib.org/

GAMING

Galaxy Forces V2 1.85

2D space shooter and race-track game, with missions Sometimes the simplest games are the best, and sometimes they’re far from simple. Side scrollers are easy to play when it’s just a jumping penguin, but with a space rocket and Newtonian physics, our first few attempts at Galaxy Forces V2 ended in small fireballs as the ship crashed. Fortunately we’re persistent here at LUD Towers, and our old 2D-playing reflexes eventually asserted themselves. A good thing too, for there’s a lot to get to in this game - inspired by Gravity Force on the Amiga – with 50 levels, multiple opponents, and four modes. Start with the race mode against the clock, then with up to eight players over the network. Now challenge yourself with a dogfight against the (still developing) AI. Last, missions involve transporting cargo – competitively and under fire – or co-operating with other players. Installation is a simple matter of unpacking and running the appropriate binary. If even that is too much effort, there’s an online version to play in your browser, although you’ll then miss out on adding to the game with the level editor. Contributions are welcomed by the author, who discusses the AI and other aspects of implementation – C++, OpenGL – on the game’s webpage.

Above We were quite proud when we got the thrust right and could take off without crashing (triggering memories of Zarch!)

Pros Flexible playing: networked

dogfights and missions, compete against the AI, or race against the clock.

Cons For casual gamers, the

keyboard controls can take a little practice to achieve steering finesse.

Great for…

Quick distraction - or longer spent building your skills www.galaxy-forces.com/



OpenSource

Get your listing in our directory To advertise here, contact Luke

luke.biddiscombe@imagine-publishing.co.uk | +44 (0)1202586431

RECOMMENDED

Hosting listings Featured host: www.netcetera.co.uk 0800 808 5450

About us

Formed in 1996, Netcetera is one of Europe’s leading web hosting service providers, with customers in over 75 countries worldwide. As the premier provider of datacentre colocation, cloud hosting, dedicated servers and managed web hosting services in the UK, Netcetera offers an array of services designed to more effectively manage IT infrastructures. A state-of-the-art data

What we offer

• Managed hosting - A full range of solutions for a cost effective, reliable, secure host. • Cloud hosting - Linux, Windows, Hybrid and Private Cloud Solutions with support and scalability features.

centre environment enables Netcetera to offer your business enterprise-level colocation and hosted solutions. Providing an unmatched value for your budget is the driving force behind our customer and managed infrastructure services. From single server to fully customised data centre suites, we focus on the IT solutions you need.

• Datacentre colocation - Single server through to full racks with FREE setup and a generous bandwidth. • Dedicated servers - From QuadCore up to Smart Servers with quick setup and fully customisable.

5 Tips from the pros

01

Reliability, trust, support Reliability is a major factor when it comes to choosing a hosting partner. Netcetera guarantees 100% uptime, multiple internet routes with the ability to handle DDOS attacks, ensuring your site doesn’t go down when you need it.

02

Secure & dependable Netcetera prides itself on offering its clients a secure environment. It is accredited with ISO 27001 for Security along with the options of configurable secure rackspace available in various configurations.

03

24/7 technical support Netcetera has a committed team of knowledgeable staff available


A state-of-the-art data centre environment enables Netcetera to offer your business enterpriselevel colocation and hosted solutions

Testimonials

24/7 to provide you with assistance when you need it most. Our people make sure you are happy and your problems are resolved as quickly as possible.

Roy T “I have always had great service from Netcetera. Their technical support is second to none. My issues have always been resolved very quickly.”

04

Suzy B “We have several servers from Netcetera and their network connectivity is top notch, with great uptime and speed is never an issue. Tech support is knowledgeable and quick in replying. We would highly recommend Netcetera.”

Value for money We do not claim to be the cheapest service available but we do claim to offer excellent value for money. We also provide a price match on a like for like basis as well as a price guarantee for your length of service.

05

Ecofriendly Netcetera’s environmental commitment is backed by use of ecocooling and hydroelectric power. This makes Netcetera one of the greenest datacentres in Europe.

Steve B “We put several racks into Netcetera, basically a complete corporate backend. They could not have been more professional, helpful, responsive or friendly. All the team were an absolute pleasure to deal with, and nothing was too much trouble so they matched our requirements 100%.”


Supreme hosting

www.cwcs.co.uk 0800 1 777 000

CWCS Managed Hosting is the UK’s leading hosting specialist. They offer a fully comprehensive range of hosting products, services and support. Their highly trained staff are not only hosting experts, they’re also committed to delivering a great customer experience and are passionate about what they do.

• Colocation hosting • VPS • 100% Network uptime

SSD Web hosting

www.bargainhost.co.uk 0843 289 2681

Since 2001, Bargain Host has campaigned to offer the lowest possible priced hosting in the UK. They have achieved this goal successfully and built up a large client database, which includes many repeat customers. They have also won several awards for providing an outstanding hosting service.

• Shared hosting • Cloud servers • Domain names

UK-based hosting

www.cyberhostpro.com | 0845 5279 345

Cyber Host Pro are committed to providing the best cloud server hosting in the UK; they are obsessed with automation. They’ve grown year on year and love their solid, growing customer base who trust them to keep their business’ cloud online! If you’re looking for a hosting provider who will provide you with the quality you need to help your business grow, then look no further than Cyber Host Pro.

• Cloud VPS Servers • Reseller hosting • Dedicated Servers

Value hosting

elastichosts.co.uk 02071 838250

ElasticHosts offers simple, flexible and cost-effective cloud services with high performance, availability and scalability for businesses worldwide. Their team of engineers provides excellent support around the clock over the phone, email and ticketing system.

• Cloud servers on any OS • Linux OS containers • World-class 24/7 support

Value Linux hosting

patchman-hosting.co.uk 01642 424 237

Linux hosting is a great solution for home users, business users and web designers looking for cost-effective and powerful hosting. Whether you are building a single-page portfolio or running a database-driven ecommerce website, there is a Linux hosting solution for you.

• Student hosting deals • Site designer • Domain names

Small business host

www.hostpapa.co.uk 0800 051 7126

HostPapa is an award-winning web hosting service and a leader in green hosting. They offer one of the most fully featured hosting packages on the market, along with 24/7 customer support, learning resources and outstanding reliability.

• Website builder • Budget prices • Unlimited databases

Quality VPS hosting

www.BHost.net | sales@BHost.net

BHost specialises in one thing and doing it right: Linux Virtual Private Servers (VPS). We don’t sell extras like domain names or SSL certificates; we are simply dedicated to providing you with the highest quality VPS service, with exceptional uptime and competitive pricing. Our customer focus means our team always goes above and beyond to help. Our platform successfully hosts a whole variety of services, including game servers, office file storage servers, email and DNS servers, plus many thousands of websites.

• Linux VPS hosting • Unlimited bandwidth • Run your favourite Linux distro • 100% satisfaction or a full refund!

Fast, reliable hosting

www.bytemark.co.uk 01904 890 890

Founded in 2002, Bytemark are “the UK experts in cloud & dedicated hosting”. Their manifesto includes in-house expertise, transparent pricing, free software support, keeping promises made by support staff and top-quality hosting hardware at fair prices.

• Managed hosting • UK cloud hosting • Linux hosting



Free with your magazine

Instant access to these incredible free gifts…

The best distros and FOSS

Essential software for your Linux PC

Professional video tutorials

The Linux Foundation shares its skills

Tutorial project files

All the assets you’ll need to follow our tutorials

Plus, all of this is yours too…

• The ultimate rescue kit, packed with distros. Burn them to disc or a flash drive for an instant rescue remedy for your PC • 20 hours of expert video tutorials from The Linux Foundation • All-new tutorial files to help you master this issue’s Go tutorial • Must-have data recovery tools

• Program code for our Linux and Raspberry Pi tutorials

Log in to www.filesilo.co.uk/linuxuser

Register to get instant access to this pack of must-have Linux distros and software, how-to videos and tutorial assets

Free for digital readers too!

Read on your tablet, download on your computer


• The home of great downloads, exclusive to your favourite magazines from Imagine Publishing
• Secure and safe online access, from anywhere
• Free access for every reader, print and digital

An incredible gift for subscribers

• Download only the files you want, when you want
• All your gifts, from all your issues, in one place

Get started

Everything you need to know about accessing your FileSilo account

01

Follow the instructions on screen to create an account with our secure FileSilo system. Log in and unlock the issue by answering a simple question about the magazine.

Unlock every issue

Subscribe today & unlock the free gifts from more than 40 issues

Access our entire library of resources with a money-saving subscription to the magazine – that’s hundreds of free resources

02

You can access FileSilo on any computer, tablet or smartphone device using any popular browser. However, we recommend that you use a computer to download content, as you may not be able to download files to other devices.

03

If you have any problems accessing content on FileSilo, take a look at the FAQs online or email our team at the address below

filesilohelp@imagine-publishing.co.uk

Over 20 hours of video guides

Essential advice from the Linux Foundation

The best Linux distros

Specialist Linux operating systems

Free Open Source Software

Must-have programs for your Linux PC

Head to page 28 to subscribe now

Already a print subscriber? Here’s how to unlock FileSilo today…

Unlock the entire LU&D FileSilo library with your unique Web ID: the eight-digit alphanumeric code that is printed above your address details on the mailing label of your subscription copies. It can also be found on any renewal letters.

More than 400 reasons to subscribe

More added every issue


Special offer for readers in North America

6 issues FREE when you subscribe

FREE resource downloads in every issue

The open source authority for professionals and developers

Order hotline +44 (0)1795 418661
Online at www.imaginesubs.co.uk/lud

*Terms and conditions: This is a US subscription offer. You will actually be charged £80 sterling for an annual subscription. This is equivalent to $120 at the time of writing; the exchange rate may vary. 6 free issues refers to the USA newsstand price of $16.99 for 13 issues being $220.87, compared with $120 for a subscription. Your subscription starts from the next available issue and will run for 13 issues. This offer expires 30 November 2016.

Quote USA for this exclusive offer!


Linux Server Hosting from UK Specialists

24/7 UK Support • ISO 27001 Certified • Free Migrations

Managed Hosting • Cloud Hosting • Dedicated Servers

Supreme Hosting. Supreme Support.

www.CWCS.co.uk

