
Free DVD


OpenStack alternatives


2 Great Distros!

May 2015

Choose a Cloud
We compare OpenStack solutions from 5 leading vendors

MakuluLinux

Is this dauntless desktop Linux the new Mint or something totally different?

Get a job in Linux! We interview hiring managers for SUSE and Red Hat

How Does ls Work?

Find out what really happens when you type a simple Bash command

Go Language The power of C with lots of modern conveniences



Darktable 1.6

Troubleshoot network problems from the command line
Sync your devices in your own home cloud
Perfect your digital images

Issue 174 May 2015




Shop the Shop

FREE FROM XP You don’t have to abandon your computer just because Microsoft is abandoning Windows XP. Get a fresh start with Linux! Our Free from XP special shows you how to:

▪ Install Linux
▪ Download and install free software for your Linux system
▪ Create documents and spreadsheets
▪ Play games
▪ Manage photos
▪ Play music and videos

This single-volume special edition is all you need to get started. If you already know Linux, buy the Free from XP special for a friend!

Join the Linux Revolution!

Free from XP. Order online:

Editorial Welcome

What It Is and What It Does

Dear Linux Magazine Reader,

I recently ran across an article in Ars Technica on the impending launch of the .sucks domain [1]. The article linked to another piece on .sucks at the Marketing Land site [2]. When a company called Momentous picked up the rights to the new .sucks top-level domain (TLD) at the generic TLD sale late last year, the concept of site owners registering a domain in .sucks the way they register a domain in .com made the rounds as an office joke. But now, a few months later, it looks like the operation is really ready for business.

A company called Vox Populi (reportedly a subsidiary of Momentous) even has a website [3], where they express their ideals and explain the arcane pricing options. The company bases its pitch around free speech and consumer advocacy. A quotation on the home page states: “By building an easy-to-locate, ‘central town square’ available 24 hours a day, 7 days a week, 365 days a year, dotSucks is designed to help consumers find their voices and allow companies to find the value in criticism. Each .sucks domain has the potential to become an essential part of every organization’s customer relationship management program.”

The home page even links to a video with what appears to be an endorsement from consumer-advocate-turned-politician Ralph Nader, playing amid moving footage of a speech by Martin Luther King, Jr. What the website doesn’t include is a quote from former US Senator Jay Rockefeller, who is quoted in the Marketing Land article calling the .sucks domain “… little more than a predatory shakedown scheme.”

Imagine the possibilities … if you think something “sucks,” you can register a website to talk about how it sucks. Well, sort of, but as I mentioned, the pricing options are a bit arcane. Is Vox Populi offering the .sucks space as a public service, to provide a forum for healthy feedback on companies, products, and services? They certainly say that’s what they’re trying to do.

Declaring that something “sucks” didn’t use to be regarded as “healthy” feedback. I realize this might be a dialect thing, and to a new generation, “sucks” might mean something more like, “If I might but offer a bit of kindly advice … .” Most experts, however (except apparently Ralph Nader), believe the ultimate goal of the dotSucks experiment is to force trademark holders to ante up registration fees to protect their brands from getting roasted. According to the Vox Populi site, the company has decided on a list of “Premium” site names that will cost $2,499 per year to register. If ProductA is on the list, the company will charge the trademark holder $2,499 for control of the URL. The Vox Populi site describes several other levels of pricing, including a “Blocking” rate, which would allow a company to block the use of a domain for only $199 per year; however, they won’t let you block a site that is on the Premium list.

What would you do if you had a big company? Certainly $2,499 per year is a bargain compared with the damage that could be done by a disrespectful website populated by people who hate your product. But you don’t really solve the problem by reserving your own .sucks domain. In fact, there might already be a disrespectful website out there populated by people who hate your product that existed even before the incubation of .sucks. For that matter, a customer might easily juxtapose “sucks” with your product name in a dozen other ways without resorting to the predictable TLD formulation.

Given that no vendor is seriously going to believe they can lock down the possibilities for negative press just by registering a .sucks domain, I’m inclined to think the creators truly don’t see this as mere extortion and honestly believe they are giving a new meaning to the term “sucks” that will usher in a bold new era for Internet feedback. But seriously, even if it isn’t evil, doesn’t this idea seem a little unpolished? You might even say it doth promote a negative fluid pressure, resulting in a local pressure differential and the accompanying flow …

Joe Casad, Editor in Chief

Info

[1] “.sucks” Registrations Begin Soon: http://arstechnica.com/information-technology/2015/03/sucks-tld-to-accept-sunrise-registrations-soon-but-theyll-be-pricey/

[2] Watch Out Brands: The Controversial .Sucks Domain Is Almost Here: http://marketingland.com/controversial-sucks-domain-almost-here-121505

[3] Vox Populi: https://www.nic.sucks/


Linux Magazine May 2015


Choose a Cloud

  8 Code of Conflict

• Kernel developers adopt Code of Conflict to guide behavior.

• New high-density storage solution from Supermicro.

  9 Linux 4.0

• Next kernel release marks end of the 3.x series.

• Ubuntu switches to systemd startup daemon.

• More online.

We review the OpenStack cloud market, then take a closer look at five OpenStack providers.

10 Samba Security Bug

• Flaw in smbd file server daemon affects Samba 3.5.0 to 4.2.0rc4.

• Misconfigured servers and poorly coded middleware keep old vulnerabilities alive.

• PrivDog security app could compromise user security.

12 OpenStack Market Overview

Take a tour of the market for OpenStack products and services.

16 OpenStack Solutions


We examine similarities and differences in Red Hat, SUSE, Ubuntu, Mirantis, and HP OpenStack solutions.

  3 Comment
  6 DVD

Community Notebook

96 Featured Events
97 Call for Papers

84 Doghouse: BBC Computer Education Scheme
A new program to prepare youth for the digital future.

98 Preview

Linux Magazine (ISSN 1471-5678) is published monthly by Linux New Media USA, LLC, Lawrence, KS, USA.



88 Linux Jobs
We interview experts from Red Hat and SUSE about the Linux jobs market.

86 Laidout Book Creator
Laidout simplifies the design of books and booklets.

92 Kernel News
Permanent deletion, limiting open processes, and TraceFS.




Testing OpenStack Cloud solutions from five OpenStack providers.




A sumptuous distro with multiple desktop versions and thoughtful usability additions.



A simple command-line tool for monitoring and analyzing data streams.

Features

32 How Does ls Work?
Find out what happens behind the scenes when you type a Bash command.

38 Seafile
Sync devices and collaborate with other users in your own personal cloud.

44 Ask Klaus!
Booting natively, recovering files from an improperly disconnected USB device, and nesting menus.

48 Tshark
The Tshark packet analyzer gives precise information about the data streams on the network.

52 Charly: Prosody
Running an instant messaging back end with Prosody, a lean XMPP server.

54 Go Language
Avoid the routine tasks and focus on the important stuff.

58 Perl: Searching Git
The GitHub API opens up wonderful opportunities for snooping around.

Review

26 MakuluLinux
Two desktop environments and two different distributions as a base – introducing MakuluLinux.

LinuxUser

64 Darktable 1.6
This photo editing program combines many effects in a simple interface. Darktable fixes incorrect exposure, conceals unfavorable lighting conditions, and ensures harmonious colors.

72 Command Line: Desktop Recorders
Several tools that can help you record desktop activities.

76 FreeFileSync
Create a reliable backup quickly and easily with FreeFileSync.

80 Workspace: Bottle
Using the Bottle framework to build Python apps.


openSUSE 13.2 + Cinnamon 2.0 Debian Edition 64-bit

• Btrfs and XFS filesystems
• AppArmor 2.9
• GCC 4.8 with 4.9 option
• Gnome and KDE desktops
• More responsive YaST


• Compiz window manager
• PAE-enabled kernel
• Wine with Winetricks
• PlayOnLinux and Steam
• Slingshot launcher

See page 6 for details




This Month’s DVD

On the DVD openSUSE 13.2 (64-bit)

The new openSUSE release has been stabilized through intense testing with an automated testing tool that “helps ensure the latest openSUSE releases are not full of any nasty surprises for end users.” Btrfs is the default filesystem for the root partition, whereas XFS hosts the home partition. The YaST configuration tool has been ported to Ruby and is now more responsive and reliable. Standard features include AppArmor 2.9 enabled by default, GCC 4.8, Linux kernel 3.14, and your choice of Gnome or KDE desktops.

MakuluLinux Cinnamon 2.0 Debian Edition (32-bit) The MakuluLinux Cinnamon 2.0 Debian Edition (MCDE) is a stunning distribution with rich graphical desktops. Each desktop features a large calendar, clock, and quotation, along with many favorite Linux applications and plenty of pre-installed drivers for an easy out-of-the-box experience. MCDE is based on the Debian Testing rolling release and the PAE i686 kernel. Steam is pre-loaded for instant gaming.

Additional Resources

Defective discs will be replaced. Please send an email to



[1] openSUSE release notes: https://en.opensuse.org/Portal:13.2
[2] openSUSE hardware requirements: https://en.opensuse.org/Hardware_requirements
[3] openSUSE wiki: https://en.opensuse.org/Portal:Wiki
[4] MakuluLinux: http://makululinux.com/home/
[5] MakuluLinux downloads: http://makululinux.com/downloads/
[6] MakuluLinux Cinnamon 2.0: http://makululinux.com/cinnamon/



From £4.99 per month* excl. 20% VAT




■ Unlimited webspace
■ Unlimited websites
■ Unlimited traffic
■ Unlimited e-mail accounts
■ Unlimited e-mail storage
■ Unlimited MySQL databases
■ Unlimited domains (1 included)

■ Geo-redundancy
■ Daily backups
■ 1&1 CDN
■ 1&1 SiteLock Basic
■ 24/7 phone and e-mail support





■ 1&1 Click & Build Applications including WordPress and Joomla!®
■ 1&1 Mobile Website Builder




0333 336 5509 *1&1 Unlimited from £2.99 per month. Some features listed are only available with 1&1 Unlimited Plus from £4.99 per month. 12 month contract term and 1 month billing cycle, paid in advance, then regular price applies. Prices exclude 20% VAT. Visit for full offer details, terms and conditions. Rubik’s Cube® used by permission of Rubik’s Brand Ltd.


This Month’s News

Updates on technologies, trends, and tools


Supermicro Announces New High-Density Server Solutions


High-performance server vendor Supermicro has announced a new class of Mini-ITX high-efficiency, low-power server solutions that the company says are “optimized for embedded and hyperscale workloads.” The new systems will be available as motherboard units or through Mini-Tower, rack server, or MicroBlade products. The Mini-ITX systems feature a 64-bit Intel Xeon D-1500 processor with eight cores, 128GB of memory, and integrated 10Gb Ethernet. According to president Charles Liang, the new “high-density server and storage solutions address growing demands for energy efficiency in data center and cloud environments.” The new Mini-ITX products are designed to help users achieve better performance per watt per dollar across a wide range of embedded and hyperscale scenarios.

Kernel Developers Adopt Code of Conflict

After many recent controversies and some high-profile critiques from influential developers, the Linux kernel team has posted new rules for developers to guide the often combative code reviewing process. The new guidelines appeared under the title “Code of Conflict.” The rules are in the torvalds directory of the kernel Git tree, and Linus Torvalds is listed as the author, although the message itself is not signed by Linus.

The document starts by explaining that the code review process requires careful critique to ensure high quality, and all contributors should expect to receive feedback on their work. The next part of the message adds a revolutionary new factor to the process, stating that the behavior of developers, as well as the quality of code, will be subject to review. The guidelines state: “If however, anyone feels personally abused, threatened, or otherwise uncomfortable due to this process, that is not acceptable. If so, please contact the Linux Foundation’s Technical Advisory Board (, or the individual members, and they will work to resolve the issue to the best of their ability.”

The addition of the Linux Foundation’s Technical Advisory Board as an independent referee and monitor for developer conflicts makes the kernel community a little less hierarchical and autocratic than it has seemed to some in the past. The good news is that, if the process works, the top-tier kernel developers will be able to focus their energies on what they do best – writing and reviewing code – with fewer controversies, rants, and flame wars.

The document ends with an endearing Linux geek flourish that sums up the aspirations of so many: “As a reviewer of code, please strive to keep things civil and focused on the technical issues involved. We are all humans, and frustrations can be high on both sides of the process. Try to keep in mind the immortal words of Bill and Ted, ‘Be excellent to each other’.”


Linux News

Linus Torvalds Announces Linux 4.0

More Online

Linus Torvalds has announced that the next release of the Linux kernel will have the name Linux 4.0. This release will mark the end of the Linux 3.x series, which began in July 2011, and the beginning of a new 4.x series. The announcement comes after Linus polled kernel developers to see if they were ready to start a new series. (If the developers had voted down the 4.0 name, the release would have been Linux 3.20.)

The next release has received some significant attention for adding live kernel patching. Still, the casual attitude of Linus and the other developers regarding the release number is strangely comical – seemingly a parody of the commercial software industry, where a new “major release” is accompanied with vast explosions of fanfare and hype. As Linus says in his message to the kernel mailing list, “Because the people have spoken, and while most of it was complete gibberish, numbers don’t lie. People preferred 4.0 and 4.0 it shall be. Unless somebody can come up with a good reason against it.”

According to Torvalds, the strongest argument from some people wishing for the start of the 4.x series was “… a wish to see 4.1.5, because that was the version of Linux skynet used for the T-800 Terminator,” an android played by Arnold Schwarzenegger in the Terminator film series. He goes on to report that “… moving to 4.0 does not mean that we somehow changed what people see. It’s all just more of the same, just with smaller numbers so that I can do releases without having to take off my socks again.”

Torvalds plays down the argument that it is better for a major number change to match a major feature release, stating “We don’t break compatibility, and we haven’t done feature-based releases since basically forever.” The current version of Linux 4.0 is a release candidate. The kernel team will wait for feedback and bug fixes before posting the final release.

Ubuntu Switches to systemd

Ubuntu developer Martin Pitt has announced the official switch to the systemd startup daemon for the upcoming Ubuntu 15.04 “Vivid Vervet” release. Ubuntu’s plan to switch to systemd has been known for some time. Canonical founder and Ubuntu godfather Mark Shuttleworth announced the change a year ago after the Debian project (which is the basis for Ubuntu) elected to adopt systemd. Still, the official announcement marks the end of an era for users of the many Ubuntu variants and other derivative distros that depend on the Ubuntu development system.

The init startup daemon served the Unix and Linux communities for years, but many developers believe a change to a newer system is necessary. Those clamoring for the change believe modern methods require a service management system with better parallel processing and more efficient handling of complex dependencies. Debian’s migration to systemd caused some controversy within the community and even precipitated a fork, known as Devuan, which will continue to develop around init.

Ubuntu had previously determined init needed replacing and was working on its own init alternative, known as Upstart, in recent releases. With this change to systemd, Ubuntu is discontinuing work on Upstart.

Debian plans to enable systemd by default in the upcoming Debian 8 “Jessie” release. Fedora, Arch, openSUSE, and Mageia have installed systemd by default for two years or more. Red Hat Enterprise Linux and SUSE Linux Enterprise made systemd the default in 2014. Ubuntu’s announcement means that systemd is truly the new standard service startup daemon for the Linux universe.


Off the Beat • Bruce Byfield

Nine Myths About Styles in LibreOffice Writer
As Robin Williams (the designer, not the comedian) once explained in her book title, The PC Is Not a Typewriter. Office and layout programs are not just a keyboard with a screen, but an entirely different way of working. Central to that difference is the idea of styles – a defined set of formatting options comparable to a variable declared in code. Yet many writers refuse to use styles, preferring to format everything manually, even at the cost of making their work slower and more laborious.

Why I’m Switching from Gimp to Krita
I consider myself neither a technophile nor a technophobe. Yet once or twice a year, I discover a piece of software so well-designed and useful that I spend whatever spare time I have learning it as thoroughly as possible. For the past couple of months, that software has been the paint program Krita.

Flash in the Pan
I’ve known for several years that development of Adobe’s Flash player for Linux has ended except for service updates. In the last couple of months, though, maintaining it on my Debian system has become a series of rearguard actions.

Paw Prints • Jon “maddog” Hall

“Open” is as “Open” Does: Parsing an Announcement
One of the great things about studying compiler theory is that you learn a lot about avoiding ambiguity in language. Computer languages should not be ambiguous, since there is little time for the compiler or the computer to come back and ask, “Did you really mean that?”

Productivity Sauce • Dmitri Popov

Transfer Photos to Linux From Transcend WiFi SD Card Using TransSVR
Transcend WiFi SD cards offer a simple and economical way to add wireless transfer capabilities to your camera. Using the accompanying app, you can use your Android device for transferring and previewing photos. The WiFi SD card also features a web interface, so it’s possible to access and transfer photos using a regular browser.





Old Vulnerabilities Are Kept Alive Through Bad Configuration

More Online

Writebox: Almost Perfect Chrome/Chromium Text Editor
While it might look like yet another lightweight text editor for Google Chrome and Chromium, Writebox features functionality essential for any writing professional. For starters, Writebox works offline, which makes it a perfect text editor for working on text files without a connection.

Keep Tabs on Social Media Accounts with HubYard
Twitter, Tumblr, RSS, YouTube, Instagram – there is a myriad of sources that compete for your attention. To make matters worse, each service wants you to use its own app or website, so things can quickly get out of hand if you need to keep up with all your social media services. Enter HubYard – an open source platform for aggregating and managing social media accounts.

Scribble Notes in Your Browser with Notepad5
Notepad5 can come in rather handy when you need to take notes without leaving the convenience of your favorite browser. This super simple browser-based text editor can run locally (perfect when you are offline), and it’s surprisingly functional despite being rather bare-bones. Users practicing the art of distraction-free writing will appreciate Notepad5’s minimalist interface.

HP has released its annual Cyber Risk report, which summarizes and attempts to quantify some of the major security problems facing IT departments today. One of the more interesting findings is that “Well-known attacks are still commonplace.” In other words, despite the attention that admins, intruders, and spies pay to new zero-day attacks, many of the vulnerabilities exploited in 2014 have been around for years – or even decades. Misconfigured servers and poorly coded middleware layers keep old vulnerabilities alive even when remedies might be known.

The report also states that new technologies, such as the Internet of Things (IoT) and point-of-sale credit systems, have led to new avenues of attack. To compound the problems, intruders are more numerous and more sophisticated than ever, and traditional protections such as anti-malware scanners are less reliable with the new generation of attacks. According to the report, anti-malware software catches only about half of all cyberattacks. An executive summary of the 2015 Cyber Risk report is available for download. You’ll need to register with an email address and some basic information.

Big Samba Security Bug Revealed

The Samba team has confirmed a recent CVE report (CVE-2015-0240) regarding a flaw in the smbd file server daemon that could allow a remote user to execute arbitrary code with root privileges. The vulnerability, which was originally discovered by Microsoft, affects Samba versions from 3.5.0 to 4.2.0rc4. The Samba project has already released a patch and recommends an immediate update or upgrade. The Samba team also provides a workaround for versions 4.0.0 and later, which consists of disabling the netlogon endpoint in the rpc_server configuration.
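For Samba 4.0.0 and later, the published workaround amounts to a one-line change in the `[global]` section of smb.conf, a sketch of the mitigation described in the advisory (note that it disables domain logon functionality, so it is unsuitable for servers acting as domain controllers):

```ini
[global]
    # CVE-2015-0240 workaround for Samba >= 4.0.0:
    # stop smbd from serving the vulnerable netlogon endpoint.
    # Caution: breaks domain logons if this host is a domain controller.
    rpc_server:netlogon = disabled
```

After editing, reload the Samba configuration (for example with `smbcontrol all reload-config`) for the change to take effect.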


PrivDog Security App Could Compromise User Security

Virtuous Benchmarks: Using Benchmarks to Your Advantage • Jeff Layton
Benchmarks have been misused by both users and vendors for many years, but they don’t have to be the evil creature we all think them to be.

ADMIN Online

Setting up FreeNAS • Joseph Guarino
FreeNAS offers a range of features to suit your storage needs. We show you how to get started.

Transport Encryption with DANE and DNSSEC • Markus Feilner
Those who think that enabling STARTTLS in the mail client will make their mail traffic more secure are wrong. Only those who bank on DANE can be sure that a mail server or a firewall will not switch off encryption in transit.



The PrivDog “security” application by AdTrustMedia has come under fire as yet another SSL manipulation tool that actually compromises security. According to the US-CERT report, PrivDog is supposed to provide “… safer, faster, and more private web browsing.” The tool actually behaves as a man-in-the-middle proxy that replaces online ads with different ads. PrivDog inserts its own trusted root CA certificate into the connection, and according to reports, affected versions of the tool fail to properly check the certificates of the sites visited by the user, which means no warnings will appear when the user visits some spoofed HTTPS web pages. The CERT team has confirmed which PrivDog versions are affected. However, even if you’re using another version of the tool, this might be a good time to ask whether your web browsing will really be “safer and more private” if you let a third-party company insert itself into all of your HTTPS connections, which actually seems to defeat the whole purpose of HTTPS.

This discovery comes on the heels of a similar controversy regarding the Superfish tool distributed by PC vendor Lenovo, which allegedly plays similar tricks with SSL connections to inject ads. Lenovo claims it is no longer shipping Superfish, but the recent trend for so-called “security” add-on tools that break the chain of trust for SSL connections shows just how much the IT industry has come to depend on online advertising – and how far some companies are willing to go to cultivate sources of ad revenue.


Shop the Shop

Want to subscribe? Need training? Searching for that back issue you really wish you‘d picked up at the newsstand?

Discover the past and invest in a new year of IT solutions at Linux New Media‘s online store.





• LPIC-1 LPI 101 - CompTIA Linux+ LX0-101
• LPIC-1 LPI 102 - CompTIA Linux+ LX0-102
• LPIC-1 - CompTIA Linux+ 101 + 102

Cover Stories OpenStack Market Overview

The OpenStack market at a glance


We take a look at some of the options available in the international OpenStack marketplace. By Udo Seidel


OpenStack has been the darling of the cloud scene for four years of unbroken enthusiasm. The term OpenStack is broadly applied, from the enterprise products, through self-made stacks or individual components, to private, public, or hosted clouds. The project website defines OpenStack as “a set of software tools for building and managing cloud computing platforms for public and private clouds.” Tools like OpenStack automate the process of launching and managing virtual machines to meet computing demand. “OpenStack lets users deploy virtual machines and other instances, which handle different tasks for managing a cloud environment on the fly” [1].

The beginnings of OpenStack are familiar to many readers. Hosting service provider Rackspace [2] and NASA [3] both had the idea of developing an open source IaaS platform [4]. The two initiatives were separate until some people involved in the projects had the idea of joining forces. In October 2010, the first version (named Austin [5], for the hometown of Rackspace) saw the light of day.

Table 1: OpenStack Releases and Release Dates

Cover Stories

12 OpenStack Market Overview
The many faces of OpenStack.

16 OpenStack Solutions
Five of the most important OpenStack providers.


Code Name | Publication Date
Austin    | October 2010
Bexar     | February 2011
Cactus    | April 2011
Diablo    | September 2011
Essex     | April 2012
Folsom    | September 2012
Grizzly   | April 2013
Havana    | October 2013
Icehouse  | April 2014
Juno      | October 2014
Kilo      | April 2015


Since then, two OpenStack editions have been published each year; the name of the next version is always decided at one of the OpenStack summits, and it always has something to do with the venue for the summit – for example, “Kilo” refers to the international prototype kilogram that is kept in Paris. In the past four years, the OpenStack project has grown enormously. The members and supporters now include major Linux distributors, as well as companies in telecommunications and services [6]-[10]. The current OpenStack release is named Juno, with Kilo expected in a few weeks (Table 1). The number of components has grown from an initial two to the current 11, and the next bunch is already waiting in the wings.

Foundation, Members, and Sponsors The OpenStack project is backed by a non-profit foundation; since 2012, the OpenStack Foundation [11] has decided the fate of the OpenStack project. Companies and individuals can become members or act as sponsors for the foundation. Several membership categories are available. From its Gold or Platinum members, the Foundation expects an annual financial contribution. Achieving Gold status is easy; however, Platinum status is limited and is currently fully booked. A Gold member pays 0.025 percent of their annual turnover, but at least $50,000 and a maximum of $200,000. The specifics of rights and obligations for the various membership levels are published at the OpenStack website [12]. On the sponsors page, you’ll also see two categories that relate to how well established the company is. Startups – the definition is available on the website – pay less than half the contribution of “mature” IT companies.
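The Gold-member fee formula is easy to get wrong at the boundaries, so here is a minimal sketch of the calculation described above (0.025 percent of annual turnover, clamped between $50,000 and $200,000; the function name and currency handling are illustrative, not from the Foundation's bylaws):

```python
def gold_member_fee(annual_turnover_usd: float) -> float:
    """Annual Gold membership fee: 0.025% of turnover,
    clamped to a $50,000 floor and a $200,000 cap."""
    fee = annual_turnover_usd * 0.00025  # 0.025 percent
    return min(max(fee, 50_000.0), 200_000.0)

# A $100M company hits the floor, a $1B company hits the cap:
print(gold_member_fee(100_000_000))    # 50000.0  (floor applies)
print(gold_member_fee(400_000_000))    # 100000.0 (proportional)
print(gold_member_fee(1_000_000_000))  # 200000.0 (cap applies)
```

In other words, the proportional rate only matters for companies with turnover between $200 million and $800 million; everyone else pays the floor or the cap.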

Some companies in the OpenStack orbit make their money in consulting and sales around OpenStack. Other vendors, such as cloud providers, use OpenStack as their technical base to provide other services to customers. OpenStack supports both public clouds and enterprise-grade private clouds that operate behind corporate firewalls.

OpenStack Distributors Several prominent vendors provide their own OpenStack distributions [13] (Table 2). Several criteria could lead one to opt for a specific product. Which OpenStack release and components form the backbone? What hypervisors are supported? What guest operating systems can you use? Which interfaces are available? The devil is in the detail, and you should look closely at the product description before signing. For example, OpenStack supports a wide range of hypervisors [14], including KVM [15], Xen [16], VMware [17], Hyper-V [18], LXC [19], and even Docker [20] [21]. If you don’t want to commit to extra expenses or long-term service agreements, you can take your own first steps with community versions. OpenSUSE [22] [23] and Fedora [24] [25] both come with the necessary software to help you get started with OpenStack. One challenge facing the distributions that support OpenStack is the relatively high update frequency. Every six months a new version appears. Fast adaptation gains you a competitive advantage – but just six months later, the game starts again from scratch.
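As a concrete example of one such detail, the hypervisor choice usually surfaces as a compute-node setting. A sketch for a Juno-era Nova compute node using KVM through the libvirt driver (option names follow the Nova configuration reference; check the documentation for your particular release and distribution):

```ini
# /etc/nova/nova.conf (fragment)
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
# kvm requires hardware virtualization support;
# qemu is the (slower) software-emulated fallback.
virt_type = kvm
```

A vendor distribution typically sets these values for you, which is one of the things you are paying for.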

Service My Cloud, Please Even an initial look at the list of public cloud providers that offer cloud-based hosting around OpenStack [26] reveals interesting details. Europe, for example, is quite progressive and boasts more than 10 providers. The rest of the world – excluding North America – has just eight. Rackspace [27] is represented on several continents. In Europe, the emphasis is clearly on the western part with a few outliers in Scandinavia. Table 3 lists the best-known European public providers. Some providers simultaneously serve several countries, which will be of interest to customers who are also geographically distributed. If you shop at Rackspace for your cloud services, you can do so in Europe, the United States, China (Hong Kong), or Australia. As with the distributions, choosing a cloud provider requires a close look at where the differences lie. For technical

Table 2: Known OpenStack Distributions

Name                                     | Vendor          | URL
HP Helion OpenStack                      | Hewlett-Packard | http://www8.hp.com/us/en/cloud/hphelion-openstack-community.html
IBM Cloud Manager with OpenStack         | IBM             | http://www.ibm.com/developerworks/servicemanagement/cvm/sce/index.html
Mirantis OpenStack                       | Mirantis        | http://software.mirantis.com
Nebula One (Cloud Controller appliance)  | Nebula          |
Oracle OpenStack (for Linux and Solaris) | Oracle          |
Rackspace Private Cloud                  | Rackspace       |
RHEL OpenStack Platform                  | Red Hat         |
SUSE OpenStack Cloud                     | SUSE            |
Ubuntu OpenStack                         | Canonical       |
VMware Integrated OpenStack              | VMware          | http://www.vmware.com/products/openstack

Issue 174

May 2015


Cover Stories OpenStack Market Overview

Table 3: European Public Clouds with an OpenStack Base

Operator – Country – URL
City Cloud – United Kingdom, Sweden – …
… – United Kingdom – …
Internap AgileCloud – The Netherlands – http://www.internap.com/agile/flexible-cloud-hosting-solutions/enterprise-public-cloud-solutions/next-generation-agilecloud/
… – United Kingdom – …
Host Europe – … – …
or legal reasons, the physical location can be decisive. Additionally, you'll want to consider which OpenStack components and which interfaces are available. Even the version of the corresponding API may be of interest. Of course, price also plays a role.

If a public cloud is too public, but operating OpenStack in your own data center is not an option either, you can always get someone to host a private cloud for you. As you might expect, you will find providers from the public cloud arena in this domain as well [28]. You'll find external private cloud providers in both Asia and Europe, but the United States offers the most choices.

Make or Buy?

The market offers a variety of options for accessing OpenStack. If you really want to, you can start with the sources and build everything yourself. Somewhat more convenient are the community distributions of Linux and the OpenStack versions they integrate.

In enterprise environments, users are spoiled with even more choices. Commercial OpenStack distributions let you build a cloud in your own IT environment. If you don't have the space to do so, hosting service providers are waiting in the wings with suitable offerings. Last but not least are vendors for the public cloud that rely on OpenStack behind the scenes. Of course, you can also arbitrarily combine the different offerings. Many roads lead to OpenStack; you only need to walk down them.

Author

Udo Seidel is a math and physics teacher and has been a Linux fan since 1996. After graduating, he worked as a Linux/Unix trainer, system administrator, and senior solution engineer. Today he is section manager of the Linux strategy team at Amadeus Data Processing GmbH in Erding, Germany.

Info

[1] OpenStack: http://www.openstack.org
[2] Rackspace: http://www.rackspace.com
[3] NASA: http://www.nasa.gov
[4] Cloud computing: http://en.wikipedia.org/wiki/Cloud_computing
[5] OpenStack releases: http://wiki.openstack.org/wiki/Releases
[6] Folsom how-to: http://wiki.debian.org/OpenStackHowto/Folsom
[7] Canonical joins OpenStack: http://blog.canonical.com/2011/02/03/canonical-joins-the-openstack-community/
[8] Red Hat Enterprise OpenStack preview: http://www.redhat.com/en/technologies/linux-platforms/openstack-platform
[9] SUSE Enterprise Private Cloud: https://www.suse.com/promo/susecloud.html
[10] OpenStack supporters: http://www.openstack.org/foundation/companies/
[11] OpenStack Foundation: http://www.openstack.org/blog/2012/07/join-the-openstack-foundation/
[12] Join OpenStack: http://www.openstack.org/join
[13] OpenStack marketplace: http://www.openstack.org/marketplace/distros/
[14] OpenStack hypervisors: http://docs.openstack.org/juno/config-reference/content/section_compute-hypervisors.html
[15] KVM: http://www.linux-kvm.org
[16] Xen: http://www.xenproject.org
[17] VMware: http://www.vmware.com
[18] Microsoft virtualization: http://www.microsoft.com/en-us/
[19] Linux containers: http://linuxcontainers.org
[20] Docker: http://www.docker.com
[21] OpenStack and Docker: http://wiki.openstack.org/wiki/Docker
[22] OpenSUSE: http://www.opensuse.org/en/
[23] openSUSE OpenStack: http://en.opensuse.org/Portal:OpenStack
[24] Fedora: http://fedoraproject.org
[25] Fedora OpenStack: http://fedoraproject.org/wiki/OpenStack
[26] OpenStack public clouds:
[27] Rackspace developer portal: http://developer.rackspace.com
[28] OpenStack-hosted private clouds: http://www.openstack.org/marketplace/hosted-private-clouds/


Cover Stories OpenStack Solutions

Five OpenStack solutions tested

Stacked High Several companies offer OpenStack solutions for the enterprise. We look at the similarities and differences of offers from Red Hat, SUSE, Ubuntu, Mirantis, and HP. By Martin Loschwitz and Udo Seidel


OpenStack cloud software is an open source enterprise solution for managing large amounts of data in a data center. In this article, we examine the five most important OpenStack providers and compare them in a test. We start with SUSE [1] and Red Hat [2], then take a look at Ubuntu's approach [3]. Finally, we explain what Mirantis [4] and HP Helion [5] seek to do differently.

Red Hat and SUSE

Anyone who wants to try SUSE Cloud first needs a user account from the SUSE Customer Center. Registering is a bit annoying, but unavoidable. The account has three functions: First, the user can register for the SUSE Cloud test subscription. Second, the account enables access to the necessary patches and updates, for which you must register the computer in the Customer Center, as well. Third, it gives you access to the software.

The software comes in the form of three ISO images, only one of which is needed for normal operation; the other two contain the source code and the debug info packages. When downloading the software, you can request a registration code, which unfortunately did not work every time in our lab. The Linux distributor then had to step in and provide the missing data. Anyone who has already decided to purchase will receive a key through the usual channels. The first (mandatory) steps for building a cloud with SUSE were well documented, and fortunately, implementation was pain free.

The procedure for market leader Red Hat appears similar to that described for SUSE. Initially, you need a valid account on the Red Hat customer portal, the successor to RHN (Red Hat Network). Here, too, this allows you to download the software, access updates and patches, and register the software subscriptions for your account. Unlike SUSE, Red Hat no longer offers ISO images for the OpenStack components. A suitably prepared RHEL 7 fetches and installs the packages via the built-in package manager, Yum. The Red Hat customer account is used to enable the corresponding software repositories with the distributor. Of course, the computer needs to be registered with Red Hat, too. A registration key is not required for the trial subscription; you declare your user account with Red Hat with your request. Red Hat then handles your subscription allocation in the background. The procedure is identical to ordering the commercial Red Hat product.
If you have a VMware infrastructure, you can test OpenStack as a prepackaged appliance. In any case, the first steps are clearly and well documented – just as for SUSE.

Getting started with Red Hat Enterprise Linux OpenStack Platform (RHELOSP) can mean taking completely different approaches, depending on your experience. Good documentation is essential for the product's success, and Red Hat provides it for your initial steps, along with detailed information for later use. Red Hat defines three roles for cloud operations: the end user, the cloud software administrator, and the main administrator. Red Hat distinguishes between a product evaluation and an installation in the enterprise environment. This distinction is also reflected in separate installation documents. The instructions for the evaluation setup require no OpenStack-related knowledge; a complete launch on a greenfield site is also possible. If you already have the necessary knowledge, you can either enjoy the review or skip that section, whereas those with no experience who want to start using the SUSE Cloud can avail themselves of the ample documentation. As with Red Hat, the Linux distributor from Nuremberg defines different roles within the cloud.

SUSE Cloud

The documentation comprises installation instructions and documents for SUSE Cloud administrators and end users. The user can choose between HTML, PDF, or the free EPUB e-book format. Previous knowledge of OpenStack might be helpful, but it is not required; SUSE helps the user from the beginning. Although experts are confronted with a load of redundant OpenStack documentation, it is very useful to have all the necessary information in one place. A mere reference to the corresponding OpenStack help pages would be a little confusing in comparison.

OpenStack release cycles are a challenge for traditional enterprise distributors: A six-month release cycle comes up against distribution support periods of 10 years or more, so the Linux distributor needs to adapt to accommodate the OpenStack project with its cloud solution. Version 3 – based on Havana – came out in February 2014, and support was withdrawn just a year later. Version 4 is based on Icehouse and will be supported until the end of August 2015. SUSE OpenStack Cloud 5 follows this and is based on OpenStack Juno.

A minimal version of the SUSE Cloud requires three computers (see the Installation section). The associated costs are EUR8,300/$12,500 per year. SUSE provides priority support – that is, support at any time of day or night with a guaranteed response time of one hour. The subscriptions for the underlying SUSE Linux Enterprise Server are also included; this is equivalent to a subscription for SLES 11 SP3 at the time of writing. Users need to dip into their pockets for additional computers: EUR2,080/$2,500 for each controller and EUR670/$800 per compute node. SUSE invoices compute nodes per socket pair on a physical server. Additionally, you will have the costs of the underlying operating system. It does not matter whether KVM or Microsoft Hyper-V is used as the hypervisor. At least version 5.1 of vCenter is required for integration in a VMware environment. Clouds do not work without the corresponding data storage capacities, so for a complete setup, you can use the new SUSE storage server [6] product. This is not free, of course: At the present time, 36TB cost $5,000.
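A back-of-the-envelope sketch can make these list prices concrete. The following snippet is our own construction – the function name and parameters are not from SUSE – and simply adds up the annual subscription cost from the figures quoted above:

```python
# Back-of-the-envelope cost sketch for a SUSE Cloud subscription.
# Figures from the article: EUR 8,300/year for the minimal three-node
# cloud, EUR 2,080 per additional controller, and EUR 670 per
# additional compute node (billed per socket pair).
BASE_EUR = 8300          # minimal cloud: admin server, controller, compute node
CONTROLLER_EUR = 2080    # each controller beyond the first
COMPUTE_EUR = 670        # each compute node beyond the first

def annual_cost(controllers=1, compute_nodes=1):
    """Annual subscription estimate; operating system costs excluded."""
    extra_controllers = max(0, controllers - 1)
    extra_compute = max(0, compute_nodes - 1)
    return (BASE_EUR
            + extra_controllers * CONTROLLER_EUR
            + extra_compute * COMPUTE_EUR)

print(annual_cost())                                  # minimal setup: 8300
print(annual_cost(controllers=2, compute_nodes=10))   # 8300 + 2080 + 9*670
```

Remember that SUSE bills compute nodes per socket pair, so a server with two socket pairs counts twice; real quotes will differ.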

Red Hat OpenStack

The significantly different release cycles of Red Hat's own Enterprise Linux and versions of OpenStack affect Red Hat just as they do SUSE. When this article went to press, version 5 of (Icehouse-based) RHELOSP was still current, and version 6 (with OpenStack Juno) had just been released. Currently, it looks like Red Hat is trying to reduce the support gaps between its OpenStack product and Enterprise Linux. The customer receives RHELOSP version 3 (based on Grizzly) with one year of support in production. The next version, with Havana underpinnings, has already been on offer for 18 months. You can get a full three years of support for version 5, which means that it currently terminates at the end of June 2017. That is quite a long time if you consider that OpenStack will be five versions further on by that time. The services included depend in detail on the specific agreement with Red Hat. In the simplest case (i.e., for an evaluation), only one computer is required. Red Hat recommends Enterprise Linux 7 as a good base on the operating system side. In principle, version 6.5 is conceivable, too, but not if you are looking to the future. A valid subscription for the RHEL OpenStack platform also authorizes you to update the underlying in-house Enterprise Linux.

The question of how much a cloud from Red Hat costs is not easy to answer. The Red Hat version of OpenStack is "only" a component of the Red Hat Cloud Infrastructure (RHCI). Moreover, the customer needs to buy virtualization (RHEV), system management (Satellite), and cloud management (CloudForms). The pricing situation seems to be so complicated that even Red Hat contradicts itself in its blog posts on the Internet, which talk of prices from EUR3,000/$4,600. Verbose responses to requests for quotations become amazingly terse when it comes to pricing; in fact, Red Hat does not quote a single price.

As already mentioned, the Red Hat version of OpenStack is very well documented. This also applies to installation, with a separate entry for the evaluation installation. However, this depth of detail is not absolutely necessary. The documentation online [2] is sufficient in many cases.

Red Hat Installation

Red Hat lists four requirements to proceed with installation: a RHEL 7 installation DVD, a network connection to the Internet, a computer, and 30 to 45 minutes of your time. Red Hat divides the installation process into six sections:

• Equip the computer with RHEL 7
• Register the system on the customer portal
• Remove the unrequired software repositories from the Yum configuration
• Install a few auxiliary tools for the package manager
• Adjust the repository entries
• Install available patches and updates

This process can take some time, depending on the size of the software. The final step is to disable the network manager and restart the system.

The Getting Started Guide suggests installing PackStack [7] to get OpenStack up and running easily (Figure 1). The Puppet-based PackStack runs in the background, and the --allinone option lets you set everything up in one fell swoop, without preliminary considerations about the space you will need or the planned network setup. What helps beginners, however, also makes life easier for experienced OpenStack users: You can configure several computers using PackStack and customize the preconfigured default values to suit your own needs. For die-hard admins, instructions are supplied to lead them through a completely manual setup. Either way, the user sees a login screen at the end of the procedure. Fans of shell access should take a look at the /root/keystonerc_admin and /root/keystonerc_demo files.

Figure 1: Installing OpenStack in one fell swoop – thanks to PackStack.

Thanks to additional services such as the Red Hat Enterprise Linux OpenStack Platform Installer or the included CirrOS image [8], the Red Hat installation is thankfully a non-event, especially in view of the complexity of the OpenStack stack. Red Hat Enterprise Linux OpenStack Platform Installer is a kind of control center for deploying OpenStack and its components. Puppet again runs in the background, actively supported by Foreman. A PXE service that boots the clients runs on the admin server. Unfortunately, we fought with a node-provisioning failure [9] during the tests.
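The keystonerc files mentioned above are plain shell fragments that export the standard OpenStack client variables. A minimal sketch – the values and the IP address below are placeholders, not what PackStack actually writes – looks like this:

```shell
# Hypothetical stand-in for /root/keystonerc_admin; only the variable
# names (OS_*) follow the real OpenStack client conventions.
cat > keystonerc_admin <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=changeme
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.0.2.10:5000/v2.0/
EOF

# Source it, and every subsequent CLI call knows whom to authenticate as.
. ./keystonerc_admin
echo "Authenticating as $OS_USERNAME against $OS_AUTH_URL"
```

Sourcing /root/keystonerc_demo instead switches the shell to the unprivileged demo user's credentials.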
The cloud admin needs to study the documentation in detail before getting started. Decisions and configurations made cannot always be corrected later. A Live version is also available; however, it is unclear whether this method is an option with RHEL 7 underpinnings. The hard disk installation drew attention to itself in our lab with its intensive retroactive installation of various software packages: It should be less touchy. Running

yum -y install rhel-osp-installer

indicates that the server will be used as a RHELOSP control center later on.

Figure 2: Future OpenStack machines waiting for their final installation with Red Hat.


Use of the RHELOSP installer is largely intuitive. After the PXE boot of a server, you will find it on the Discovered Hosts page (Figure 2). The next step is to set up a basic operating system and configuration using Puppet modules.

SUSE Installation

SUSE, like Red Hat, describes the installation of its cloud comprehensively and in detail. However, a few considerations are necessary when starting for the first time. For example, SUSE Cloud expects at least three computers for an infrastructure. The OpenStack control center runs separately from compute nodes, storage nodes, or both. This distinction is fairly artificial and unnecessary for your first steps. The third computer is the Admin server. It has two functions: as a PXE boot server that installs SLES on the future OpenStack computers and as a Crowbar [10] master, which builds the OpenStack deployment.

The documentation describes the hardware requirements for the Admin and Control machines in great detail. Future cloud admins need to pay particular attention to the network configuration, which proves to be significantly more complex on SUSE than on Red Hat. You need to keep an eye on no fewer than five subnets (Figure 3), and the selected configuration cannot be changed later. Good planning is required, because corrections only work if you start again from scratch.

Figure 3: The network configuration for SUSE Cloud requires advance planning.

Puzzling out the network configuration seems to be the be-all and end-all of installing the SUSE Cloud. This includes setting up the software repositories. In the simplest version, the Admin computer takes on the role of an SMT (Subscription Management Tool) server. Otherwise, the configuration of a bastion host [11] is almost essential. However, things become much simpler after that: Simply start the corresponding computers, which receive their operating systems via the Admin server, and then wait for further instructions.

Now Crowbar comes into play: You can configure the computers belonging to the SUSE Cloud using what are known as barclamps [12], which express Crowbar functionality. Order is important: A minor bug in the documentation fails to say that RabbitMQ must be set up before Keystone. Fortunately, the software is smarter than the documentation and gives the user the correct instructions (Figure 4). Otherwise, setting up the OpenStack cloud on SUSE proceeds exactly as specified in the installation instructions.

Figure 4: RabbitMQ must be running before Keystone.

In the minimum version, the three computers are installed and ready for use at the end of the day. Our lab revealed no abnormalities. All further control is now the domain of the SUSE controlling instance: When a user enters the server name as a URL in their browser, a login screen appears. Although it has been customized by SUSE, you can quite easily see its OpenStack affiliation. Anyone who prefers to work at the command line will find the necessary shell variables stored under /root/.openrc.

Apart from the necessary network considerations and the fact that the SUSE stack expects more than one computer, installing the SUSE Cloud is painless, with only a few problems running barclamps; however, anyone who also manages their servers using Crowbar gains new opportunities for synergy.

Features

SUSE Cloud is based on the OpenStack Icehouse release and dispenses with newer features such as those from Juno. Because almost all major IT environments already have a hypervisor infrastructure, or at least a corresponding strategy, SUSE fits in well: It supports KVM, VMware, Hyper-V, and Xen straight out of the box. Although LBaaS (Load Balancing as a Service) and FWaaS (Firewall as a Service) are enabled and supported, Trove (DBaaS, Database as a Service) is only included as a technology preview. Ceph (Firefly) is now also fully supported; the user can set up a Ceph network while installing the SUSE Cloud. The corresponding Crowbar barclamps are available and documented accordingly. Integrating an existing installation is, of course, also possible. The default configuration of SUSE Cloud does not enable the latest API version for all components (e.g., Nova and Cinder).

OpenStack from Red Hat is also based on Icehouse. Unlike SUSE, Red Hat supports Trove fully, and Sahara (big data with Hadoop) is already included as a technology preview. The Red Hat version is slightly more choosy on the hypervisor side: The user can only opt for KVM or VMware. With KVM, it matters which RHEL version is running on the host; the Microsoft operating systems are not certified in the current version 7. Support for Gluster on the storage side is no big surprise. The commercial version even expects the Red Hat storage server; the integration of Inktank Ceph Enterprise (ICE) [13] is also the logical consequence of the distributor's acquisition of Inktank. As with SUSE, Red Hat does not enable the latest version of the APIs for Nova and Cinder.
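With five subnets to plan and no way to correct the layout afterward, it pays to sanity-check the address plan before feeding it to Crowbar. A small pre-flight check with Python's ipaddress module – the network names and ranges below are hypothetical examples, not SUSE's defaults – flags overlapping ranges:

```python
import ipaddress

# Hypothetical subnet plan; replace with your own ranges before installing.
plan = {
    "admin":    "192.168.124.0/24",
    "public":   "192.168.126.0/24",
    "private":  "192.168.123.0/24",
    "storage":  "192.168.125.0/24",
    "floating": "192.168.126.128/25",   # deliberately overlaps "public"
}

nets = {name: ipaddress.ip_network(cidr) for name, cidr in plan.items()}
names = list(nets)

# Compare every pair of networks once and collect the collisions.
overlaps = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if nets[a].overlaps(nets[b])]

for a, b in overlaps:
    print(f"overlap: {a} ({nets[a]}) and {b} ({nets[b]})")
```

Catching a collision like this on paper costs seconds; catching it after Crowbar has rolled out the configuration means reinstalling from scratch.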

The First: Ubuntu!

As unlikely as it may seem, Canonical was the first major distributor with OpenStack. This did not happen by chance, but by orders from the top: As early as 2012, Canonical founder Mark Shuttleworth decided that Ubuntu would commit to OpenStack. Some observers appeared very surprised at the time, because until Shuttleworth's sudden change of mind, Eucalyptus – which has since been acquired by HP – was used at Ubuntu for cloud computing. From version 12.04 onward, Ubuntu not only delivered OpenStack Essex, it also offered commercial support for OpenStack within the framework of distribution support. Canonical and Ubuntu have, of course, suffered from this step because, in 2012, the quality of OpenStack was far from that of the platform today. Nevertheless, Shuttleworth thought the step was still worthwhile: The ploy of binding OpenStack to Canonical and Ubuntu has more than paid off in the long term. Hardly any other online guide to OpenStack relies so much on distribution combinations as does the latest Ubuntu version (Figure 5), for which Canonical itself offers OpenStack packages. OpenStack still uses Launchpad – a Canonical product – for bug tracking. Ubuntu is also the only distribution for which packages are available a few hours after the release of a new OpenStack version, which means the combination of Ubuntu and OpenStack offers the greatest flexibility and possibilities for admins.

Canonical provides OpenStack packages for the current and the last supported LTS version; for example, packages of the current OpenStack version are available for Ubuntu 12.04 and 14.04. Ubuntu has assigned members of its server team who take care only of OpenStack integration for these packages. The server team's website [14] provides information on which OpenStack versions are still supported by Ubuntu.

However, Canonical wouldn't be Canonical if there weren't also an Ubuntu cloud. Basically, Ubuntu OpenStack is nothing more than the latest LTS edition as a basis, plus additional components that make installing and using OpenStack possible. The big difference from the do-it-yourself version is that Ubuntu provides the integration, which you would otherwise have to take care of yourself. Preconfigured integration of this kind offers both advantages and disadvantages – testing will prove this later in detail.

Figure 5: Ubuntu OpenStack is, in the best sense, original; it is closely related to what OpenStack provides itself.

Figure 6: The OpenStack dashboard in Ubuntu 14.04.

Ubuntu Quick-Start with DevStack

The fastest way to get a solid OpenStack base is to combine a classic LTS installation and the Ubuntu OpenStack packages. Canonical tries to keep its packages as close as possible to the originals published by OpenStack itself; Canonical only provides a typical Ubuntu theme for the dashboard (Figure 6). The downside to OpenStack straight from the manufacturer is that you are left to complete the entire integration (e.g., the configuration of individual services) yourself. This might be useful the first time as a learning experience, but it is certainly not good for productivity. The deployment tool DevStack [15] is popular because it launches a clean OpenStack and equips it with configuration files at the same time. Anyone who combines an LTS installation with the OpenStack packages from Ubuntu can at least avoid the Puppet modules for the OpenStack parts. Writing a site manifest might still be required, but you will receive a reproducible setup for your pains. No provisions for high availability (HA) are made in this version. At least Ubuntu provides some of the necessary tools, such as MaaS (Metal as a Service, Canonical's bare-metal deployment), free of charge. Canonical's own deployment tool is available in the form of Juju, which carries out tasks similar to those performed by Puppet or Chef.

Questions arise as to whether it is really useful to take a freshly installed Ubuntu and then jazz it up using MaaS, OpenStack, Juju, and other components when you could just buy the whole thing ready-made, because Ubuntu OpenStack is exactly that: a complete OpenStack (i.e., an OpenStack product like those provided by Red Hat and SUSE). The big difference between the Red Hat and SUSE versions and Ubuntu's is that Ubuntu also provides the OpenStack packages separately, so you can tinker with them if you want. If you're not inclined to tinker, turn to Ubuntu OpenStack, and you'll get similar results. Ubuntu OpenStack essentially contains four components: the most recent Ubuntu LTS, MaaS, Juju, and Landscape, the central management tool that Ubuntu also uses to handle distribution support.

Unlike the SUSE or Red Hat OpenStack solutions, Ubuntu OpenStack does not yet come as an installation CD. Instead, Ubuntu publishes instructions on how users can get their cloud up and running fast. Following the instructions worked very well in our lab, but it is not exactly suitable for the enterprise. It may be only a matter of time before Canonical distributes the product on CD or USB flash drive and adds a colorful installer to make things even easier. In terms of price, Canonical is certainly oriented toward the upper end, as reflected in the bill: If you have more than 10 physical and 10 virtual machines, the smallest package covers up to 100 computers and costs EUR80,000/$75,000 per year.

What many admins don't know is that Ubuntu offers a whole bouquet of services for OpenStack. For example, with BootStack, Canonical hosts an OpenStack cloud for the user. The customer then takes over a ready-to-use cloud on rented computers, so there are no investment costs for hardware; however, the customer has no physical way to access their data storage device. Anyone who is interested in OpenStack training is right to use Canonical: The Orange Box [16] is a microcluster consisting of 10 Intel Next Unit of Computing (NUC) boards and corresponding additional hardware. Ubuntu markets the box as a portable mini-cloud on which several students can set up their own cloud during training. Ubuntu does not, however, explain why three virtual machines within VirtualBox would not suffice for this. In our eyes, the Orange Box is more of a marketing gimmick, but a successful one nonetheless. Canonical OpenStack training has a good reputation; you cannot go wrong there.

Mirantis Fuel

Unlike the previous products, Mirantis is not a Linux distribution; it also doesn't have the experience that distinguishes the three others. However, Mirantis emerged almost exactly at the start of the OpenStack hype and has grown with OpenStack since then. Initially active as a training provider, to this day Mirantis earns a large part of its income by organizing training in all parts of the world. The company wants to be a big player in all matters concerning OpenStack and to claim a piece of the large OpenStack cake.

This would not be possible if all they offered were training, especially because all distributors offer training for their own OpenStack products. In 2013, Mirantis therefore scheduled training in places where admins struggled most: the OpenStack installation. Those who wanted to build an OpenStack cloud at the beginning of 2013 had to adjust their configuration at the command line. Mirantis subsequently developed Fuel (Figure 7), which provides the resources to start up a cloud based on Red Hat, CentOS, or Ubuntu.

Figure 7: The Mirantis OpenStack installer Fuel, which, among other things, performs health checks.

The most important difference between Red Hat, Ubuntu, SUSE, and Mirantis is therefore that Mirantis sees itself as a


provider across distributions and offers its services on both Red Hat and compatible systems such as Ubuntu. If you invest more, you also receive the support appropriate for the OpenStack product. For the magic to work, Mirantis contributes its own OpenStack packages that Fuel automatically installs on Ubuntu or Red Hat. These packages could be partly based on those the manufacturers provide themselves, but they often differ enough to be incompatible. Although the vendor aggressively promotes the fact that Mirantis is the only OpenStack product without vendor lock-in, admittedly, customers are bound to Mirantis; therefore, know that you will definitely be bound to some company if you want to use OpenStack.

Mirantis got quite a few things right with Fuel. The tool is publicly available along with the source and is being developed as a community project within the OpenStack project, so anyone who wants to do so can join in. The main aim is always to make OpenStack easy to install so that admins can set up a workable OpenStack quickly. Under the hood, Fuel relies on many components that are already known from other projects. For example, Puppet looks after configuration management within a Fuel cloud. The Mirantis Nailgun is at the heart of the solution: At its core, this component is a RESTful API implemented in Python. Nailgun accepts external commands in Fuel on the one hand and initiates the processes that implement those commands in OpenStack on the other. Fuel's web interface exposes the important functions to the user and is directly connected to Nailgun in the background. Mirantis does an excellent job in this respect, because the web interface is straightforward yet so flexible that all the important controls and switches are available. Fuel does not abandon you after the installation and provides a few extra features, such as a health check for the services around OpenStack.
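Because Nailgun is a RESTful API, any HTTP client can drive Fuel from a script. The sketch below only illustrates the principle: the host, port, and /clusters path are hypothetical placeholders, not Fuel's documented endpoints.

```python
import json
import urllib.request

# Hypothetical illustration of sending a command to a RESTful
# deployment API like Nailgun; the URL below is a placeholder.
def build_cluster_request(api="http://fuel.example.com:8000/api",
                          name="demo", release_id=1):
    """Prepare a POST request that would ask the API to create a cluster."""
    body = json.dumps({"name": name, "release": release_id}).encode()
    return urllib.request.Request(
        f"{api}/clusters",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_cluster_request()
print(req.get_method(), req.full_url)
```

Sending the request (urllib.request.urlopen(req)) and inspecting the JSON response would complete the round trip against a real deployment API.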
Fuel also has a plugin layer through which external functionality can be added. This is an invitation to other manufacturers to integrate their products with Fuel. Network manufacturers can set up Fuel so that it is possible to communicate directly with their own devices; Cisco and Juniper have already made use of these possibilities. The same applies to storage: Ceph is an outstanding example of connecting external storage types. The certification that Mirantis supplies for such retrofit plugins on request is an official blessing that a plugin is implemented in a technically impeccable way and delivers the promised performance. All told, Fuel leaves a positive impression.

As an aside, Red Hat is still officially listed as a Mirantis investor, even though the company now invests massively in the OpenStack market itself. Mirantis countered the change in Red Hat's strategy with the help of Canonical. The waves have now calmed down a little, but Mirantis and Red Hat will probably never be friends. However, the two providers have one thing in common: Like Red Hat, Mirantis refused to quote reference prices, despite persistent requests.

HP Helion

When Hewlett-Packard (HP) announced the purchase of Eucalyptus in September 2014, it surprised the cloud community, because HP had zeroed in on OpenStack very early. Now, there is hardly an OpenStack conference without HP's official presence, often as a sponsor, and HP also plays a significant part in the OpenStack design summits.

In addition to the Helion Public Cloud that HP operates, Helion is available as a downloadable image. The community version is free but obviously doesn't come with any form of support. The commercial version of Helion does come with support and is aimed at large companies. HP follows the pattern of SUSE and Red Hat: Helion is almost perfectly suited to conjuring up an OpenStack cloud out of nothing. Of course, this involves more than the mere OpenStack installation: As a hardware manufacturer, HP takes the subject of hardware to heart, so bare-metal deployment absolutely plays a role. TripleO [17], which not only uses OpenStack as a virtualization layer but also manages the hardware nodes of a cloud with OpenStack, comes directly from HP.

Figure 8: At first sight, the Helion Horizon component only differs from other Horizon implementations in terms of colors.

HP Helion uses the TripleO deployment model: Create the seed node, which is a bootable image deployed in a VM instance; then create the undercloud, which is a single-node OpenStack installation on a server that is used to maintain the overcloud – the functional cloud available to users that contains the elements for HA. A minimal installation requires at least eight nodes; the seed host is another server (i.e., the cloud controller). According to HP, it should run on Ubuntu 14.04, so Ubuntu admins should have an easier job than those who are used to RPM-based distributions. As you can see, Helion differs substantially from other OpenStack products, which is not necessarily good or bad; HP has invested a lot of work in TripleO, and the solution definitely has its



strengths. Unlike SUSE, Red Hat, and Ubuntu, for which bare-metal deployment is always a bonus on top of the normal OpenStack, this component is an integral part of the HP platform design. From a user perspective, the overcloud hardly differs from implementations by the other manufacturers. Although the Horizon OpenStack dashboard in HP Helion comes with its own theme (Figure 8), anyone who has used Horizon will get along fine. When evaluating OpenStack solutions, Helion is worth considering – especially for companies in which HP is already a hardware supplier – given that the TripleO undercloud provides useful additional features, such as automatic BIOS updates for all cloud nodes. Helion is affordable, as well: Support packages start at EUR1,500/​$1,200 per year per server.


Conclusions

Ubuntu is the top dog when it comes to OpenStack. Canonical proudly points out that 55 percent of all OpenStack deployments are based on Ubuntu. Anyone who wants a quick start can be up and running within a few hours with Ubuntu OpenStack. Linux veterans SUSE and Red Hat bundle their OpenStack offerings with other products or embed them in their existing portfolios. Getting started is slightly easier with Red Hat than with SUSE, but sooner or later you will arrive at a pretty similar hardware infrastructure. In the end, a fondness for Puppet or Chef could tip the balance in choosing an OpenStack solution. If you're thinking outside the box, Mirantis and HP might be worth considering.

Info
[1] SUSE Cloud: http://www.suse.com/products/suse-cloud/
[2] Red Hat OpenStack Platform: http://access.redhat.com/products/red-hat-enterprise-linux-openstack-platform
[3] Ubuntu OpenStack: http://www.ubuntu.com/cloud
[4] Mirantis Fuel: https://software.mirantis.com
[5] HP Helion: http://www8.hp.com/us/en/cloud/helion-overview.html
[6] SUSE Storage Server: https://www.suse.com/products/suse-enterprise-storage/
[7] PackStack: https://wiki.openstack.org/wiki/Packstack
[8] CirrOS: https://launchpad.net/cirros
[9] RHELOSP error report: http://bugzilla.redhat.com/show_bug.cgi?id=1174381
[10] Crowbar: http://crowbar.github.io/home.html
[11] Bastion host: http://en.wikipedia.org/wiki/Bastion_host
[12] Barclamps: https://github.com/crowbar
[13] Ceph as Enterprise product: http://www.inktank.com/enterprise/
[14] Ubuntu Server Team: https://wiki.ubuntu.com/ServerTeam/OpenStack
[15] DevStack: http://docs.openstack.org/developer/devstack/
[16] Ubuntu's Orange Box: https://insights.ubuntu.com/wp-content/uploads/DS_The_Orange_Box.pdf


[17] TripleO: http://docs.hpcloud.com/helion/openstack/1.1/services/tripleo/overview/


Reviews MakuluLinux

MakuluLinux MCDE 2.0 and Xfce 7.1

News from Africa

Two desktop environments and two different distributions as a base – introducing MakuluLinux. By Ferdinand Thommes



Like other Linux variants designed for out-of-the-box usability, Makulu comes pre-installed with the necessary codecs and drivers for a smooth multimedia experience on a variety of hardware platforms. Makulu also comes pre-installed with the Steam gaming platform. According to the Makulu developer, you can "… simply log in to Steam and start playing your favorite game titles." Wine is also pre-installed. The Makulu website promises that, "… installing

Figure 1: Despite being reduced to the essentials, the application menu in MakuluLinux Cinnamon 2.0 reveals a wealth of software.




Most Linux users associate Africa with Mark Shuttleworth, the founder of Ubuntu, but other Linux fruits thrive on African themes: The exotic-sounding MakuluLinux takes its name from a Zambian mountain. Maintainer Jacque Raymer publishes Makulu, which means "big" in the language of the Zulu, for a number of desktops: KDE, Xfce, Cinnamon, and a soon-to-be-released LXDE/Xfce hybrid. Start-up Linux distros often fall into predictable patterns, but MakuluLinux comes across as a truly original vision. Makulu's sophisticated design, inspired collection of extras, and thoughtful package selection help it stand out against the field of competing distributions. According to the Makulu website [1], the ambitious goal of the project is to provide "… a sleek, smooth, and stable user experience that is able to run on any computer from old to new, from netbooks to notebooks, desktops to server stations."


Figure 2: The installation wizard in the Debian-based Cinnamon edition of MakuluLinux offers three different installation routines.

Windows software has never been easier; simply double-click your installer or exe files and they will operate in Linux much the same way they do in Windows." Thus far, Makulu has always been based on Debian "Testing." During the release preparations for Debian 8 "Jessie," the Testing repository was frozen in November. Because very little is happening in the Debian camp right now as a result, Raymer followed user requests and published Makulu with the Xfce desktop environment on the basis of an Ubuntu offshoot. More releases on this basis are to follow, according to statements by the developer, but a return to Debian "Testing" is also planned.

Raymer brings extraordinary energy to the Makulu project, and new features appear frequently. As this article went to press, Raymer announced an exciting new tool called the Makulu Constructor [2], which lets you easily clone a customized system to create a UEFI-ready Live boot ISO, which you can then install on another computer. The first 64-bit edition of Makulu arrived in March 2015.

MakuluLinux Cinnamon 2.0

MCDE (MakuluLinux Cinnamon Debian Edition) [3] was published in January 2015. Although MakuluLinux releases have always been slightly bulky, weighing in at around 2GB, the developer has cut back this time. By significantly reducing the number of pre-installed applications, the developer reduced the ISO image to 1.2GB. Nevertheless, the software selection should cover most applications (Figure 1).

Figure 3: Disk partitioning is handled either graphically using GParted or in the terminal using Cfdisk during the MakuluLinux MCDE install.

Figure 4: After the install, the Driver check searches for updated drivers for the video card or the WLAN module.

The login manager used here is GDM3. Gaming fans will be happy to see that the system does not just offer the usual selection of programs, but specifically caters to their daily needs with a version of Wine [4] specially patched with D3D and CSMT for gaming,

including Winetricks, PlayOnLinux [5], and Steam [6]. MakuluLinux comes as a hybrid ISO for DVD or USB flash drive in the form of a Live medium with an installer. MCDE is a 32-bit system, but a new 64-bit version appeared as this issue went to press. Live mode boots to a login screen where typing the password, makulu, takes you to the tidy desktop. The installation routine, which you can call from there, handles localization of the keyboard layout and the date and time. This was not the case with MCDE 1.1, but a bug report quickly led to a remedy. In terms of language support, the maintainer has spared no effort, integrating many Berber languages and several dialects and ethnic idioms from around the world into Makulu to reflect its origins.

The installation wizard borrowed from Sparky Linux provides three modes from which to choose (Figure 2): graphical, graphical with terminal, and an advanced installer that runs in a terminal window and additionally prompts the user for information via dialog boxes. In this mode, you can disable many languages and dialects for an English-language installation, which would otherwise take unnecessary time and occupy disk space for language data and translations. Here, too, Makulu is much improved compared with version 1.1. In all three modes, partitioning now works perfectly (Figure 3), so there is nothing to prevent the install. On our lab system, the installation completed in about five minutes. Restarting the freshly installed system caused very few worries, as did the system updates (Figure 4). The Cinnamon install takes only a modest amount of RAM (Figure 5), and when you begin customizing your environment, you'll find the system and desktop configuration tools in the Control Center (Figure 6).

Everything Is Fine

MakuluLinux uses a PAE-enabled kernel [7] version 3.16.7 and Systemd. The system starts in VirtualBox in around 6.5 seconds and feels very agile. The source list contains the official Debian sources for the "Testing" branch, as well as the sources for Debian Multimedia, Skype, Opera, and several Google services. Additionally, a Makulu repository contains the developer's programs.
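Whether a given CPU supports PAE is easy to check before you install: the kernel exposes the CPU feature flags in /proc/cpuinfo. The following is a generic sketch, not a Makulu-specific tool:

```shell
# Look for the "pae" flag in the CPU feature list;
# /proc/cpuinfo prints one "flags" line per logical CPU.
if grep -qw pae /proc/cpuinfo; then
    echo "PAE supported"
else
    echo "PAE not supported"
fi
```

On most hardware from the last decade the first branch is taken; only early Pentium-class machines lack the flag.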



The software selection attaches great importance to multimedia apps and the matching codecs, but it does not lack software for other applications. For office applications, it relies on WPS Office (formerly Kingsoft Office) [8]. Although it impresses with very good compatibility with Microsoft formats, it cannot handle the Open Document format of LibreOffice/OpenOffice. There is even a WPS app for Android. If you prefer LibreOffice, you can install it, along with tens of thousands of other applications, via the package manager. Makulu's choice of graphical front end for this task is Synaptic. For the document viewer, Makulu uses FoxitReader, thus choosing freeware over free software. Google Chrome is on board for Internet access, Thunderbird acts as the mail client, and Pidgin is available for instant messaging. The application menus Tools, Settings, and System management are jam-packed with apps such as Adobe Flash Player, the Variety wallpaper changer, the Leafpad text editor, the GDebi package installer, and many other useful programs and utilities. In addition to the configurable main menu on the left edge of the panel, the Slingshot program launcher is located on the right side as a menu option that works well on tablets with a touchscreen [9].

Because the sources are integrated out of the box, proprietary applications such as the Opera browser or Skype install just as easily as Google Music Manager, the Talk plugin for Google Hangouts, or Google Earth. Because the newer versions of Opera are only available for 64-bit systems, Opera v12.16 is installed here. This will delight fans of the original Opera browser, because this version still includes the built-in mail client and is based on Opera's own Presto engine. As of Opera 15, the Norwegian browser has relied on Google's render engine Blink and the source code from Chromium. However, because Opera only releases security updates for the latest edition of the browser, you might want to consider installing the Opera clone Vivaldi [10].

Figure 5: Immediately after the install, Makulu occupies a little less than 6GB on the hard disk.

Figure 6: In the Makulu Control Center, you will find the system settings and configuration tools.

Figure 7: The unique appearance of both Makulu editions – the Xfce edition is shown here – manifests itself in strong colors and imaginative wallpapers.

MakuluLinux 7 with Xfce

Shortly after the Cinnamon Edition of MakuluLinux was updated, the Xfce 7 series followed. After many user requests, the Xfce desktop environment (Figure 7) is now based on the Ubuntu 14.04 LTS version, thus inheriting the advantage of long-term support to the year 2019. Because of the upcoming release of Debian 8 "Jessie" and the associated change freeze in the development branch, Debian is currently not a good choice for derivatives anyway. As usual, Makulu does not simply clone the original; in fact, the current Makulu 7.1 Xfce has a unique look and feel. The installer comes from Ubuntu and shows no weaknesses in the partitioning step. The option inherited from Ubuntu of encrypting the hard disk during installation with Cryptsetup worked well in our lab. In this case, however, you should not lose the password; otherwise, the data on your hard disk will remain encrypted and inaccessible forever.

The Xfce variant is attractive in terms of both visuals and software engineering. The program selection matches that of the Cinnamon Edition in many ways (Figure 8), as displayed in a Whisker Menu [11]; the Synapse program launcher resides at the other end of the panel. Additionally, you can pin Docky [12] at the top of the screen. Other options include Compiz 1.9.2 and the Emerald Desktop Effects window dresser – if your computer has enough power. To ensure up-to-date software, the distribution relies on a dozen Ubuntu PPA repositories. Y PPA Manager is on board to manage them. Makulu's browser of choice for Xfce is Firefox 35.1. Like its counterpart Chrome in Cinnamon, it comes with some plugins already in place, including Adblock, a YouTube downloader, and notification modules for Facebook, Twitter, and Gmail. The design of the desktop environment is extravagant and colorful, as in the Cinnamon variant, with some overlap in terms of backgrounds and themes. Systemd is not used in this distribution: Ubuntu is waiting until the next version, 15.04, to introduce it. Nevertheless, startup is quite fast, and the system lets you work smoothly.

Figure 8: After the install, Steam reports outstanding updates, and PlayOnLinux is ready for your first game.

Conclusions

Jacque Raymer mostly develops Makulu alone, which means the Makulu project succeeds or fails with him. We found very little to criticize in our assessment of the Makulu MCDE 2.0 and Makulu 7.1 Xfce releases. Although the installation routine was buggy at first, the developer quickly took care of the problem. The PAE-enabled kernel means the 32-bit system can address more than 3.2GB of RAM; if you have an older system, you need to be sure it supports PAE. Makulu now has an alternative 64-bit version, although it didn't arrive in time to be included in this review. The variant with Cinnamon is better suited for users with recent hardware, because the desktop needs the 3D capabilities of the graphics card to display desktop effects. In the Xfce edition of the distribution, you can switch off Compiz in the menu; it thus works well on older computers. It should be noted that a new Makulu version for KDE has been released that uses Compiz instead of KWin as its window manager.

In addition to its general usability, MakuluLinux also offers a unique look and thoughtful design. Raymer hopes to alternate between Debian "Testing" and Ubuntu as the basis in the future, so future editions should have something for everyone.

Info
[1] MakuluLinux: http://makululinux.com/home/
[2] Makulu Constructor:
[3] MakuluLinux Cinnamon: http://makululinux.com/cinnamon/
[4] Wine: https://www.winehq.org/pipermail/wine-devel/2013-September/101106.html
[5] PlayOnLinux: https://www.playonlinux.com/en/
[6] Steam: http://steamcommunity.com
[7] PAE: http://en.wikipedia.org/wiki/Physical_Address_Extension
[8] WPS Office: http://www.wps.com/
[9] Install Slingshot: http://www.noobslab.com/2012/02/install-slingshot-launcher-mac-os-style.html
[10] Vivaldi: https://vivaldi.com
[11] Whisker Menu: http://gottcode.org/xfce4-whiskermenu-plugin/
[12] Docky: http://wiki.go-docky.com/index.php?title=Welcome_to_the_Docky_wiki

Figure 9: Clicking the Variety option will change the wallpaper periodically on request. To do so, the small tool loads the images from the Internet.



Features How Does ls Work?

Anatomy of a simple Linux utility

How Does ls Work?

A Linux utility such as ls might look simple, but many steps happen behind the scenes between the time you type "ls" and the time you see the directory listing. In this article, we look at these behind-the-scenes details. By Amit Saha

Author

Amit Saha is a Software Engineer at Red Hat in Brisbane, Australia. He is working on his second book, Doing Math with Python, and writes on various Linux and programming topics. He blogs at http://echorand.me.



Even if you don't have all these skills, following along will still give you some insight into the inner workings of a program on Linux. This article assumes you are running Linux kernel 3.18 [5] with the debug symbols for Bash installed, that a local copy of the 3.18 kernel source is available, and that SystemTap is set up properly. In the next section, I will describe how to configure your system to follow this article.

Setting Up Your System

To install the Bash debug symbols on Fedora 21, you can use the command:

# debuginfo-install bash

If you do not have the GNU debugger gdb installed, you can install it using yum install gdb. The kernel 3.18 source can be downloaded from The Linux Kernel Archives [6]; alternatively, if you prefer to clone the kernel source, check out the v3.18 branch. SystemTap can be installed on Fedora 21 with:


# yum install systemtap-devel systemtap-client
# stap-prep


The last command, stap-prep, installs the kernel debug packages needed for your running kernel.

Methodology

Before getting started, it is worthwhile discussing the methodology I adopted for this investigation. The first step is to understand how the program – an executable script or a binary program – corresponding to a command entered on the command line is found. By placing breakpoints at key locations in Bash, you can halt the execution of Bash and examine key variables to get an idea of what the program is processing at that point. The next section makes this step clearer with an example that uses the ls program.

Once you know how the program to be executed is found, you want to know how the program itself works. System calls are the entry point for a program into kernel space; the program invokes them either directly or via library function calls. After determining the key system call or calls, you then look into the kernel source code to find the function implementing that system call. SystemTap scripts can then trace the entry and exit from these functions, illustrating how



What really happens when you enter a program's name in a terminal window? This article is a journey into the workings of a commonly used program – the ubiquitous ls file listing command. This journey starts with the Bash [1] shell finding the ls program in response to the letters ls typed at the terminal, and it leads to a list of files and directories retrieved from the underlying filesystem [2]. To recreate these results, you'll need some basic understanding of standard debugging techniques using the GNU debugger (gdb), some familiarity with the SystemTap system information utility [3] [4], and an intermediate-level understanding of C programming code. SystemTap is a scripting language and an instrumentation framework that allows you to examine a Linux kernel dynamically.

the control flow occurs to and from kernel space. I adopt this methodology to understand how the ls program works, but the same techniques should be relevant for any program.

First Steps: Typing ls

When I type ls, Bash first searches for the binary corresponding to the command in the locations listed in the PATH environment variable. You can chart this action using the GNU debugger (gdb); you'll either need the debug symbols for Bash installed or a locally built copy of Bash with debug enabled. To begin, start a gdb session and pass in the bash binary:

> gdb bash

Place a breakpoint in the search_for_command() function and start bash, passing in ls as the argument (Listing 1). As you can see from line #0 in Listing 1, the argument pathname refers to the string ls, which now has to be searched for in the locations specified by the user's $PATH variable. My $PATH is as follows:

Listing 1: Placing Breakpoints in Bash Source
01  (gdb) b search_for_command
02  Breakpoint 1 at 0x46ce80: file findcmd.c, line 307
03  (gdb) run -c ls
04
05  Breakpoint 1, search_for_command (pathname=0x707140 "ls", flags=1) at
06  findcmd.c:307
07  307
08  (gdb) bt
09  #0  search_for_command (pathname=0x707140 "ls", flags=1) at
10  findcmd.c:307
11  #1  0x000000000041f69a in execute_disk_command (cmdflags=64, fds_to_close=0x705c10, async=0, pipe_out=-1, pipe_in=-1, command_line=0x7071e0 "ls", redirects=0x0, words=0x707200) at execute_cmd.c:4918
12  #2  execute_simple_command (simple_command=<optimized out>, pipe_in=pipe_in@entry=-1, pipe_out=pipe_out@entry=-1, async=async@entry=0, fds_to_close=fds_to_close@entry=0x705c10) at execute_cmd.c:4240
13  #3  0x00000000004362cc in execute_command_internal (command=0x705bc0, asynchronous=asynchronous@entry=0, pipe_in=pipe_in@entry=-1, pipe_out=pipe_out@entry=-1, fds_to_close=fds_to_close@entry=0x705c10) at execute_cmd.c:799
14  #4  0x00000000004771ab in parse_and_execute (string=<optimized out>, from_file=from_file@entry=0x4b3050 "-c", flags=flags@entry=4) at evalstring.c:387
15  #5  0x000000000042238e in run_one_command (command=<optimized out>) at shell.c:1358
16  #6  0x00000000004212af in main (argc=3, argv=0x7fffffffdc18, env=0x7fffffffdc38) at shell.c:705

> echo $PATH
/usr/lib64/qt-3.3/bin:/usr/lib64/ccache:/bin:/usr/bin:/usr/local/bin:/usr/local/sbin:/usr/sbin:/home/asaha/.local/bin:/home/asaha/bin

I now place a breakpoint in the find_user_command_in_path() function to see how Bash searches through all the locations present in $PATH (Listing 2). At the end of Listing 2, /usr/bin/ls has been found (/bin is a symlink to /usr/bin on Fedora 21); the function shell_execve() invokes the execve() system call to execute the command. The stat() system call is invoked to check the existence of the executable corresponding to ls in the path locations. Listing 3 shows a snippet of the calls to stat() for the first three path locations. A closer look at the kernel reveals how the stat() call works. From here on out, all source references are relative to the top-level kernel source directory. The stat() system call is defined in fs/stat.c (Listing 4). The vfs_stat() function in turn is defined as shown in Listing 5. The function vfs_fstatat() makes use of the inode data structures to check for the file's existence and, if it exists, retrieves the file's attributes.

Listing 2: Searching for the Program in $PATH
01  (gdb) b find_user_command_in_path
02  Breakpoint 2 at 0x46c850: file findcmd.c, line 557.
03
04  (gdb) cont
05  Continuing.
06  Breakpoint 3, find_in_path_element (name=name@entry=0x707140 "ls", path=path@entry=0x707280 "/usr/lib64/qt-3.3/bin", flags=flags@entry=36, dotinfop=dotinfop@entry=0x7fffffffd650, name_len=<optimized out>) at findcmd.c:472
07  472       find_in_path_element (name, path, flags, name_len, dotinfop)
08
09  (gdb) cont
10  Continuing.
11  Breakpoint 3, find_in_path_element (name=name@entry=0x707140 "ls", path=path@entry=0x707280 "/usr/lib64/ccache", flags=flags@entry=36,
12  dotinfop=dotinfop@entry=0x7fffffffd650, name_len=<optimized out>) at findcmd.c:472
13  472       find_in_path_element (name, path, flags, name_len, dotinfop)
14
15  (gdb) cont
16  Continuing.
17  Breakpoint 3, find_in_path_element (name=name@entry=0x707140 "ls", path=path@entry=0x707280 "/bin", flags=flags@entry=36,
18  dotinfop=dotinfop@entry=0x7fffffffd650, name_len=<optimized out>) at findcmd.c:472
19  472       find_in_path_element (name, path, flags, name_len, dotinfop)
20  (gdb) cont
21  Continuing.
22  process 11762 is executing new program: /usr/bin/ls

Listing 3: stat() Calls to Path Locations
01  stat("/usr/lib64/qt-3.3/bin/ls", 0x7fff8c535c40) = -1 ENOENT (No such file or directory)
02  stat("/usr/lib64/ccache/ls", 0x7fff8c535c40) = -1 ENOENT (No such file or directory)
03  stat("/bin/ls", {st_mode=S_IFREG|0755, st_size=123088, ...}) = 0
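The search that Listings 2 and 3 trace can be imitated in a few lines of shell. This is a simplified sketch – real Bash also consults its command hash table and applies permission checks – but it performs the same walk over $PATH with one existence test per candidate directory:

```shell
# Walk $PATH the way Bash does when looking up "ls": test each
# directory in order and stop at the first executable match.
cmd=ls
old_ifs=$IFS
IFS=:
for dir in $PATH; do
    if [ -x "$dir/$cmd" ]; then
        echo "found: $dir/$cmd"
        break
    fi
done
IFS=$old_ifs
```

The first match printed corresponds to the path that, as we will see next, is handed to the execve() system call.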



Listing 4: Definition of stat() System Call
01  SYSCALL_DEFINE2(stat, const char __user *, filename, struct __old_kernel_stat __user *, statbuf)
02  {
03      struct kstat stat;
04      int error;
05
06      error = vfs_stat(filename, &stat);
07      if (error)
08          return error;
09
10      return cp_old_stat(&stat, statbuf);
11  }

Listing 5: Definition of vfs_stat()
01  int vfs_stat(const char __user *name, struct kstat *stat)
02  {
03      return vfs_fstatat(AT_FDCWD, name, stat, 0);
04  }

To see what is happening in kernel space when the stat() function call is invoked, I use the SystemTap script in Listing 6 to trace the calls to and from the vfs_fstatat() function. The vfs_fstatat() function has the prototype:

int vfs_fstatat(int dfd, const char __user *filename,
                struct kstat *stat, int flag)

The parameter filename is what I am interested in here. When you run the SystemTap script, you will see the lines shown in Listing 7. Now, execute the ls command in another terminal window. You should see the lines shown in Listing 8 in the SystemTap window. At this stage, I have a fairly reasonable idea of what happens in userspace and kernel space so that the location of the program to which ls corresponds is found. Now, I am ready to see how the binary is executed.

Listing 6: Tracing Calls To and From vfs_fstatat()
01  probe kernel.function("vfs_fstatat@fs/stat.c").call
02  {
03      # we are only interested in calls to vfs_fstatat() from "bash"
04      if(execname() == "bash")
05          printf("%s -> %s %s\n", thread_indent(-1), probefunc(), kernel_string($filename));
06  }
07
08  probe kernel.function("vfs_fstatat@fs/stat.c").return
09  {
10      if(execname() == "bash")
11          printf("%s <- %s\n", thread_indent(-1), probefunc());
12  }
13
14  probe ...
15  {
16      ...
17  }

Listing 7: Output of Script in Listing 6
01  # stap -v find_ls.stp
02  Pass 1: parsed user script and 174 library script(s) using
03  448852virt/271248res/6248shr/267632data kb, in 1600usr/120sys/1721real
04  ms.
05  Pass 2: analyzed script: 3 probe(s), 17 function(s), 5 embed(s), 2
06  global(s) using 519976virt/341516res/7620shr/338756data kb, in
07  700usr/100sys/801real ms.
08  Pass 3: using cached
09  /root/.systemtap/cache/40/stap_40cbb339787d6b1aad27f7870ca767f0_6441.c
10  Pass 4: using cached
11  /root/.systemtap/cache/40/stap_40cbb339787d6b1aad27f7870ca767f0_6441.ko
12  Pass 5: starting run.

How Does the Shell Execute ls?

Once the file corresponding to the typed command is found, a call to the execve() system call is made from the function shell_execve() in the file execute_cmd.c. The call is defined in the kernel (fs/exec.c) as shown in Listing 9. Effectively, the do_execve() function does the work. do_execve() has the following prototype:

int do_execve(struct filename *filename,
              const char __user *const __user *__argv,
              const char __user *const __user *__envp)

Using the SystemTap script shown in Listing 10, I place a probe at the do_execve() function; struct filename is defined in the kernel source as follows:

struct filename {
    const char *name;         /* pointer to actual string */
    const __user char *uptr;  /* original userland pointer */
    struct audit_names *aname;
    bool separate;            /* should "name" be freed? */
};

Hence, I use $filename->name to retrieve the filename of the binary that is being executed. Invoking this SystemTap script with the following:

stap -v do_execve.stap

and executing ls in another terminal window produces:

0 bash(26013): -> SyS_execve /bin/ls

The process ID of the executing ls process is 26013, and the binary corresponding to the command that is executed is /bin/ls. Several other things

have to happen before the binary /bin/ls is executed. For example, the program has to be read from the disk, its binary format needs to be found, and the appropriate handling code must read the binary into memory. The SystemTap script in Listing 11 probes some of the key functions that show how the /bin/ls binary is loaded into memory. If you run the SystemTap script and execute the ls command in another window, you will see output similar to Listing 12 in the SystemTap window. The search_binary_handler() function iterates through the list of currently supported binary formats and, once it finds that the executable is a supported format, proceeds to call the appropriate function to load the binary.

Listing 8: Output of the SystemTap Script in Listing 6
0    bash(28736): -> vfs_fstatat .
106  bash(28736): -> vfs_fstatat /usr/lib64/qt-3.3/bin/ls
118  bash(28736): <- SYSC_newstat
125  bash(28736): -> vfs_fstatat /usr/local/bin/ls
134  bash(28736): <- SYSC_newstat
141  bash(28736): -> vfs_fstatat /bin/ls
155  bash(28736): <- SYSC_newstat
162  bash(28736): -> vfs_fstatat /bin/ls
170  bash(28736): <- SYSC_newstat
201  bash(28736): -> vfs_fstatat /bin/ls
213  bash(28736): <- SYSC_newstat
245  bash(28736): -> vfs_fstatat /bin/ls
253  bash(28736): <- SYSC_newstat
259  bash(28736): -> vfs_fstatat /bin/ls
267  bash(28736): <- SYSC_newstat
283  bash(28736): -> vfs_fstatat /bin/ls
290  bash(28736): <- SYSC_newstat

Listing 9: Kernel Definition of execve()
01  SYSCALL_DEFINE3(execve,
02      const char __user *, filename,
03      const char __user *const __user *, argv,
04      const char __user *const __user *, envp)
05  {
06      return do_execve(getname(filename), argv, envp);
07  }

Listing 10: Tracing Calls to and from do_execve()
01  probe kernel.function("do_execve@fs/exec.c")
02  {
03      if(execname() == "bash")
04          printf("%s -> %s %s\n", thread_indent(1), probefunc(), kernel_string($filename->name));
05  }
06
07  probe kernel.function("do_execve@fs/exec.c").return
08  {
09      if(execname() == "bash") printf("%s <- %s\n", thread_indent(-1), probefunc());
10  }


Listing 11: SystemTap Trace 01  probe kernel.function("do_execve_common@fs/exec.c")


02  {

25 p robe kernel.function("open_exec@fs/exec.c").return


if(execname() == "bash")


pr intf("%s ‑> %s %s\n", thread_indent(1), probefunc(), kernel_string($filename‑>name));

26 {  27 

if(execname() == "bash")


pr intf("%s <‑ %s \n", thread_indent(‑1),

05  }



29 } 

probe kernel.function("search_binary_handler@fs/exec.c").call 07 


08  {

probe kernel.function("load_elf_binary@fs/binfmt_elf.c").call 31 


if(execname() == "bash")


pr intf("%s ‑> %s Executable: %s Interpreter: %s\n", thread_indent(1), probefunc(), kernel_string($bprm‑>filename),

32 {  33 

if(execname() == "bash")


pr intf("%s ‑> %s Executable: %s Interpreter: %s\n", thread_indent(1), probefunc(),


kernel_string($bprm‑>filename), kernel_string($bprm‑>interp));

11  } 12 

35 } 

13  probe kernel.function("search_binary_handler@fs/exec.c").



37 p robe kernel.function("load_elf_binary@fs/binfmt_elf.c"). return

14  { 15  16 

if(execname() == "bash") pr intf("%s <‑ %s \n", thread_indent(‑1), probefunc());

38 {  39  40 

if(execname() == "bash") pr intf("%s <‑ %s \n", thread_indent(‑1), probefunc());

17  } 18 

41 } 

19  probe kernel.function("open_exec@fs/exec.c").call


20  {

43 p robe

21  22 

if(execname() == "bash") pr intf("%s ‑> %s %s\n", thread_indent(1), probefunc(), kernel_string($name));

44 {  45 


46 } 

23  } |

Issue 174

May 2015


Listing 12: Is Executable Format Supported?

0   bash(6235): ‑> do_execve_common.isra.26 /bin/ls
216 bash(6235): ‑> search_binary_handler Executable: /bin/ls Interpreter: /bin/ls
239 bash(6235): ‑> load_elf_binary Executable: /bin/ls Interpreter: /bin/ls
255 bash(6235): ‑> open_exec /lib64/ld‑linux‑x86‑
283 bash(6235): <‑ load_elf_binary

In this case, it is the function load_elf_binary().
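The interpreter path that open_exec() fetches is recorded in the binary itself, in its PT_INTERP program header. The following Python sketch (an illustration, not kernel code) digs it out of a 64-bit little-endian ELF image:

```python
import struct

def elf_interpreter(data):
    """Return the PT_INTERP path of a 64-bit little-endian ELF image,
    or None if there is none (e.g. a static binary). A sketch only:
    real loaders also handle 32-bit and big-endian layouts."""
    if data[:4] != b"\x7fELF":
        raise ValueError("not an ELF image")
    if data[4] != 2:                      # EI_CLASS: 2 means ELF64
        raise ValueError("only ELF64 is handled in this sketch")
    e_phoff = struct.unpack_from("<Q", data, 0x20)[0]      # program header table offset
    e_phentsize = struct.unpack_from("<H", data, 0x36)[0]  # size of one entry
    e_phnum = struct.unpack_from("<H", data, 0x38)[0]      # number of entries
    for i in range(e_phnum):
        base = e_phoff + i * e_phentsize
        p_type = struct.unpack_from("<I", data, base)[0]
        if p_type == 3:                   # PT_INTERP
            p_offset = struct.unpack_from("<Q", data, base + 8)[0]
            p_filesz = struct.unpack_from("<Q", data, base + 32)[0]
            return data[p_offset:p_offset + p_filesz].rstrip(b"\x00").decode()
    return None
```

On an x86-64 system, elf_interpreter(open("/bin/ls", "rb").read()) should return the ld-linux path that shows up after open_exec in Listing 12, and None for a binary built with gcc ‑static.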

Dynamic and Static Linking

You can see that the glibc loader (/lib64/ld‑linux‑x86‑) is opened, because ls dynamically loads glibc into memory. To see how things are different when you compile a program statically, compile the C program in Listing 13 with

gcc ‑o simple simple.c

and execute it while keeping the SystemTap script in Listing 11 running (see Listing 14). Next, compile the program, passing the ‑static flag to gcc, as

gcc ‑o simple_static simple.c ‑static

and execute the program. On Fedora 21, you need to have the glibc‑static package installed. You should see the output shown in Listing 15 in the SystemTap window. In this case, you can see that the loader is not opened any more.

Now, a number of things have to happen before the program is executed, including setting up the memory areas and copying over the arguments, as well as a handful of other tasks.

Retrieving the Files List from Disk

At this stage, the program is in memory and ready to execute when it gets a chance. So, how does ls read the directories and files from disk, and what happens in the kernel space to make that happen? The ls utility uses the readdir(3) function to read the directory contents, which in turn invokes the getdents() system call, defined as follows in fs/readdir.c:


SYSCALL_DEFINE3(getdents, unsigned int, fd,
                struct linux_dirent __user *, dirent,
                unsigned int, count)

Listing 13: The printf() Library Function Call

#include <stdio.h>

int main(int argc, char **argv)
{
        printf("Hello World\n");
        return 0;
}
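On Linux, any directory listing eventually boils down to the readdir(3)/getdents() pair; Python's os.scandir(), for example, reaches iterate_dir() through the same system call and, like raw getdents(), reports hidden dot files (unlike the syscall, though, it omits the . and .. entries). A quick illustration:

```python
import os
import tempfile

# Create a directory with a hidden file and list it roughly the way
# ls -a would (minus . and ..): every name the kernel's filldir()
# callback emitted for this directory.
with tempfile.TemporaryDirectory() as d:
    for name in ("article.txt", "example.txt", ".hidden"):
        open(os.path.join(d, name), "w").close()
    entries = sorted(e.name for e in os.scandir(d))
    print(entries)   # ['.hidden', 'article.txt', 'example.txt']
```

Note that the alphabetical order is produced in user space by the sort; the kernel hands the entries back in filesystem order.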

The getdents() system call invokes the iterate_dir() function, also defined in the same file. This function reads the list of files in the directory by consulting the underlying filesystem's inode entries. Depending on the filesystem with which the path passed to ls is formatted, the function used to read the directory contents will vary. On ext4, the ext4_readdir() function in fs/ext4/dir.c does this job, and the filldir() function in fs/readdir.c is called for every entry it finds. The SystemTap script in Listing 16 traces the retrieval of the directory listing. The filldir() function prototype is:

static int filldir(void * __buf, const char * name, int namlen,
                   loff_t offset, u64 ino, unsigned int d_type)

The argument name corresponds to the file name of a file in the directory in which ls is invoked; hence, I print it in the SystemTap script. If you run this SystemTap script and execute ls in another terminal window, you should see output similar to Listing 17 in the SystemTap window. In Listing 17, you can see that besides the lines showing filldir, each of the filenames in the directory in which ls is executed is shown, including hidden files. Once the entries have been retrieved, the getdents() system call returns and the list of files appears in your terminal window.

Listing 14: SystemTap Output with Simple Program

0   bash(6266): ‑> do_execve_common.isra.26 ./simple
189 bash(6266): ‑> search_binary_handler Executable: ./simple Interpreter: ./simple
240 bash(6266): ‑> load_elf_binary Executable: ./simple Interpreter: ./simple
259 bash(6266): ‑> open_exec /lib64/ld‑linux‑x86‑
292 bash(6266): <‑ load_elf_binary

Listing 15: Statically Compiled Program

0   bash(6437): ‑> do_execve_common.isra.26 ./simple_static
236 bash(6437): ‑> search_binary_handler Executable: ./simple_static Interpreter: ./simple_static
263 bash(6437): ‑> load_elf_binary Executable: ./simple_static Interpreter: ./simple_static

Summary

In this article, I first looked at how Bash finds the location of the binary corresponding to the ls command; then, I showed how the kernel knows how to execute the binary using the appropriate binary handler. Finally, I dived deeper to see how the directory listing is retrieved from the underlying filesystem.

Info

[1] Bash source:
[2] Coreutils:
[3] SystemTap Beginners Guide:
[4] SystemTap tutorial:
[5] Linux kernel source:
[6] The Linux Kernel Archives:

Listing 16: Tracing Locations

probe kernel.function("iterate_dir@fs/readdir.c").call
{
        if(execname() == "ls")
                printf("%s ‑> %s\n", thread_indent(1), probefunc());
}

probe kernel.function("iterate_dir@fs/readdir.c").return
{
        if(execname() == "ls")
                printf("%s <‑ %s\n", thread_indent(‑1), probefunc());
}

probe kernel.function("filldir@fs/readdir.c").call
{
        if(execname() == "ls")
                printf("%s ‑> %s : %s\n", thread_indent(1), probefunc(),
                       kernel_string($name));
}

probe kernel.function("filldir@fs/readdir.c").return
{
        if(execname() == "ls")
                printf("%s <‑ %s\n", thread_indent(‑1), probefunc());
}

probe kernel.function("ext4_readdir@fs/ext4/dir.c").call
{
        if(execname() == "ls")
                printf("%s ‑> %s\n", thread_indent(1), probefunc());
}

probe kernel.function("ext4_readdir@fs/ext4/dir.c").return
{
        if(execname() == "ls")
                printf("%s <‑ %s\n", thread_indent(‑1), probefunc());
}

Listing 17: Output of ls on ext4 Filesystem

0   ls(25338): ‑> iterate_dir
16  ls(25338): ‑> ext4_readdir
44  ls(25338): ‑> filldir : listings
53  ls(25338): <‑ call_filldir
59  ls(25338): ‑> filldir : ..
63  ls(25338): <‑ call_filldir
67  ls(25338): ‑> filldir : Formatting‑your‑text.html
74  ls(25338): <‑ call_filldir
78  ls(25338): ‑> filldir : article.txt
83  ls(25338): <‑ call_filldir
87  ls(25338): ‑> filldir : .
91  ls(25338): <‑ call_filldir
94  ls(25338): ‑> filldir : source_code
100 ls(25338): <‑ call_filldir
103 ls(25338): ‑> filldir : example.txt
108 ls(25338): <‑ call_filldir
112 ls(25338): <‑ iterate_dir
138 ls(25338): <‑ SyS_getdents

Features Seafile

Sea Treasure
Managing files in the Seafile personal cloud

Sync your devices and collaborate with other users in your own personal cloud.


Cloud services have become a necessary evil. Most users have multiple devices and want access to the same data across all those devices. Gone are the days when a user was satisfied to move data between devices using USB drives. Users now want data to move with them – and that means network-based storage, sync, and sharing. Several Internet services offer cloud-based file storage and synchronization, but some users don’t want to risk throwing their data onto an unknown server and moving it across the Internet whenever they want access. But, what if you could have the power of the cloud with the safety of on-site storage? Linux and open source tools like Seafile [1] can empower users to become the service providers and let them keep their data under their own control.

What Is Seafile?

Seafile is an open source file storage, sync, and share solution. You can run Seafile on a local server, and it acts as


the central location for storing your files (documents or multimedia). You can then sync your devices (desktop, laptop, smartphone, tablet, etc.) with this server. Your data is now accessible across devices – just like Dropbox. The only difference is you own this Dropbox. However, Seafile is much more than just a file storage and sync tool. You can use Seafile to create a very complex network of communication, and it can handle more than just files. If you are using Seafile in an organization, you can create multiple users and add those users to groups representing different units of the organization. Each Group can have its own Libraries, and you can share the files in those libraries with either read-only or write permissions. Seafile is not limited to file storage and sharing. Some of its built-in capabilities include online file editing (though it doesn’t work in real time as does Google Docs), wikis, discussions, messages, and contacts. It also has a notification system – users are notified every time an activity happens in their account, such as a message delivery or the sharing of a file, group, or library.

One of the greatest features of Seafile is the client-side encryption, which alone makes it a desirable solution. When a user creates a new library using the desktop client (you can simply drag and drop a folder to create the library), Seafile provides the option to encrypt the library. All the files and folders added to this library are encrypted before they are uploaded to the server. The password is never sent to or stored on the server. Even the sys admin of the Seafile server cannot access the encrypted content. The client-side encryption does not work when libraries are created using a browser or the mobile client. If you want extreme protection, don’t create new libraries using the web browser or the mobile client. Use the desktop client instead. Before you entrust Seafile with extremely sensitive data, see the Security Features page at the Seafile website [2]. In the post-Snowden era, trust in public data services is declining. Many users are looking for solutions like Seafile that

Lead Image © Randy Hines,

By Swapnil Bhartiya

offer complete control over the cloud. Organizations that deal with sensitive data often want on-premises, self-hosted solutions for privacy as well as security reasons. Almost everyone who deals with sensitive data is a potential Seafile user. Additionally, connecting Seafile with Kolab’s groupware provides a best-of-breed, fully open source, completely self-managed enterprise solution. The closest competitor of Seafile is probably ownCloud, which uses the traditional model of file storage and sharing. Both solutions have pros and cons. I personally have had mixed experiences with ownCloud (which is why I started looking for alternatives and found Seafile). However, both ownCloud and Seafile are actively developed and available for no cost. Try them and see which one suits you best.

Which Seafile to Choose?

Seafile, the organization behind the Seafile project, offers two products:
• Seafile cloud service, a managed cloud similar to Dropbox.
• The open source Seafile application, which users can download and install on their own servers.
The self-hosted server comes in two editions: the free-of-cost Community Edition and the paid Pro Edition. The Pro Edition comes with email support and additional features that are missing from the Community Edition [3]. This article will focus on the Community Edition.

The Server

I installed Seafile on a hosted Digital Ocean Virtual Private Server (VPS). I chose a moderate server – 1GB RAM, 1 Core CPU, and 30GB SSD for $10 per month. The VPS is running a fully patched Ubuntu 14.04 LTS server, although you could install Seafile on any Linux distribution. Core components needed for the complete solution include Python, MariaDB, Nginx, OpenSSL, and, obviously, Seafile. The Seafile application actually runs on top of a web server. You can install Seafile on an Apache server, but the alternative Nginx web server is known to be more resource efficient than Apache. MySQL (MariaDB) and SQLite are the

officially supported databases. Like many in the open source community, I use MariaDB instead of MySQL.

Add Some Security to Your Server

When you buy a server from Digital Ocean or Linode, you get a bare minimal system. The first thing to do is ensure that it’s fully updated. SSH into your server and update the system:

ssh root@SERVER_IP
sudo apt‑get update
sudo apt‑get dist‑upgrade

Create a user for the system and add it to the sudoers file, so it has sudo powers and you can prevent other users from SSHing into the server as root:

adduser swapnil
gpasswd ‑a swapnil sudo

For additional security, change the default port for SSH and block root login. Open the sshd configuration file using your preferred editor. Look for the port number and change it from the default 22 to any higher port (just don’t use a port already used by the system). To block root SSH access, look for the following directive and change it from yes to no:

PermitRootLogin no

Save and close the config file. Now restart the SSH service:

service ssh restart

Don’t log out of your server or close the terminal window. Open another terminal window and SSH into your system using the newly created user and port. For example:

ssh ‑p1977 swapnil@

Give the password for the user and log into your system. If everything works fine, you have added some basic security to the server. To add another layer of security, I recommend using a key instead of a password to log into your system.

Set Up the MariaDB Database

The next step is to install the core components needed for Seafile. I’ll start with the database. I will use the latest stable branch (10.x) of MariaDB. Because Ubuntu doesn’t have the latest MariaDB packages, I will add the official MariaDB repositories. Visit the download page of MariaDB [4] to obtain updated instructions for choosing the right mirror for your distro.

# apt‑get install software‑properties‑common
# apt‑key adv ‑‑recv‑keys ‑‑keyserver hkp:// 0xcbcb082a1bb943db

Then, open the sources.list file and add the main repo at the bottom:

deb mariadb/repo/10.0/ubuntu trusty main

Update the repos and install the MariaDB server (choose the 10.x branch):

apt‑get update
apt‑get install mariadb‑server

During the installation, MariaDB will ask you to create a root password for the database server. Once the database is installed, you will need to create some system tables. First, however, you should stop the MySQL daemon (MariaDB is the drop-in replacement for MySQL, so it uses the same commands used for the MySQL server – don’t be confused by the sight of the term MySQL). Kill the MySQL daemon:

killall mysqld

The following command will initialize the MariaDB data directory and create the necessary system tables:

mysql_install_db

The preceding command also created some test tables and users, which should be removed for security purposes. Start the service with:



service mysql start

Enter the following command to launch a script that will perform some tasks to secure the database:

sudo mysql_secure_installation

The script asks a series of questions. Say no to the first question, because you don’t need to change the root password, and say yes to the rest. To add one more layer of security, you need to open the my.cnf file and add the line in the [mysqld] section, somewhere after the bind‑address directive.

Set Up Nginx and Other Packages

If you want SSL support, you’ll need to install nginx‑full instead of nginx:

# apt‑get install nginx‑full
# apt‑get install python python‑setuptools python‑imaging

Create a directory to store the certificate and the key:

# mkdir /etc/nginx/ssl

Then, you can generate the key and the certificate:

# openssl req ‑x509 ‑nodes ‑days 365 ‑newkey rsa:2048 ‑keyout /etc/nginx/ssl/nginx.key ‑out /etc/nginx/ssl/nginx.crt

You will have to provide some personal information on organization name, location, and web address to generate the SSL key and certificate. The certificate and the key will land in the /etc/nginx/ssl directory. Now create an nginx configuration file for this server inside the sites‑available directory and populate the config file with the contents shown in Listing 1.

You have to make three changes to the file in Listing 1. First, replace www.your‑ with the name or IP address of your domain (two instances). Then, in the last section, location /media, replace SITE_DIRECTORY with the directory where you will download the Seafile packages. (Throughout this article, the root directory is sea and its path is /var/www/sea. Exchange these names with the names you chose on your server.) Next, save and close this file and then create a symlink in the sites‑enabled directory:

# ln ‑s /etc/nginx/sites‑available/sea

Remove the default config file from the sites‑enabled directory, as shown in the following example:

# rm ‑r /etc/nginx/sites‑enabled/default

Then, open the nginx.conf file and un-comment the following lines:

server_tokens off;
server_names_hash_bucket_size 64;
server_name_in_redirect off;

Listing 1: Nginx Configuration File

server {
    listen 80;
    server_name www.your‑;
    rewrite ^ https://$http_host$request_uri? permanent; # force redirect http to https
}
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx.crt; # path to your cacert.pem
    ssl_certificate_key /etc/nginx/ssl/nginx.key; # path to your privkey.pem
    server_name www.your‑;
    proxy_set_header X‑Forwarded‑For $remote_addr;
    location / {
        fastcgi_pass;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param HTTPS on;
        fastcgi_param HTTP_SCHEME https;
        #proxy_set_header X‑Forwarded‑Proto $scheme;
        access_log /var/log/nginx/seahub.access.log;
        error_log /var/log/nginx/seahub.error.log;
    }
    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass;
        client_max_body_size 0;
        proxy_connect_timeout 36000s;
        proxy_read_timeout 36000s;
    }
    location /media {
        root /var/www/SITE_DIRECTORY/seafile‑server‑latest/seahub;
    }
}

After completing these steps, you can save and close the file.
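A forgotten placeholder is the most common slip at this point, so a tiny check can confirm both edits were made before you reload Nginx. This is a hypothetical helper, not part of Seafile or Nginx; the placeholder strings are assumed from the example config (the printed listing may spell the domain placeholder slightly differently):

```python
def listing1_todo(conf_text):
    """Return the manual edits from the example Nginx config that
    still need doing. 'www.your-' and 'SITE_DIRECTORY' are the
    placeholder strings assumed from the article's Listing 1."""
    todo = []
    if "www.your-" in conf_text:
        todo.append("replace the server_name placeholder with your domain")
    if "SITE_DIRECTORY" in conf_text:
        todo.append("point the /media root at your Seafile directory")
    return todo
```

Run it over the config text before restarting Nginx; an empty list means both placeholders are gone.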

Install Seafile

Create the root directory for Seafile (it must be the same path you gave for this directory in the Nginx config file):

# mkdir ‑p /var/www/sea

Change to the newly created directory and download the latest Seafile packages (check the download page [5] for the latest packages):

# wget haiwen/seafile/downloads/seafile‑server_4.0.6_x86‑64.tar.gz

Then extract the files:

# tar xzvf seafile‑server*

Now change to the extracted seafile‑server directory and run the following script, which will create the required databases and directories for the Seafile server:

# ./setup‑seafile‑

The script will create some sub-directories in the root directory and tables in the database. You will get output where you will be required to take some actions. Provide new values for the server name and IP address, and leave the rest at the default values. The script will ask if you want to use an existing database or create a new one. Choose to create a new database. You will get more questions, asking for the hostname of the MySQL server (localhost, in this case) and the port number for the MySQL server (the default value of 3306 is fine in most cases). Provide the current database root password, and create a new user for the Seafile database (don’t leave it as root). In this example, I created a user named sea. Accept the default values for the database name prompts.

Then, open the ccnet.conf file as root and edit the SERVICE_URL line to use HTTPS instead of HTTP and remove the 8000 port. Also, cross-check that the site URL is pointing to your domain. Next, open the file and add the following line before DATABASES:


Save and close the file. Restart Nginx with:

# service nginx restart

Then start Seafile and Seahub:

# /var/www/sea/seafile‑server‑latest/ start
# /var/www/sea/seafile‑server‑latest/ start‑fastcgi

When you run the second command for the first time, it will create the admin account for your Seafile server. Provide this account with an existing email address and a new password, which you will use to log into your server. Open a browser and enter the site URL. If everything is running fine on your server, it will open the web interface of your Seafile cloud server. Log into your account using the email ID and password. Congratulations on your very own Seafile cloud server.

By default, you will need to start Seafile manually between server reboots. To avoid the need for a manual restart, you’ll need to create an init script.

How to Use Seafile

When you log into your Seafile web interface, you will see the page shown in Figure 1. Seafile comes with a default library, which you can delete if you are looking for client-side encryption. The Starred link shows the files or directories you selected as favorites for quicker access. The Messages link shows all the messages you sent or received on the server. Devices shows all the devices you use to log into the server. You can block access to any device by deleting it from the server. Contacts allows you to manage address books of other users on the server for messaging and easy sharing. Click Share Admin to manage your sharing activities – the libraries, folders, files, and links you have shared.

The top bar in Figure 1 shows that you are on the My Home page. Choose the Groups option to create and manage groups. The Organization option shows all the activities going on within the groups on the server; you won’t see libraries or documents created by individual members.

Sharing and Collaborating

Figure 1: The Seafile web interface. |

Users can easily collaborate with other users on the same Seafile server through Groups or by sharing files with individuals. Sharing is not restricted to the users of the cloud server. Files and documents can be shared with outsiders through links. By default, these shared files become public, and anyone with the link can access them; however, you can restrict access to the file using a password. Seafile comes with a built-in text editor that supports several file types, including RTF, Markdown, and simple text




Figure 2: Enter a location for the Seafile libraries and content files.

files. Users can edit each other’s documents, which are shared within Groups, through libraries. Seafile online editing does not feature real-time editing, but it does offer a neat editing experience with support for discussion.

The Super Admin

The admin account for the Seafile server can manage users and groups on the server from the admin panel. Access this account by clicking on the wrench icon next to the admin icon at the top. The admin can see all users, groups, and libraries. However, even the admin cannot access the libraries, files, or discussions created by other users and groups. The admin might not be able to see the content of any library, but it does have the power to delete any library, group, member, or shared item.

Integrate Seafile with the Desktop

To integrate Seafile with your local machine and take advantage of the client-side encryption, download the client for your OS. The Seafile client is available for Linux, Android, Mac OS X, iOS, and Windows. When the client is run for the first time, it asks for the location where the libraries, and their content, will be stored on your system (Figure 2). The client will then ask for the server URL and login details (email and password). Because the SSL certificate was self-generated, it will show a warning; just accept the certificate and proceed. You will then see the client window.

The client window shows all the libraries created by you or shared with you. Seafile won’t sync the libraries automatically (at least on the desktop client). You can choose which libraries you want to sync with this machine (a nifty feature if you don’t want to sync personal libraries on the work PC). To sync a library, right-click on the cloud icon and choose Sync this library (Figure 3). Seafile will let you choose the location where the library and its content will reside on the local machine.

Whenever you sync a library with a local machine, Seafile saves the library inside a folder which, by default, is called Seafile. Be aware that the content inside the library is synced with the server. If you create any folder or save any file outside the library, it will not be synced. You should always work inside the library to ensure proper syncing.

Client-Side Encryption

To create a client-side encryption library, first create a folder on your local machine. Then, drag and drop that folder in the designated area on the desktop client. Seafile will open a window where you can change the path of the folder, give it a new name, and add a description (which is mandatory). Below this you will see the encrypted option – select it and provide the password (Figure 4). This folder will become your client-side encryption library. You can sync it with other machines through the desktop clients or access it via a web browser, but every time you access the library, you will have to provide the password.

Back to the Future: Version Control

One of the best features of Seafile is version control. Every time a file is saved and synced with the server, Seafile creates a new version of it. You can access the file history from a web browser. Hover the mouse over the filename to see the drop-down contextual menu. The last option on this menu is history. You can also access the history of any file by opening it in the browser and then clicking on the history icon.

Conclusion

Seafile might appear a bit complex when compared with Dropbox or ownCloud, but once you understand how libraries work, Seafile becomes extremely easy to use. The library model, when combined with groups, creates a very powerful, flexible, and scalable tool for enterprise customers.

Info

[1] Seafile homepage:
[2] Seafile security features:
[3] Seafile editions:
[4] MariaDB repository:
[5] Seafile server and client download:

Figure 3: Configuring Seafile’s file sync options.
Figure 4: Configuring client-side encryption.


Features Ask Klaus!

Ask Klaus! Klaus Knopper answers your Linux questions

By Klaus Knopper

BIOS Reaction

Klaus Knopper Klaus Knopper is an engineer, creator of Knoppix, and co-founder of LinuxTag expo. He works as a regular professor at the University of Applied Sciences, Kaiserslautern, Germany. If you have a configuration problem, or if you just want to learn more about how Linux works, send your questions to:



Klaus: I purchased an HP Notebook 15 AMD. The BIOS was corrupted soon after. Here is how various distros reacted with secure boot and UEFI disabled:
• Mint – Installed okay with a fuzzy screen appearing in the bootup process. The error message said “the hardware acceleration is disabled and this mode should only be used for troubleshooting.” I could use the OS. Mint did not allow me to su to root.
• Fedora – The install process crashed when a notice appeared saying KVM is disabled.
• Trisquel – The install was successful, but it reported that I was not root during the install process and tried to fix the problem. I never saw the message again.
• Yet Another Distro – Reported that the SMBIOS was disabled and crashed.
• Mageia – Crashed during install.
• Korora – Reported the KVM message and crashed during install.
With secure boot and UEFI enabled, Windows 8 booted up okay and reported nothing. I have tried flashing the BIOS, but the HP BIOS tool is blocked under both Windows and Wine. Can you advise on any fix? I would advise that the developers merge all their error trapping with better advice on the significance of the error message. Thanks, R

The “KVM” and “SMBIOS” messages are somewhat confusing, as they suggest that you did not boot Linux natively on the HP notebook but used


virtualization, or the distro in question just wrongly detects some kind of virtualization and tries to load the corresponding acceleration modules. Does your computer maybe run some kind of hypervisor from the UEFI firmware, instead of loading operating systems natively? Also, the fact that various distros crash during install may lead to the conclusion that the real hardware is not accessible because of a hypervisor restricting access (not really related to “root,” but could be mistaken as missing privileges by installation programs). If this is the case, there is no simple way to let any Linux distro show a verbose message like “I’m not running natively on this computer, but under a simulation,” because from the viewpoint of the OS, there is no difference between a simulated or a real computer. If, as you say, flashing the BIOS is also impossible using the vendors’ own tools, it also indicates that an intermediate software layer is running that blocks access to the real hardware. On the other hand, if the BIOS is defective, it may just give the impression that a hypervisor is blocking hardware access, and it may be the BIOS itself that does this. In either case, a pure software solution of the problem may be unlikely. You could try to turn in the notebook as a warranty case and get a replacement with an unrestricted BIOS that allows you to boot all operating systems you choose.

Filesystem Repair

Dear Klaus, a little while ago I unplugged a USB extension drive without thinking and without unmounting the drive when the SUSE system indicated that my backup had finished, despite knowing that I should never do this. The backup disk will not mount anymore, and despite all attempts to find an answer, all I have [left] is to try ScanDisk on a Windows computer. Is this really the only answer? Yours sincerely, Terry

Instead of the very limited ScanDisk, why not use the native filesystem repair tools that come with most Linux distros? For FAT32 filesystems, you can use dosfsck; for ext2, use e2fsck; and for NTFS, just use ntfs-3g to repair the filesystem. If your backup disk displays as /dev/sdb1 (cat /proc/partitions), see Table 1 for the commands for repairing the filesystems. Please do not attempt these commands on filesystems that are already mounted, even though the tools are usually clever enough to warn and quit in this case.

If the filesystem is not repairable this way, or if the partition table is corrupt (in which case ScanDisk would also fail), you may try the excellent testdisk utility on the disk device (/dev/sdb in this example) to identify and repair partitions interactively and to salvage data from them. If your Linux distro requires root access for accessing block devices, prefix each command with sudo to gain root privileges for the duration of the operation.
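As a convenience, the per-filesystem commands can be wrapped in a small dispatch helper. The following is only a sketch: it mirrors the commands listed in Table 1, the device path is an example, and it merely prints the repair command instead of executing anything.

```shell
# Sketch: map a filesystem type to the matching repair command.
# Prints the command instead of running it, so nothing is touched.
repair_cmd() {
  device=$1
  fstype=$2
  case "$fstype" in
    vfat)           echo "dosfsck -v -r $device" ;;
    ext2|ext3|ext4) echo "e2fsck $device" ;;
    ntfs)           echo "ntfsck $device" ;;
    reiserfs)       echo "reiserfsck --rebuild-tree --rebuild-sb $device" ;;
    *)              echo "fsck.$fstype $device" ;;
  esac
}

# Dry run for an unmounted ext4 backup partition
repair_cmd /dev/sdb1 ext4   # -> e2fsck /dev/sdb1
```

Combined with blkid -o value -s TYPE /dev/sdb1 to detect the filesystem type, the helper spares you from remembering each tool's name.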

Table 1: Commands for Repairing Filesystems

Filesystem Type   Command for Interactive Repair
FAT32             dosfsck -v -r /dev/sdb1
ext2/ext3/ext4    e2fsck /dev/sdb1
NTFS              ntfsck /dev/sdb1
ReiserFS          reiserfsck --rebuild-tree --rebuild-sb /dev/sdb1
Other             fsck.filesystemname /dev/sdb1

Wine Menu, Please

Klaus, I'm trying to create my own Linux distribution that uses LXDE as its desktop and is based on Debian Stable. I noticed with Knoppix 7.4 that when logged into LXDE, you created a "Wine" menu that goes a total of four levels deep. I'm trying to do the same thing in order to provide menu entries for scripts that make it easy to install other programs that there's no room for on the CD (I'm also trying to keep the ISO size to 700MB or less, for those without access to a DVD burner). I've followed a tutorial I found online for how to do this, but in spite of trying it multiple times and doing everything exactly as shown, I keep leaving myself with just the "Run" and "Logout" options in the menu; that's obviously not what I want, and it's driving me up the wall. Could you please tell me how you did it? Thanx in advance, Fred in St. Louis

Actually, I didn't create the Wine menu personally; the Wine package (like some others) adds its own menu structure and manages it via user-defined configuration files, and this is done even if you install Windows programs via the "Windows" installer under Wine, not using Debian's standard installation tools. You can create menu entries manually, though. I'd recommend NOT messing with the XML-based menu configuration system (/etc/menu-methods/*, /etc/xdg/menus/*, …) that is used by the update-menus utility, but just copying a .desktop file to the appropriate directory beneath the .local/share/applications (hidden) directory in your default user's home directory. Listing 1 shows the contents of .local/share/applications/wine/Programs/Java-Editor/Java-Editor.desktop after installing the program under Wine. As you can see, the submenus follow the directory hierarchy starting from .local/share/applications, and the menu content and start program are determined by the .desktop file's content. Figure 1 shows the resulting menu in Knoppix 7.5.

Listing 1: Contents of .desktop

[Desktop Entry]
Name=Java-Editor
Exec=env WINEPREFIX="/home/knoppix/.wine" wine C:\\\\windows\\\\command\\\\start.exe /Unix /home/knoppix/.wine/dosdevices/c:/users/Public/Start\\ Menu/Programs/Java-Editor/Java-Editor.lnk
Type=Application
StartupNotify=true
Path=/home/knoppix/.wine/dosdevices/c:/Program Files/JavaEditor
Icon=D416_javaeditor.0

Figure 1: Screenshot of the resulting Wine menu.
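To sketch the manual approach described above — all names here (the Tools/MyScript submenu, the myscript.desktop file, and the Exec command) are made-up examples, not files that exist on your system — a nested menu entry can be created from the shell:

```shell
# A .desktop file below ~/.local/share/applications creates a menu
# entry; subdirectories become submenus. All names are examples.
DIR="$HOME/.local/share/applications/Tools/MyScript"
mkdir -p "$DIR"

cat > "$DIR/myscript.desktop" <<'EOF'
[Desktop Entry]
Name=MyScript
Exec=/usr/local/bin/myscript.sh
Type=Application
StartupNotify=true
EOF
```

After the desktop environment rescans its menu files (or after the next login), the entry appears under Tools | MyScript.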

Issue 174

May 2015




Features Tshark

Analyzing network traffic with Tshark

Terminal Analyzer The simple and practical Tshark packet analyzer gives precise information about the data streams on the network. By Valentin Höbel

Author Valentin Höbel works as a Cloud architect for the VoIP specialists NFON AG in Munich. When he is not playing table football in his spare time, you will find him investigating current open source technologies.



When the system logs fail to provide information on problems, or if you simply want to know what is happening on the network, it is worth taking a look at the data stream. Tools such as Tcpdump [1] or Wireshark [2] let you listen on the network to study and troubleshoot network problems. Tcpdump is the tool of choice for gurus and professionals, but Wireshark appeals to many users because of its powerful GUI. If you prefer to work at the command line, or if you don't have time to grapple with Wireshark's elaborate user interface, you can use Wireshark's little brother Tshark [3] to sniff packets in a terminal window.

Like other packet sniffers, Tshark switches the interface into promiscuous mode to listen for network packets. In promiscuous mode, the network adapter hands over all the packets to the operating system, instead of just the ones addressed directly to the local system's MAC address. Tshark can therefore listen to all the traffic on the local network, and you can use filtering commands to narrow down the output to specific hosts or protocols that you want to study.

Many Packages, Many Privileges

You can install Tshark via the command line or using a graphical tool of your choice. I used the 64-bit version of Tshark 1.10.6 on Ubuntu 14.04 for this article. Tshark is included with most other major distributions and accepts the same parameters, so you can use a different flavor of Linux and complete the installation with the package manager of your choice. On Ubuntu, you first need to update your system (Listing 1, first two lines). Then, install the program using the package manager (last line).

Listing 1: Update and Install

$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ sudo apt-get install tshark

Wireshark and Tshark draw on the same resources, so the two tools are bundled together. You can expect the freshly downloaded packages to occupy about 70MB of disk space in an unzipped state. Tshark should be run as root or preceded by sudo, because it does not otherwise have sufficient rights to read packets. The tool draws on the accompanying dumpcap program, which, with the aid of Pcap [4], records the current data traffic but refuses to provide any service without fairly extensive privileges.

During lab tests with Ubuntu, the program complained loudly about needing root privileges – you can usually safely ignore such messages. If these messages get on your nerves, and you want to correct the rights setup on your system, you have several options. Running the

sudo dpkg-reconfigure wireshark-common

command and confirming with Yes is all it takes in Ubuntu. By doing so, you are allowing users without administrative rights to sniff data traffic. To make sure your user account benefits from this ability, add it to the local wireshark group as follows:

$ sudo adduser $USER wireshark

After logging out and logging back in again, Tshark will run on your user account without a sudo prefix.

In the Beginning

As indicated by the options and switches, Tshark provides many functions (Figure 1).

Figure 1: The output of important Tshark options is overwhelming at first sight.

The program might target advanced users and professionals, but don't worry: You don't need to rummage through loads of documentation or have profound knowledge to analyze data traffic. A terse tshark -D is fine if you first want to see which interfaces the software has found on the local or remote system. You might be surprised by the number of network adapters listed when you see the output (Listing 2). Tshark displays virtual adapters provided by the operating system. In Listing 2, eth0 represents the first interface on the test system; the nflog and nfqueue adapters are part of the Linux kernel's Netfilter packet filtering framework [5]. The fourth adapter on the list, any, lets the user listen on all interfaces, and the last result, lo, is the loopback interface.

Listing 2: Network Adapters List

$ sudo tshark -D
1. eth0
2. nflog
3. nfqueue
4. any
5. lo

To get started, type:

sudo tshark -i any

This tells Tshark to listen on all interfaces. Open a second terminal in which you can ping your choice of website (Figure 2). While one window shows the result of the ping, the other reveals what happens in the background during the process. In this particular example, specifying any means you will see all the processes, because the software is listening on all the interfaces. If the sniffing were limited only to eth0, you would have missed the fact that the test system first issues a DNS query to localhost, as can be seen in the first line in Figure 2. (Current Ubuntu installations add the name server to /etc/resolv.conf.) The DNS query was necessary because the ping command in Figure 2 used a DNS name, rather than an IP address. As you can see in Figure 2, once the system receives the IP address, it issues an ICMP request (which is the basis for the ping command) to the correct destination. Tshark initially reports the DNS request and displays the correct response from the responsible name server.

Figure 2: The Tshark output shows what happens in the background during a ping.

All About Fields

To understand the Tshark output, you need to understand the meaning of each column or field. You can define which fields the tool displays by setting the parameters -T and -e; Table 1 provides information about field names and their meanings.

Table 1: Tshark Fields

Field Name           Description
frame.number         Packet number in the data stream
frame.time_relative  Relative packet time stamp
ip.src               Sender's IP address
ip.dst               Receiver's IP address
col.Info             Received packet contents

If you enter all the fields manually (Listing 3), you should receive output that is almost identical to Figure 2; only the arrows -> between the source and destination IP are missing.

Listing 3: Entering the Fields

$ sudo tshark -i any -T fields -e frame.number -e frame.time_relative -e ip.src -e ip.dst -e col.Info

You can also process the output as a CSV file. With the additional switches -E separator=, -E quote=d -E header=y, Tshark quotes the fields, comma-separates them, and outputs a line with the column names (Listing 4). Software messages that warn about running with root privileges were removed for space reasons in the example. You can redirect the results of the command to a file using standard methods as required.

Listing 4: Processing the Output

$ sudo tshark -i any -T fields -e frame.number -e frame.time_relative -e ip.src -e ip.dst -e col.Info -E separator=, -E quote=d -E header=y
frame.number,frame.time_relative,ip.src,ip.dst,col.Info
"1","0.000000000","","","Standard query 0x79e1 A linux-"

Listing 5: Checking Traffic

$ sudo tshark -i eth0 host and port 80
$ sudo tshark -i eth0 host and port 80 -R http
$ sudo tshark -xi eth0 host and port 80 -R http
$ sudo tshark -i eth0 host and port 80 -R http -T fields -e text

Plumbing the Depths

Without specifying additional parameters, the program displays all data traffic that flows through the selected interface. In some cases, however, you aren't interested in all data and might rather focus on a specific protocol, such as HTTP, Samba, or NFS connections. Tshark can employ various filtering options, and you can combine several of these options for complex filtering scenarios.

In the following scenario, an Apache web server that provides several websites for an intranet is listening on the eth0 interface of the local system. A user on another computer reports problems with the connection. You can investigate this problem on the server using the call in the first line of Listing 5. You can restrict the analysis to the eth0 interface using the -i eth0 parameter; specifying host with the client's IP address means you will only study packets that run from or to this address. The port 80 switch focuses on HTTP traffic, and the keyword and combines the two parameters, host and port.

Figure 3 shows how the HTTP connection is established successfully through a TCP handshake [6]. You'll need to fish the actual HTTP data traffic (frames 10, 13, and 15) out of the substantial output.

Figure 3: The client successfully accesses a website on the intranet.

Using read filters provides a better overview. Read filters make it possible to restrict the flood of packets using specified criteria. For example, you might only want to monitor the relevant processes in the HTTP protocol (Listing 5, line 2). Figure 4 shows how the network node accesses the site using the HTTP GET method (frame 16); the web server responds with an HTTP status 200 and serves up the page (frames 19, 21, and 22). The read filter also provides much more powerful possibilities for selecting specific data streams. You can find other examples [7] and detailed descriptions [8] online.

Figure 4: Using the http read filter, Tshark only displays the relevant data.

Observing the data flow and subsequent analysis is often enough to troubleshoot problems. In some cases, however, it is necessary to examine the contents of the packets – the payload – in hexadecimal or ASCII form. If the data traffic you are sniffing is unencrypted on the wire, Tshark will display the hexadecimal data with the -x switch (Listing 5, line 3). Figure 5 shows the output. The program always considers each packet individually and therefore displays the frames in their own blocks. The example in Figure 5 is the default page that appears after installing an Apache web server on Ubuntu 14.04.

Figure 5: Tshark will also present many details about data traffic, as required.

In some cases, it is better to refrain from displaying hexadecimal values for reasons of readability and, instead, expand the output with the text field. Use the switch combination -T fields -e text (Listing 5, line 4). Figure 6 shows the website as HTML source code. With very little effort, it is possible to use this output to rebuild part of the transferred website.

Figure 6: You can take a look at each detail of the transmission by pressing just a few buttons.

Tips

Tshark provides the necessary default settings to enable fast analysis of data traffic. You can also use Tshark to troubleshoot many protocols and applications. For example, the official Samba wiki recommends the following command to monitor Samba connections:

$ sudo tshark -p -w file_name port 445 or port 139

The -w switch tells Tshark to write the output not to the console but instead to a file [9] for further processing. The official Wireshark wiki provides tips for analyzing SMB traffic [10]. You will also find comprehensive instructions online for analyzing NFS, SMTP, MySQL, and VoIP, as well as other protocols. In most cases with such searches, you will come across corresponding read filters that you can use to see only the relevant packets.

In this article, I focused on observing wired data traffic. If you also want to use Tshark for analyzing WiFi networks, the comprehensive online documentation will give you some pointers [11]. Tshark can even analyze Bluetooth connections [12] and USB traffic [13].

Conclusions

The Tshark analyzer is a simple, command-line tool for monitoring and analyzing data streams. Tshark filters out individual protocols from the stream of packets with just a few simple steps. Tshark is easy to use and learn, and, like its GUI-based counterpart Wireshark, it works well on a small scale. However, sooner or later, Tshark will impair system performance if you need to collect large volumes of data. See the Wireshark wiki [14] for some tips on mitigating performance slumps that occur when you are using Wireshark or Tshark.

Info

[1] Tcpdump:
[2] Wireshark:
[3] Tshark man page:
[4] Pcap man page:
[5] Netfilter:
[6] TCP handshake:
[7] Packet filtering with Tshark:
[8] Various filter methods:
[9] Samba troubleshooting with Tshark:
[10] Samba sniffing in the Wireshark documentation:
[11] WiFi sniffing:
[12] Bluetooth sniffing:
[13] Analyzing USB traffic:
[14] Performance optimization:



Features Charly's Column: Prosody

The sys admin's daily grind: Prosody

Speed Chat Columnist Charly Kühnast has been looking into the options of running an instant messaging back end. He chose a particularly lean and easily extendable version. By Charly Kühnast


Prosody [1] is a lean XMPP (Extensible Messaging and Presence Protocol, formerly known as Jabber) server written in Lua. It can speak IPv6, supports encrypted transport, and – in the default configuration – very little else. You can, however, extend Prosody with modules to add virtually any kind of functionality you need; the number of available modules is approaching three figures [2].

Setting up a basic configuration is a two-step process: You need to create a user and then set up a domain. For my first steps on my home test network, I will be using as the domain, but you can easily replace this with another domain when you go live. The following command sets up the user:

sudo prosodyctl adduser

Listing 1:

VirtualHost ""
ssl = {
    key = "/etc/prosody/certs/";
    certificate = "/etc/prosody/certs/";
}


You then need to add the account as the administrator to Prosody's central configuration file prosody.cfg.lua. The file typically resides below /etc/prosody/, but it can also live directly in /etc on older systems. The entry for this is:

admins = { "" }

If you like, you can define multiple admins.

Domains

The next step is to describe the domain. In older versions, you do this in prosody.cfg.lua, but most up-to-date Prosody systems store this in two separate directories: /etc/prosody/conf.avail and /etc/prosody/conf.d. You create the configuration file in /etc/prosody/conf.avail. Listing 1 shows that the file only contains a couple of lines. The dummy certificate normally comes free with your distribution, and you can replace it with a self-signed or purchased certificate later.

To make sure that Prosody recognizes the new domain, I created a symlink to the configuration in the /etc/prosody/conf.d/ directory:

sudo ln -s /etc/prosody/conf.avail/ /etc/prosody/conf.d/

After restarting the XMPP server by typing the following:

sudo service prosody restart

you can log in to the server as the administrator. If you then create a couple more user accounts, you can put on your reading glasses (if you need them) and start chatting right away!

Info

[1] Prosody:
[2] Add-on modules for Prosody:

Charly Kühnast Charly Kühnast is a Unix operating system administrator at the Data Center in Moers, Germany. His tasks include firewall and DMZ security and availability. He divides his leisure time into hot, wet, and eastern sectors, where he enjoys cooking, freshwater aquariums, and learning Japanese, respectively.




Features Go Language

Go version 1 series

Well Executed

The Go programming language helps programmers avoid the annoying routine and focus on the important stuff. By Dominik Honnef

In 2009, Google launched the Go programming language [1]. An impressive collection of veteran developers worked on Go, including Ken Thompson (one of the inventors of Unix), Plan 9 co-creator Rob Pike, and Robert Griesemer, who previously worked on Google projects such as the Smalltalk variant Strongtalk. The goal of Go was to create a statically typed language similar to C, but with updated features such as garbage collection and better type safety. In addition to its trademark CSP (Communicating Sequential Processes) [2] concurrency solution, the influences of Limbo [3] and Inferno [4] are evident.

Early versions of Go evolved at a rapid pace, with the code changing so fast that applications written one week would no longer compile in later versions. Finally, in 2012, Go 1.0 was released with the promise that you would be able to compile any code in future versions of the 1.x series. Since then, the focus of the Go developers has been on tools, compilers, and the run time.

Build System

In the beginning, Go did not have its own build system; developers had to call the compiler and linker manually or write makefiles for larger projects, much like C. However, because Go wanted to facilitate the development of complex and distributed projects, it needed a superior solution. The go tool provides an easy-to-use build system without any need for configuration. Table 1 lists the most important commands.

Table 1: Important Go Tool Commands

Command     Effect
go build    Checks packages and dependencies and compiles, but does not keep the results: a viability check.
go install  Much like go build, but also installs and keeps the library results to accelerate later compile runs.
go get      Downloads and installs packages and dependencies for the current project.

As mentioned previously, none of these functions needs any configuration; a developer does not need to write makefiles, list the dependencies explicitly, or link to the source code. Two simple principles make this possible: the GOPATH variable (Figure 1), which behaves like PATH and defines workspaces, and package paths, which specify paths to the packages and which must not be the same path as your Go installation.

GOPATH is an important concept. This variable determines whether Go finds the source code for a package and where it stores object files. Each workspace comprises three folders:

• src/ for source code
• pkg/ for object files
• bin/ for compiled, executable files

In other words, when the developer runs go build, the build script looks for the source code in $GOPATH/src/, and go install writes files to $GOPATH/pkg/ and $GOPATH/bin/.

All the source code is broken down into packages. Each folder and subfolder in $GOPATH/src/ represents a single package; that is, each piece of source code must reside in such a package for Go to find it. Unlike Java workspaces, however, a Go workspace does not relate exclusively to a single project. Instead, it references a collection of Go packages and projects (i.e., its own copy of part of the Go universe). Therefore, different projects typically exist in a single workspace.

Each package can be accessed via an individual path. Whereas packages in the standard library have very short pathnames (e.g., fmt or net/http), your own packages should have intuitive names that do not collide with names assigned to packages by other developers. For this reason, programmers often use the path by which they access the package on the Internet (e.g., on GitHub [5]:<user>/<example>). This pattern is unique, because no one else will have the same username. At the same time, this type of naming convention makes it possible to retrieve the packages automatically off the Internet at a later time.
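The workspace conventions described above are easy to try out in a shell. This sketch uses $HOME/go as the workspace and a literal <user> placeholder for the GitHub account name:

```shell
# Create a Go 1.x workspace with the three standard folders
export GOPATH="$HOME/go"
mkdir -p "$GOPATH/src" "$GOPATH/pkg" "$GOPATH/bin"

# A package folder below src/ doubles as the package's import path
mkdir -p "$GOPATH/src/<user>/hello"

ls "$GOPATH"   # -> bin  pkg  src
```

Any directory can serve as a workspace; only the src/, pkg/, and bin/ layout is fixed.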

Figure 1: The GOPATH variable and its paths in a sample project.

In Action

The following code exemplifies the file structure and shows the tools in action. To begin, a developer creates the basic structure for a package:

$ export GOPATH=~/go/
$ mkdir -p $GOPATH/src/<user>/hello

The pathname<user>/hello assumes that the package will be hosted on GitHub some time in the future. Listing 1 shows the source code of the program, which resides in the $GOPATH/src/<user>/hello folder and is named hello.go. You could use another name (e.g., main.go).

Listing 1: hello.go

package main

import "fmt"

func main() {
	fmt.Println("Hello, User.")
}

To compile the project, the developer passes the package path to go build and receives an executable file in the current directory as the result. The file is named for the folder with the source code – hello in this example. When you run the program, it says hello in a style familiar to programmers:

$ go build<user>/hello
$ ./hello
Hello, User.

By using go install instead of go build, the final results end up in the specified $GOPATH/bin/ folder. Apart from this, though, the program behaves in an identical way:

$ go install<user>/hello
$ $GOPATH/bin/hello
Hello, User.

The slightly modified version of the program in Listing 2 uses an external library to hand over parameters at the command line. Although Go includes the flag package in its standard library, this implements the flag rules of Plan 9 [6], and not the GNU rules.

Listing 2: hello.go (Version 2)

package main

import (
	"fmt"
	"<user>/pflag"
)

func main() {
	name := pflag.String("name", "unknown", "your name")
	pflag.Parse()
	fmt.Println("Hello,", *name)
}

The attempt to compile fails, as you can see in Listing 3, because the user has simply not downloaded the pflag package yet.

Listing 3: Forgotten Dependencies

$ go build<user>/hello
cannot find package "" in any of:
	/usr/lib/go/src/ (from $GOROOT)
	/home/dominikh/go/src/ (from $GOPATH)

The go get command can help with this task; it retrieves packages from the Internet and installs them recursively, thus also resolving dependencies. If the developer calls go get first for their own package, the program resolves all of the dependencies (i.e., pflag in this case):

$ go get<user>/hello
$ go build<user>/hello
$ ./hello
Hello, unknown
$ ./hello --name="dear reader"
Hello, dear reader

This workflow scales without problem to hundreds of packages and dependencies. Developers never have to say how exactly the build process needs to run, what version control system they use, or how to download dependencies.

End to All Disputes

As trivial as this might sound, programmers waste a huge amount of time arguing about the correct way to format code. Do you use spaces or tabs? How many spaces do you use? Where do you put the curly brackets? Every developer has their own preferences. This fact of coding life means learning new guidelines for each new project and each new job.

Go does not have this problem. It comes with the gofmt tool, which formats the code for the developer in the only correct way (Figure 2). It does not offer any options that influence the results, and every Go developer is expected to use gofmt. This puts an end to discussions about code formatting and means the code is uniformly formatted all over the world, making it easily readable.

Figure 2: The gofmt tool automatically formats the Go code.

Tools, Tools, and More Tools

Inspired by the official tools for the build system and code formatting, many developers have created their own tools. The most prominent examples are autocompletion, refactoring, and linting, but many other small utilities handle recurring tasks. The Go standard libraries come with a parser for the language, and Go follows the Unix principle, "Write programs that do one thing and do it well," so tools of this kind work at the command line and can be combined with any editor.

Go Community

A programming language can be as good as the best, but without a community it will not catch on. Go has a community. Besides Google, a long list of well-known companies [7] are Go users, including the BBC, Canonical, and Dropbox. The original intention of Go was to facilitate the development of server applications, and that is precisely what it is doing today. For example, the Docker [8] container virtualization is completely written in Go; Ubuntu manufacturer Canonical relies on Go for both Juju [9] and LXD [10], and Disqus uses Go for its comment system.

A huge number of user groups, meetups, and conferences additionally boost the community. Last year, three well-attended events served the Go community: GopherCon [11], dotGo [12], and GoCon Tokyo. The Go events at FOSDEM sell out regularly. The Go language appears to have joined the mainstream and intends to stay there.

Info

[1] "Go Programming Language" by Marcus Nutzinger and Rainer Poisel, Linux Pro Magazine, issue 116, July 2010, pg. 52
[2] CSP:
[3] Limbo:
[4] Inferno:
[5] GitHub:
[6] Plan 9:
[7] Well-known Go users:
[8] Docker:
[9] Juju:
[10] LXD:
[11] GopherCon:
[12] dotGo:


Features Perl: Searching Git

Perl script rummages through Git metadata

Under the Hood

GitHub is not only home to the code repositories of many well-known open source projects, but it also offers a sophisticated API that opens up wonderful opportunities for snooping around. By Mike Schilli


Hardly a software project today manages without Git. Once you have experienced the performance benefits, SVN will feel like something from the age of stagecoaches. Because GitHub has built a nice UI around this service, which is free and reliably stores your data, and because it's so easy to contribute through pull requests, many developers like myself swear by the San Francisco-based Git hoster. Over the years, 57 publicly visible repositories have accumulated in my account; most contain CPAN modules, but more esoteric content like the text data for my blog usarundbrief.com is also stored there [1]. You can dig up some interesting facts by messing around in the associated metadata with Perl.

Figure 1: The author page on GitHub lists your own repositories and contributions to other repositories. The API offers very similar abilities.

Especially in the context of automatic build systems, it's essential that access to the metadata in the Git repositories not be exclusively browser-based. Instead, automated scripts leverage APIs to retrieve everything you need from the cornucopia of data. To control this in Perl, the Net::GitHub collection of modules by Fayland Lam is available on CPAN. Equipped with the necessary access privileges, the API user can both access (Figure 1) and actively modify programs, say, by adding some new code.

For GitHub to know who issues API requests and to intervene with corrective action where needed, an optional authentication token is included with each request to the REST API. Users can pick this up in the browser UI or from a script, as shown in Listing 1 [2]. It prompts you for your password, then uses SSL to send the data to the GitHub server, and receives an access token in exchange. Instead of a password, the client then sends this token going forward and gets more generous access to the server. The access privileges that were set when the token was generated determine the token user's rights. For scripts to find the token, Listing 1 dumps it in YAML format into the ~/.githubrc file in the user's home directory. The command

$ ./token-get
Password: *****
/Users/mschilli/.githubrc written

carries out the necessary steps for the user stored in line 11. You'll have to

Mike Schilli

Mike Schilli works as a software engineer with Yahoo! in Sunnyvale, California. He can be contacted at mschilli@perl­ Mike's homepage can be found at http://perlmeister.com.


Figure 2: The GitHub page lists all previously generated access tokens.

adapt the string to your own GitHub account user ID. You need to watch out for the note comment value; it must have different content for each newly requested token when you call the create_authorization() method. Otherwise, GitHub will refuse to issue the token and report an incorrect user/password combination as the cause. The real reason, however, is that GitHub lists the tokens keyed by this note value on the user's account page, where you can modify or revoke the tokens (Figure 2), and the notes must be unique for each token.

Additionally, the scopes parameter defines what rights the account owner grants to prospective clients. If it is left empty – as in Listing 1 – the server only allows read access to publicly available files [3]. In contrast, one or more entries like user or repo in the scopes array later give users read/write access to data (e.g., to change their email address) or code commits.

Show Your ID

According to the terms of use [4], GitHub allows clients that authenticate up to 5,000 requests per hour, whereas anonymous requests are restricted to 60 per hour and IP address. However, the search API that performs a pattern-based search in repository names or checked-in code is somewhat more generous; it allows 20 requests per minute with a token and five without.

Figure 3 shows that the API also works without logging in – querying metadata from the GitHub mschilli/log4perl repository in this case. The number of requests permitted before the server slams the door is included in the HTTP header of the response. Figure 4 shows an example of the command without a token, in which seven queries were sent in the current hour, and 53 thus remain.

The script in Listing 2 queries the account metadata of the GitHub user specified on the command line. The user authenticates with the access token, which the YAML module from CPAN extracted from the ~/.githubrc file. In addition to a huge mess of other fields, the number of publicly visible repositories created in the account is available in the returned JSON data; Listing 2 simply outputs their number in line 17:

Listing 1: token-get

01 #!/usr/local/bin/perl -w
02 use strict;
03 use Net::GitHub;
04 use Sysadm::Install qw( :all );
05 use YAML qw( DumpFile );
06
07 my $gh_conf = (glob "~")[0] . "/.githubrc";
08
09 my $pw = password_read("Password:");
10 my $gh = Net::GitHub::V3->new(
11   login => 'mschilli', pass => $pw );
12
13 my $oauth = $gh->oauth;
14 my $o = $oauth->create_authorization( {
15   scopes => [],
16   note   => 'empty scope test',
17 } );
18
19 umask 0077;
20 DumpFile $gh_conf,
21   { token => $o->{token} };
22
23 print "$gh_conf written\n";

Figure 3: Even without the auth token, the GitHub API reveals information about repos or users.
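Under the hood, the token handling in Listing 1 boils down to a single HTTP header on every subsequent request. The following Python sketch shows the idea without any Net::GitHub dependency; the "Authorization: token <value>" header is what the GitHub v3 REST API expects, while load_token() and build_request() are made-up helper names for illustration only:

```python
# Sketch: how a stored OAuth token turns into an HTTP request header.
# load_token() and build_request() are hypothetical helpers; only the
# header format itself comes from the GitHub v3 API conventions.

def load_token(config_text):
    """Parse the one-entry YAML file written by token-get."""
    for line in config_text.splitlines():
        if line.startswith("token:"):
            return line.split(":", 1)[1].strip()
    return None

def build_request(token):
    """Return the headers an authenticated API call would carry."""
    headers = {"User-Agent": "token-demo"}
    if token is not None:
        headers["Authorization"] = "token " + token
    return headers

config = "token: 123deadbeef\n"   # what ~/.githubrc might contain
headers = build_request(load_token(config))
print(headers["Authorization"])    # -> token 123deadbeef
```

If no token is stored, the sketch simply omits the header, which corresponds to the anonymous access described above.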

Figure 4: Sixty requests are allowed per hour; the client has another 53 left before the server resets the counter at the specified time.
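The counters visible in Figure 4 come straight from the response headers. X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset are the documented GitHub v3 header names; the dictionary below is canned to mirror the figure rather than fetched from the network:

```python
# Reading GitHub's rate-limit bookkeeping from response headers.
# The values are canned to match Figure 4: 60 allowed, 53 remaining.

canned_headers = {
    "X-RateLimit-Limit": "60",
    "X-RateLimit-Remaining": "53",
    "X-RateLimit-Reset": "1430000000",  # epoch seconds, made up here
}

def requests_used(headers):
    """How many requests the current window has already consumed."""
    limit = int(headers["X-RateLimit-Limit"])
    remaining = int(headers["X-RateLimit-Remaining"])
    return limit - remaining

print(requests_used(canned_headers))  # -> 7
```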



$ ./repo-user mschilli
Mike Schilli owns 57 public repos
$ ./repo-user torvalds
Linus Torvalds owns 2 public repos

The repo-user script amazingly revealed that Linus Torvalds has created only two repositories on GitHub!

Torvalds 2 – Schilli 57

Lines 9 and 12 in Listing 2 grab the access token stored previously by Listing 1, and the method chain ->user->show() accesses the user's account information on GitHub while providing the token. The example in Listing 3 determines which repositories an author already created on GitHub; this is done by extracting and outputting the repository name strings from the returned metadata. Instead of using list(), you could use the list_user() method, which takes a username (Figure 5); if you pass it the string torvalds, you see that the Linux creator's two repositories are named linux and subsurface.

Winner on Points

If you want to know which repositories exist for a specific topic, you can find out by using the search API. For example, many years ago I launched a project named Log4perl and published it on GitHub. Listing 4 now searches for all repositories on GitHub whose names contain the log4perl pattern. Surprisingly, the script returns a whopping 117 results:

$ ./repo-list-multi
mschilli/log4perl (84.5)
cowholio4/log4perl_gelf (15.2)
TomHamilton/Log4Perl (14.1)
lammel/moosex-log-log4perl (9.7)
...
$ ./repo-list-multi | wc -l
117

GitHub limits the number of results returned by the server to 100 by default. If you want more, you can increase the pagination size of the returned data with the per_page parameter. Or, as shown in Listing 4, you can ask whether there are more results for the query after the first packet of 100 has been returned, using has_next_page(). If this is the case, a subsequent call to next_page() dumps the next cartful of data into the search object (see Listing 4).
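The has_next_page()/next_page() loop is ordinary result paging. The Python sketch below shows the same accumulation pattern with a stubbed fetcher standing in for the search API (no real requests; the page size is cut to two to keep the example short):

```python
# Paging through search results the way Listing 4 does, with a stub
# standing in for the GitHub search API.

PAGE_SIZE = 2
FAKE_RESULTS = ["repo-%d" % i for i in range(5)]  # 5 hits -> 3 pages

def fetch_page(page):
    """Stub for one search request; returns (items, has_next)."""
    start = page * PAGE_SIZE
    items = FAKE_RESULTS[start:start + PAGE_SIZE]
    return items, start + PAGE_SIZE < len(FAKE_RESULTS)

items, page = [], 0
chunk, more = fetch_page(page)
items += chunk
while more:                          # mirrors has_next_page()
    page += 1
    chunk, more = fetch_page(page)   # mirrors next_page()
    items += chunk

print(len(items))  # -> 5
```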

The script sets the sort search parameter in line 14 to stars, together with a value of desc (for descending) so that the result is sorted by popularity of matching repositories. If you are interested in a project on GitHub, you can click on its star icon to keep up with its progress. The number of stars on a given project is

Listing 3: repos

01 #!/usr/local/bin/perl -w
02 use strict;
03 use Net::GitHub;
04 use YAML qw( LoadFile );
05
06 my $gh_conf = (glob "~")[0] . "/.githubrc";
07
08 my $gh = Net::GitHub->new( access_token =>
09   LoadFile( $gh_conf )->{ token }
10 );
11
12 my @repos = $gh->repos->list( );
13
14 for my $repo ( @repos ) {
15   print $repo->{ name }, "\n";
16 }

Listing 4: repo-list-multi

01 #!/usr/local/bin/perl -w
02 use strict;
03 use Net::GitHub;
04 use YAML qw( LoadFile );
05
06 my $gh_conf = (glob "~")[0] . "/.githubrc";
07
08 my $gh = Net::GitHub->new( access_token =>
09   LoadFile( $gh_conf )->{ token }
10 );
11
12 my %data = $gh->search->repositories( {
13   q => 'log4perl',
14   sort => 'stars',
15   order => 'desc',
16 } );
17
18 my @items = ();
19 push @items, @{ $data{ items } };
20
21 while( $gh->search->has_next_page() ) {
22   my %data = $gh->search->next_page();
23   push @items, @{ $data{ items } };
24 }
25
26 for my $item ( @items ) {
27   printf "%s (%.1f)\n",
28     $item->{ full_name },
29     $item->{ stargazers_count };
30 }

Listing 2: repo-user

01 #!/usr/local/bin/perl -w
02 use strict;
03 use Net::GitHub;
04 use YAML qw( LoadFile );
05
06 my( $user ) = @ARGV;
07 die "usage: $0 user" if !defined $user;
08
09 my $gh_conf = (glob "~")[0] . "/.githubrc";
10
11 my $gh = Net::GitHub->new( access_token =>
12   LoadFile( $gh_conf )->{ token }
13 );
14
15 my %data = $gh->user->show( $user );
16
17 print "$data{ name } owns ",
18   "$data{ public_repos } public repos\n";



an indication of how popular a project is. In a query result, the number of stars on the project is given in the stargazers_count field, and the name of the project can be found in full_name.

If you happen to use a local GitHub Enterprise installation instead of the public one, you can insert a leading third parameter pair,

api_url => https://.../api/v3

to the new() method of the Net::GitHub class; then, you will receive information from this source instead.

If you are looking for patterns in the checked-in code, Listing 5 gives you a taste of what the GitHub API offers in this area. The search() method elicits a search object from the GitHub object; the search object's code() method will in turn initiate a search in the repository code. The query

snickers in:file language:perl

searches files that contain Perl code in the mschilli/log4perl repository for the word snickers. The results,

$ ./repo-search
lib/Log/
lib/Log/Log4perl/

show that the code in the Log4perl repository has two files with occurrences of the word snickers and prove that the author not only uses the names of tasty candy bars in programming examples but also has a tendency toward oddball humor.

Listing 5: repo-search

01 #!/usr/local/bin/perl -w
02 use strict;
03 use Net::GitHub;
04 use YAML qw( LoadFile );
05
06 my $gh_conf = (glob "~")[0] . "/.githubrc";
07
08 my $gh = Net::GitHub->new( access_token =>
09   LoadFile( $gh_conf )->{ token }
10 );
11
12 my %data = $gh->search->code(
13   { q => 'snickers in:file language:perl' .
14     ' repo:mschilli/log4perl'
15   } );
16
17 for my $item ( @{ $data{ items } } ) {
18   print $item->{ path }, "\n";
19 }

Figure 5: Listing 3 outputs all the user's GitHub repositories.

Committing Code

Before calling the API to commit a revised README file to a software project on GitHub, you first need to step back and look into Git's inner workings. Under the hood, Git relies on a few simple but powerful data structures, although the git command-line tool completely shields them from regular users. Git represents a file in the repository as a so-called blob. A collection of these blobs in a directory is called a tree, and a tree with a check-in note is a commit (Figure 6). A thorough introduction to Git's guts can be found online [5].

Figure 6: Under Git's hood, commits consist of trees, whereas trees contain blobs and other trees.

Blobs, Trees, Commits

Before the script in Listing 6 can commit to the test project mschilli/apitest, I first need to collect a new token with escalated privileges. The token obtained in Listing 1 only let me read the repository data. To gain write access, line 15 in Listing 1 must be replaced by

scopes => ['public_repo'],

and the comment stored in the note parameter needs to be changed. A new call to token-get then stores a new token with advanced privileges in the token file.

The call to the readme-upd script (Listing 6) then generates a new blob of the README file with permissions of 0644 and defines a tree to store the blob. On top of this is the base_tree option in line 40, set to an already existing tree in the repository, which the script extracts from the most recent commit of the master branch. Basing the new tree off this base tree ensures that any other existing files
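The qualifier syntax (in:file, language:, repo:) travels to the server as a single q parameter. This sketch, using only the Python standard library, shows the URL such a query produces; /search/code is the documented GitHub v3 route, and nothing is actually sent here:

```python
# Build the search URL that a query like Listing 5's would produce.
# Only URL construction is shown; no request goes over the wire.
from urllib.parse import urlencode

q = "snickers in:file language:perl repo:mschilli/log4perl"
url = "https://api.github.com/search/code?" + urlencode({"q": q})
print(url)
```

Note how urlencode() percent-encodes the colons and the slash in the qualifiers, so the whole query survives as one parameter.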


will stay in the newly created version of the repository. Line 43 then converts the tree into a commit, with a fun date of 11/11/2011, which illustrates that date and author information in Git can be arbitrarily set.
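The SHA values the API hands back everywhere are content addresses: Git hashes each object over a small header plus its payload. A minimal sketch of the blob rule (the "blob <size>\0<content>" framing is Git's documented object format):

```python
# Computing a Git blob ID: SHA-1 over "blob <len>\0" plus the bytes.
import hashlib

def git_blob_sha(content: bytes) -> str:
    """Return the object ID Git would assign to this file content."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The empty blob has a famous, fixed ID in every Git repository:
print(git_blob_sha(b""))  # -> e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```

Because the ID depends only on the content, identical files share one blob, and trees and commits are hashed the same way over their own serialized forms.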

Showing Pointers

Previously, line 19 of the listing retrieved the references to the development branches of the repo. After the new commit is in place, line 54 redirects the head pointer of the project's master branch to

it. This can be accomplished without using force (force => 0) because the new commit was derived straight from its ancestor. Figure 7 shows what the automatically injected commit looks like on the GitHub website of the project.

When this issue went to press, version 0.71 of the Net::GitHub module still had a bug in its implementation of update_ref(). However, a quick pull request provided a remedy [6]. With a little bit of luck, the module author will have already added the patch in the latest release, which will be on CPAN by the time this article is published.

If you want, you can run one of the query scripts in this article as a daily cronjob to create a timeline that illustrates how hard you have been working on your GitHub projects day in and day out. Or, perhaps the boss would like to be alerted if an employee suddenly experiences a productivity boost.

Info
[1] "An Alternative Use for GitHub" by Mike Schilli, Linux Magazine, issue 147, pg. 46: http://www.linux-magazine.com/index.php/Issues/2013/147/Perl-CMS-with-GitHub/(tagID)/167
[2] Listings for this article: ftp://ftp.linux-magazine.com/pub/listings/magazine/174
[3] GitHub API scopes: https://developer.github.com/v3/oauth/#scopes
[4] GitHub API rate limits: https://developer.github.com/v3/search/#rate-limit
[5] Ry's Git Tutorial: http://rypress.com/tutorials/git

Figure 7: Shortly after the commit was created by the script, it is visible on the GitHub website.

[6] Pull request for a bug in update_ref(): https://github.com/fayland/perl-net-github/pull/58

Listing 6: readme-upd

01 #!/usr/local/bin/perl -w
02 use strict;
03 use Net::GitHub;
04 use YAML qw( LoadFile );
05 use Digest::SHA1 qw( sha1_hex );
06
07 my $gh_conf = (glob "~")[0] . "/.githubrc";
08
09 my $gh = Net::GitHub->new( access_token =>
10   LoadFile( $gh_conf )->{ token }
11 );
12
13 $gh->set_default_user_repo(
14   "mschilli", "apitest" );
15 my $gdata = $gh->git_data();
16
17 my $head;
18
19 for my $ref ( @{ $gdata->refs() } ) {
20   if( $ref->{ ref } eq
21       "refs/heads/master" ) {
22     $head = $ref->{ object }->{ sha };
23
24   }
25 }
26
27 die "Head not found" if !defined $head;
28
29 my $commit_old = $gdata->commit( $head );
30 my $tree_old =
31   $commit_old->{ tree }->{ sha };
32
33 my $content = "Updated by script $0.";
34
35 my $tree = $gdata->create_tree( {
36   tree => [ {
37     path => "README", mode => "100644",
38     type => "blob", content => $content,
39   } ],
40   base_tree => $tree_old,
41 } );
42
43 my $commit = $gdata->create_commit( {
44   message => "Updated via API",
45   author => {
46     name => "Mike Schilli",
47     email => '',
48     date => "2011-11-11T11:11:11+08:00",
49   },
50   parents => [ $head ],
51   tree => $tree->{ sha },
52 } );
53
54 $gdata->update_ref( "heads/master",
55   { sha => $commit->{ sha } } );



LinuxUser Darktable 1.6

Photo editing with Darktable 1.6

Mixed Vegetables

Hardly anything affects the quality of photos more than play of light and shadow, or the brilliance of colors. Darktable fixes incorrect exposure, conceals unfavorable lighting conditions, and ensures harmonious colors. By Peter Kreußel

Darktable is not an easy-to-use piece of software. Its range of functions is limited to post-editing light, shadows, and colors. It lacks the artistic effects offered in Gimp or Photoshop. However, there is no better piece of software for refining successful and less successful photos – including commercial applications such as Adobe Lightroom. Unlike first-generation image editing programs, Darktable [1] uses a non-linear principle: An effects pipeline replaces the step-by-step undo function. You can imagine this as keeping the dialogs of all effects permanently open so that the appropriate parameters can be adjusted at any time. The red buttons circled in Figure 1 switch the respective effects on and off – either temporarily, for trial purposes, or permanently. The whole effect pipeline – that is, the list of all applied effects and their settings – is preserved until the next change and even survives a restart: The software saves the changes in separate files with the .xmp extension and in an internal database as well.

Safe Data

Darktable doesn't need a save button: Each step is immediately stored to disk. Because it is just a text file with effects names and parameters, this happens in the background without causing any significant load on your computer. You can undo unintended changes in the history list. The software essentially does not touch the original file, which lends itself to a professional workflow: It is impossible to overwrite the original accidentally. Creating duplicates of an image only takes up a few kilobytes of disk space. Darktable also lends itself to using a versioning tool such as Git or SVN thanks to the small amount of data it creates.

Initially, you can only see the results of your editing in Darktable itself. In other words, you don't use an ordinary file browser to browse your edited photos; instead, you use Darktable's lighttable mode (Figure 2), which is the mode in which the software comes up. You can open an image for editing (darkroom) by double-clicking the image. You only use the export button in lighttable to export the image as a new image file after you have finished editing.

However, the non-linear operating principle is demanding on the CPU: The computer needs to compute the entire pipeline from the original for each change. Depending on the number of active effects, this can take a few seconds for changes in sliders to take effect in the display. The small preview at top left in the Darktable window (Figure 3.7) responds almost instantly, however. The software also relies on OpenCL [2] to tap the processors on ATI and Nvidia graphics cards with at least 1GB of video memory, which can handle this task much faster. Darktable automatically enables hardware support if you have working 3D acceleration. The processing times are still just a few seconds, even without this.

Figure 1: With non-linear image editing programs, the settings for all the applied effects (1-6) can be adjusted independently. This gives you far more scope for experimenting than a classic undo function.

Stemming the Chaos

Dialogs that are only opened to apply an effect would violate the non-linear operating principle. Thus, each function in Darktable has its own palette, which you can collapse to save space but can never close. Nevertheless, the program, unlike many Adobe applications, fits well on a normal-sized display. Toggle buttons group the effects into categories and are Darktable's core element of operation (Figure 3.1). From left to right, these are active effects and favorites, and the basic group, tone group, color group, correction group, and effect group. You will find a preselection of modules that should be visible in the toggles referred to in the more modules list box. If you click again on an entry that is already active, it assumes a Favorite status and appears in the favorites toggle, as well as in its original category. Always check whether a desired effect is disabled if you can't find it as described here. The toggle on the left bundles all the effects you have enabled by clicking on the switch symbol in the palette header. Use the view selector to switch back to lighttable mode (Figure 3.8). The export function, which renders the resulting image as a new TIFF, OpenEXR, PNG, or JPEG file, and thus makes it available for other programs, is in the right sub-window of the lighttable view and may only be visible after scrolling down.
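The non-destructive pipeline described above can be sketched in a few lines of Python. This is purely conceptual, not Darktable code: the original stays untouched, and every render re-applies the ordered list of enabled, parameterized steps, so any step can be re-tuned or toggled later, unlike a linear undo stack:

```python
# A toy "effect pipeline": the original is never modified; every
# rendering re-applies the enabled steps in order.

original = [10, 20, 30]          # stand-in for pixel data

pipeline = [
    {"name": "exposure", "enabled": True,  "fn": lambda p: [v + 5 for v in p]},
    {"name": "contrast", "enabled": False, "fn": lambda p: [v * 2 for v in p]},
]

def render(image, steps):
    """Apply every enabled step to a copy of the input."""
    for step in steps:
        if step["enabled"]:
            image = step["fn"](image)
    return image

print(render(original, pipeline))   # -> [15, 25, 35]
pipeline[1]["enabled"] = True       # toggle an earlier decision
print(render(original, pipeline))   # -> [30, 50, 70]
print(original)                     # untouched -> [10, 20, 30]
```

This also shows why each change costs CPU time: the whole chain runs again from the original on every adjustment.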

Shaded

Figure 4 shows a poorly exposed image of a ripe bunch of grapes. The unfavorable position of the sun obviously affected the exposure quality when this shot was taken. Darktable can prove its worth on this image. You can download the image for free to try these steps for yourself [3]. To import this image in the lighttable view, click import | image (Figure 2, top left). You can see the work steps for this image in the history panel (center left), which you open by clicking the panel triangle.

Figure 2: Along with its image editing functions, Darktable also provides a complete photo management feature that eliminates the need to use third-party programs such as digiKam.




The program automatically creates the first two, crop and rotate and exposure, when opening new files. It rotates the image if necessary as per the Exif tag. For RAW images, the exposure step selects a displayable and printable section from the camera's brightness range. Neither is relevant for the sample JPEG. First, click on 0 – original in the history list to see the uncorrected original image. Then, select 3 – shadows and highlights. This function, which brightens the excessive shadows and reduces overly bright highlights, is ideal for this photo. Figure 5 shows the results, which are already much better, along with the settings in the shadows and highlights palette (basic group category). I have changed the shadows setting from a neutral value of 50% to about 80%. Some of the brightened dark areas therefore now appear a greenish gray (Figure 5 inset). Darktable intensifies colors by lightening proportionally to the brightness – just as in nature – with stronger lighting. Here, however, the green light let through by the leaves emerges too strongly. To remedy this, you can reduce the shadows color adjustment to almost zero because the dark parts of the bunch of grapes also appear an almost colorless gray in reality. The radius soft-focus setting adjusts the hardness of light-to-dark transitions and was reduced by visual judgment compared with the default value. You can recall a slider's default value by double-clicking.

Figure 3: The Darktable GUI is straightforward thanks to toggles (1) and the palettes that can be expanded or closed (2). The setting dialogs for all available effects are virtually always open, however.

Not Totally Green

Parts of the white wall also radiate in a bright green hue after lightening the image, thus distracting from the grapes. The complex functions in color zones (color group category), which selectively modify the lightness, saturation, and hue, can help with this. You can use them to darken, desaturate, and color the yellow-green shades in the picture to a warmer (i.e., redder) hue.

Figure 4: Nice try – but with the prevailing light conditions, it was simply impossible to take a high-quality photo. (Image: vclare)



Figure 5: The shadows and highlights tool principally gets rid of the drop shadow (top arrow). Undoing the shadows color adjustment (bottom arrow) removes the wine leaves’ excessive reflection at the grapes’ bright points.



The saturation curve in the tab of the same name (Figure 6) makes the most important contribution by turning the overly lavish green into a subtle neutral gray. If you mouse over the color chart, a circle on a white background shows the strength of an adjustment on adjacent nodes. You can use the mouse wheel to modify the size of the circle. Clicking a node and dragging it vertically adjusts saturation accordingly. Smooth, granular operations are thus easy to achieve. You can also reduce the lightness of the greens in the same color range. The curve superimposed from the saturation tab shows the section of the spectrum that was already used in desaturation. The effect of the third tab, hue, is a little more difficult to understand: Moving the control points causes a color transformation from the color square where the shift starts to the color square where it ends (Figure 6, right inset). In other words, if you drag a control point from deep green toward orange, as shown in the picture, you are modifying the green tones in the image accordingly. As so often happens in image editing, the best results can occur when trying things out – and, as always in Darktable, a double-click will reset the curve.

Contrasts

The image now appears to have a more balanced exposure; however, it also seems flat and has little contrast. To spice up overly gray photos, the zone system tool (Figure 7) under the tone group toggle is a big help. It divides the brightness spectrum into nine zones, which the gray wedge symbolizes. If you mouse over a zone, any areas in the image with that brightness level light up in yellow in the thumbnail in the panel. If you move the mouse to the bottom edge of one of the zone boundaries, a slider appears. You can use the slider to move the edge of the gray field and increase (by dragging to the right) or reduce (by dragging to the left) the brightness of the corresponding image areas. In this way, you can redistribute light and shadows. Expanding the dark areas and highlights increases the contrast and adds a touch of sunshine to many a gloomy, rainy image. In the example image, it restores light conditions to match what is actually a sunny scene. Figure 7 shows the results and how to move the brightness zones.

Figure 6: You can get rid of the annoying green color cast under the leaf canopy using the color zones tool.

Color Play

The lead image for this article [4] does not require any rework in terms of exposure. Because there are no technical shortcomings to compensate for here, I will just play around with it instead. Figure 8 shows a brightly colored variant, which was created by applying color filters to parts of the photo. In Darktable, two image segment selection techniques work hand in hand:

Figure 7: The sun is shining: The zone system influences light and shadows in the image in a granular way, which is far more appropriate for sunny surroundings in this example image.




Figure 9: The brush selection (i.e., everything inside the dotted line) slightly exceeds the desired image section. That doesn't matter here because of the subsequent selection based on the color.

the shape selection, with which you can roughly cut out areas using a brush, and the parametric selection, with which the program again isolates specific color or brightness areas within this initial selection. All you need to do, as shown in Figure 9, is paint over a tomato with a generous border using the mask brush. The decisive thing is that the selection does not include any other red objects; thankfully, this action does not require fine motor skills.

To begin, switch on the color contrast filter from the color group toggle in the palette header. You will be using this to paint the front tomato purple. First, create a mask so that the filter does not affect the whole image: The somewhat misleadingly named blend list box is responsible for this. Next, select drawn & parametric mask for a rough selection made with the brush, which further narrows down the color selection. Then, create the brush preselection. To do this, click the pencil icon below drawn mask. The cursor turns into a gray brush; you can control the size of this using the mouse wheel. Generously paint over the whole tomato while holding down the mouse button.

Figure 8: The Darktable color tools are good for smooth, realistic operations. They were used for colorful effects in this playful demonstration.

Smooth Transition

Figure 10: The parametric selection filters the red hues from the preselection using the brush.



As mentioned, the granular selection is the domain of the parametric mask, whose controls are at the very bottom of the palette. You need to click the h (hue) toggle. Drag the two right wedges in the color bar for input to the left up to the border with the red hues (Figure 10, bottom). Then, you only need to adjust the color contrast: If you drag the green vs magenta slider to the right and the blue vs yellow slider to the left, the tomato appears bright purple. Parametric and drawn masks are available for almost all filters in Darktable. The filled-in wedge controller in the color bar controls the value at which the mask completely finishes. You can use the hollow wedge to create a smooth transition. The circled plus icons to the right of the color bars switch between inclusive and exclusive selection modes. The parametric mask selects image areas by lightness (L toggle, luminance) and by saturation (C, chrominance) as well as by color. The a and b toggles represent Green/Magenta and Blue/Yellow from the LAB color model [5], which maps human color perception better than the RGB model. The two differently colored tomatoes in Figure 8 were created using the already familiar color zones tool. You need several instances of the tool to create different colors; Darktable only introduced this feature a few versions back. To accomplish this, first click on the clipboard icon in the color zones palette header and then select the new instance option.



A rough selection with the mask brush again follows. Here, you can leave out the parametric mask, which separated the red of the tomato from the surrounding green in the first color example. The color zones tool itself is color-selective. Thus, it’s sufficient to select the drawn mask point in the blend list box and make a rough selection with the mask brush in the normal way.

Drawing Rather than Painting

Often you can accurately cut an object out of an image using a rough preselection in combination with selecting the color. However, this principle does not work in the next example with the chili pepper, which I want to paint red. It directly borders other objects of the same color. Selection by Bézier curve, as used in most professionally knocked-out images, is also possible in Darktable for such cases: After selecting drawn mask, do not click on the pencil icon as before; instead, click on the second icon from the right (add path). Figure 11 shows the mask created with the vector tool for selecting the red pepper, including its control points. You can also see the colorize tool settings, which provide the red color. A closed path, to which you can add new control points, appears after clicking on the image. If you have experience with Inkscape, you will be familiar with the principle of Bézier curves, although the details of the procedure are slightly different in Darktable. A right-click exits drawing mode and releases the mouse from the curve. Existing control points can then be moved. As in Inkscape, an active control point also has a control tangent that lets you manage the progression and degree of the curve. Ctrl+click creates a new control point; a right-click deletes a control point. Ctrl+left-click on an existing point converts this into a corner without its own curve. The mouse wheel scales the whole mask as long as the pointer remains within the path. A thinner dashed line then appears, and the mask gradually fades out toward this line. If you mouse over this line, the mouse wheel changes the width of this transition area. Darktable also converts the brush selections into vector curves after you release the mouse button. The vector curves' control points can be moved just as for curves created with the vector tool, which allows for minor, retroactive corrections. The vector tool, which you can use specifically to set the control points manually, provides more control.

Humble Beginnings
In this article, I described just five of the more than 45 filter modules in Darktable. Some of the filters affected the entire image; for other edits, I used the program's powerful masking functions. An additional version of the color-manipulated vegetable still life (Figure 12) gives you another impression; this example shows blurred edges. An artificial film grain accentuates the outline of the white table; the Velvia filter, named after a type of color film, also slightly exaggerates the hues of those parts of the image that were not colorized manually.

The color mapping feature, which transfers an image's dominant colors to another photograph, was particularly impressive in our lab. Designs with multiple images are an obvious field of application: The color mapping filter then ensures a uniform color scheme – but maybe you just want to apply the romantic shades of a sunset from another picture.

The Darktable manual [6], updated for the current version 1.6, explains all the other filters in enough detail that you can explore the program through your own experiments. To switch the interface language to English, you can set the value of ui_last/gui_language to C in the ~/.config/darktable/darktablerc file; there should be no spaces before or after the C.

I worked exclusively with JPEG originals with 8-bit color depth, which barely exploits Darktable's huge internal color depth of 4x32-bit floating point. You only leverage this if you work with RAW originals. The program relies on the LibRaw [7] library for RAW support but also provides optimized lightness curves and color tables for a number of popular SLR cameras [8].

I only briefly touched on the image viewer (lighttable) here, although – with indexing; geotagging, including map projection; and a search function – it makes standalone management software such as digiKam unnecessary. The current version of Darktable 1.6 also comes with a slideshow function. The program now works better on the high-resolution monitors popular with photographers; it can read and write TIFFs, even in 32-bit floating point color depth, and should work faster overall. A chromatic aberrations filter that tries to iron out lens aberrations, an automatic mode for exposure compensation (exposure filter), and improved highlight reconstruction are new additions.

Figure 11: The Bézier curves known from vector graphics are also available for area selection. They are painstaking to draw but can accurately map even complex shapes.

Figure 12: Even if the colors of the tomato are purely imaginary, a few more alienating effects (blurring, Velvia colors, artificial film grain) can't hurt.

[4] Chili peppers: http://www.freeimages.com/photo/594698
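The language setting mentioned above is a single line in darktable's plain-text configuration file. A minimal sketch (only the ui_last/gui_language key comes from the text; the comment is mine):

```
# ~/.config/darktable/darktablerc – switch the interface to English;
# note: no spaces before or after the C
ui_last/gui_language=C
```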

Info
[1] Darktable: http://www.darktable.org
[2] OpenCL: http://en.wikipedia.org/wiki/OpenCL
[3] Grapes photo: http://www.freeimages.com/photo/1165235
[5] LAB color model: http://en.wikipedia.org/wiki/Lab_color_space
[6] Handbook: http://www.darktable.org/usermanual/
[7] LibRaw: http://www.libraw.org
[8] Cameras with enhanced support: http://www.darktable.org/resources/



No commercial manufacturer has managed to put together a better photo editing program than the free Darktable. The software is available for Linux and Mac OS X and brings together numerous effects in what is still a simple interface. It even uses the graphics card to accelerate computations, something that until now has mainly been common in far more compute-intensive 3D rendering. Darktable thus disappoints only one group of users: people who want to sort out everything with just a few mouse clicks.





Command Line: Desktop Recorders

Recording desktop activity

For the Record
We look at several tools, ranging from very simple to more complex, that can help you record various desktop activities. By Bruce Byfield


Bruce Byfield
Bruce Byfield is a computer journalist and a freelance writer and editor specializing in free and open source software. In addition to his writing projects, he also teaches live and e-learning courses. In his spare time, Bruce writes about Northwest coast art. You can read more of his work at http://brucebyfield.wordpress.com



Recording your desktop can serve many purposes: It can be a way of permanently recording a complicated procedure; it can prove that a student has completed an assignment, as the man page for the script command suggests; or it can enhance documentation, provide animated how-tos, and even assist with automatic testing, depending on the tools you choose.

The tools described in this article operate on several levels. At the simplest level, commands like script, ttyrec, and shelr serve as more permanent alternatives to a shell's history. By contrast, scrot takes stills, cnee records not so much visual events as the technical information behind them, and recordmydesktop produces movies made from desktop events. You could accurately say that recording tools are available for every purpose and level of user.

script
The script command writes a record of all actions within a shell (Figure 1). It is not much different from viewing a shell's history, except that it writes to a file and is stored permanently. Script is one of several dozen commands installed in distributions as part of the util-linux package.



Figure 1: The script command records shell sessions.

Once started, script records every command entered and its output. You can add annotations at the command line between commands if desired. These annotations can be located later by searching for the "command not found" error that the shell adds after them.

At its simplest, script runs from the bare command, saving to the file typescript and ending when you press Ctrl+D. However, you can record to whatever file you want with the command script FILE. You can further control how the recording file is used with the option -a or --append, which adds new input at the bottom of the content of a previous session, or -f or --flush, which flushes the output after each write – useful if someone is following the recording in real time. Additionally, you can start script with the name of a command you want to run. For example, you can start script and vi together with the command:

script --command vi FILE

After recording, you can read the output file with less, more, or cat.
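The annotation trick is easy to try even on a mock transcript. The session content below is invented for illustration; only the "command not found" marker reflects the shell's real behavior:

```shell
# Build a mock typescript file and locate the annotation by the
# "command not found" error that the shell appended after it.
cat > /tmp/typescript <<'EOF'
$ ls
notes.txt  report.odt
$ NOTE end of directory listing
bash: NOTE: command not found
EOF
grep -n 'command not found' /tmp/typescript
```

The -n option makes grep print the line number as well, so you can jump straight to the annotated spot in a long recording.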

ttyrec, ttytime, and ttyplay
The three related commands ttyrec, ttytime, and ttyplay are intended as an improved version of script – to be exact, a simplification and a division into three separate commands.

Unless another file is specified, the ttyrec command saves to ./ttyrecord. You can use the -a option to append the current recording to a previously recorded file or use -e COMMAND to start the command within another application. Unlike script, ttyrec has no GNU-style options (longer commands prefixed with two hyphens). It does include an option to uuencode output for protection when transferring remote files, but because uuencode is practically obsolete today, you should check first that the file's receiver knows how to read it.

Once a recording is finished, you can use ttytime FILE to see a file's length in seconds. This function may help you to identify the contents of the file when the name does not.

To play a recording, use the command ttyplay FILE. Options in ttyplay are mostly for speed. The -s SPEED option is a multiple of the default speed. During playback, you can double the speed by pressing either + or f and halve it by pressing - or s. Similarly, pressing 1 returns the playback to normal speed, and pressing 0 stops playback until 1 is pressed again.

Figure 2: Shelr records events in the terminal and replays them.

Figure 3: You can set a delay with a countdown before taking a screen capture with scrot.

Table 1: Special Strings for scrot File Names

String   Meaning
$f       Image path/file name
$n       Image name
$s       Image size (bytes)
$p       Image pixel size
$w       Image width
$h       Image height
$t       Image format

Shelr
Shelr is a combination of script and ttyrec that records terminal output and replays it in the terminal (Figure 2). Like many Debian commands, it consists of the basic command, followed by sub-commands. More unusually, it includes no options to modify behavior, but this simple structure is still adequate for its purpose.

To begin recording terminal events, enter the command shelr record. After you give the recording a name, Shelr will continue to record until you either type exit or press Ctrl+D; then, it will save the recording to a file in the sub-directory ~/.local/share/shelr. The saved file has a random number for a name, but users are apparently not expected to interact directly with the file. Instead, you can use shelr list to see a list of recordings, each with the name you entered to start recording.

To replay, run shelr play RECORDING. Alternatively, you can enter the full path or, on the local machine, run shelr play last to show the last recording you made. The playback in all these cases takes a moment to start and can slow when replaying typing, then speed up when a program produces standard output.

scrot
Scrot is a command-line screen capture program. It requires a minimum of a target file, which can be in most standard graphic formats, including .png, .jpeg, and .tiff. It does not include .gif, most likely because of the boycott of the format that once existed.

The command includes several options for setting up the screen capture. By default, scrot captures the entire screen, but you can use -s or --select, then press Enter to select an area on the screen. Similarly, -u or --focused selects the currently active window to capture. You can set a delay before the shot is taken with -d SECONDS or --delay SECONDS, in case you have to set up something such as the active window, perhaps accompanying it with -c or --count to display a countdown to the moment of the shot (Figure 3). If you have multiple monitors, then using -m or --multidisp captures all of them in a single shot.

Other options can control output; for example, you can use -b or --border to capture the window decorations and -t PERCENTAGE or --thumb PERCENTAGE to create a thumbnail shot. If you know that you will be editing or using the screen capture immediately, still another option is to use -e or --exec APPLICATION to open it in the application immediately after it is created, thereby streamlining your workflow by eliminating one step.

By default, scrot saves to a file with the naming format YYYY-MM-DD-HH-MM-SS_width_height_scrot.png. This name can be modified with special strings placed immediately after the basic command or immediately before the file name (Table 1). These special strings do not modify file name characteristics but extract them from the file, thus allowing you to see file characteristics without opening the file in a graphics editor. Note that the first three options only work when placed directly after the basic command.

Figure 4: Cnee shows information about the movements of a mouse across the screen.

Figure 5: recordMyDesktop records all desktop activity to a free file format.
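The default naming scheme is easy to reproduce with date(1); in this sketch, 1920 and 1080 are hypothetical stand-ins for the width and height values scrot fills in:

```shell
# Reconstruct scrot's default YYYY-MM-DD-HH-MM-SS_width_height_scrot.png
# file name; the dimensions here are placeholders.
name="$(date +%Y-%m-%d-%H-%M-%S)_1920_1080_scrot.png"
echo "$name"
```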

cnee
The cnee tool records and replays activity in an X Window session – showing not so much events as they appear on the desktop but rather the technical details behind what is displayed. This orientation makes cnee ideal for automated testing. In both recording and replaying, cnee can display across multiple monitors. By default, its output goes to standard output – in other words, to the terminal. However, you may want to specify an output file instead with -f or --file FILE.

To begin recording with cnee, add the option --record or -rec after the basic command. You need to specify --mouse and --keyboard if you want to include their actions in the recording (Figure 4). If necessary, you can add -t or --time SECONDS to delay the start of the recording and save file space. To save even more space, use --first-last to record only the start and finish of each event. Another choice is to set the number of events to record with -etr or --events-to-record NUMBER or the time to record with -str or --seconds-to-record SECONDS. You probably will not need any further options, but --speed-percent PERCENT sets the speed of playback and --replay-resolution RESOLUTION the resolution for playback.

recordMyDesktop
The recordmydesktop tool captures all activity on the desktop and stores the result for playback in an Ogg (Theora/Vorbis) file (Figure 5). By default, this file is ./out.ogv, but you can substitute another file name, so long as it has the proper extension. At its simplest, recordMyDesktop requires only the basic command to start recording, and Ctrl+C to stop recording and save the file.

However, in addition to the file name, you can fine-tune much of the recording. To start, you can set recordMyDesktop to record only events on part of the desktop. With the -x PIXELS and -y PIXELS options, you can define a region by its offset from the upper left corner, and with -width PIXELS and -height PIXELS, you can set the size of the region recorded. However, getting the exact positioning you want with these options will generally involve trial and error. Still, by combining these options with --follow-mouse, you can reduce the options you need significantly.

Getting more complicated, you can also use -fps NUMBER to set the frame rate, with a higher number making for a smoother recording. The number of sound channels is set with --channels NUMBER and the sound frequency with --freq NUMBER. If you want to add narration, --use-jack PORT will make recordMyDesktop aware of input from a microphone. Or, if such options seem too complicated, --no-sound can eliminate many problem sources.

In much the same way, you can adjust the encoding of the recording with --on-the-fly-encoding, --v_quality NUMBER (0-63), or --v_bitrate NUMBER (45,000-2,000,000). However, if none of these options are meaningful to you, the more generalized option --s_quality NUMBER (1-10) may be a surer alternative.

recordMyDesktop has no jurisdiction over playback, but it has no need to. Make a note of the file where its output is stored, and you can play it simply by running the file from the command line.
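Putting the region and sound options together, a typical invocation might look like the sketch below. The offsets, size, and output file are hypothetical, and the command is only printed here because an actual recording needs a running X session:

```shell
# Assemble and display a sample recordmydesktop command: a 1024x768
# region at offset 200,100, 30 frames per second, without audio.
set -- recordmydesktop -x 200 -y 100 -width 1024 -height 768 \
       -fps 30 --no-sound -o /tmp/demo.ogv
printf '%s ' "$@"
echo
# To record for real, run: "$@"
```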

The Need for Full Paths
One final word: The applications described here generally assume that files are saved to the working directory. However, the applications often open in a sub-directory of /usr – where, of course, you cannot write unless you have set up your user account very wrongly. To save yourself frustration when you are sure that you have put together a command correctly and shouldn't be receiving error messages, make sure that you run these applications somewhere in your home directory. Either that, or never accept the default file and always include the full path to an alternative file. Your blood pressure will thank you for it.
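Both workarounds can be combined in a couple of lines; the recordings directory here is just an example name:

```shell
# Work from a writable location and spell out absolute output paths.
cd "$HOME" || exit 1
mkdir -p recordings
echo "save recordings to $HOME/recordings"
```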



LinuxUser FreeFileSync

Keeping databases in sync

In Sync
As your data volume grows on your home computer, you can quickly and easily create a reliable backup using FreeFileSync. By Erik Bärwaldt


Thanks to digital cameras, MP3 players, and smartphones with HD video capabilities, users can easily fill up their terabyte-sized mass storage devices. The stored data often includes irreplaceable material, such as photos from birthday parties or holiday videos. Thus, it becomes even more important to back up your data so that it's not lost if the hard drive fails. Traditional backup solutions, however, are often cumbersome to operate and can overwhelm home users with a wealth of functions they don't need. This is where FreeFileSync [1], which is aimed specifically at private users, comes in.

Author
Erik Bärwaldt is a self-employed IT admin and works for several small and medium-sized companies in Germany.



First Use
Most common distributions have FreeFileSync in their repositories, and you can usually install it easily using a package manager such as Synaptic or YaST. However, if the current latest version (6.13) is important to you, you will need to download it from the project website. There, the developers provide both customized tarballs for some large distributions as well as the source code for a manual build [2].

After successful installation, FreeFileSync appears in the menu structure with a starter, which you can click for easy access. The intuitively designed program window will catch your eye when you first start it. The menubar is in the header with the toolbar underneath. The two buttons Compare and Synchronize particularly stand out; the cogwheel buttons next to them can be used to access the corresponding settings. A routine for creating filter criteria hides behind the button with a funnel icon. The software also has a small statistics display at the bottom right.

Figure 1: Clicking Compare displays in the main window which files and folders the program intends to synchronize.

The main window with its three panes displays the directories to be synchronized and a checklist. The first step is to determine which disks or directories you want to include in the synchronization. To this end, above the lists, you'll see input fields where you can enter the respective paths. By clicking on Browse to the right of the input fields, you can select the paths using the integrated file manager.

To get an overview of the differences between the existing databases, click on Compare. You can control the behavior by clicking on the cogwheel next to it. The selections include, among other things, which method the software uses to compare the databases. Available options are file content, date stamp, timestamp, and file size. Depending on the size of the scheduled backup, comparing file contents can take a lot of time. In the test, the software compared about 25-30MB per second. Thus, it's advisable to use the default comparison by date and size for larger volumes of data.

After you click Compare, the program lists the files contained in the directories and subfolders of the source and target that are missing on the other side. An overview window to the left also shows the percentage differences, ordered by the directories concerned. You will also find a column with three elements between the file lists. The checkbox lets you exclude individual files and directories from the sync; the action set in the program is shown in the right-hand column (Figure 1).

Keep in mind that a continuous comparison of source and target disks can take a long time, especially when using flash memory cards and USB memory sticks. This increased amount of time is caused by the inferior-quality memory chips often used in removable storage devices, which only allow relatively low speeds for reading data and even lower write speeds. For reasons of data safety, it is not advisable to use such media as primary storage when backing up important data.

Using the integrated filter options, you can define certain file formats or search paths that you want the tool to include or exclude explicitly during the synchronization. To this end, press the button with the funnel icon at top center in the program window. You can determine the criteria and apply them by clicking OK in the straightforward dialog box (Figure 2).

Figure 2: In the filter dialog box, you can determine explicitly which files, file types, or directories you would like to include or exclude from the synchronization.

Figure 3: In addition to defaults such as Mirror and Update, the program allows you to implement your own synchronization methods.

Figure 4: Before synchronizing the data, a small window again displays a quantitative summary.

Mirror, Mirror …
You need to adjust the synchronization settings to receive a complete mirror copy of the source medium when first synchronizing the databases. To this end, the program provides several variants after clicking on the green cogwheel next to Synchronize. Choose the Mirror option for the first sync (Figure 3). The software mirrors all data from the selected path to the backup medium. This step also includes delete actions if the path already contains data. After pressing OK, the software synchronizes the lists displayed and shows statistics at the bottom right of the program window about the databases it is deleting, overwriting, or recopying.

On the left, in the Overview pane, the program window displays all the folders in alphabetical order with the respective percentages of data to be modified. Click on the folder in question to see more information about which databases will be deleted, recreated, or overwritten in the listed directories. FreeFileSync then changes the display in both list windows so that only the selected folder and its subdirectories appear. Green symbols arranged line-by-line between the list views show you what happens to the respective file. You can determine which actions the windows display in the lower section next to Select view. The available options are copying in one direction or the other and displaying identical files that remain unchanged. The software then updates the list views correspondingly so that you can see, with just a few clicks of the mouse, an overview of how it will handle these files; this is especially useful for extensive databases.

After subsequently clicking on Synchronize, the program again opens a small window that displays the pending actions for you to check. Press Start in this window to start the sync process (Figure 4). By default, the software adopts the synchronization requirements you adjusted in the Compare settings. To change this, click on the cogwheel to its right and adjust the values to suit your needs. The software displays a progress indicator in a dialog box during the synchronization run so you can follow its progress (Figure 5). The list views remain empty because the databases no longer differ from each other after synchronization.

It is no longer necessary to mirror the complete database to synchronize directories that have been synchronized in the past. For one thing, depending on the synchronization interval, each sync can take quite a while to complete; for another, you would be overwriting data that has identical content. It is therefore advisable to switch from Mirror to Update in the configuration menu. The software then only copies new databases from left to right or those that have changed since the last sync. The statistics display shows that the database to be copied is significantly smaller than it would be for a complete mirror image.

Figure 5: The progress indicator keeps you informed about the synchronization.

Figure 6: In the Synchronization tab, your own rules let you synchronize simultaneously in both directions without having to start a new process.

Semi-Automatic
The Custom button in the Synchronization Settings dialog lets you create individual rules so you can synchronize your databases more flexibly than with the three preset options Two-way, Mirror, and Update. To this end, click the corresponding option on the right in the Action line. This way, you can synchronize databases simultaneously in both directions, for example, without having to start a second process (Figure 6).

The software immediately displays any errors that occur during data synchronization. Typically, it gets stuck when using different filesystems. Errors can build up especially if one of the two storage devices uses FAT32. You can safely ignore the error messages at first and sort out the remaining problem cases manually once the process has completed, because the software leaves non-synchronized files in the list window.

Figure 7: RealTimeSync permanently synchronizes the data of monitored folders with the synchronization target.

Fully Automatic
The main disadvantage of manual data backups is that people often forget to start them. Automatic synchronization provides the solution. FreeFileSync therefore provides the RealTimeSync module, for which it also creates an appropriate starter. RealTimeSync facilitates the automatic synchronization of multiple data storage devices by constantly monitoring data changes using a batch file and updating them in line with defined rules.

A batch file that contains all the necessary settings for the synchronization runs is required to use RealTimeSync. To create this, set up a job to your liking in FreeFileSync and save it via File | Save as batch job as a file with the extension .ffs_batch in a directory of your choice. After starting RealTimeSync, drag this file into the program window. The software automatically adopts all the settings it contains, and clicking on Start will begin the synchronization (Figure 7).

The program is then minimized into the system tray of your desktop environment and monitors the designated folders for changes from now on. If changes occur, it automatically synchronizes the files concerned and briefly displays a progress bar on the desktop during this process. Depending on the chosen interval – Idle time (in seconds) – the software synchronizes the directories with a delay. This delay is a very good idea, especially on busy file servers, because continuous synchronization of data with external mass storage devices would significantly affect the speed at which your server provides its services.
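If you prefer cron to a permanently running RealTimeSync, a saved batch job can also be launched on a schedule. This is a hypothetical sketch: it assumes the FreeFileSync binary lives in /usr/bin and accepts an .ffs_batch file as its argument, and the path to the job file is an example.

```
# Hypothetical crontab entry: run the saved batch job every night at 02:30
30 2 * * * /usr/bin/freefilesync /home/user/backup.ffs_batch
```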

Conclusions
FreeFileSync is a powerful tool for data synchronization on small networks; it does not, however, perform real backups with incremental or differential runs. The add-on program RealTimeSync automatically carries out synchronization jobs; you do not need to worry about anything after a one-off setup. FreeFileSync gets along perfectly with external mass storage devices as the target disk and can also quickly synchronize larger databases. Every single-user system should have this software for redundant storage of critical data.

Info
[1] FreeFileSync: http://www.freefilesync.org
[2] Download FreeFileSync: http://sourceforge.net/projects/freefilesync/files/FreeFileSync/



LinuxUser Workspace: Bottle

Using the Bottle framework to build Python apps

Python in a Bottle
The Bottle framework provides the fastest and easiest way to write web apps in Python. In this article, we help you get started with this lightweight framework. By Dmitri Popov

Dmitri Popov
Dmitri Popov has been writing exclusively about Linux and open source software for many years, and his articles have appeared in Danish, British, US, German, Spanish, and Russian magazines and websites. Dmitri is an amateur photographer, and he writes about open source photography tools on his Scribbles and Snaps blog at scribblesandsnaps.word

Python lets you quickly whip up simple and more advanced standalone applications, even if your coding skills are relatively modest. What if you want to build a Python-based web app, though? Several frameworks let you do that, and if you are looking for something simple and lightweight, Bottle [1] is exactly what you need. This micro framework offers just the right mix of functionality and simplicity, which makes it an ideal tool for building Python-based web apps with consummate ease.

Installing Bottle
The easiest way to install Bottle is using the Python Package Manager (also known as pip). It's available in the official software repositories of many mainstream Linux distributions, so it can be installed using the default package manager. To deploy pip on Debian or Ubuntu, run

apt-get install python-pip

as root and then install Bottle using

pip install bottle

as root.

Bottle Basics
A typical Bottle app consists of several functions, each performing a specific task. Usually, the result returned by a function is used as the dynamic content for generating pages. Each function has a so-called route, or an address on the server. When the browser calls this address, it triggers the function. The simple Bottle app below demonstrates how this works in practice:

#!/usr/bin/python
from bottle import route, run

@route('/hello')
def hello():
    return "Hello World!"

run(host='localhost', port=8080)

The first statement imports the route and run modules that are used to define routes and run the app. The app itself consists of a single hello() function, which returns the "Hello World!" message. The /hello route specifies the app's address, and the run() routine makes the app accessible on port 8080 of the localhost.

To run this simple app, create a text file, paste the code above into it, and save the file. Make the script executable using the chmod +x command, then run the server by launching the script. Point the browser to the http://localhost:8080/hello address, and you should see the "Hello World!" message.

What’s in My Bag? To demonstrate Bottle’s basics, I’ll build a simple web app called What’s in My Bag (or wimb for short) that can be used for keeping tabs on the contents of your bag. The app uses an SQLite database to |

LinuxUser Workspace: Bottle

store data, and it allows you to add, edit, and remove records. Each record consists of three fields: id (a unique identifier), item (the item’s description), and serial_no (the serial number of the item). The core of the app is the wimb() function shown in Listing 1. This function does several things. It starts by checking whether the database wimb.sqlite exists. If not, the function creates it; otherwise, the function establishes a connection to the database and fetches the records from the wimb table. To show the fetched data as a properly formatted table, the function uses the template specified in the output statement in line 12. A template in Bottle is a regular text file with the .tpl extension. A template can contain any text (including HTML markup), additional Python code, and arguments (e.g., the result of a database query). For example, the following simple template takes the record set returned by the wimb function and formats it as an HTML table.

Listing 1: The wimb() Function

01  #!/usr/bin/python
02  import sqlite3, os
03  from bottle import route, redirect, run, debug, template, request, static_file
04  @route('/wimb')
05  def wimb():
06      if os.path.exists('wimb.sqlite'):
07          conn = sqlite3.connect('wimb.sqlite')
08          c = conn.cursor()
09          c.execute("SELECT id,item,serial_no FROM wimb")
10          result = c.fetchall()
11
12          output = template('wimb.tpl', rows=result)
13          return output
14      else:
15          conn = sqlite3.connect('wimb.sqlite')
16          conn.execute("CREATE TABLE wimb (id INTEGER PRIMARY KEY, item char(254) NOT NULL, serial_no char(100))")
17          return redirect('/wimb')
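The database half of wimb() can be exercised without Bottle at all. The sketch below mirrors the create-if-missing flow using only Python’s standard sqlite3 module; the throwaway file name is an assumption for this demonstration:

```python
import os
import sqlite3

DB = 'wimb_demo.sqlite'  # throwaway database file for this sketch

def fetch_items():
    # Mirror wimb(): create the table on first run, then query it.
    if not os.path.exists(DB):
        conn = sqlite3.connect(DB)
        conn.execute("CREATE TABLE wimb (id INTEGER PRIMARY KEY, "
                     "item char(254) NOT NULL, serial_no char(100))")
        conn.commit()
        conn.close()
    conn = sqlite3.connect(DB)
    c = conn.cursor()
    c.execute("SELECT id,item,serial_no FROM wimb")
    rows = c.fetchall()
    conn.close()
    return rows

rows = fetch_items()   # the first call creates an empty table
os.remove(DB)          # clean up the throwaway file
```

On the first call the table is freshly created, so the query returns an empty list; in the real app, Bottle then redirects back to /wimb and renders that (empty) record set.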

<table border="1">
%for row in rows:
<tr>
%for col in row:
<td>{{col}}</td>
%end
</tr>
%end
</table>

Although this template does the job, it also has a couple of limitations. Apart from the fact that it’s rather bare-bones, the template doesn’t give you control over individual columns, so you can’t, for example, apply different styles to individual columns. More importantly, the template provides no way to edit and delete records. The extended template in Listing 2 addresses these limitations.

Listing 2: Extended Template

01  <h1>What's in My Bag:</h1>
02  <table border="0">
03  <tr><th>ID</th><th>Item</th><th>Serial no.</th></tr>
04  %for row in rows:
05  %id = row[0]
06  %item = row[1]
07  %serial_no = row[2]
08  <tr>
09  <td>{{id}}</td>
10  <td>{{item}}</td>
11  <td>{{serial_no}}</td>
12  <td><a href="/edit/{{id}}">Edit</a></td>
13  <td><a href="/delete/{{id}}">Delete</a></td>
14  </tr>
15  %end
16  </table>

Each row in the result set contains a list of columns, and the template uses simple Python code to assign the values of individual columns to separate variables. These variables are then used as arguments in the HTML table. In this way, you can style columns individually. For example, you can create the following CSS class:

td.col1 { color: #3399ff; }

Then you can assign this class to the first column to apply the specified styling:

<td class="col1">{{id}}</td>

The id variable is also used as an argument in the links for editing and deleting records (more about this later).

To be able to add records, the app needs another function and template. The latter is a simple HTML form (Listing 3) consisting of two text input fields (one for the item description and another for the serial number) and a submit button. When the Add button is pressed, the values entered in the fields are sent to the dedicated function that processes the values and inserts them into the database (Listing 4). To obtain the values from the form’s fields, the function uses the statements in lines 4 and 5. Once the function has done its job, it redirects to the app’s root using the return statement in line 12. All these actions are performed when the user presses the Add button in the form; otherwise, the function simply displays the appropriate template (i.e., an empty form).

The app uses yet another function and template for editing and updating existing records. Each record in the database has a unique identifier, which is used to fetch the correct record and its data and then save the changes made to it. If you take a look at the edit_item() function shown in Listing 5, you’ll notice that its route contains the :no variable. This is a so-called dynamic route, in which the value of the variable is a part of the route. The value of the variable is also passed to the function assigned to that route, and this value can be processed by the function. In this particular case, the variable contains the ID number of the target record, so when you call the /edit/1 address, the function fetches the record with ID 1, obtains its existing values, and inserts them into the appropriate fields of the edit_item.tpl template.

Listing 3: HTML Form

01  <h1>Add a new item:</h1>
02  <form action="/add" method="GET">
03  <p><input type="text" size="50" maxlength="254" name="item"></p>
04  <p><input type="text" size="50" maxlength="100" name="serial_no"></p>
05  <p><input type="submit" name="add" value="Add"></p>
06  </form>

Listing 4: Adding a Record

01  @route('/add', method='GET')
02  def new_item():
03      if request.GET.get('add','').strip():
04          item = request.GET.get('item', '').strip()
05          serial_no = request.GET.get('serial_no', '').strip()
06          conn = sqlite3.connect('wimb.sqlite')
07          c = conn.cursor()
08          c.execute("INSERT INTO wimb (item,serial_no) VALUES (?,?)", (item,serial_no))
09          new_id = c.lastrowid
10          conn.commit()
11
12          return redirect('/wimb')
13      else:
14          return template('add_item.tpl')
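Listing 4’s INSERT uses ? placeholders, which lets sqlite3 handle quoting and escaping, and cursor.lastrowid then yields the id that the INTEGER PRIMARY KEY column assigned. A self-contained demonstration against an in-memory database (the sample values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # throwaway in-memory database
c = conn.cursor()
c.execute("CREATE TABLE wimb (id INTEGER PRIMARY KEY, "
          "item char(254) NOT NULL, serial_no char(100))")

# Placeholders bind the values safely; no manual string quoting needed.
item, serial_no = "Laptop", "SN-001"
c.execute("INSERT INTO wimb (item,serial_no) VALUES (?,?)", (item, serial_no))
new_id = c.lastrowid   # the auto-assigned primary key of the new row
conn.commit()
```

Because the table is empty, the first inserted row receives id 1, which is exactly the value the app later plugs into its /edit/{{id}} and /delete/{{id}} links.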

Listing 5: The edit_item() Function

01  @route('/edit/:no', method='GET')
02  def edit_item(no):
03      if request.GET.get('save','').strip():
04          item = request.GET.get('item','').strip()
05          serial_no = request.GET.get('serial_no','').strip()
06          conn = sqlite3.connect('wimb.sqlite')
07          c = conn.cursor()
08          c.execute("UPDATE wimb SET item = ?, serial_no = ? WHERE id LIKE ?", (item, serial_no, no))
09          conn.commit()
10          return redirect('/wimb')
11      else:
12          conn = sqlite3.connect('wimb.sqlite')
13          c = conn.cursor()
14          c.execute("SELECT item,serial_no FROM wimb WHERE id LIKE ?", (str(no),))
15          cur_data = c.fetchone()
16          return template('edit_item.tpl', old=cur_data, no=no)
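One sqlite3 detail worth noting in the update and delete code: the parameters argument must be a sequence, so a single value is passed as a one-element tuple such as (no,) — without the trailing comma, (no) is just no. A short demonstration against an in-memory table (sample data invented for illustration; plain = is used in the WHERE clause, which matches numeric ids just as the listings’ LIKE does):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE wimb (id INTEGER PRIMARY KEY, "
             "item char(254) NOT NULL, serial_no char(100))")
conn.execute("INSERT INTO wimb (item,serial_no) VALUES ('Laptop','SN-001')")

no = 1
# Three placeholders, three values: a plain tuple works here.
conn.execute("UPDATE wimb SET item = ?, serial_no = ? WHERE id = ?",
             ("Tablet", "SN-002", no))
# One placeholder, one value: note the trailing comma in (no,).
row = conn.execute("SELECT item FROM wimb WHERE id = ?", (no,)).fetchone()

conn.execute("DELETE FROM wimb WHERE id = ?", (no,))
remaining = conn.execute("SELECT COUNT(*) FROM wimb").fetchone()[0]
```

After the UPDATE, the row reads back with the new item description; after the DELETE, the table is empty again.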


Once you’ve edited the data and pressed the Save button, the function obtains the modified values of the form’s fields and updates the appropriate record. Similar to the edit_item() function, the edit_item.tpl template in Listing 6 uses the {{no}} argument to determine the record’s ID, along with arguments for populating the fields with the record’s existing values.

Deleting existing records is the final piece of the puzzle. Here, too, the app uses a combination of a function and a template. The delete_item() function in Listing 7 is basically a simplified version of the edit_item() function. This function uses the :no variable to pick the right record and deletes the record when the user presses the Delete button in the delete_item.tpl template:

Listing 6: The edit_item.tpl Template

01  <h1>Edit item number {{no}}</h1>
02  <form action="/edit/{{no}}" method="GET">
03  <p><input type="text" name="item" value="{{old[0]}}" size="50" maxlength="254"></p>
04  <p><input type="text" name="serial_no" value="{{old[1]}}" size="50" maxlength="100"></p>
05  <p><input type="submit" name="save" value="Save"></p>
06  </form>

Listing 7: The delete_item() Function

01  @route('/delete/:no', method='GET')
02  def delete_item(no):
03      if request.GET.get('delete','').strip():
04          conn = sqlite3.connect('wimb.sqlite')
05          c = conn.cursor()
06          c.execute("DELETE FROM wimb WHERE id LIKE ?", (no,))
07          conn.commit()
08          return redirect('/wimb')
09      else:
10          return template('delete_item.tpl', no=no)

<h1>Delete item number {{no}}?</h1>
<form action="/delete/{{no}}" method="GET">
<input type="submit" name="delete" value="Delete">
</form>

So far, all content in the app has been generated dynamically. What if you want to include static content, though? For example, you might want to prettify the templates using styles defined in a separate CSS stylesheet. To do this, you need to specify yet another function that defines the path to the static content (in this case, it’s any file and folder inside the static directory):

@route('/static/:path#.+#', name='static')
def static(path):
    return static_file(path, root='static')

Then you would put your CSS stylesheet into the static directory and the reference to it in the templates in the usual manner:

<link rel="stylesheet" type="text/css" href="static/styles.css">

Figure 1 shows what the app looks like.

Final Word

The simple web app described in this article is intended to give you a general idea of Bottle’s capabilities. The source code for the app is available at GitHub [2], so you don’t have to write it from scratch. Despite (or perhaps thanks to) its simplicity, you can repurpose the app for other uses. With a few tweaks, you can transform the app into a bookmark manager, note-taking app, code snippet repository, or other personal helper.

Figure 1: The What’s in My Bag app.

Info
[1] Bottle: bottlepy.org
[2] What’s in My Bag app source code: github.com/dmpop/wimb



Community Notebook Doghouse: BBC Computer Education Scheme

Made in the UK BBC digital competency initiative

The BBC and partners in the United Kingdom start another program to educate young people for the digital future, but FOSS is sorely lacking. By Jon “maddog” Hall


The British Broadcasting Corporation (BBC) has a history with computers. In the 1980s, they helped produce and distribute to schools, for the use of school children, one of the first micro-computers of the day, the BBC Micro. This week, the BBC announced a new initiative, Make it Digital [1], and a new piece of hardware, the Micro Bit (Figure 1), which they and their partners want to distribute to all seventh graders (11-12 years old) in the United Kingdom next September.

Although there is not a lot of solid information about the Micro Bit, the prototype for it is very small, has an array (5x5) of LEDs, Bluetooth low energy (LE), and various items like an accelerometer, taking its power from a micro USB port. It is advertised as being “wearable” and appears to be more along the lines of an Arduino type of processor, able to run one program at a time, than a computer that could run multiple programs at one time.

Figure 1: The Micro Bit. The website notes: “The project is still in development and the final name, appearance, and specification is likely to change” [2].

The bigger message is not just another small board that students can use to build things, but the “full court press” that the BBC is putting behind the Make it Digital program to try and address the 1.4 million digital professionals that will be needed in the UK over the next five years. More than 25 companies and associations (with indications that might expand to 50) are in the program. The BBC sees a shortage of trained IT people in the UK and, rather than try to import them from other countries, decided to improve the computer skills of students in the seventh grade through the Micro Bit program and in other grades through other programs.

Ten of the partner companies are called “Product Partners” (ARM, Barclays, Element14, Freescale Semiconductor, Lancaster University, Microsoft, Nordic Semiconductor, Samsung, ScienceScope, Technology Will Save Us) and the other 16 companies are “Product Champions” (Bright Future, CISCO, Code Club, CoderDojo, Code Kingdoms, Creative Digital Solutions, CultureTECH, Decoded, Institution of Engineering and Technology, Kitronik, London Connected Learning Centre, Open University, Python Software Foundation, STEMNET, TeenTech, Tinder Foundation) who will be working with training utilizing the Micro Bit.

People should not look at this effort as only Micro Bit-oriented. The group also plans to train 5,000 young, unemployed people to increase their computer skills, so they can get new jobs.

I first became aware of this effort while I was at a CoderDojo meeting recently held in London. Young people of all ages were encouraged to do simple (and not so simple) programming jobs under close mentorship. Jane Wakefield, a technology reporter for the BBC, told me about the Make it Digital program (announcements had already gone out that day) while she was interviewing some of the CoderDojo students.

Later, Jane asked me a question that seemed to be burning in her mind: “Should every child need to program?” I do not think every child needs to have the skills to write large, complex programs, but I do think basic training is useful so every person knows the basics about how to get a computer to solve a problem. Later, this will help people estimate if what they are requesting is going to take a programmer 10 minutes or 10 years to program, or whether the data needed to solve the problem would fill up a thumb drive or a datacenter. Likewise, technical people should learn more about formal areas of business, although in a lot of ways technical people are exposed to business every day, and the opposite is not always true.

The Micro Bit is a prototype at this stage, and the program itself is still a concept. The Partners have a little time between now and September for tweaking both. One thing that disturbs me, however, is the seeming lack of FOSS entities in the development of both the hardware and the software, the lack of public specifications on what might be going into the Micro Bit, and the lack of openness as to where the training might be headed. If this is simply an oversight, then I hope the BBC starts being more inclusive in their planning and reaches out to organizations such as the Free Software Foundation, the Linux Foundation, and even the general public to review and comment on their plans.

As simple as the Micro Bit is, not needing any binary blobs in its programming might be a refreshing change. And a Free and Open programming tool chain would also be nice.

Info
[1] BBC Make it Digital: http://www.bbc.co.uk/mediacentre/mediapacks/makeitdigital
[2] Micro Bit: http://www.bbc.co.uk/mediacentre/mediapacks/makeitdigital/micro-bit

The author
Jon “maddog” Hall is an author, educator, computer scientist, and free software pioneer who has been a passionate advocate for Linux since 1994 when he first met Linus Torvalds and facilitated the port of Linux to a 64-bit system. He serves as president of Linux International®.




Community Notebook Laidout Book Creator

Bookbinding on the screen

Turning a New Leaf The Laidout graphic application simplifies the design of books and booklets. By Bruce Byfield


Info
[1] Laidout: http://sourceforge.net/projects/laidout/
[2] Tom Lechner booklets: http://www.tomlechner.com
[3] Tom Lechner cartoon booklets: http://www.tomlechner.com/cartoons/
[4] Signature Editor: http://laidout.org/screenshots/img-signatureeditor.html
[5] Spread Editor: http://laidout.org/screenshots/img-screenshot5.html
[6] Paper Tiler: http://laidout.org/screenshots/img-papertiler.html
[7] Alignment tool: http://laidout.org/screenshots/img-align.html
[8] Gradients: http://laidout.org/screenshots/img-radial-gradients.html
[9] Warping: http://laidout.org/screenshots/img-imagepach-0.02.1.html
[10] Comparison of vector graphics programs: http://laidout.org/dtpcompare.html



Free software is supposed to be about scratching your own itch. In that tradition, Tom Lechner has spent more than nine years working, mostly alone, on Laidout [1], a graphic application designed primarily to lay out and bind booklets that is full of tools found nowhere else.

Lechner is a cartoonist and artist living in Portland, Oregon, who often publishes booklets [2]. “I went to school for physics and math at Caltech,” he remembers, “but spent a lot of time making artwork instead of finishing my homework. When I finally realized that, I went off to be an artist instead.” The problem was, “as a starving art student, then a starving artist, there was no way I could buy software or hardware sufficient for anything I wanted to do.”

Hearing rumors about Linux, Lechner investigated and soon found that its lack of cost was only one of its advantages. “Since the entire tool chain is open source,” he said, “if something doesn’t quite do what you want, there is the option to dive in and improve it oneself. Artists’ needs tend to be very unpredictable, and often tools have to be used in ways the tool designer never intended. Having complete access to all levels of the tools at hand is a tremendous asset. Open source also generally has a great community behind it. If you get stuck somewhere, there’s usually a forum or mailing list somewhere that has a solution. I definitely gravitate to more obscure and experimental approaches to things, and open source generally is very open to that sort of thing.”

Lechner’s particular itch was to produce books, but manual layout took “long hours photocopying, gluing, and applying white out.” To this day, the available free software tools require manual configuration and scripting. “I don’t want to relearn an obscure computer dialect to perform this task every


time I want to make a book,” Lechner says, so he began working on Laidout. Initially, Laidout was a Python script that took a directory of images as input and output to a file for Passepartout, a desktop publishing tool for X Window. “I would have done it in Scribus,” Lechner says, “but at the time, Scribus would frequently crash whenever I tried to do anything in it.” He adds that Scribus seems “quite stable now, but remains more oriented to the page than to assembling a book.”

By contrast, he says, “TeX has been a big inspiration philosophically for me. There’s a project that saw an obvious use for computers that could greatly enhance the quality and speed of document production, particularly for math text. However, as brilliant as TeX is for font layout, it is terrible for image-based or design-heavy documents. Also, as a cartoonist, I almost always draw all of my own text, so, unfortunately, TeX was totally useless to me.” In the end, Lechner concluded, if he wanted something to suit his needs, he would have to write it himself.

Lechner did not plan to work alone – that was just the way things worked out. In fact, at times, improving Laidout has taken more of his time than his artwork. “I have definitely been sucked down holes of thinking that if I just spend the next month only coding, I’ll have this really cool feature. Then three months go by.” At other times, doing his own artwork interrupts coding Laidout – which, he says, is probably “a deterrent to getting other people interested in contributing.” However, in the last few years, Lechner has taken to attending the Libre Graphics Meetings, which, he says, “helps me not get stuck in my own preconceptions.”

Tools for Bookbinding

Laidout’s main purpose is imposition – the arrangement and binding of pages into books. “I’ve been making my cartoon books [3] with it since 2006,” Lechner says. To help in the process, he has developed several tools that have yet to appear in other graphics editors.

Working with Laidout begins with a series of images readied in a standard graphics editor such as Gimp, MyPaint, or Krita. Using Laidout’s Signature Editor [4] (Figure 1), you define the paper size and margins and how the paper will be folded to produce separate pages when bound, then import the images, including any images used for page numbers. The pages appear as thumbnails and can be rearranged as needed by dragging them around in the Spread Editor [5] (Figure 2), a tool that mimics laying out images by hand on a sheet of paper and resembles an advanced version of the slide sorter found in a presentation application such as LibreOffice’s Impress. Other tools for positioning images include the Paper Tiler [6], which sets how an image displays across multiple pages, and the recently added Alignment tool [7], which distributes images along a pre-set path.

Figure 1: The Signature Editor with interactive folding.

Laidout also includes some conventional tools for graphics editors, such as gradients [8] and image warping [9] (Figure 3), as well as the ability to rotate, shape, and resize images. However, Laidout has yet to add the facility to draw most primitives – let alone fill tools, a selection of brushes, or a text tool – because none of these are directly relevant to the main purpose of bookbinding. As Lechner’s comparison chart [10] emphasizes, Laidout is concerned with a set of goals different from other proprietary or free software graphics tools. Instead of making Laidout a complete editor, Lechner himself typically exports his results into .pdf or .svg format and prints out his results in applications, “like Evince or Inkscape, which have much better printer options.”

The Never-Ending Project

Laidout remains far from finished. Lechner’s immediate plans include a hatching tool to remove some of the drudgery from his own drawing. Scripting is another priority. Further out, he would like to add features such as color profiles and color separation and to add support for mobile devices.

Figure 2: Layout takes place in the Spread Editor.

Additionally, he says, “I also want to pursue the idea of tool interface sharing, to be able to use Laidout tools in Inkscape, for instance. No one piece of software is going to do everything one wants, and if the behavior of your favorite tools could be transported to different applications, it would free up developers to work on new features and performance issues, instead of having to invent new GUIs all the time.”

“Ultimately,” Lechner says, “I want Laidout to be a good choice for many kinds of strange layout tasks, like oddly folded brochures, strange booklet cuts, or layout to any strange surface that can be unfolded flat.”

Asked when Laidout might reach a 1.0 version, Lechner replies that “my release numbers follow a relativistic aesthetic, in that 1.0 is analogous to the speed of light. Unless there is an unprecedented advance in software and hardware technology, such as using a tachyon-based wireless keyboard, 1.0 will never happen.”

However, that hardly matters. By designing his tools for what he needs, Lechner has come up with a series of unique tools, most of whose functionality is evident at a glance. In an era in which free software projects are increasingly dominated by corporations, it is reassuring to know that there is at least one developer still innovating by scratching his own itch.

Figure 3: Warping images with Bézier curves.



Community Notebook Linux Jobs

A report on Linux jobs

Where the Jobs Are

We look at some recent reports about the Linux job market and interview experts from Red Hat and SUSE. By Swapnil Bhartiya


Linux is omnipresent – it runs things ranging from tiny embedded devices to drones, supercomputers, and space stations – and that creates a big demand for Linux skills. A recent Linux Foundation report titled “Who Writes Linux” [1] states that the percentage of unpaid contributors is declining, while paid contributions are on the rise. Almost 80 percent of kernel development is being done by paid developers – which means the people who have money to pay for kernel development need to stay on the lookout for talent.

An important way to get the attention of potential employers is to contribute to open source projects. L.J. Brock, Vice President of Global Talent Group & People Infrastructure for Red Hat, says, “If you do great work on an open source project, it’s likely that technology companies will notice.”

Even if the dream job doesn’t fall into your lap, however, you can certainly build a network with Linux developers. Because almost all of the development happens publicly, it is easy for a developer to see the opportunities that could enrich a company’s product. Red Hat is not the only major open source company that feels that way. According to SUSE’s Global HR Director Marie Louise van Deutekom, “Contributing to open source projects certainly helps with your visibility in the market and creates a certain level of interest from commercial open source companies.”

Who Else Is Hiring?

Linux jobs are not limited just to Linux companies like SUSE or Red Hat. Open source is being used in almost every IT infrastructure. “Linux Jobs Report 2015” [2], published by the Linux Foundation, says that more than 97 percent of hiring managers said they would bring on Linux talent relative to other skill areas in the next six months. Additionally, more than 50 percent of hiring managers said they would hire more Linux talent this year than they did last year.

The Linux Jobs Report also states, “Hiring managers are still struggling to find professionals with Linux skills, with 88 percent reporting that it’s ‘very difficult’ or ‘somewhat difficult’ to find these candidates.” This undersupply also means companies will go out of their way to retain Linux talent. “The majority of hiring managers (70 percent) say their companies have increased incentives to retain Linux talent, with 37 percent offering more flexible work hours and telecommuting and 36 percent increasing salaries for Linux pros more than in other parts of the company,” according to the jobs report.

Jobs Are In the Cloud

Everyone is moving to the cloud, and Linux jobs are, too. The rise of cloud technologies in recent years has also given rise to Linux cloud-related jobs. Open source-based cloud technologies are gradually taking over the market – with OpenStack and CloudStack being the primary players. Knowledge of and experience with these cloud platforms play a major role in the hiring decisions. The report said, “… 49 percent of Linux professionals believe open cloud will be the biggest growth area for Linux in 2015.” The flip side to this coin is that, although experience and knowledge of these cloud platforms influence hiring managers, knowledge of containers plays “almost” no role in ensuring a job.

Talent Alone Won’t Do

Although talent does speak for itself, talent alone might not be enough to bag a job. You might need to do more than submit patches. Certification is also important for many companies. The report said that more than 44 percent of the hiring managers are likely to hire a candidate with Linux certification. When it comes to hiring a system administrator, more than 54 percent of hiring managers expect the candidates to have either certification or formal training.

If you want to access the full range of available positions in system administration, it is a very good idea to get certifications and training. According to the report, “66 percent of hiring managers are looking for system administrators; Linux professionals with certifications will be the most in-demand talent in this year’s job market.”

The following interviews provide more details on skills and trends affecting Linux jobs globally. For these interviews, I spoke with L.J. Brock, Vice President, Global Talent Group & People Infrastructure, at Red Hat, and Marie Louise van Deutekom, Global HR Director at SUSE.

Swapnil Bhartiya: The recent Linux Foundation report stated that the contribution by paid developers is increasing. So, does getting one’s patches merged into the mainline tree increase the possibilities of landing better jobs?

Marie Louise van Deutekom: Contributing to open source projects certainly helps with your visibility in the market and creates a certain level of interest from commercial open source companies. Isn’t it great for an engineer to get paid for what you love doing? And isn’t it great if your contributions are recognized by other high-profile contributors? At SUSE, we are convinced that we can make Open Source a truly profitable business model; hence, we pay our engineers. And then the circle is round – we are proud of our contributing engineers, which helps their and SUSE’s visibility and credibility in the market.

L.J. Brock: There’s a saying among open source developers: “The code talks.” What that means is that open source communities respect good work and that’s true for Red Hat, too. We certainly pay a lot of attention to and ultimately hire many of the top open source contributors.

SB: In other words, how can a talented developer ensure his visibility for companies like Red Hat? Do you ever pick a “talented” person even if he/she has never applied for a position?

LJB: Yes, we do. If you do great work on an open source project, it’s likely that technology companies will notice. That being said, if you’re interested in Red Hat, I’d encourage you to get to know some people who work here and apply for a job with us.

SB: What kind of skills are companies like Red Hat and SUSE looking for?

MLvD: SUSE is always looking for a variety of skills, experience, and personality. For example, programming skills in Python, Ruby, or Perl are often in demand. But also, experience with virtualization, with OpenStack, or with Software-Defined Storage solutions. And for SUSE, it’s important that new colleagues not only bring skills and experience, but fit well in our global teams.

LJB: In addition to the Linux and middleware skills that we are always hiring for, lately, we’ve been recruiting a lot of people with OpenStack experience to join our engineering, consulting, and solutions architect teams.

SB: If you look at the global landscape, where are most Linux jobs concentrated? Are there some parts of the world/some countries where you find more Linux talent?

LJB: Red Hat has 80 offices in 38 countries around the world. In addition, 25 percent of our 7,000+ associates work remotely. We have a large engineering presence in Westford, Massachusetts; Brno, Czech Republic; India; China; and more.

MLvD: A number of our technical roles are concentrated in our main hubs in Nürnberg, Prague, Provo, Utah, or Beijing. However, many of our jobs can be done from anywhere in the world, including home office, as long as the candidate has the necessary skills/experience and is willing to make it work in a virtual global team. That makes our talent pool larger.

SB: As a company, what would be your advice to aspiring developers; what should they focus on to ensure jobs?

LJB: Pay attention to emerging technologies, but most importantly, focus on contributing to the technologies and projects that you’re passionate about. When you love what you do, people want to work with you.

MLvD: Follow your dream. Pick an open source project on a topic that really energizes you and look at how others contribute. Then start contributing – start small. These projects can be a great way to learn and earn your marks as a contributor, which will then allow you to turn your passion into a job.

SB: Is there any increasing demand for Linux skill sets? Are you able to get enough candidates?

LJB: Yes, open source has gone more mainstream and so too has the demand for Linux skill sets. While we’re fortunate to have many great associates and high interest in Red Hat, we’re always looking to get more great talent in the company.

SB: What’s creating this demand? And, what is the reason behind a gap between demand and availability of talent?

LJB: There’s been plenty of media attention on the need for increased diversity in the technology industry and in open source. I believe we can make big strides in plugging the skills gap by investing in STEM education and promoting inclusive environments in our companies and projects.

MLvD: More and more companies implement an open source strategy and look at Linux as a viable alternative. With the growth in the Linux market, there is inevitably a growing demand for Linux skills. At SUSE we feel that as well. Developing our own talent and hiring talent from the market go hand in hand at SUSE.

Info
[1] Who Writes Linux:
[2] Linux Jobs Report 2015: publications/linux-foundation/linux-jobs-report-2015



Community Notebook Kernel News

Zack’s Kernel News Chronicler Zack Brown reports on the latest news, views, dilemmas, and developments within the Linux kernel community. By Zack Brown

Zack Brown The Linux kernel mailing list comprises the core of Linux development activities. Traffic volumes are immense, often reaching 10,000 messages in a week, and keeping up to date with the entire scope of development is a virtually impossible task for one person. One of the few brave souls to take on this task is Zack Brown.


May 2015

Permanent Deletion

Alexander Holler was unsatisfied with the way filesystems typically delete files. To save time, deleting a file typically means that the filesystem treats that range of data as available instead of in use. The problem with this approach is that there are relatively easy-to-use tools that some stranger might use to recover your private data after obtaining your hard drive. Alexander wanted to allow regular users to truly wipe their data off of storage media, rather than just have it *appear* to be gone.

Alexander’s simple solution was to “overwrite the contents of a file by request from userspace. Filesystems do know where on the storage they have written the contents to, so why not just let them delete that stuff themselves instead.” He posted some patches, implementing a new system call that would delete files this way.

Alan Cox, however, rained all over that parade. Alan said:

The last PC hard disks that were defined to do what you told them were ST-506 MFM and RLL devices. IDE disks are basically ‘disk emulators’, SSDs vastly more so. An IDE disk can do what it likes with your I/O so long as your requests and returns are what the standard expects. So for example if you zero a sector, it’s perfectly entitled to set a bit in a master index of zeroed sectors. You can’t tell the difference and externally it looks like an ST506 disc with extensions. Even simple devices may well move blocks around to deal with bad blocks, or high usage spots to avoid having to keep rewriting the tracks either side. An SSD internally has minimal relationship to a disc. If you have the tools to write a file, write over it, discard it and then dump the flash chips you’ll probably find it’s still there.

Alexander thanked Alan for the info but said that he wasn’t looking for ways to truly make data recovery impossible. He just wanted to make it inconvenient for ordinary “black hat” type people, who didn’t have government-sized resources.
Russ Dill pointed out that Alexander’s hopes were most likely doomed to failure. He posted some strace output of a vim session, showing that the data was copied to new locations as a matter of course, as a way to avoid catastrophic misery after unpredicted system crashes. He also reiterated what Alexander himself had said – that filesystems don’t cooperate with real deletion.

Alexander pointed out that even in spite of these obstacles, his patch was still an improvement over the kernel’s current behavior. So, even if the data still to some extent existed on the drive, it would at least require significant resources to re-humpty-dumptify.

The discussion ended with no real conclusion. I would guess, however, that Alexander’s patch would not be seen as a true improvement by Linus Torvalds or the other bigtimers. They’d probably say that if data were still available to be recovered, then folks would write code to make it easier to recover. They’d also probably say that the right place to implement Alexander’s features would be at the filesystem layer, providing a given filesystem with the ability to track and permanently delete all data associated with a given file. But, I don’t know for sure.
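Alexander’s proposed syscall never landed, but the overwrite-then-unlink idea itself is easy to sketch from userspace – subject to exactly the caveats Alan Cox describes, because the filesystem and the drive firmware are both free to keep older copies of your blocks elsewhere. A minimal sketch (the function name and pass count are illustrative choices, not part of Alexander’s patch):

```python
import os

def overwrite_and_unlink(path, passes=2):
    """Overwrite a file's current contents, flush to disk, then unlink it.

    Caveat (per Alan Cox): this only touches the blocks the filesystem
    *currently* maps to the file. Journaling, copy-on-write, SSD wear
    leveling, and editor temp-file copies can all preserve older data.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # one full pass of random data
            f.flush()
            os.fsync(f.fileno())        # force it through the page cache
    os.unlink(path)
```

On an SSD this is little more than a polite request – which is the same warning tools like shred(1) carry in their man pages.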

Resource Constraints in cgroups

Aleksa Sarai wanted to enhance cgroups (the building blocks of the whole anything-aaS explosion currently sweeping the globe) to limit the number of open processes. The whole point of cgroups is to create a bubble of limited resources that resembles an independently running Linux system. The bubble includes CPUs, RAM, physical storage, and whatnot. Aleksa wanted to add an open process constraint to the bubble and posted a patch to implement it.

Tejun Heo, however, replied that this type of resource wasn’t appropriate for cgroups to control. He said that a better approach, and one that had already been implemented, was to have cgroups constrain the amount of memory available to the virtualized kernel. Richard Weinberger asked Tejun if the plan was “to limit kernel memory per cgroup such that fork bombs and stuff cannot harm other groups of processes.” Tejun said that, yes, they were very close to implementing that in the kmemcg code.

Austin Hemmelgarn, however, pointed out that RAM limitation wasn’t the only reason to want to limit the number of open processes. Constraining the number of open processes would make it easier to ensure that certain tools like the NTP daemon, which needed just so many processes and no more, were running properly. It would also prevent certain denial-of-service attacks.

Tejun thought that all of Austin’s examples represented niche areas that could be handled in a simpler and less heavy-handed way than adding another cgroup controller. Tejun added, “I’m pretty strongly against adding controllers for things which aren’t fundamental resources in the system.” So, he went on, constraints on things like the number of open files, number of pipe buffers, and so on, were all things he’d oppose.

Tim Hockin, however, pointed out that Tejun’s idea of limiting kernel memory via kmemcg had been promised years earlier and was so long overdue that something like Aleksa’s patch might as well be accepted as actually addressing the problem right now. Tejun agreed that the kmemcg plan was taking longer than expected, but that “kmemcg reclaimer just got merged and … the new memcg interface which will tie kmemcg and memcg together.” And, he told Tim to butt out or make a meaningful contribution.

Tim replied, “I’m just vocalizing my support for this idea in defense of practical solutions that work NOW instead of ‘engineering ideals’ that never actually arrive. As containers take the server world by storm, stuff like this gets more and more important.” Tejun said, “As for the never-arriving part, well, it is arriving. If you still can’t believe, just take a look at the code.” He added:

Note that this is [a] subset of a larger problem … there’s a patchset trying to implement writeback IO control from the filesystem layer. cgroup control of writeback has been a thorny issue for over three years now and the rationale for implementing this reversed controlling scheme is about the same – doing it properly is too difficult, let’s bolt something on the top as a practical measure.
I think it’d be seriously short-sighted to give in and merge all those. These sorts of shortcuts are crippling in the long term. Again, similarly, proper cgroup writeback support is literally right around the corner. The situation sure can be frustrating if you need something now but we can’t make decisions solely on that. This is a lot longer term project and we better, for once, get things right.

Austin reentered the discussion at this point, addressing Tejun’s idea of only wanting to constrain fundamental system resources like RAM size and disk space. He said, “PIDs are a fundamental resource, you run out and it’s an only marginally better situation than OOM, namely, if you don’t already have a shell open which has kill built in (because you can’t fork), or have some other reliable way to terminate processes without forking, you are stuck either waiting for the problem to resolve itself, or have to reset the system.”

So, Austin supported Aleksa’s patch as a way to constrain the number of PIDs used by a virtual system. Tejun acknowledged that this was a valid point and said he’d give it more thought and see what he could come up with. On a technical note, he added, “Currently, we’re capping max pid at 4M which translates to some tens of gigs of memory which isn’t a crazy amount on modern machines. The hard(er) barrier would be around 2^30 (2^29 from futex side, apparently) which would also be reachable on configurations w/ terabytes of memory.”

The thread actually devolved into a minor flame war between Tejun and Tim, and so the technical side of things petered out. However, if Tejun is right that the kmemcg code is nearly ready, the disagreement may become moot at some point. In the meantime, nothing more on the PID issue was said.

Ultimately, it seems that Tejun’s fundamental point is that cgroups should be implemented in the way that makes the best abstract sense, rather than the way that solves the most immediately desired problems. The unspoken argument behind this is that cgroup security is hard, and we don’t want our future selves to regret shortcuts we took today. On the Aleksa side of things, the main point seems to be that cgroups are useful and they should support useful features rather than mapping to an arbitrary metaphor like creating a “virtual system.” Both sides of the argument have merit, but I’m betting the security-obsessed side will tend to win out when it comes time to convince Linus Torvalds to accept a patch.
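Tejun’s arithmetic is easy to sanity-check on any Linux box, because the systemwide PID ceiling is exposed in /proc/sys/kernel/pid_max. A rough sketch of the calculation (the per-task memory figure here is an illustrative assumption for the estimate, not a number from the thread):

```python
def pid_headroom(per_task_kb=10):
    """Read the systemwide PID ceiling and estimate worst-case kernel
    memory if every PID were in use.

    per_task_kb is a rough, illustrative per-task cost (task_struct,
    kernel stack, and friends); the real figure varies by kernel config.
    Returns (pid_max, estimated GiB).
    """
    with open("/proc/sys/kernel/pid_max") as f:
        pid_max = int(f.read())
    return pid_max, pid_max * per_task_kb // 1024 // 1024

pids, gib = pid_headroom()
print(f"pid_max={pids}, worst-case task memory is roughly {gib} GiB")
```

With pid_max raised to Tejun’s 4M figure, the same estimate lands in the “tens of gigs” range he mentions; the stock default is far lower.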

Tracing Gets Its Own FS

Steven Rostedt submitted patches to implement a new TraceFS filesystem for the tracing subsystem, which had used DebugFS up until that point. The problem with using DebugFS for tracing, Steven said, was that if you mounted DebugFS, you got all the debugging from subsystems throughout the kernel, which might not be what you wanted. He said, “there are systems that would like to perform tracing but do not mount debugfs for security reasons. That is because any subsystem may use debugfs for debugging, and these interfaces are not always tested for security.” A new TraceFS would allow users to access the tracing subsystem without all that overhead and risk.

Steven also pointed out that tracing was beginning to outgrow DebugFS’s features. He said, “debugfs does not support the system calls for mkdir and rmdir. Tracing uses these system calls to create new instances for sub buffers. This was done by a hack that hijacked the dentry ops from the ‘instances’ debugfs dentry, and replaced it with one that could work. Instead of using this hack, tracefs can provide a proper interface to allow the tracing system to have a mkdir and rmdir feature.” He added, “To maintain backward compatibility with older tools that expect that the tracing directory is mounted with debugfs, the tracing directory is still created under debugfs and tracefs is automatically mounted there.”

It seems very clear that Linus Torvalds will accept this code – he tried to accept it into Linux 4.0, but Steven held it back himself. It turned out that there were some technical obstacles to overcome before the code would fit properly into the kernel. Specifically, the perf tools had hardcoded the assumption that the tracing directory would be mounted under DebugFS, so they wouldn’t see the tracing directory if it were mounted in any other way. Steven posted a patch to fix this, and it was accepted by Arnaldo Carvalho de Melo. However, that change didn’t make it into Linus’s 4.0 code, so Steven decided to wait for perf to catch up before resubmitting the TraceFS filesystem.

Another interesting issue that emerged briefly but went nowhere was the possibility that TraceFS should be based on KernFS. Greg Kroah-Hartman originally made the suggestion, and Tejun Heo argued in favor of it as well. However, it turned out that KernFS had its own complexities, as well as poor documentation – Tejun said at one point, “I didn’t write any while extracting it out of sysfs. Sorry about that. I should get to it.” In response to Greg’s suggestion, Al Viro said, “I would recommend against that – kernfs is overburdened by their need to accommodate cgroup weirdness. IMO it’s not a good model for anything, other than an anti-hard-drugs poster (‘don’t shoot that shit, or you might end up hallucinating _this_’).”

Steven remarked, “OK, I’m not the only one that thought kernfs seemed to go all over the place. I guess I now know why. It was more of a hook for cgroups. I can understand why cgroups needed it, as I found that creating files from a mkdir and removing them with rmdir causes some pain in vfs with handling of locking.” Eventually, he said, “I think I’m convinced that kernfs is not yet the way to go. I’m going to continue on with my current path.”
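Whether a given system exposes the tracing directory through DebugFS, TraceFS, or both can be checked from userspace by scanning the mount table. A small sketch (the function name is mine; the conventional mount points are /sys/kernel/tracing for tracefs and /sys/kernel/debug for debugfs, but they can differ):

```python
def tracing_mounts():
    """Return (mountpoint, fstype) pairs for filesystems that can expose
    the kernel tracing directory.

    Kernels with TraceFS typically mount it at /sys/kernel/tracing and,
    for backward compatibility, auto-mount it under debugfs as well;
    older kernels offer only the debugfs path.
    """
    mounts = []
    with open("/proc/mounts") as f:
        for line in f:
            dev, mountpoint, fstype = line.split()[:3]
            if fstype in ("tracefs", "debugfs"):
                mounts.append((mountpoint, fstype))
    return mounts
```

An empty result simply means neither filesystem is mounted; mounting tracefs by itself is exactly the security win Steven describes, because no other subsystem’s debug files come along for the ride.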

Shop the Shop


Perl: 11 Cool Projects!
▪ Math Tricks: Solve math problems with Perl
▪ Daily Tip: Perl with an SQLite database
▪ AJAX: Add dynamic updates to web pages
▪ isp-switch: Switch your computer to another ISP if your connection goes down

▪ MAC Addresses: Monitor your network for unfamiliar MAC addresses

▪ Multimeter: Read and interpret data from an external device

▪ Google Chart: Create custom graphs ▪ Twitter: Help your scripts send Tweets ▪ Webcam: Control a web camera with Perl ▪ Perl Gardening: Water your house plants with a little help from Perl

▪ GPS: Extract data from a GPS device and plot it on a Yahoo! Map

Free DVD inside!

Fedora 18
New to Perl? Perl expert Randal L. Schwartz provides an in-depth introduction to the principles of the versatile Perl language. Then Perlmeister Mike Schilli explains how to speed up and debug your scripts. Also inside: Get hands-on with a collection of some of the Perlmeister‘s best columns!

Find it on newsstands now or order online:

shop.linuxnewmedia.com/specials


Service Events

Featured Events
Users, developers, and vendors meet at Linux events around the world. We at Linux Magazine are proud to sponsor the Featured Events shown here. For other events near you, check our extensive events calendar online at
If you know of another Linux event you would like us to add to our calendar, please send a message with all the details to

DrupalCon 2015
Date: May 11–16, 2015
Location: Los Angeles, California
Website:
Developers, designers, users, and supporters of Drupal unite in Los Angeles to meet and connect with more than 4,000 of the world's top Drupal contributors, influencers, and organizations. Choose among keynotes, sessions, BoFs, sprints, and training opportunities.

Automotive Linux Summit
Date: June 1–2, 2015
Location: Tokyo, Japan
Website: events/automotive-linux-summit
The Summit delivers program content from the innovative minds of engineers, Linux experts, R&D managers, business executives, and others in the automotive and open source realm, with a variety of opportunities to connect with peers.

DebConf 15
Date: August 15–22, 2015
Location: Heidelberg, Germany
The annual Debian developers meeting convenes at the Heidelberg International Youth Hostel for technical discussions, workshops, and coding parties. Join others to hear speakers from around the world, and be a participant in developing key components of the Debian system, infrastructure, and community.

Events
May 11–15: Los Angeles, California
Maker Faire Bay Area: May 16–17, San Mateo, California
Women in Technology International Annual Summit: May 30–June 1, Santa Clara, California
Automotive Linux Summit: June 1–2, Tokyo, Japan (automotive-linux-summit)
CloudOpen Japan: June 3–5, Tokyo, Japan (cloudopen-japan)
LinuxCon Japan: June 3–5, Tokyo, Japan (linuxcon-japan)
Maker Faire Kansas City: June 26–27, Kansas City, Missouri
July 7–9: Santa Clara, California
July 28–30: Boston, Massachusetts
USENIX Security '15: August 11–13, Washington, D.C.
DebConf 15: August 15–22, Heidelberg, Germany
LinuxCon North America: August 17–19, Seattle, Washington (linuxcon-north-america)
CloudOpen North America: August 17–19, Seattle, Washington (cloudopen-north-america)
#MesosCon: August 20–21, Seattle, Washington (mesoscon)
August 23–26: Boston, Massachusetts
September 8–10: Las Vegas, Nevada
HostingCon EU: September 22–23, Amsterdam, The Netherlands



Images © Alex White,



Contact Info / Authors

Call for Papers
We are always looking for good articles on Linux and the tools of the Linux environment. Although we will consider any topic, the following themes are of special interest:
• System administration
• Useful tips and tools
• Security, both news and techniques
• Product reviews, especially from real-world experience
• Community news and projects
If you have an idea, send a proposal with an outline, an estimate of the length, a description of your background, and contact information to edit@​

The technical level of the article should be consistent with what you normally read in Linux Magazine. Remember that Linux Magazine is read in many countries, and your article may be translated into one of our sister publications. Therefore, it is best to avoid using slang and idioms that might not be understood by all readers.

Be careful when referring to dates or events in the future. Many weeks could pass between your manuscript submission and the final copy reaching the reader’s hands.

When submitting proposals or manuscripts, please use a subject line in your email message that helps us identify your message as an article proposal. Screenshots and other supporting materials are always welcome.

Additional information is available at:

Contact Info
Editor in Chief: Joe Casad,
Managing Editor: Rita L Sooby,
Localization & Translation: Ian Travis
News Editor: Joe Casad

Authors
Erik Bärwaldt 76
Swapnil Bhartiya 38, 88
Zack Brown 92
Bruce Byfield 72, 86
Joe Casad 3, 8
Jon “maddog” Hall
Valentin Höbel
Dominik Honnef
Klaus Knopper
Peter Kreußel
Charly Kühnast
Martin Loschwitz
Dmitri Popov
Amit Saha
Mike Schilli
Dr. Udo Seidel 12, 18
Ferdinand Thommes 26

Copy Editor: Amber Ankerholz
Layout: Dena Friesen, Lori White
Cover Design: Lori White
Cover Image: © Tsung-Lin Wu, and BY-SA 3.0
Advertising – North America: Ann Jesse, phone +1 785 841 8834
Advertising – Europe: Penny Wilby, phone +44 1787 211100
Publisher: Brian Osborn,

For all other countries:
Email:
Phone: +49 89 9934 1168
Fax: +49 89 9934 1199
– North America
– Worldwide

While every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the disc provided with the magazine or any material provided on it is at your own risk.

Copyright and Trademarks
© 2015 Linux New Media USA, LLC. No material may be reproduced in any form whatsoever in whole or in part without the written permission of the publishers. It is assumed that all correspondence sent, for example, letters, email, faxes, photographs, articles, drawings, are supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media USA, LLC, unless otherwise stated in writing. Linux is a trademark of Linus Torvalds.

Marketing Communications: Darrah Buren,
Linux New Media USA, LLC
616 Kentucky St.
Lawrence, KS 66044 USA

Microsoft, Windows, and the Windows logo are ­either ­registered trademarks or trademarks of ­Microsoft ­Corporation in the United States and/or other countries.

Customer Service / Subscription
For USA and Canada:
Email:
Phone: 1-866-247-2802 (toll-free from the US and Canada)
Fax: 1-785-856-3084

Printed in Germany |

Distributed by COMAG Specialist, Tavistock Road, West Drayton, Middlesex, UB7 7QE, United Kingdom Published in Europe by: Sparkhaus Media GmbH, Putzbrunner Str. 71, 81749 Munich, Germany.



Next Month Issue 175

Issue 175 / June 2015

Interoperability
The version numbers keep getting higher, but the challenges are still the same. Next month, we look at tools and techniques for mixing Linux with Android and other systems.


On Sale Date
UK / Europe: May 29
USA / Canada: June 29
Australia:
Lead Image © Nookiez,

Preview Newsletter The Linux Magazine Preview is a monthly email newsletter that gives you a sneak peek at the next issue, including links to articles posted online. Sign up at:



Need more Linux? Our free Linux Update newsletter delivers insightful articles and tech tips to your mailbox twice a month. You’ll discover:
• Original articles on real-world Linux
• Linux news
• Tips on Bash scripting and other advanced techniques
• Discounts and special offers available only to newsletter subscribers

Ft Photography, Fotolia
